Updates from: 11/03/2023 02:13:38
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
Previously updated : 08/10/2023 Last updated : 10/27/2023
You can manage default network access rules for Azure AI services resources thro
You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Microsoft Entra tenant.
-Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI services service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
+Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource to allow requests from specific subnets in a virtual network. Clients granted access by these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
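For example, here's a minimal Azure CLI sketch of this flow, assuming placeholder resource group, virtual network, subnet, and resource names: it enables the `Microsoft.CognitiveServices` service endpoint on a subnet and then adds a network rule for that subnet to the Azure AI services resource.

```azurecli-interactive
# Enable the Cognitive Services service endpoint on the subnet
az network vnet subnet update --resource-group <resource-group> --vnet-name <vnet-name> \
    --name <subnet-name> --service-endpoints Microsoft.CognitiveServices

# Allow traffic from that subnet to reach the Azure AI services resource
az cognitiveservices account network-rule add --resource-group <resource-group> \
    --name <ai-services-resource-name> --vnet-name <vnet-name> --subnet <subnet-name>
```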
You can use [private endpoints](../private-link/private-endpoint-overview.md) fo
Private endpoints for Azure AI services resources let you: -- Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
+- Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI service.
- Increase security for the virtual network, by enabling you to block exfiltration of data from the virtual network. - Securely connect to Azure AI services resources from on-premises networks that connect to the virtual network by using [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering. ### Understand private endpoints
-A private endpoint is a special network interface for an Azure resource in your [virtual network](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your virtual network and your resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Azure AI services service uses a secure private link.
+A private endpoint is a special network interface for an Azure resource in your [virtual network](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your virtual network and your resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Azure AI service uses a secure private link.
Applications in the virtual network can connect to the service over the private endpoint seamlessly. Connections use the same connection strings and authorization mechanisms that they would use otherwise. The exception is Speech Services, which require a separate endpoint. For more information, see [Private endpoints with the Speech Services](#use-private-endpoints-with-the-speech-service) in this article. Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
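As a hedged sketch (placeholder names; the private-link sub-resource for Azure AI services accounts is assumed to be `account`), a private endpoint can be created with the Azure CLI like this:

```azurecli-interactive
# Create a private endpoint for an Azure AI services resource in an existing subnet
az network private-endpoint create --resource-group <resource-group> --name <private-endpoint-name> \
    --vnet-name <vnet-name> --subnet <subnet-name> \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<resource-name>" \
    --group-id account --connection-name <connection-name>
```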
For more information on configuring your own DNS server to support private endpo
- [Name resolution that uses your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) - [DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration)
+## Grant access to trusted Azure services for Azure OpenAI
+
+You can grant a subset of trusted Azure services access to Azure OpenAI, while maintaining network rules for other apps. These trusted services will then use managed identity to authenticate to your Azure OpenAI service. The following table lists the services that can access Azure OpenAI if the managed identity of those services has the appropriate role assignment.
++
+|Service |Resource provider name |
+|||
+|Azure AI Services | `Microsoft.CognitiveServices` |
+|Azure Machine Learning |`Microsoft.MachineLearningServices` |
+|Azure Cognitive Search | `Microsoft.Search` |
++
+You can grant networking access to trusted Azure services by creating a network rule exception using the REST API:
+```bash
+
+accessToken=$(az account get-access-token --resource https://management.azure.com --query "accessToken" --output tsv)
+rid="/subscriptions/<your subscription id>/resourceGroups/<your resource group>/providers/Microsoft.CognitiveServices/accounts/<your Azure AI resource name>"
+
+curl -i -X PATCH https://management.azure.com$rid?api-version=2023-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $accessToken" \
+-d \
+'
+{
+ "properties":
+ {
+ "networkAcls": {
+ "bypass": "AzureServices"
+ }
+ }
+}
+'
+```
+
+To revoke the exception, set `networkAcls.bypass` to `None`.
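+
+For example, reusing the access token and resource ID variables from the preceding call, the revoke request might look like this:
+
+```bash
+curl -i -X PATCH https://management.azure.com$rid?api-version=2023-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $accessToken" \
+-d \
+'
+{
+    "properties":
+    {
+        "networkAcls": {
+            "bypass": "None"
+        }
+    }
+}
+'
+```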
+ ### Pricing For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md
Once the request is sent, you can track the progress of the training job in Lang
> [!NOTE] > You have to retrain your model after updating the `confidenceThreshold` project setting. Afterwards, you'll need to republish the app for the new threshold to take effect.
+### Normalization in model version 2023-04-15
+
+In model version 2023-04-15, conversational language understanding provides normalization in the inference layer that doesn't affect training.
+
+The normalization layer normalizes the classification confidence scores to a confined range. The range currently selected is `[-a, a]`, where `a` is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. For example, an app with 16 intents is normalized over `[-4, 4]`, while an app with only 4 intents has just `[-2, 2]` to work with. If the app has very few intents, the normalization layer has a very small range to work with; with a fairly large number of intents, the normalization is more effective.
+
+If this normalization doesn't help out-of-scope intents enough for the confidence threshold to filter out out-of-scope utterances, the cause might be the number of intents in the app. Consider adding more intents to the app, or, if you're using an orchestrated architecture, consider merging apps that belong to the same domain.
+ ## Debugging composed entities Entities are functions that emit spans in your input with an associated type. The function is defined by one or more components. You can mark components as needed, and you can decide whether to enable the *combine components* setting. When you combine components, all spans that overlap will be merged into a single span. If the setting isn't used, each individual component span will be emitted.
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Resolution:
This means the storage account is not accessible with the given credentials. In this case, please review the storage account credentials passed to the API and ensure the storage account is not hidden behind a private endpoint (if a private endpoint is not configured for this resource). ## Custom parameters
-In the **Data parameters** section in Azure OpenAI Studio, you can modify following additional settings.
+You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions).
|Parameter name | Description |
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/exten
| `stream` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` | | `stop` | string or array | Optional | null | Up to 2 sequences where the API will stop generating further tokens. | | `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. |
+| `retrieved_documents` | number | Optional | 3 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. |
+| `strictness` | number | Optional | 3 | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more of the less-relevant documents from responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. |
+ The following parameters can be used inside of the `parameters` field inside of `dataSources`.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
With the cross-lingual feature, you can transfer your custom neural voice model
# [Pronunciation assessment](#tab/pronunciation-assessment)
-The table in this section summarizes the 23 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 22 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario.
+The table in this section summarizes the 24 locales supported for pronunciation assessment, and each language is available in all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 23 additional languages and adds quality enhancements to existing features, including accuracy, fluency, and miscue assessment. Specify the language that you're learning or practicing to improve your pronunciation. The default language is `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine which one achieves the highest score for your specific scenario.
[!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)]
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Microsoft maintains all certificates mentioned in this section, except for the c
* Check the expiration date of the cluster certificate using the `kubectl config view` command. ```console
- kubectl config view --raw -o jsonpath="{.users[?(@.name == 'clusterUser_rg_myAKSCluster')].user.client-certificate-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
+ kubectl config view --raw -o jsonpath="{.clusters[?(@.name == '')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
``` ### Check API server certificate expiration date
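* As a hedged sketch (placeholder resource group and cluster names), you can also inspect the certificate presented by the API server directly:

    ```console
    # Get the API server FQDN, then check the validity dates of the certificate it presents
    fqdn=$(az aks show --resource-group <resource-group> --name <cluster-name> --query fqdn --output tsv)
    echo | openssl s_client -connect "$fqdn":443 2>/dev/null | openssl x509 -noout -dates
    ```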
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
To deploy the application, you use a manifest file to create all the objects req
3. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest: ```console
- kubectl apply -f azure-vote.yaml
+ kubectl apply -f aks-store-quickstart.yaml
``` The following example output shows the deployments and
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
AKS regularly provides new node images. Linux node images are updated weekly, an
* View the upgrade events in the default namespaces using the `kubectl get events` command. ```console
- kubectl get events
+ kubectl get events --field-selector source=upgrader
``` The following example output shows some of the above events listed during an upgrade:
For more information on AKS, see the [AKS overview][aks-intro]. For guidance on
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup [aks-auto-upgrade]: ./auto-upgrade-cluster.md [auto-upgrade-node-image]: ./auto-upgrade-node-image.md
-[node-image-upgrade]: ./node-image-upgrade.md
+[node-image-upgrade]: ./node-image-upgrade.md
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ❌ | ✔️<sup>3</sup> | | [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | ✔️ | ✔️ | ❌ | | [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ |
-| **HTTP/2** (Client-to-gateway) | ❌ | ❌ | ✔️ |
+| **HTTP/2** (Client-to-gateway) | ✔️<sup>4</sup> | ❌ | ✔️ |
| **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ✔️ |
-| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ❌ | ❌ |
+| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ❌ | ❌ |
<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/> <sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the endpoint hostname.<br/>
+<sup>3</sup> CA root certificates for the self-hosted gateway are managed separately per gateway.<br/>
+<sup>4</sup> Client protocol needs to be enabled.
### Backend APIs
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
For more information about the `stv1` and `stv2` platforms and the benefits of u
> Support for API Management instances hosted on the `stv1` platform will be [retired by 31 August 2024](breaking-changes/stv1-platform-retirement-august-2024.md). To ensure continued support and operation of your API Management instance, you must migrate any instance hosted on the `stv1` platform to `stv2` before that date. > [!CAUTION]
-> * Migrating your API Management instance to new infrastructure is a long-running operation. Depending on your service configuration, you may have temporary downtime during migration, and you may need to update your network dependencies after migration to reach your API Management instance. Plan your migration accordingly.
+> * Migrating your API Management instance to new infrastructure is a long-running operation. Depending on your service configuration, you might have temporary downtime during migration, and you might need to update your network dependencies after migration to reach your API Management instance. Plan your migration accordingly.
> * Migration to `stv2` is not reversible. [!INCLUDE [api-management-availability-premium-dev-standard-basic](../../includes/api-management-availability-premium-dev-standard-basic.md)]
+## What happens during migration?
+
+API Management platform migration from `stv1` to `stv2` involves updating the underlying compute only and has no impact on the service/API configuration persisted in the storage layer.
+
+* The upgrade process involves creating a new compute in parallel with the old compute. Both instances coexist for 48 hours.
+* The API Management status in the portal will be "Updating".
+* Azure manages the management endpoint DNS and updates it to point to the new compute immediately on successful migration.
+* The gateway DNS still points to the old compute if a custom domain is in use.
+* If custom DNS isn't in use, the gateway and portal DNS point to the new compute immediately.
+* For an instance in internal VNet mode, the customer manages the DNS, so the DNS entries continue to point to the old compute until the customer updates them.
+* Because DNS determines whether traffic reaches the new or the old compute, there's no downtime for the APIs.
+* Changes are required to your firewall rules, if any, to allow the new compute subnet to reach the backends.
+ ## Prerequisites * An API Management instance hosted on the `stv1` compute platform. To confirm that your instance is hosted on the `stv1` platform, see [How do I know which platform hosts my API Management instance?](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance).
For more information about the `stv1` and `stv2` platforms and the benefits of u
## Scenario 1: Migrate API Management instance, not injected in a VNet
-For an API Management instance that's not deployed in a VNet, migrate your instance using the **Platform migration** blade in the portal, or invoke the Migrate to `stv2` REST API.
+For an API Management instance that's not deployed in a VNet, migrate your instance using the **Platform migration** blade in the Azure portal, or invoke the Migrate to `stv2` REST API.
You can choose whether the virtual IP address of API Management will change, or whether the original VIP address is preserved.
az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03
# Alternate call to migrate to stv2 and preserve VIP address # az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "PreserveIp"}' ```-
+### Verify migration
+
+To verify that the migration was successful, when the status changes to `Online`, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`.
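+
+For example, a hedged sketch using the Azure CLI, reusing the `$APIM_RESOURCE_ID` variable from the migration call above:
+
+```azurecli
+az rest --method get --uri "$APIM_RESOURCE_ID?api-version=2023-03-01-preview" --query "properties.platformVersion" --output tsv
+```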
+
+### Update network dependencies
+
+On successful migration, update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
+ ## Scenario 2: Migrate a network-injected API Management instance Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration to use new network settings (see the following section). After that update completes, as an optional step, you can migrate back to the original VNet and subnet you used.
You can optionally migrate back to the original VNet and subnet you used in each
1. Verify that the original IP addresses were released by API Management. Under **Available IPs**, note the number of IP addresses available in the subnet. All addresses (except for Azure reserved addresses) should be available. If necessary, wait for IP addresses to be released. 1. Repeat the migration steps in the preceding section. In each region, specify the original VNet and subnet, and a new IP address resource.
-## Verify migration
+### Verify migration
-To verify that the migration was successful, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`.
+* To verify that the migration was successful, when the status changes to `Online`, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`.
+* Additionally, check the network status to ensure connectivity of the instance to its dependencies. In the portal, in the left-hand menu, under **Deployment and infrastructure**, select **Network** > **Network status**.
+### Update network dependencies
+
+On successful migration, update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address/subnet address space.
[!INCLUDE [api-management-migration-support](../../includes/api-management-migration-support.md)]
+## Frequently asked questions
+
+- **What information do we need to choose a migration path?**
+
+ - What is the Network mode of the API Management instance?
+ - Are custom domains configured?
+ - Is a firewall involved?
+ - Any known dependencies taken by upstream/downstream on the IPs involved?
+ - Is it a multi-geo deployment?
+ - Can we modify the existing instance or is a parallel setup required?
+ - Can there be downtime?
+ - Can the migration be done in nonbusiness hours?
+
+- **What are the prerequisites for the migration?**
+
+ ***VNet-injected instances:*** you'll need a new subnet and public IP address to migrate (either External or Internal modes). The subnet must have an NSG attached to it following the rules for `stv2` platform as described [here](./api-management-using-with-vnet.md?tabs=stv2#configure-nsg-rules).
+
+ ***Non-VNet instances:*** no prerequisites are required. If you migrate preserving your public IP address, this will render your API Management instance unresponsive for approximately 15 minutes. If you can't afford any downtime, then choose the *"New IP"* option that makes API Management available on a new IP. Network dependencies need to be updated with the new public virtual IP address.
+
+- **Will the migration cause a downtime?**
+
+ ***VNet-injected instances:*** there's no downtime as the old and new managed gateways are available for 48 hours, to facilitate validation and DNS update. However, if the default domain names are in use, traffic is routed to the new managed gateway immediately. It's critical that all network dependencies are taken care of upfront, for the impacted APIs to be functional.
+
+ ***Non-VNet instances:*** there's a downtime of approximately 15 minutes only if you choose to preserve the original IP address. However, there's no downtime if you migrate with a new IP address.
+
+- **My traffic is force tunneled through a firewall. What changes are required?**
+
+  - First, make sure that the new subnet you created for the migration retains the following configuration (these settings should already be configured in your current subnet):
+    - Enable service endpoints as described [here](./api-management-using-with-vnet.md?tabs=stv2#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance)
+    - The UDR (user-defined route) sets the next hop for the **ApiManagement** service tag to "Internet", not only to your firewall address
+  - The [requirements for NSG configuration for stv2](./api-management-using-with-vnet.md?tabs=stv2#configure-nsg-rules) remain the same whether or not you have a firewall; make sure your new subnet meets them (see the sketch after this list)
+ - Firewall rules referring to the current IP address range of the API Management instance should be updated to use the IP address range of your new subnet.
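+
+  As a hedged sketch (not the complete rule set; follow the linked NSG guidance for all required `stv2` rules), the inbound management-endpoint rule could be added to the new subnet's NSG like this:
+
+  ```bash
+  az network nsg rule create --resource-group <resource-group> --nsg-name <new-subnet-nsg-name> \
+    --name AllowApiManagementEndpoint --priority 100 --direction Inbound --access Allow --protocol Tcp \
+    --source-address-prefixes ApiManagement --source-port-ranges '*' \
+    --destination-address-prefixes VirtualNetwork --destination-port-ranges 3443
+  ```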
+
+- **Can data or configuration loss occur during the migration?**
+
+  `stv1` to `stv2` migration involves updating the compute platform only; the internal storage layer isn't changed. All configuration is therefore safe during the migration process.
+
+- **How do I confirm that the migration is complete and successful?**
+
+  The migration is considered complete and successful when the status on the overview page reads *"Online"* and the platform version is either 2.0 or 2.1. Also verify that the network status in the network blade shows green for all required connectivity.
+
+- **Can I do the migration using the portal?**
+
+  - Yes, the [Platform migration](./migrate-stv1-to-stv2.md?tabs=portal#scenario-1-migrate-api-management-instance-not-injected-in-a-vnet) blade in the Azure portal guides you through the options for non-VNet-injected instances.
+ - VNet-injected instances can be migrated by changing the subnet in the **Network** blade.
+
+- **Can I preserve the IP address of the instance?**
+
+ **VNet-injected instances:** there's no way currently to preserve the IP address if your instance is injected into a VNet
+
+ **Non-VNet instances:** the IP address can be preserved, but there will be a downtime of approximately 15 minutes.
+
+- **Is there a migration path without modifying the existing instance?**
+
+ Yes, you need a side-by-side migration. That means you create a new API Management instance in parallel with your current instance and copy the configuration over to the new instance.
+
+- **What happens if the migration fails?**
+
+ If your API Management instance doesn't show the platform version as `stv2` and status as *"Online"* after you initiated the migration, it probably failed. Your service is automatically rolled back to the old instance and no changes are made. If you have problems (such as if status is *"Updating"* for more than 2 hours), contact Azure support.
+
+- **What functionality is not available during migration?**
+
+ **VNet injected instances:** API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) is locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
+
+ **Non-VNet-injected instances:**
+  - If you opted to preserve the original IP address: API requests are unresponsive for approximately 15 minutes while the VIP address is migrated to the new infrastructure. Infrastructure configuration (such as custom domains, locations, and CA certificates) is locked for 45 minutes.
+ - If you opted to migrate to a new IP address: API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) is locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
+
+- **How long will the migration take?**
+
+ The expected duration for the whole migration is approximately 45 minutes. The indicator to check if the migration was already performed is to check if Status of your instance is back to *"Online"* and not *"Updating"*. If it says *"Updating"* for more than 2 hours, contact Azure support.
+
+- **Is there a way to validate the VNet configuration before attempting migration?**
+
+ You can optionally deploy a new API Management instance with the new VNet, subnet, and VIP that you use for the actual migration. Navigate to the **Network status** page after the deployment is completed, and verify if every endpoint connectivity status is green. If yes, you can remove this new API Management instance and proceed with the real migration with your original `stv1` service.
+
+- **Can I roll back the migration if required?**
+
+ Yes, you can. If there's a failure during the migration process, the instance will automatically roll back to the `stv1` platform. However, if you encounter any other issues post migration, you have 48 hours to request a rollback by contacting Azure support. You should contact support if the instance is stuck in an "Updating" status for more than 2 hours.
+
+- **Is there any change required in custom domain/private DNS zones?**
+
+  **VNet-injected instances:** you'll need to update the private DNS zones to the new VNet IP address acquired after the migration. Remember to update non-Azure DNS zones too (for example, your on-premises DNS servers pointing to the API Management private IP address). However, in external mode, the migration process automatically updates the default domains if they're in use.
+
+ **Non-VNet injected instances:** No changes are required if the IP is preserved. If opted for a new IP, custom domains referring to the IP should be updated.
+
+- **My stv1 instance is deployed to multiple Azure regions (multi-geo). How do I upgrade to stv2?**
+
+  Multi-geo deployments include additional managed gateways deployed in other locations. Migrate each location separately by providing a new subnet and a new public IP. Navigate to the *Locations* blade and perform the changes on each listed location. The instance is considered migrated to the new platform only when all the locations are migrated. Both gateways continue to operate normally throughout the migration process.
++
+- **Do we need a public IP even if the API Management instance is VNet injected in internal mode only?**
+
+  API Management `stv1` uses an Azure-managed public IP for management traffic, even in internal mode. However, `stv2` requires a user-managed public IP for the same purpose. This public IP is used only for Azure internal management operations and not to expose your instance to the internet. More details [here](./api-management-howto-ip-addresses.md#ip-addresses-of-api-management-service-in-vnet).
+
+- **Can I upgrade my stv1 instance to the same subnet?**
+
+ - You can't migrate the stv1 instance to the same subnet in a single pass without downtime. However, you can optionally move your migrated instance back to the original subnet. More details [here](#optional-migrate-back-to-original-vnet-and-subnet).
+  - The old gateway takes up to 48 hours to vacate the subnet, so that you can initiate the move. However, you can request a faster release of the subnet by submitting the subscription IDs and the desired release time through a support ticket.
+ - Releasing the old subnet calls for a purge of the old gateway, which forfeits the rollback to the old gateway if desired.
+ - A new public IP is required for each switch
+ - Ensure that the old subnet networking for [NSG](./api-management-using-with-internal-vnet.md?tabs=stv2#configure-nsg-rules) and [firewall](./api-management-using-with-vnet.md?tabs=stv2#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance) is updated for `stv2` dependencies.
+
+- **Can I test the new gateway before switching the live traffic?**
+
+ - Post successful migration, the old and the new managed gateways are active to receive traffic. The old gateway remains active for 48 hours.
+  - The migration process automatically updates the default domain names, and if they're in use, traffic routes to the new gateways immediately.
+  - If custom domain names are in use, the corresponding DNS records might need to be updated with the new IP address if you're not using a CNAME. Customers can update their hosts file to the new API Management IP and validate the instance before making the switch. During this validation process, the old gateway continues to serve the live traffic.
+
+- **Are there any considerations when using default domain name?**
+
+  Instances that use the default DNS name in external mode have their DNS updated automatically by the migration process. Moreover, the management endpoint, which always uses the default domain name, is automatically updated by the migration process. Because the switch happens immediately on a successful migration, the new instance starts receiving traffic immediately, and it's critical that any networking restrictions or dependencies are taken care of upfront so that impacted APIs don't become unavailable.
+
+- **What should we consider for self-hosted gateways?**
+
+ You don't need to do anything in your self-hosted gateways. You just need to migrate API Management instances running in Azure that are impacted by the `stv1` platform retirement. Note that there could be a new IP for the Configuration endpoint of the API Management instance, and any networking restrictions pinned to the IP should be updated.
+
+- **How is the developer portal impacted by migration?**
+
+  There's no impact on the developer portal. If custom domains are used, the DNS record should be updated with the effective IP address after migration. However, if the default domains are in use, they're automatically updated on successful migration. There's no downtime for the developer portal during the migration.
+
+- **Is there any impact on cost once we migrated to stv2?**
+
+  The billing model remains the same for `stv2`, and no additional cost is incurred after the migration.
+
+- **How can we get help during migration?**
+
+ Check details [here](#help-and-support).
++ ## Related content * Learn about [stv1 platform retirement](breaking-changes/stv1-platform-retirement-august-2024.md).
app-service Configure Basic Auth Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-basic-auth-disable.md
+
+ Title: Disable basic authentication for deployment
+description: Learn how to secure App Service deployment by disabling basic authentication.
+keywords: azure app service, security, deployment, FTP, MsDeploy
+ Last updated : 11/05/2023++++
+# Disable basic authentication in App Service deployments
+
+This article shows you how to disable basic authentication (username and password authentication) when deploying code to App Service apps.
+
+App Service provides basic authentication for FTP and WebDeploy clients to connect to it by using [deployment credentials](deploy-configure-credentials.md). These APIs are great for browsing your site's file system, uploading drivers and utilities, and deploying with MsBuild. However, enterprises often require more secure deployment methods than basic authentication, such as [Microsoft Entra ID](/entr)). Entra ID uses OAuth 2.0 token-based authorization and has many benefits and improvements that help mitigate the issues in basic authentication. For example, OAuth access tokens have a limited usable lifetime, and are specific to the applications and resources for which they're issued, so they can't be reused. Entra ID also lets you deploy from other Azure services using managed identities.
+
+## Disable basic authentication
+
+### [Azure portal](#tab/portal)
+
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
+
+1. In the app's left menu, select **Configuration**.
+
+1. For **Basic Auth Publishing Credentials**, select **Off**, then select **Save**.
+
+ :::image type="content" source="media/configure-basic-auth-disable/basic-auth-disable.png" alt-text="A screenshot showing how to disable basic authentication for Azure App Service in the Azure portal.":::
+
+### [Azure CLI](#tab/cli)
+
+There are two different settings to configure when you disable basic authentication with Azure CLI, one for FTP and one for WebDeploy and Git.
+
+#### Disable for FTP
+
+To disable FTP access using basic authentication, you must have owner-level access to the app. Run the following CLI command by replacing the placeholders with your resource group name and app name:
+
+```azurecli-interactive
+az resource update --resource-group <group-name> --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> --set properties.allow=false
+```
+
+#### Disable for WebDeploy and Git
+
+To disable basic authentication access to the WebDeploy port and the Git deploy URL (https://\<app-name>.scm.azurewebsites.net), run the following CLI command. Replace the placeholders with your resource group name and app name.
+
+```azurecli-interactive
+az resource update --resource-group <resource-group> --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> --set properties.allow=false
+```
+
+--
+
+To confirm that FTP access is blocked, try [connecting to your app using FTP/S](deploy-ftp.md). You should get a `401 Unauthenticated` message.
+
+To confirm that Git access is blocked, try [local Git deployment](deploy-local-git.md). You should get an `Authentication failed` message.
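+
+As an additional hedged check (the credential placeholders are illustrative), a request to the SCM (Kudu) site that presents basic credentials should now be rejected:
+
+```bash
+# Expect an HTTP 401 response once basic authentication is disabled
+curl -i -u '<deployment-username>:<deployment-password>' https://<app-name>.scm.azurewebsites.net/
+```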
+
+## Deployment without basic authentication
+
+When you disable basic authentication, deployment methods based on basic authentication stop working, such as FTP and local Git deployment. For alternate deployment methods, see [Authentication types by deployment methods in Azure App Service](deploy-authentication-types.md).
+
+<!-- Azure Pipelines with App Service deploy task (manual config) need the newer version hosted agent that supports vs2022.
+OIDC GitHub actions -->
+
+## Create a custom role with no permissions for basic authentication
+
+To prevent a lower-privileged user from enabling basic authentication for any app, you can create a custom role and assign the user to the role.
+
+### [Azure portal](#tab/portal)
+
+1. In the Azure portal, in the top menu, search for and select the subscription you want to create the custom role in.
+1. From the left navigation, select **Access Control (IAM)** > **Add** > **Add custom role**.
+1. Set the **Basic** tab as you wish, then select **Next**.
+1. In the **Permissions** tab, select **Exclude permissions**.
+1. Find and select **Microsoft Web Apps**, then search for the following operations:
+
+ |Operation |Description |
+ |||
+ |`microsoft.web/sites/basicPublishingCredentialsPolicies/ftp` | FTP publishing credentials for App Service apps. |
+ |`microsoft.web/sites/basicPublishingCredentialsPolicies/scm` | SCM publishing credentials for App Service apps. |
+ |`microsoft.web/sites/slots/basicPublishingCredentialsPolicies/ftp` | FTP publishing credentials for App Service slots. |
+ |`microsoft.web/sites/slots/basicPublishingCredentialsPolicies/scm` | SCM publishing credentials for App Service slots. |
+
+1. Under each of these operations, select the box for **Write**, then select **Add**. This step adds the operation as **NotActions** for the role.
+
+ Your Permissions tab should look like the following screenshot:
+
+ :::image type="content" source="media/configure-basic-auth-disable/custom-role-no-basic-auth.png" alt-text="A screenshot showing the creation of a custom role with all basic authentication permissions excluded.":::
+
+1. Select **Review + create**, then select **Create**.
+
+1. You can now assign this role to your organization's users.
+
+For more information, see [Create or update Azure custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md#step-2-choose-how-to-start)
+
+### [Azure CLI](#tab/cli)
+
+In the following command, replace *\<role-name>* with a name for the custom role and *\<subscription-guid>* with the GUID of your subscription, then run it in the Cloud Shell:
+
+```azurecli-interactive
+az role definition create --role-definition '{
+ "Name": "<role-name>",
+ "IsCustom": true,
+ "Description": "Prevents users from enabling basic authentication for all App Service apps or slots.",
+ "NotActions": [
+ "Microsoft.Web/sites/basicPublishingCredentialsPolicies/ftp/Write",
+ "Microsoft.Web/sites/basicPublishingCredentialsPolicies/scm/Write",
+ "Microsoft.Web/sites/slots/basicPublishingCredentialsPolicies/ftp/Write",
+ "Microsoft.Web/sites/slots/basicPublishingCredentialsPolicies/scm/Write"
+ ],
+ "AssignableScopes": ["/subscriptions/<subscription-guid>"]
+}'
+```
+
+You can now assign this role to your organization's users.
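+
+For example, a hedged sketch of assigning the custom role at subscription scope (the user object ID is a placeholder):
+
+```azurecli-interactive
+az role assignment create --assignee "<user-object-id>" --role "<role-name>" \
+  --scope "/subscriptions/<subscription-guid>"
+```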
+
+For more information, see [Create or update Azure custom roles using Azure CLI](../role-based-access-control/custom-roles-cli.md).
+
+--
+
+## Monitor for basic authentication attempts
+
+All successful and attempted logins are logged to the Azure Monitor `AppServiceAuditLogs` log type. To audit the attempted and successful logins on FTP and WebDeploy, follow the steps at [Send logs to Azure Monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor) and enable shipping of the `AppServiceAuditLogs` log type.
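+
+As a hedged sketch (assuming a placeholder Log Analytics workspace), the equivalent Azure CLI call might look like this:
+
+```azurecli-interactive
+az monitor diagnostic-settings create --name "audit-logins" \
+  --resource "/subscriptions/<subscription-guid>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name>" \
+  --workspace "<log-analytics-workspace-resource-id>" \
+  --logs '[{"category": "AppServiceAuditLogs", "enabled": true}]'
+```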
+
+To confirm that the logs are shipped to your selected service(s), try logging in via FTP or WebDeploy. The following example shows a Storage Account log.
+
+<pre>
+{
+ "time": "2020-07-16T17:42:32.9322528Z",
+ "ResourceId": "/SUBSCRIPTIONS/EF90E930-9D7F-4A60-8A99-748E0EEA69DE/RESOURCEGROUPS/FREEBERGDEMO/PROVIDERS/MICROSOFT.WEB/SITES/FREEBERG-WINDOWS",
+ "Category": "AppServiceAuditLogs",
+ "OperationName": "Authorization",
+ "Properties": {
+ "User": "$freeberg-windows",
+ "UserDisplayName": "$freeberg-windows",
+ "UserAddress": "24.19.191.170",
+ "Protocol": "FTP"
+ }
+}
+</pre>
+
+## Basic authentication related policies
+
+[Azure Policy](../governance/policy/overview.md) can help you enforce organizational standards and assess compliance at scale. You can use Azure Policy to audit for any apps that still use basic authentication, and remediate any noncompliant resources. The following are built-in policies for auditing and remediating basic authentication on App Service:
+
+- [Audit policy for FTP](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F871b205b-57cf-4e1e-a234-492616998bf7)
+- [Audit policy for SCM](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faede300b-d67f-480a-ae26-4b3dfb1a1fdc)
+- [Remediation policy for FTP](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff493116f-3b7f-4ab3-bf80-0c2af35e46c2)
+- [Remediation policy for SCM](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c034a29-2a5f-4857-b120-f800fe5549ae)
+
+The following are corresponding policies for slots:
+
+- [Audit policy for FTP](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fec71c0bc-6a45-4b1f-9587-80dc83e6898c)
+- [Audit policy for SCM](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F847ef871-e2fe-4e6e-907e-4adbf71de5cf)
+- [Remediation policy for FTP](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff493116f-3b7f-4ab3-bf80-0c2af35e46c2)
+- [Remediation policy for SCM](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c034a29-2a5f-4857-b120-f800fe5549ae)
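+
+As a hedged sketch, you can assign one of these built-in policies at subscription scope with the Azure CLI. The definition ID below is taken from the FTP audit policy link above; the assignment name and scope are placeholders:
+
+```azurecli-interactive
+az policy assignment create --name "audit-ftp-basic-auth" \
+  --policy "871b205b-57cf-4e1e-a234-492616998bf7" \
+  --scope "/subscriptions/<subscription-guid>"
+```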
+
app-service Deploy Authentication Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-authentication-types.md
Azure App Service lets you deploy your web application code and configuration by using multiple options. These deployment options may support one or more authentication mechanisms. This article provides details about various authentication mechanisms supported by different deployment methods. > [!NOTE]
-> To disable basic authentication for your App Service app, see [Configure deployment credentials](deploy-configure-credentials.md).
+> To disable basic authentication for your App Service app, see [Disable basic authentication in App Service deployments](configure-basic-auth-disable.md).
|Deployment method|Authentication  |Reference Documents | |:-|:-|:-|
app-service Deploy Configure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-configure-credentials.md
Invoke-AzResourceAction -ResourceGroupName <group-name> -ResourceType Microsoft.
## Disable basic authentication
-Some organizations need to meet security requirements and would rather disable access via FTP or WebDeploy. This way, the organization's members can only access its App Services through APIs that are controlled by Microsoft Entra ID.
-
-### FTP
-
-To disable FTP access to the site, run the following CLI command. Replace the placeholders with your resource group and site name.
-
-```azurecli-interactive
-az resource update --resource-group <resource-group> --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<site-name> --set properties.allow=false
-```
-
-To confirm that FTP access is blocked, you can try to authenticate using an FTP client such as FileZilla. To retrieve the publishing credentials, go to the overview blade of your site and click Download Publish Profile. Use the file's FTP hostname, username, and password to authenticate, and you will get a 401 error response, indicating that you are not authorized.
-
-### WebDeploy and SCM
-
-To disable basic auth access to the WebDeploy port and SCM site, run the following CLI command. Replace the placeholders with your resource group and site name.
-
-```azurecli-interactive
-az resource update --resource-group <resource-group> --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<site-name> --set properties.allow=false
-```
-
-To confirm that the publish profile credentials are blocked on WebDeploy, try [publishing a web app using Visual Studio 2019](/visualstudio/deployment/quickstart-deploy-to-azure).
-
-### Disable access to the API
-
-The API in the previous section is backed by Azure role-based access control (Azure RBAC), which means you can [create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role) and assign lower-privileged users to the role so they cannot enable basic auth on any sites. To configure the custom role, [follow these instructions](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html#create-a-custom-rbac-role).
-
-You can also use [Azure Monitor](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html#audit-with-azure-monitor) to audit any successful authentication requests and use [Azure Policy](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html#enforce-compliance-with-azure-policy) to enforce this configuration for all sites in your subscription.
+See [Disable basic authentication in App Service deployments](configure-basic-auth-disable.md).
## Next steps
azure-app-configuration Enable Dynamic Configuration Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-python.md
Add the following key-value to your App Configuration store. For more informatio
print("Update the `message` in your App Configuration store using Azure portal or CLI.") print("First, update the `message` value, and then update the `sentinel` key value.")
- while (true):
+ while (True):
# Refreshing the configuration setting config.refresh()
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python.md
Add the following key-value to the App Configuration store and leave **Label** a
echo "$AZURE_APPCONFIG_CONNECTION_STRING" ```
-1. Restart the command prompt to allow the change to take effect. Print out the value of the environment variable to validate that it is set properly.
- ## Code samples The sample code snippets in this section show you how to perform common operations with the App Configuration client library for Python. Add these code snippets to the `try` block in *app-configuration-example.py* file you created earlier.
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 10/18/2023 Last updated : 11/01/2023
For more information, see [Support matrix for Azure Arc-enabled VMware vSphere (
## Additional endpoints
-Depending on your scenario, you may need connectivity to other URLs, such as those used by the Azure portal, management tools, or other Azure services. In particular, review these lists to ensure that you allow connectivity to any necessary endpoints:
+Depending on your scenario, you might need connectivity to other URLs, such as those used by the Azure portal, management tools, or other Azure services. In particular, review these lists to ensure that you allow connectivity to any necessary endpoints:
- [Azure portal URLs](../azure-portal/azure-portal-safelist-urls.md) - [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
There are two ways to upgrade Arc resource bridge: cloud-managed upgrades manage
Arc resource bridge is a Microsoft-managed product. Microsoft manages upgrades of Arc resource bridge through cloud-managed upgrade. Cloud-managed upgrade allows Microsoft to ensure that the resource bridge remains on a supported version.
-> [!IMPORTANT]
-> Currently, in order to use cloud-managed upgrade, your appliance version must be on version 1.0.15 and you must request access. To do so, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Technical** for **Issue type** and **Azure Arc Resource Bridge** for **Service type**. In the **Summary** field, enter *Requesting access to cloud-managed upgrade*, and select **Resource Bridge Agent issue** for **Problem type**. Complete the rest of the support request and then select **Create**. We'll review your account and contact you to confirm your access to cloud-managed upgrade.
- Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status might switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`. To check the status of a cloud-managed upgrade, check the Azure resource in ARM, or run the following Azure CLI command from the management machine:
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 10/18/2023 Last updated : 11/01/2023
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 10/30/2023 Last updated : 11/01/2023
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
This QuickStart shows you how to connect your SCVMM management server to Azure A
| **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud with minimum free capacity of 16 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP is not supported. |
+| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP is not supported. |
| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. | | **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
azure-cache-for-redis Cache Tutorial Vector Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-vector-similarity.md
In this tutorial, you learn how to:
1. Install the required Python packages: ```python
- pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken redis langchain
+ pip install "openai==0.28.1" num2words matplotlib plotly scipy scikit-learn pandas tiktoken redis langchain
``` ## Download the dataset
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
| Imperative bindings<sup>1</sup> | Not supported - instead [work with SDK types directly](./dotnet-isolated-process-guide.md#register-azure-clients) | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | | Dependency injection | [Supported](dotnet-isolated-process-guide.md#dependency-injection) (improved model consistent with .NET ecosystem) | [Supported](functions-dotnet-dependency-injection.md) | | Middleware | [Supported](dotnet-isolated-process-guide.md#middleware) | Not supported |
-| Logging | [ILogger&lt;T&gt;]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) |
+| Logging | [`ILogger<T>`]/[`ILogger`] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| [`ILogger`] passed to the function<br/>[`ILogger<T>`] via [dependency injection](functions-dotnet-dependency-injection.md) |
| Application Insights dependencies | [Supported](./dotnet-isolated-process-guide.md#application-insights) | [Supported](functions-monitoring.md#dependencies) | | Cancellation tokens | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | | Cold start times<sup>2</sup> | [Configurable optimizations](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized |
Use the following table to compare feature and functional differences between th
[migrate]: ./migrate-dotnet-to-isolated-model.md
-[ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger
-[ILogger&lt;T&gt;]: /dotnet/api/microsoft.extensions.logging.logger-1
+[`ILogger`]: /dotnet/api/microsoft.extensions.logging.ilogger
+[`ILogger<T>`]: /dotnet/api/microsoft.extensions.logging.logger-1
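
As a minimal, hedged illustration of the logging row in the table above, the sketch below shows both isolated-model options side by side: an `ILogger<T>` injected through the constructor and an `ILogger` resolved from `FunctionContext`. The class name, trigger, and category name are hypothetical placeholders, not part of the original article.

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace Company.Function
{
    public class LoggingExample
    {
        private readonly ILogger<LoggingExample> _logger;

        // Option 1: ILogger<T> supplied through dependency injection.
        public LoggingExample(ILogger<LoggingExample> logger)
        {
            _logger = logger;
        }

        [Function("LoggingExample")]
        public HttpResponseData Run(
            [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req,
            FunctionContext context)
        {
            _logger.LogInformation("Logged through the injected ILogger<T>.");

            // Option 2: ILogger resolved from the FunctionContext.
            ILogger contextLogger = context.GetLogger("LoggingExample");
            contextLogger.LogInformation("Logged through the FunctionContext logger.");

            return req.CreateResponse(HttpStatusCode.OK);
        }
    }
}
```

In the in-process model, by contrast, the `ILogger` arrives as a method parameter, as the right-hand column of the table notes.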
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
In most cases, migrating requires you to add the following program.cs file to yo
# [.NET 6 (isolated)](#tab/net6-isolated)
+```csharp
+using Microsoft.Extensions.Hosting;
+
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+
+host.Run();
+```
# [.NET 6 (in-process)](#tab/net6-in-proc)
A program.cs file isn't required when running in-process.
# [.NET 7](#tab/net7)
+```csharp
+using Microsoft.Extensions.Hosting;
+
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+
+host.Run();
+```
# [.NET Framework 4.8](#tab/netframework48)
+```csharp
+using Microsoft.Extensions.Hosting;
+using Microsoft.Azure.Functions.Worker;
+
+namespace Company.FunctionApp
+{
+ internal class Program
+ {
+ static void Main(string[] args)
+ {
+ FunctionsDebugger.Enable();
+
+ var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+ host.Run();
+ }
+ }
+}
+```
# [.NET 8 Preview (isolated)](#tab/net8)
+```csharp
+using Microsoft.Extensions.Hosting;
+
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+
+host.Run();
+```
In version 4.x, the HTTP trigger template looks like the following example:
# [.NET 6 (isolated)](#tab/net6-isolated)
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+
+namespace Company.Function
+{
+ public class HttpTriggerCSharp
+ {
+ private readonly ILogger<HttpTriggerCSharp> _logger;
+
+ public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
+ {
+ _logger = logger;
+ }
+
+ [Function("HttpTriggerCSharp")]
+ public IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ }
+ }
+}
+```
# [.NET 6 (in-process)](#tab/net6-in-proc)
In version 4.x, the HTTP trigger template looks like the following example:
# [.NET 7](#tab/net7)
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+
+namespace Company.Function
+{
+ public class HttpTriggerCSharp
+ {
+ private readonly ILogger<HttpTriggerCSharp> _logger;
+
+ public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
+ {
+ _logger = logger;
+ }
+
+ [Function("HttpTriggerCSharp")]
+ public IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ }
+ }
+}
+```
# [.NET Framework 4.8](#tab/netframework48)
+```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Extensions.Logging;
+using System.Net;
+
+namespace Company.Function
+{
+ public class HttpTriggerCSharp
+ {
+ private readonly ILogger<HttpTriggerCSharp> _logger;
+
+ public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
+ {
+ _logger = logger;
+ }
+
+ [Function("HttpTriggerCSharp")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
+
+ response.WriteString($"Welcome to Azure Functions, {req.Query["name"]}!");
+
+ return response;
+ }
+ }
+}
+```
# [.NET 8 Preview (isolated)](#tab/net8)
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+
+namespace Company.Function
+{
+ public class HttpTriggerCSharp
+ {
+ private readonly ILogger<HttpTriggerCSharp> _logger;
+
+ public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
+ {
+ _logger = logger;
+ }
+
+ [Function("HttpTriggerCSharp")]
+ public IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ }
+ }
+}
+```
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
When migrating to run in an isolated worker process, you must add the following
# [.NET 6 (isolated)](#tab/net6-isolated)
+```csharp
+using Microsoft.Extensions.Hosting;
+
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+
+host.Run();
+```
# [.NET 6 (in-process)](#tab/net6-in-proc)
A program.cs file isn't required when running in-process.
# [.NET 7](#tab/net7)
+```csharp
+using Microsoft.Extensions.Hosting;
+
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+
+host.Run();
+```
# [.NET Framework 4.8](#tab/netframework48)
+```csharp
+using Microsoft.Extensions.Hosting;
+using Microsoft.Azure.Functions.Worker;
+
+namespace Company.FunctionApp
+{
+ internal class Program
+ {
+ static void Main(string[] args)
+ {
+ FunctionsDebugger.Enable();
+
+ var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+ host.Run();
+ }
+ }
+}
+```
# [.NET 8 Preview (isolated)](#tab/net8)
+```csharp
+using Microsoft.Extensions.Hosting;
+
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+
+host.Run();
+```
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
The following steps show you how to create and display the Map control in a web
2. Save the **MapTruckRoute.html** file and refresh the page in your browser. If you zoom into any city, like Los Angeles, the streets display with current traffic flow data.
- :::image type="content" source="./media/tutorial-prioritized-routes/traffic-map.png" alt-text="A screenshot that shows a map of Los Angeles, with the streets displaying traffic flow data.":::
+ :::image type="content" source="./media/tutorial-prioritized-routes/traffic-map.png" lightbox="./media/tutorial-prioritized-routes/traffic-map.png" alt-text="A screenshot that shows a map of Los Angeles, with the streets displaying traffic flow data.":::
<a id="queryroutes"></a>
In this tutorial, two routes are calculated on the map. The first route is calcu
3. Save **TruckRoute.html** and refresh your browser. The map is now centered over Seattle. The blue teardrop pin marks the start point. The round blue pin marks the end point.
- :::image type="content" source="./media/tutorial-prioritized-routes/pins-map.png" alt-text="A screenshot that shows a map with a route containing a blue teardrop pin marking the start point and a blue round pin marking the end point.":::
+ :::image type="content" source="./media/tutorial-prioritized-routes/pins-map.png" lightbox="./media/tutorial-prioritized-routes/pins-map.png" alt-text="A screenshot that shows a map with a route containing a blue teardrop pin marking the start point and a blue round pin marking the end point.":::
<a id="multipleroutes"></a>
This section shows you how to use the Azure Maps Route service to get directions
4. Save the **TruckRoute.html** file and refresh your web browser. The map should now display both the truck and car routes.
- :::image type="content" source="./media/tutorial-prioritized-routes/prioritized-routes.png" alt-text="A screenshot that displays both a private as well as a commercial vehicle route on a map using the Azure Route Service.":::
+ :::image type="content" source="./media/tutorial-prioritized-routes/prioritized-routes.png" lightbox="./media/tutorial-prioritized-routes/prioritized-routes.png" alt-text="A screenshot that displays both a private as well as a commercial vehicle route on a map using the Azure Route Service.":::
* The truck route is displayed using a thick blue line and the car route is displayed using a thin purple line. * The car route goes across Lake Washington via I-90, passing through tunnels beneath residential areas. Because the tunnels are in residential areas, hazardous waste cargo is restricted. The truck route, which specifies a `USHazmatClass2` cargo type, is directed to use a different route that doesn't have this restriction.
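
The tutorial builds these requests with the Azure Maps Web SDK, but the same parameters can also be sent to the Route Directions REST API directly. The following sketch is only an illustration: the coordinates, the `AZURE_MAPS_KEY` environment variable, and the use of `HttpClient` are assumptions for this example, while `travelMode=truck` and `vehicleLoadType=USHazmatClass2` are the options the tutorial's truck route relies on.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TruckRouteQuery
{
    static async Task Main()
    {
        string key = Environment.GetEnvironmentVariable("AZURE_MAPS_KEY") ?? "<your-key>";

        // Hypothetical Seattle-area start and end points, as lat,lon pairs separated by ':'.
        string query = "47.606,-122.332:47.579,-122.170";

        string url = "https://atlas.microsoft.com/route/directions/json" +
                     $"?api-version=1.0&subscription-key={key}&query={query}" +
                     "&travelMode=truck&vehicleLoadType=USHazmatClass2";

        using var http = new HttpClient();
        string json = await http.GetStringAsync(url);
        Console.WriteLine(json); // route legs, travel times, and guidance as JSON
    }
}
```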
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Oracle Linux 7 | ✓ | ✓ | ✓ | | Oracle Linux 6.4+ | | | ✓ | | Red Hat Enterprise Linux Server 9+ | ✓ | | |
-| Red Hat Enterprise Linux Server 8.6+ | ✓<sup>3</sup> | ✓<sup>2</sup> | ✓<sup>2</sup> |
-| Red Hat Enterprise Linux Server 8.0-8.5 | ✓ | ✓<sup>2</sup> | ✓<sup>2</sup> |
+| Red Hat Enterprise Linux Server 8.6+ | ✓<sup>3</sup> | ✓ | ✓<sup>2</sup> |
+| Red Hat Enterprise Linux Server 8.0-8.5 | ✓ | ✓ | ✓<sup>2</sup> |
| Red Hat Enterprise Linux Server 7 | ✓ | ✓ | ✓ | | Red Hat Enterprise Linux Server 6.7+ | | | ✓ | | Rocky Linux 8 | ✓ | ✓ | |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend that you always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.20.0|1.28.11|
+| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11|
| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | | August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li> Coming soon</li></ul>|1.19.0| Coming Soon | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None|
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).
-1. <a name="custom-props"></a>(Optional) In the **Advanced options** at **Details tab** the **Custom properties** section is added, if you've configured action groups for this alert rule. In this ection you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function or logic app actions.
-
- The custom properties are specified as key:value pairs, using either static text, a dynamic value extracted from the alert payload, or a combination of both.
-
- The format for extracting a dynamic value from the alert payload is: `${<path to schema field>}`. For example: ${data.essentials.monitorCondition}.
-
- Use the [common alert schema](alerts-common-schema.md) format to specify the field in the payload, whether or not the action groups configured for the alert rule use the common schema.
-
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule.":::
-
- In the following examples, values in the **custom properties** are used to utilize data from a payload that uses the common alert schema:
-
- **Example 1**
-
- This example creates an "Additional Details" tag with data regarding the "window start time" and "window end time".
-
- - **Name:** "Additional Details"
- - **Value:** "Evaluation windowStartTime: \${data.alertContext.condition.windowStartTime}. windowEndTime: \${data.alertContext.condition.windowEndTime}"
- - **Result:** "AdditionalDetails:Evaluation windowStartTime: 2023-04-04T14:39:24.492Z. windowEndTime: 2023-04-04T14:44:24.492Z"
--
- **Example 2**
- This example adds the data regarding the reason of resolving or firing the alert.
-
- - **Name:** "Alert \${data.essentials.monitorCondition} reason"
- - **Value:** "\${data.alertContext.condition.allOf[0].metricName} \${data.alertContext.condition.allOf[0].operator} \${data.alertContext.condition.allOf[0].threshold} \${data.essentials.monitorCondition}. The value is \${data.alertContext.condition.allOf[0].metricValue}"
- - **Result:** Example results could be something like:
- - "Alert Resolved reason: Percentage CPU GreaterThan5 Resolved. The value is 3.585"
- - "Alert Fired reason": "Percentage CPU GreaterThan5 Fired. The value is 10.585"
-
- > [!NOTE]
- > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for log alerts.
- ### Set the alert rule details 1. On the **Details** tab, define the **Project details**.
To edit an existing alert rule:
1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
+1. <a name="custom-props"></a>(Optional) In the **Custom properties** section, if you've configured action groups for this alert rule, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function, or logic app actions.
+
+ The custom properties are specified as key:value pairs, using either static text, a dynamic value extracted from the alert payload, or a combination of both.
+
+    The format for extracting a dynamic value from the alert payload is: `${<path to schema field>}`. For example: `${data.essentials.monitorCondition}`.
+
+ Use the [common alert schema](alerts-common-schema.md) format to specify the field in the payload, whether or not the action groups configured for the alert rule use the common schema.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule.":::
+
+    In the following examples, the **Custom properties** values extract data from a payload that follows the common alert schema:
+
+ **Example 1**
+
+ This example creates an "Additional Details" tag with data regarding the "window start time" and "window end time".
+
+ - **Name:** "Additional Details"
+ - **Value:** "Evaluation windowStartTime: \${data.alertContext.condition.windowStartTime}. windowEndTime: \${data.alertContext.condition.windowEndTime}"
+ - **Result:** "AdditionalDetails:Evaluation windowStartTime: 2023-04-04T14:39:24.492Z. windowEndTime: 2023-04-04T14:44:24.492Z"
++
+ **Example 2**
+    This example adds data about the reason the alert was fired or resolved.
+
+ - **Name:** "Alert \${data.essentials.monitorCondition} reason"
+ - **Value:** "\${data.alertContext.condition.allOf[0].metricName} \${data.alertContext.condition.allOf[0].operator} \${data.alertContext.condition.allOf[0].threshold} \${data.essentials.monitorCondition}. The value is \${data.alertContext.condition.allOf[0].metricValue}"
+ - **Result:** Example results could be something like:
+ - "Alert Resolved reason: Percentage CPU GreaterThan5 Resolved. The value is 3.585"
+     - "Alert Fired reason: Percentage CPU GreaterThan5 Fired. The value is 10.585"
+
+ > [!NOTE]
+ > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for log alerts.
+
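
To make the `${<path to schema field>}` format above more concrete, here's a small, purely illustrative C# sketch of how such a dotted path resolves against a common-alert-schema payload. The substitution itself is performed by Azure Monitor when it builds the notification; this code only demonstrates the path convention, uses a made-up payload fragment, and doesn't handle array indexers such as `allOf[0]`.

```csharp
using System;
using System.Text.Json;

// Illustrative only: resolve a "data.essentials.monitorCondition" style path
// against a parsed JSON payload, one property segment at a time.
static string? Resolve(JsonElement root, string path)
{
    JsonElement current = root;
    foreach (var segment in path.Split('.'))
    {
        if (!current.TryGetProperty(segment, out current))
            return null; // path not present in this payload
    }
    return current.ToString();
}

// Hypothetical fragment of a common-alert-schema payload.
string payload = "{ \"data\": { \"essentials\": { \"monitorCondition\": \"Fired\" } } }";

using var doc = JsonDocument.Parse(payload);
Console.WriteLine(Resolve(doc.RootElement, "data.essentials.monitorCondition")); // prints: Fired
```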
+ ### Finish creating the alert rule 1. On the **Tags** tab, set any required tags on the alert rule resource.
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
If you can see a fired alert in the Azure portal, but did not receive the email
1. **Was the email suppressed by an [alert processing rule](../alerts/alerts-action-rules.md)**? Check by clicking on the fired alert in the portal, and look at the history tab for suppressed [action groups](./action-groups.md):-
- ![Screenshot of alert history tab with suppression from alert processing rule.](media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" lightbox="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" alt-text="Screenshot of alert history tab with suppression from alert processing rule." border="false":::
1. **Is the type of action "Email Azure Resource Manager Role"?**
- This action only looks at Azure Resource Manager role assignments that are at the subscription scope, and of type *User*. Make sure that you have assigned the role at the subscription level, and not at the resource level or resource group level.
+ This action only looks at Azure Resource Manager role assignments that are at the subscription scope, and of type *User*. Make sure that you assigned the role at the subscription level, and not at the resource level or resource group level.
1. **Are your email server and mailbox accepting external emails?**
If you can see a fired alert in the Azure portal, but did not receive the email
1. **Have you accidentally unsubscribed from the action group?**
- The alert emails provide a link to unsubscribe from the action group. To check if you have accidentally unsubscribed from this action group, either:
+ The alert emails provide a link to unsubscribe from the action group. To check if you accidentally unsubscribed from this action group, either:
1. Open the action group in the portal and check the Status column:
- ![Screenshot of action group status column.](media/alerts-troubleshoot/action-group-status.png)
+ :::image type="content" source="media/alerts-troubleshoot/action-group-status.png" lightbox="media/alerts-troubleshoot/action-group-status.png" alt-text="Screenshot of action group status column.":::
2. Search your email for the unsubscribe confirmation:
- ![Screenshot of email about being unsubscribed from alert action group.](media/alerts-troubleshoot/unsubscribe-action-group.png)
+ :::image type="content" source="media/alerts-troubleshoot/unsubscribe-action-group.png" lightbox="media/alerts-troubleshoot/unsubscribe-action-group.png" alt-text="Screenshot of email about being unsubscribed from alert action group.":::
- To subscribe again – either use the link in the unsubscribe confirmation email you have received, or remove the email address from the action group, and then add it back again.
+ To subscribe again, either use the link in the unsubscribe confirmation email you received, or remove the email address from the action group, and then add it back again.
1. **Have you been rated limited due to many emails going to a single email address?**
- Email is [rate limited](alerts-rate-limiting.md) to no more than 100 emails every hour to each email address. If you pass this threshold, additional email notifications are dropped. Check if you have received a message indicating that your email address has been temporarily rate limited:
-
- ![Screenshot of an email about being rate limited.](media/alerts-troubleshoot/email-paused.png)
+ Email is [rate limited](alerts-rate-limiting.md) to no more than 100 emails every hour to each email address. If you pass this threshold, additional email notifications are dropped. Check if you received a message indicating that your email address is temporarily rate limited:
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/email-paused.png" lightbox="media/alerts-troubleshoot/email-paused.png" alt-text="Screenshot of an email about being rate limited." border="false":::
- If you would like to receive high-volume of notifications without rate limiting, consider using a different action, such as webhook, logic app, Azure function, or automation runbooks, none of which are rate limited.
+ If you want to receive a high volume of notifications without rate limiting, consider using one of the following actions instead:
+
+ - Webhook
+ - Logic app
+ - Azure function
+ - Automation runbooks
+
+ None of these actions are rate limited.
## Did not receive expected SMS, voice call, or push notification
If you can see a fired alert in the portal, but did not receive the SMS, voice c
1. **Was the action suppressed by an [alert suppression rule](../alerts/alerts-action-rules.md)?** Check by clicking on the fired alert in the portal, and look at the history tab for suppressed [action groups](./action-groups.md): -
- ![Screenshot of alert history tab with suppression from alert processing rule.](media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" lightbox="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" alt-text="Screenshot of alert history tab with suppression from alert processing rule." border="false":::
If that was unintentional, you can modify, disable, or delete the alert processing rule.
If you can see a fired alert in the portal, but did not receive the SMS, voice c
1. **SMS / voice: have you been rate limited?**
- SMS and voice calls are rate limited to no more than one notification every five minutes per phone number. If you pass this threshold, the notifications will be dropped.
+ SMS and voice calls are rate limited to no more than one notification every five minutes per phone number. If you pass this threshold, the notifications are dropped.
- Voice call – check your call history and see if you had a different call from Azure in the preceding five minutes.
- - SMS - check your SMS history for a message indicating that your phone number has been rate limited.
+ - SMS - check your SMS history for a message indicating that your phone number is rate limited.
- If you would like to receive high-volume of notifications without rate limiting, consider using a different action, such as webhook, logic app, Azure function, or automation runbooks, none of which are rate limited.
+ If you want to receive a high volume of notifications without rate limiting, consider using one of the following actions instead:
+
+ - Webhook
+ - Logic app
+ - Azure function
+ - Automation runbooks
+
+ None of these actions are rate limited.
1. **SMS: Have you accidentally unsubscribed from the action group?**
- Open your SMS history and check if you have opted out of SMS delivery from this specific action group (using the DISABLE action_group_short_name reply) or from all action groups (using the STOP reply). To subscribe again, either send the relevant SMS command (ENABLE action_group_short_name or START), or remove the SMS action from the action group, and then add it back again. For more information, see [SMS alert behavior in action groups](alerts-sms-behavior.md).
+ Open your SMS history and check if you opted out of SMS delivery from this specific action group (using the DISABLE action_group_short_name reply) or from all action groups (using the STOP reply). To subscribe again, either send the relevant SMS command (ENABLE action_group_short_name or START), or remove the SMS action from the action group, and then add it back again. For more information, see [SMS alert behavior in action groups](alerts-sms-behavior.md).
1. **Have you accidentally blocked the notifications on your phone?**
If you can see a fired alert in the portal, but its configured action did not tr
1. **Was the action suppressed by an alert processing rule?** Check by clicking on the fired alert in the portal, and look at the history tab for suppressed [action groups](./action-groups.md):-
- ![Screenshot of alert history tab with suppression from alert processing rule.](media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" lightbox="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" alt-text="Screenshot of alert history tab with suppression from alert processing rule." border="false":::
If that was unintentional, you can modify, disable, or delete the alert processing rule.
If you can see a fired alert in the portal, but its configured action did not tr
1. **Does your webhook endpoint work correctly?**
- Verify the webhook endpoint you have configured is correct and the endpoint is working correctly. Check your webhook logs or instrument its code so you could investigate (for example, log the incoming payload).
+ Verify the webhook endpoint you configured is correct and the endpoint is working correctly. Check your webhook logs or instrument its code so you could investigate (for example, log the incoming payload).
1. **Are you calling Slack or Microsoft Teams?** Each of these endpoints expects a specific JSON format. Follow [these instructions](../alerts/action-groups-logic-app.md) to configure a logic app action instead.
If you can see a fired alert in the portal, but its configured action did not tr
## Action or notification happened more than once
-If you have received a notification for an alert (such as an email or an SMS) more than once, or the alert's action (such as webhook or Azure function) was triggered multiple times, follow these steps:
+If you received a notification for an alert (such as an email or an SMS) more than once, or the alert's action (such as webhook or Azure function) was triggered multiple times, follow these steps:
1. **Is it really the same alert?**
- In some cases, multiple similar alerts are fired at around the same time. So, it might just seem like the same alert triggered its actions multiple times. For example, an activity log alert rule might be configured to fire both when an event has started, and when it has finished (succeeded or failed), by not filtering on the event status field.
+ In some cases, multiple similar alerts are fired at around the same time. So, it might just seem like the same alert triggered its actions multiple times. For example, an activity log alert rule might be configured to fire both when an event starts and when it finishes (succeeded or failed), because it doesn't filter on the event status field.
To check if these actions or notifications came from different alerts, examine the alert details, such as its timestamp and either the alert ID or its correlation ID. Alternatively, check the list of fired alerts in the portal. If that is the case, you would need to adapt the alert rule logic or otherwise configure the alert source.
If you have received a notification for an alert (such as an email or an SMS) mo
When an alert is fired, each of its action groups is processed independently. So, if an action (such as an email address) appears in multiple triggered action groups, it would be called once per action group. To check which action groups were triggered, check the alert history tab. You would see there both action groups defined in the alert rule, and action groups added to the alert by alert processing rules: -
- ![Screenshot of multiple action groups in an alert.](media/alerts-troubleshoot/action-repeated-multi-action-groups.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" lightbox="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" alt-text="Screenshot of multiple action groups in an alert." border="false":::
## Action or notification has an unexpected content Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is very resilient and quick but occasionally suffers outages. In this case, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from our secondary provider may have a degraded email experience. The degradation results in slightly different email formatting and content. Since email templates differ in the two systems, maintaining parity across the two systems is not feasible. You know you're receiving a degraded experience if there's a note at the top of your email notification that says: "This is a degraded email experience. That means the formatting may be off or details could be missing. For more information on the degraded email experience, read here."
-If your notification does not contain this note and you have received the alert, but believe some of its fields are missing or incorrect, follow these steps:
+If your notification does not contain this note and you received the alert, but believe some of its fields are missing or incorrect, follow these steps:
1. **Did you pick the correct format for the action?** Each action type (email, webhook, etc.) has two formats: the default, legacy format, and the [newer common schema format](../alerts/alerts-common-schema.md). When you create an action group, you specify the format you want per action; different actions in the action group may have different formats. For example, for a webhook action:
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/webhook.png" lightbox="media/alerts-troubleshoot/webhook.png" alt-text="Screenshot of webhook action schema option." border="false":::
- ![Screenshot of webhook action schema option.](media/alerts-troubleshoot/webhook.png)
-
- Check if the format specified at the action level is what you expect. For example, you may have developed code that responds to alerts (webhook, function, logic app, etc.), expecting one format, but later in the action you or another person specified a different format.
+    Check if the format specified at the action level is what you expect. For example, you might have developed code that responds to alerts (webhook, function, logic app, etc.), expecting one format, but later in the action you or another person specified a different format (see the sketch after this list).
Also, check the payload format (JSON) for [activity log alerts](../alerts/activity-log-alerts-webhook.md), for [log search alerts](../alerts/alerts-log-webhook.md) (both Application Insights and log analytics), for [metric alerts](alerts-metric-near-real-time.md#payload-schema), for the [common alert schema](../alerts/alerts-common-schema.md), and for the deprecated [classic metric alerts](./alerts-webhooks.md). 1. **Activity log alerts: Is the information available in the activity log?**
- [Activity log alerts](./activity-log-alerts.md) are alerts that are based on events written to the Azure Activity Log, such as events about creating, updating, or deleting Azure resources, service health and resource health events, or findings from Azure Advisor and Azure Policy. If you have received an alert based on the activity log but some fields that you need are missing or incorrect, first check the events in the activity log itself. If the Azure resource did not write the fields you are looking for in its activity log event, those fields will not be included in the corresponding alert.
+ [Activity log alerts](./activity-log-alerts.md) are alerts that are based on events written to the Azure Activity Log, such as events about creating, updating, or deleting Azure resources, service health and resource health events, or findings from Azure Advisor and Azure Policy. If you received an alert based on the activity log but some fields that you need are missing or incorrect, first check the events in the activity log itself. If the Azure resource did not write the fields you are looking for in its activity log event, those fields aren't included in the corresponding alert.
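
The sketch below, referenced from the format check above, shows one hedged way for webhook or function code to tell whether a payload arrived in the common alert schema: common-schema payloads carry the fixed `schemaId` value `azureMonitorCommonAlertSchema`. The file name is a placeholder; in a real receiver the body would come from the incoming HTTP request.

```csharp
using System;
using System.IO;
using System.Text.Json;

// Hypothetical sample file standing in for the body of an incoming alert notification.
string body = File.ReadAllText("sample-alert.json");

using var doc = JsonDocument.Parse(body);
bool isCommonSchema =
    doc.RootElement.TryGetProperty("schemaId", out var schemaId) &&
    schemaId.GetString() == "azureMonitorCommonAlertSchema";

Console.WriteLine(isCommonSchema
    ? "Payload uses the common alert schema."
    : "Payload uses a legacy or action-specific schema.");
```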
## Alert processing rule is not working as expected
If you can see a fired alert in the portal, but a related alert processing rule
Check the alert processing rule status field to verify that the related alert processing rule is enabled. By default, the portal rule list only shows rules that are enabled, but you can change the filter to show all rules.
- :::image type="content" source="media/alerts-troubleshoot/alerts-troubleshoot-alert-processing-rules-status.png" alt-text="Screenshot of alert processing rule list highlighting the status field and status filter.":::
+ :::image type="content" source="media/alerts-troubleshoot/alerts-troubleshoot-alert-processing-rules-status.png" lightbox="media/alerts-troubleshoot/alerts-troubleshoot-alert-processing-rules-status.png" alt-text="Screenshot of alert processing rule list highlighting the status field and status filter.":::
If it is not enabled, you can enable the alert processing rule by selecting it and clicking Enable.
If you can see a fired alert in the portal, but a related alert processing rule
1. **Did the alert processing rule act on your alert?**
- Check if the alert processing rule has processed your alert by clicking on the fired alert in the portal, and look at the history tab.
+ Check if the alert processing rule processed your alert by clicking on the fired alert in the portal, and look at the history tab.
Here is an example of alert processing rule suppressing all action groups:
-
- ![Screenshot of alert history tab with suppression from alert processing rule.](media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" lightbox="media/alerts-troubleshoot/history-tab-alert-processing-rule-suppression.png" alt-text="Screenshot of alert history tab with suppression from alert processing rule." border="false":::
Here is an example of an alert processing rule adding another action group:-
- ![Screenshot of action repeated in multiple action groups.](media/alerts-troubleshoot/action-repeated-multi-action-groups.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" lightbox="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" alt-text="Screenshot of action repeated in multiple action groups." border="false":::
1. **Does the alert processing rule scope and filter match the fired alert?**
If you can see a fired alert in the portal, but a related alert processing rule
## How to find the alert ID of a fired alert
-When opening a case about a specific fired alert (such as – if you did not receive its email notification), you will need to provide the alert ID.
+When opening a case about a specific fired alert (for example, if you did not receive its email notification), you need to provide the alert ID.
To locate it, follow these steps:
To locate it, follow these steps:
1. Click on the alert to open the alert details. 1. Scroll down in the alert fields of the first tab (the summary tab) until you locate it, and copy it. That field also includes a "Copy to clipboard" helper button you can use. -
- ![Screenshot of finding the alert ID field in the alert summary tab.](media/alerts-troubleshoot/get-alert-id.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/alerts-troubleshoot/get-alert-id.png" lightbox="media/alerts-troubleshoot/get-alert-id.png" alt-text="Screenshot of finding the alert ID field in the alert summary tab." border="false":::
## Problem creating, updating, or deleting alert processing rules in the Azure portal
If you received an error while trying to create, update or delete an [alert proc
## Next steps - If using a log alert, also see [Troubleshooting Log Alerts](./alerts-troubleshoot-log.md).-- Go back to the [Azure portal](https://portal.azure.com) to check if you've solved your issue with guidance above.
+- Go back to the [Azure portal](https://portal.azure.com) to check if you solved your issue with guidance in this article.
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
Before you create a connection, install ITSMC.
1. In the Azure portal, select **Create a resource**.
- ![Screenshot that shows the menu item for creating a resource.](media/itsmc-overview/azure-add-new-resource.png)
+ :::image type="content" source="media/itsmc-overview/azure-add-new-resource.png" lightbox="media/itsmc-overview/azure-add-new-resource.png" alt-text="Screenshot that shows the menu item for creating a resource.":::
1. Search for **IT Service Management Connector** in Azure Marketplace. Then select **Create**.
- ![Screenshot that shows the Create button in Azure Marketplace.](media/itsmc-overview/add-itsmc-solution.png)
+ :::image type="content" source="media/itsmc-overview/add-itsmc-solution.png" lightbox="media/itsmc-overview/add-itsmc-solution.png" alt-text="Screenshot that shows the Create button in Azure Marketplace.":::
1. In the **Azure Log Analytics Workspace** section, select the Log Analytics workspace where you want to install ITSMC. > [!NOTE]
Before you create a connection, install ITSMC.
1. In the **Azure Log Analytics Workspace** section, select the resource group where you want to create the ITSMC resource.
- ![Screenshot that shows the Azure Log Analytics Workspace section.](media/itsmc-overview/itsmc-solution-workspace.png)
+ :::image type="content" source="media/itsmc-overview/itsmc-solution-workspace.png" lightbox="media/itsmc-overview/itsmc-solution-workspace.png" alt-text="Screenshot that shows the Azure Log Analytics Workspace section.":::
> [!NOTE] > As part of the ongoing transition from Microsoft Operations Management Suite to Azure Monitor, Operations Management workspaces are now called *Log Analytics workspaces*.
After you've installed ITSMC, and prepped your ITSM tool, create an ITSM connect
1. [Configure ServiceNow](./itsmc-connections-servicenow.md) to allow the connection from ITSMC. 1. In **All resources**, look for **ServiceDesk(*your workspace name*)**.
- ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
+ :::image type="content" source="media/itsmc-definition/create-new-connection-from-resource.png" lightbox="media/itsmc-definition/create-new-connection-from-resource.png" alt-text="Screenshot that shows recent resources in the Azure portal.":::
1. Under **Workspace Data Sources** on the left pane, select **ITSM Connections**.
- ![Screenshot that shows the ITSM Connections menu item.](media/itsmc-overview/add-new-itsm-connection.png)
+ :::image type="content" source="media/itsmc-overview/add-new-itsm-connection.png" lightbox="media/itsmc-overview/add-new-itsm-connection.png" alt-text="Screenshot that shows the ITSM Connections menu item.":::
1. Select **Add Connection**. 1. Specify the ServiceNow connection settings. 1. By default, ITSMC refreshes the connection's configuration data once every 24 hours. To refresh your connection's data instantly to reflect any edits or template updates that you make, select the **Sync** button on your connection's pane.
- ![Screenshot that shows the Sync button on the connection's pane.](media/itsmc-overview/itsmc-connections-refresh.png)
+ :::image type="content" source="media/itsmc-overview/itsmc-connections-refresh.png" lightbox="media/itsmc-overview/itsmc-connections-refresh.png" alt-text="Screenshot that shows the Sync button on the connection's pane.":::
## Create ITSM work items from Azure alerts
To create an action group:
1. In the Azure portal, select **Monitor** > **Alerts**. 1. On the menu at the top of the screen, select **Manage actions**.
- ![Screenshot that shows selecting Action groups.](media/itsmc-overview/action-groups-selection-big.png)
+ :::image type="content" source="media/itsmc-overview/action-groups-selection-big.png" lightbox="media/itsmc-overview/action-groups-selection-big.png" alt-text="Screenshot that shows selecting Action groups.":::
1. On the **Action groups** screen, select **+Create**. The **Create action group** screen appears. 1. Select the **Subscription** and **Resource group** where you want to create your action group. Enter values in **Action group name** and **Display name** for your action group. Then select **Next: Notifications**.
- ![Screenshot that shows the Create an action group screen.](media/itsmc-overview/action-groups-details.png)
+ :::image type="content" source="media/itsmc-overview/action-groups-details.png" lightbox="media/itsmc-overview/action-groups-details.png" alt-text="Screenshot that shows the Create an action group screen.":::
1. On the **Notifications** tab, select **Next: Actions**. 1. On the **Actions** tab, select **ITSM** in the **Action type** list. For **Name**, provide a name for the action. Then select the pen button that represents **Edit details**.
- ![Screenshot that shows selections for creating an action group.](media/itsmc-definition/action-group-pen.png)
+ :::image type="content" source="media/itsmc-definition/action-group-pen.png" lightbox="media/itsmc-definition/action-group-pen.png" alt-text="Screenshot that shows selections for creating an action group.":::
1. In the **Subscription** list, select the subscription that contains your Log Analytics workspace. In the **Connection** list, select your ITSM Connector name. It will be followed by your workspace name. An example is *MyITSMConnector(MyWorkspace)*. 1. In the **Work Item** type field, select the type of work item.
To create an action group:
1. In the last section of the interface for creating an ITSM action group, if the alert is a log alert, you can define how many work items will be created for each alert. For all other alert types, one work item is created per alert. - If the work item type is **Incident**:-
- :::image type="content" source="media/itsmc-definition/itsm-action-incident.png" alt-text="Screenshot that shows the ITSM Ticket area with an incident work item type.":::
+
+ :::image type="content" source="media/itsmc-definition/itsm-action-incident.png" lightbox="media/itsmc-definition/itsm-action-incident.png" alt-text="Screenshot that shows the ITSM Ticket area with an incident work item type.":::
- If the work item type is **Event**: If you select **Create a work item for each row in the search results**, every row in the search results creates a new work item. Because several alerts occur for the same affected configuration items, there is also more than one work item. For example, an alert that has three configuration items creates three work items. An alert that has one configuration item creates one work item. If you select the **Create a work item for configuration item in the search results**, ITSMC creates a single work item for each alert rule and adds all affected configuration items to that work item. A new work item is created if the previous one is closed. This means that some of the fired alerts won't generate new work items in the ITSM tool. For example, an alert that has three configuration items creates one work item. If an alert has one configuration item, that configuration item is attached to the list of affected configuration items in the created work item. An alert for a different alert rule that has one configuration item creates one work item.-
- :::image type="content" source="media/itsmc-definition/itsm-action-event.png" alt-text="Screenshot that shoes the ITSM Ticket section with an even work item type.":::
+
+   :::image type="content" source="media/itsmc-definition/itsm-action-event.png" lightbox="media/itsmc-definition/itsm-action-event.png" alt-text="Screenshot that shows the ITSM Ticket section with an event work item type.":::
- If the work item type is **Alert**: If you select **Create a work item for each row in the search results**, every row in the search results creates a new work item. Because several alerts occur for the same affected configuration items, there is also more than one work item. For example, an alert that has three configuration items creates three work items. An alert that has one configuration item creates one work item. If you do not select **Create a work item for each row in the search results**, ITSMC creates a single work item for each alert rule and adds all affected configuration items to that work item. A new work item is created if the previous one is closed. This means that some of the fired alerts won't generate new work items in the ITSM tool. For example, an alert that has three configuration items creates one work item. If an alert has one configuration item, that configuration item is attached to the list of affected configuration items in the created work item. An alert for a different alert rule that has one configuration item creates one work item. -
- :::image type="content" source="media/itsmc-definition/itsm-action-alert.png" alt-text="Screenshot that shows the ITSM Ticket area with an alert work item type.":::
+
+ :::image type="content" source="media/itsmc-definition/itsm-action-alert.png" lightbox="media/itsmc-definition/itsm-action-alert.png" alt-text="Screenshot that shows the ITSM Ticket area with an alert work item type.":::
1. You can configure predefined fields to contain constant values as a part of the payload. Based on the work item type, three options can be used as a part of the payload: * **None**: Use a regular payload to ServiceNow without any extra predefined fields and values.
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Review TrackAvailability() test results description: This article explains how to review data logged by TrackAvailability() tests Previously updated : 09/12/2023 Last updated : 11/02/2023 # Review TrackAvailability() test results
You can use Log Analytics to view your availability results, dependencies, and m
## Basic code sample
-The following example demonstrates a web availability test that requires a simple URL ping using the `getStringAsync()` method.
+> [!NOTE]
+> This example is designed solely to show you the mechanics of how the `TrackAvailability()` API call works within an Azure function. It doesn't show you how to write the underlying HTTP test code or business logic that's required to turn this example into a fully functional availability test. By default, if you walk through this example, you'll be creating a basic availability HTTP GET test.
+>
+> To follow these instructions, you must use the [dedicated plan](../../azure-functions/dedicated-plan.md) to allow editing code in App Service Editor.
+### Create a timer trigger function
+
+1. Create an Azure Functions resource.
+ - If you already have an Application Insights resource:
+
+ - By default, Azure Functions creates an Application Insights resource. But if you want to use a resource you created previously, you must specify that during creation.
+ - Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
+
+ On the **Monitoring** tab, select the **Application Insights** dropdown box and then enter or select the name of your resource.
+
+ :::image type="content" source="media/availability-azure-functions/app-insights-resource.png" alt-text="Screenshot that shows selecting your existing Application Insights resource on the Monitoring tab.":::
+
+ - If you don't have an Application Insights resource created yet for your timer-triggered function:
+ - By default, when you're creating your Azure Functions application, it creates an Application Insights resource for you. Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app).
+
+ > [!NOTE]
+ > You can host your functions on a Consumption, Premium, or App Service plan. If you're testing behind a virtual network or testing nonpublic endpoints, you'll need to use the Premium plan in place of the Consumption plan. Select your plan on the **Hosting** tab. Ensure the latest .NET version is selected when you create the function app.
+1. Create a timer trigger function.
+ 1. In your function app, select the **Functions** tab.
+ 1. Select **Add**. On the **Add function** pane, select the following configurations:
+ 1. **Development environment**: **Develop in portal**
+ 1. **Select a template**: **Timer trigger**
+ 1. Select **Add** to create the timer trigger function.
+
+ :::image type="content" source="media/availability-azure-functions/add-function.png" alt-text="Screenshot that shows how to add a timer trigger function to your function app." lightbox="media/availability-azure-functions/add-function.png":::
+
+### Add and edit code in the App Service Editor
+
+Go to your deployed function app, and under **Development Tools**, select the **App Service Editor** tab.
+
+To create a new file, right-click under your timer trigger function (for example, **TimerTrigger1**) and select **New File**. Then enter the name of the file and select **Enter**.
+
+1. Create a new file called **function.proj** and paste the following code:
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>netstandard2.0</TargetFramework>
+ </PropertyGroup>
+ <ItemGroup>
+        <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you're using the latest version -->
+ </ItemGroup>
+ </Project>
+ ```
+
+ :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot that shows function.proj in the App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
+
+1. Create a new file called **runAvailabilityTest.csx** and paste the following code:
+
+ ```csharp
+ using System.Net.Http;
+
+ public async static Task RunAvailabilityTestAsync(ILogger log)
+ {
+ using (var httpClient = new HttpClient())
+ {
+ // TODO: Replace with your business logic
+ await httpClient.GetStringAsync("https://www.bing.com/");
+ }
+ }
+ ```
+
+1. Define the `REGION_NAME` environment variable as a valid Azure availability location.
+
+ Run the following command in the [Azure CLI](https://learn.microsoft.com/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions.
+
+ ```azurecli
+ az account list-locations -o table
+ ```
+
+1. Copy the following code into the **run.csx** file. (You replace the pre-existing code.)
+
+ ```csharp
+ #load "runAvailabilityTest.csx"
+
+ using System;
+
+ using System.Diagnostics;
+
+ using Microsoft.ApplicationInsights;
+
+ using Microsoft.ApplicationInsights.Channel;
+
+ using Microsoft.ApplicationInsights.DataContracts;
+
+ using Microsoft.ApplicationInsights.Extensibility;
+
+ private static TelemetryClient telemetryClient;
+
+ // =============================================================
+
+ // ****************** DO NOT MODIFY THIS FILE ******************
+
+ // Business logic must be implemented in RunAvailabilityTestAsync function in runAvailabilityTest.csx
+
+ // If this file does not exist, please add it first
+
+ // =============================================================
+
+ public async static Task Run(TimerInfo myTimer, ILogger log, ExecutionContext executionContext)
+
+ {
+ if (telemetryClient == null)
+ {
+ // Initializing a telemetry configuration for Application Insights based on connection string
+
+ var telemetryConfiguration = new TelemetryConfiguration();
+ telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
+ telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
+ telemetryClient = new TelemetryClient(telemetryConfiguration);
+ }
+
+ string testName = executionContext.FunctionName;
+ string location = Environment.GetEnvironmentVariable("REGION_NAME");
+ var availability = new AvailabilityTelemetry
+ {
+ Name = testName,
+
+ RunLocation = location,
+
+ Success = false,
+ };
+
+ availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
+ availability.Context.Operation.Id = Activity.Current.RootId;
+ var stopwatch = new Stopwatch();
+ stopwatch.Start();
+
+ try
+ {
+ using (var activity = new Activity("AvailabilityContext"))
+ {
+ activity.Start();
+ availability.Id = Activity.Current.SpanId.ToString();
+ // Run business logic
+ await RunAvailabilityTestAsync(log);
+ }
+ availability.Success = true;
+ }
-```csharp
-using System.Net.Http;
+ catch (Exception ex)
+ {
+ availability.Message = ex.Message;
+ throw;
+ }
-public async static Task RunAvailabilityTestAsync(ILogger log)
-{
- using (var httpClient = new HttpClient())
- {
- // TODO: Replace with your business logic
- await httpClient.GetStringAsync("https://www.bing.com/");
- }
-}
-```
+ finally
+ {
+ stopwatch.Stop();
+ availability.Duration = stopwatch.Elapsed;
+ availability.Timestamp = DateTimeOffset.UtcNow;
+ telemetryClient.TrackAvailability(availability);
+ telemetryClient.Flush();
+ }
+ }
-For advanced scenarios where the business logic must be adjusted to access the URL, such as obtaining tokens, setting parameters, and other test cases, custom code is necessary.
+ ```
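The `RunAvailabilityTestAsync` body shown in the steps above is the place for custom checks. The following is only a hedged sketch of such a customization; the endpoint URL, token variable, and success criteria are assumptions and not part of this article:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public static async Task RunAvailabilityTestAsync(ILogger log)
{
    using (var httpClient = new HttpClient())
    {
        // Assumed bearer token sourced from an app setting; replace with your own auth flow.
        httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("TEST_API_TOKEN"));

        HttpResponseMessage response = await httpClient.GetAsync("https://contoso.example.com/health"); // assumed endpoint

        // Any non-success status code fails the availability test (the caller records the exception).
        response.EnsureSuccessStatusCode();
        log.LogInformation("Availability check returned {StatusCode}", response.StatusCode);
    }
}
```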
## Next steps
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
Title: Create a new Azure Monitor Application Insights workspace-based resource description: Learn about the steps required to enable the new Azure Monitor Application Insights workspace-based resources. Previously updated : 04/12/2023 Last updated : 11/02/2023
The legacy continuous export functionality isn't supported for workspace-based r
When you're developing the next version of a web application, you don't want to mix up the [Application Insights](../../azure-monitor/app/app-insights-overview.md) telemetry from the new version and the already released version.
-To avoid confusion, send the telemetry from different development stages to separate Application Insights resources with separate instrumentation keys.
+To avoid confusion, send the telemetry from different development stages to separate Application Insights resources with separate connection strings.
-To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to [set the instrumentation key dynamically in code](#dynamic-instrumentation-key) instead of in the configuration file.
+If your system is an instance of Azure Cloud Services, there's [another method of setting separate connection strings](../../azure-monitor/app/azure-web-apps-net-core.md).
-If your system is an instance of Azure Cloud Services, there's [another method of setting separate instrumentation keys](../../azure-monitor/app/azure-web-apps-net-core.md).
+### About resources and connection strings
-
-### About resources and instrumentation keys
-
-When you set up Application Insights monitoring for your web app, you create an Application Insights resource in Azure. You open this resource in the Azure portal to see and analyze the telemetry collected from your app. The resource is identified by an instrumentation key. When you install the Application Insights package to monitor your app, you configure it with the instrumentation key so that it knows where to send the telemetry.
+When you set up Application Insights monitoring for your web app, you create an Application Insights resource in Azure. You open this resource in the Azure portal to see and analyze the telemetry collected from your app. The resource is identified by a connection string. When you install the Application Insights package to monitor your app, you configure it with the connection string so that it knows where to send the telemetry.
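As a rough sketch of that flow (not from this article; the environment variable name and trace message are illustrative assumptions), the .NET SDK can be pointed at a resource by setting the connection string from configuration instead of hardcoding it:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Read the connection string from configuration so each stage
// (dev, test, production) can send telemetry to its own resource.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString =
    Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING"); // assumed setting name

var telemetryClient = new TelemetryClient(configuration);
telemetryClient.TrackTrace("Telemetry routed by connection string"); // illustrative event
telemetryClient.Flush();
```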
Each Application Insights resource comes with metrics that are available out of the box. If separate components report to the same Application Insights resource, it might not make sense to alert on these metrics.
Be aware that:
- For Azure Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure Role Environment and sets these services. For all other types of apps, you'll likely need to set this explicitly.
- Live Metrics doesn't support splitting by role name.
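For the apps that need the role name set explicitly, one commonly used approach is a telemetry initializer. The following is a hedged sketch; the class name and role name value are assumptions, not part of this article:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps every telemetry item with a cloud role name so separate
// components reporting to one resource can be told apart.
public class CloudRoleNameInitializer : ITelemetryInitializer // assumed class name
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            telemetry.Context.Cloud.RoleName = "frontend"; // assumed component name
        }
    }
}
```

The initializer is then registered with the SDK's telemetry configuration (for example, in `ApplicationInsights.config` or during service configuration).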
-### <a name="dynamic-instrumentation-key"></a> Dynamic instrumentation key
-
-To make it easier to change the instrumentation key as the code moves between stages of production, reference the key dynamically in code instead of using a hardcoded or static value.
-
-Set the key in an initialization method, such as `global.asax.cs`, in an ASP.NET service:
-
-```csharp
-protected void Application_Start()
-{
- Microsoft.ApplicationInsights.Extensibility.
- TelemetryConfiguration.Active.InstrumentationKey =
- // - for example -
- WebConfigurationManager.AppSettings["ikey"];
- ...
-```
-
-In this example, the instrumentation keys for the different resources are placed in different versions of the web configuration file. Swapping the web configuration file, which you can do as part of the release script, swaps the target resource.
-
-#### Webpages
-The instrumentation key is also used in your app's webpages, in the [script that you got from the quickstart pane](../../azure-monitor/app/javascript.md). Instead of coding it literally into the script, generate it from the server state. For example, in an ASP.NET app:
-
-```javascript
-<script type="text/javascript">
-// Standard Application Insights webpage script:
-var appInsights = window.appInsights || function(config){ ...
-// Modify this part:
-}({instrumentationKey:
- // Generate from server property:
- "@Microsoft.ApplicationInsights.Extensibility.
- TelemetryConfiguration.Active.InstrumentationKey"
- }
- )
-//...
-```
-
### Create more Application Insights resources

To create an Application Insights resource, see [Create an Application Insights resource](#workspace-based-application-insights-resources).
To create an Applications Insights resource, see [Create an Application Insights
> [!WARNING] > You might incur additional network costs if your Application Insights resource is monitoring an Azure resource (i.e., telemetry producer) in a different region. Costs will vary depending on the region the telemetry is coming from and where it is going. Refer to [Azure bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/) for details.
-#### Get the instrumentation key
-The instrumentation key identifies the resource that you created.
+#### Get the connection string
+The connection string identifies the resource that you created.
-You need the instrumentation keys of all the resources to which your app will send data.
+You need the connection strings of all the resources to which your app will send data.
### Filter on the build number

When you publish a new version of your app, you'll want to be able to separate the telemetry from different builds.
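One way to make that filtering possible, sketched here under the assumption that you stamp the build label yourself (the initializer name and version string are illustrative), is to set the component version on every telemetry item and then filter on **Application Version**:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Tags each telemetry item with the build number so telemetry from
// different builds can be separated in searches and queries.
public class BuildVersionInitializer : ITelemetryInitializer // assumed class name
{
    private const string BuildNumber = "2023.11.02.1"; // assumed build label

    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Component.Version = BuildNumber;
    }
}
```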
This section provides answers to common questions.
Moving existing Application Insights resources from one region to another is *currently not supported*. Historical data that you've collected *can't be migrated* to a new region. The only partial workaround is to:
-1. Create a new Application Insights resource ([classic](/previous-versions/azure/azure-monitor/app/create-new-resource) or [workspace based](#workspace-based-application-insights-resources)) in the new region.
+1. Create a new workspace-based Application Insights resource in the new region.
1. Re-create all unique customizations specific to the original resource in the new resource.
-1. Modify your application to use the new region resource's [instrumentation key](/previous-versions/azure/azure-monitor/app/create-new-resource#copy-the-instrumentation-key) or [connection string](./sdk-connection-string.md).
+1. Modify your application to use the new region resource's [connection string](./sdk-connection-string.md).
1. Test to confirm that everything is continuing to work as expected with your new Application Insights resource.
1. At this point, you can either keep or delete the original Application Insights resource. If you delete a classic Application Insights resource, *all historical data is lost*. If the original resource was workspace based, its data remains in Log Analytics. Keeping the original Application Insights resource allows you to access its historical data until its data retention settings run out.
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
# Migrate from Application Insights instrumentation keys to connection strings
-This article walks you through migrating from [instrumentation keys](create-workspace-resource.md#about-resources-and-instrumentation-keys) to [connection strings](sdk-connection-string.md#overview).
+This article walks through migrating from instrumentation keys to [connection strings](sdk-connection-string.md#overview).
## Prerequisites
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
When your application experiences higher load, autoscale adds resources to handl
For example, scale out your application by adding VMs when the average CPU usage per VM is above 70%. Scale it back by removing VMs when CPU usage drops to 40%. When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. You can also perform other actions like sending email, notifications, or webhooks to trigger processes in other systems.
Autoscale scales in and out, or horizontally. Scaling horizontally is an increas
In contrast, scaling up and down, or vertical scaling, keeps the number of resource instances constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the VM during the scaling process. Autoscale does not support vertical scaling.
You can set up autoscale via:
The following diagram shows the autoscale architecture.
- ![Diagram that shows autoscale flow.](./media/autoscale-overview/Autoscale_Overview_v4.png)
+ :::image type="content" source="./media/autoscale-overview/Autoscale_Overview_v4.png" lightbox="./media/autoscale-overview/Autoscale_Overview_v4.png" alt-text="Diagram that shows autoscale flow.":::
### Resource metrics
Autoscale uses the following terminology and structure.
| Schedule | recurrence | Indicates when autoscale should put this scale condition or profile into effect. You can have multiple scale conditions, which allow you to handle different and overlapping requirements. For example, you can have different scale conditions for different times of day or days of the week. |
| Notify | notification | Defines the notifications to send when an autoscale event occurs. Autoscale can notify one or more email addresses or make a call by using one or more webhooks. You can configure multiple webhooks in the JSON but only one in the UI. |
-![Diagram that shows Azure autoscale setting, profile, and rule structure.](./media/autoscale-overview/azure-resource-manager-rule-structure-3.png)
The full list of configurable fields and descriptions is available in the [Autoscale REST API](/rest/api/monitor/autoscalesettings).
azure-monitor Autoscale Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-troubleshoot.md
An autoscale setting for a virtual machine scale set:
Let's review the metrics from the autoscale service. The following chart shows a **Percentage CPU** metric for a virtual machine scale set.-
-![Screenshot that shows a virtual machine scale set percentage CPU example.](media/autoscale-troubleshoot/autoscale-vmss-CPU-ex-full-1.png)
+<!-- convertborder later -->
The next chart shows the **Observed Metric Value** metric for an autoscale setting.-
-![Screenshot that shows another virtual machine scale set percentage CPU example.](media/autoscale-troubleshoot/autoscale-vmss-CPU-ex-full-2.png)
+<!-- convertborder later -->
The final chart shows the **Metric Threshold** and **Observed Capacity** metrics. The **Metric Threshold** metric at the top for the scale-out rule is 70. The **Observed Capacity** metric at the bottom shows the number of active instances, which is currently 3.-
-![Screenshot that shows Metric Threshold and Observed Capacity.](media/autoscale-troubleshoot/autoscale-metric-threshold-capacity-ex-full.png)
+<!-- convertborder later -->
> [!NOTE] > You can filter **Metric Threshold** by the metric trigger rule dimension scale-out (increase) rule to see the scale-out threshold and by the scale-in rule (decrease).
In this case, the autoscale engine's observed metric value is calculated as the
The following screenshots show two metric charts. The **Avg Outbound Flows** chart shows the value of the **Outbound Flows** metric. The actual value is 6.-
-![Screenshot that shows the Average Outbound Flows page with an example of a virtual machine scale set autoscale metrics chart.](media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-1.png)
+<!-- convertborder later -->
The following chart shows a few values: - The **Observed Metric Value** metric in the middle is 3 because there are 2 active instances, and 6 divided by 2 is 3. - The **Observed Capacity** metric at the bottom shows the instance count seen by an autoscale engine. - The **Metric Threshold** metric at the top is set to 10.-
- ![Screenshot that shows a virtual machine scale set autoscale metrics charts example.](media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-2.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-2.png" lightbox="media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-2.png" alt-text="Screenshot that shows a virtual machine scale set autoscale metrics charts example." border="false":::
If there are multiple scale action rules, you can use splitting or the **add filter** option in the metrics explorer chart to look at a metric by a specific source or rule. For more information on splitting a metric chart, see [Advanced features of metric charts - splitting](../essentials/metrics-charts.md#apply-splitting). ## Example 3: Understand autoscale events In the autoscale setting screen, go to the **Run history** tab to see the most recent scale actions. The tab also shows the change in **Observed Capacity** over time. To find more information about all autoscale actions, including operations such as update/delete autoscale settings, view the activity log and filter by autoscale operations.-
-![Screenshot that shows autoscale settings run history.](media/autoscale-troubleshoot/autoscale-setting-run-history-smaller.png)
+<!-- convertborder later -->
## Autoscale resource logs
As with any Azure Monitor supported service, you can use [diagnostic settings](.
- Your Log Analytics workspace for detailed analytics. - Azure Event Hubs and then to non-Azure tools. - Your Azure Storage account for archive.-
-![Screenshot that shows autoscale diagnostic settings.](media/autoscale-troubleshoot/diagnostic-settings.png)
+<!-- convertborder later -->
The preceding screenshot shows the Azure portal autoscale **Diagnostics settings** pane. There you can select the **Diagnostic/Resource Logs** tab and enable log collection and routing. You can also perform the same action by using the REST API, the Azure CLI, PowerShell, and Azure Resource Manager templates for diagnostic settings by choosing the resource type as **Microsoft.Insights/AutoscaleSettings**.
azure-monitor Best Practices Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-logs.md
description: Provides a template for a Well-Architected Framework (WAF) article
Previously updated : 03/29/2023 Last updated : 08/16/2023
This article provides architectural best practices for Azure Monitor Logs. The g
## Reliability
-In the cloud, we acknowledge that failures happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. Use the following information to minimize failure of your Log Analytics workspaces and to protect the data they collect.
+[Reliability](/azure/well-architected/resiliency/overview) refers to the ability of a system to recover from failures and continue to function. Instead of trying to prevent failures altogether in the cloud, the goal is to minimize the effects of a single failing component. Use the following information to minimize failure of your Log Analytics workspaces and to protect the data they collect.
[!INCLUDE [waf-logs-reliability](includes/waf-logs-reliability.md)] ## Security
-Security is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to maximize the security of your Log Analytics workspaces and ensure that only authorized users access collected data.
+[Security](/azure/well-architected/security/overview) is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to maximize the security of your Log Analytics workspaces and ensure that only authorized users access collected data.
[!INCLUDE [waf-logs-security](includes/waf-logs-security.md)] ## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+[Cost optimization](/azure/well-architected/cost/overview) refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
Cost optimization refers to ways to reduce unnecessary expenses and improve oper
## Operational excellence
-Operational excellence refers to operations processes required keep a service running reliably in production. Use the following information to minimize the operational requirements for supporting Log Analytics workspaces.
+[Operational excellence](/azure/well-architected/devops/overview) refers to operations processes required to keep a service running reliably in production. Use the following information to minimize the operational requirements for supporting Log Analytics workspaces.
[!INCLUDE [waf-logs-operation](includes/waf-logs-operation.md)] ## Performance efficiency
-Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. Use the following information to ensure that your Log Analytics workspaces and log queries are configured for maximum performance.
+[Performance efficiency](/azure/well-architected/scalability/overview) is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. Use the following information to ensure that your Log Analytics workspaces and log queries are configured for maximum performance.
[!INCLUDE [waf-logs-performance](includes/waf-logs-performance.md)]
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
Azure Container Instances virtual nodes that run the Linux OS are shown after th
From an expanded node, you can drill down from the pod or container that runs on the node to the controller to view performance data filtered for that controller. Select the value under the **Controller** column for the specific node. Select controllers or containers at the top of the page to review the status and resource utilization for those objects. To review memory utilization, in the **Metric** dropdown list, select **Memory RSS** or **Memory working set**. **Memory RSS** is supported only for Kubernetes version 1.8 and later. Otherwise, you view values for **Min&nbsp;%** as *NaN&nbsp;%*, which is a numeric data type value that represents an undefined or unrepresentable value.
The row hierarchy starts with a controller. When you expand a controller, you vi
Select the value under the **Node** column for the specific controller. The information that's displayed when you view controllers is described in the following table.
The icons in the status field indicate the online status of the containers.
| Icon | Status |
|--|-|
-| ![Ready running status icon.](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
-| ![Waiting or Paused status icon.](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
-| ![Last reported running status icon.](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded for more than 30 minutes|
-| ![Successful status icon.](./media/container-insights-analyze/containers-green-icon.png) | Successfully stopped or failed to stop|
+| :::image type="content" source="./media/container-insights-analyze/containers-ready-icon.png" alt-text="Ready running status icon."::: | Running (Ready)|
+| :::image type="content" source="./media/container-insights-analyze/containers-waiting-icon.png" alt-text="Waiting or Paused status icon."::: | Waiting or Paused|
+| :::image type="content" source="./media/container-insights-analyze/containers-grey-icon.png" alt-text="Last reported running status icon."::: | Last reported running but hasn't responded for more than 30 minutes|
+| :::image type="content" source="./media/container-insights-analyze/containers-green-icon.png" alt-text="Successful status icon."::: | Successfully stopped or failed to stop|
The status icon displays a count based on what the pod provides. It shows the worst two states. When you hover over the status, it displays a rollup status from all pods in the container. If there isn't a ready state, the status value displays **(0)**.
Here you can view the performance health of your AKS and Container Instances con
From a container, you can drill down to a pod or node to view performance data filtered for that object. Select the value under the **Pod** or **Node** column for the specific container. The information that's displayed when you view containers is described in the following table.
The icons in the status field indicate the online statuses of pods, as described
| Icon | Status |
|--|-|
-| ![Ready running status icon.](./media/container-insights-analyze/containers-ready-icon.png) | Running (Ready)|
-| ![Waiting or Paused status icon.](./media/container-insights-analyze/containers-waiting-icon.png) | Waiting or Paused|
-| ![Last reported running status icon.](./media/container-insights-analyze/containers-grey-icon.png) | Last reported running but hasn't responded in more than 30 minutes|
-| ![Terminated status icon.](./media/container-insights-analyze/containers-terminated-icon.png) | Successfully stopped or failed to stop|
-| ![Failed status icon.](./media/container-insights-analyze/containers-failed-icon.png) | Failed state |
+| :::image type="content" source="./media/container-insights-analyze/containers-ready-icon.png" alt-text="Ready running status icon."::: | Running (Ready)|
+| :::image type="content" source="./media/container-insights-analyze/containers-waiting-icon.png" alt-text="Waiting or Paused status icon."::: | Waiting or Paused|
+| :::image type="content" source="./media/container-insights-analyze/containers-grey-icon.png" alt-text="Last reported running status icon."::: | Last reported running but hasn't responded in more than 30 minutes|
+| :::image type="content" source="./media/container-insights-analyze/containers-terminated-icon.png" alt-text="Terminated status icon."::: | Successfully stopped or failed to stop|
+| :::image type="content" source="./media/container-insights-analyze/containers-failed-icon.png" alt-text="Failed status icon."::: | Failed state |
## Monitor and visualize network configurations
azure-monitor Container Insights Cost Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost-config.md
This table outlines the list of the container insights Log Analytics tables for
| ContainerInsights Table Name | Is Data collection setting: interval applicable? | Is Data collection setting: namespaces applicable? | Remarks |
| | | | |
| ContainerInventory | Yes | Yes | |
-| ContainerNodeInventory | Yes | No | Data collection setting for namespaces is not applicable since Kubernetes Node is not a namespace scoped resource |
-| KubeNodeInventory | Yes | No | Data collection setting for namespaces is not applicable Kubernetes Node is not a namespace scoped resource |
+| ContainerNodeInventory | Yes | No | Data collection setting for namespaces isn't applicable since Kubernetes Node isn't a namespace scoped resource |
+| KubeNodeInventory | Yes | No | Data collection setting for namespaces isn't applicable Kubernetes Node isn't a namespace scoped resource |
| KubePodInventory | Yes | Yes ||
| KubePVInventory | Yes | Yes | |
| KubeServices | Yes | Yes | |
-| KubeEvents | No | Yes | Data collection setting for interval is not applicable for the Kubernetes Events |
-| Perf | Yes | Yes\* | \*Data collection setting for namespaces is not applicable for the Kubernetes Node related metrics since the Kubernetes Node is not a namespace scoped object. |
+| KubeEvents | No | Yes | Data collection setting for interval isn't applicable for the Kubernetes Events |
+| Perf | Yes | Yes\* | \*Data collection setting for namespaces isn't applicable for the Kubernetes Node related metrics since the Kubernetes Node isn't a namespace scoped object. |
| InsightsMetrics| Yes\*\* | Yes\*\* | \*\*Data collection settings are only applicable for the metrics collecting the following namespaces: container.azm.ms/kubestate, container.azm.ms/pv and container.azm.ms/gpu |

## Custom metrics

| Metric namespace | Is Data collection setting: interval applicable? | Is Data collection setting: namespaces applicable? | Remarks |
| | | | |
-| Insights.container/nodes| Yes | No | Node is not a namespace scoped resource |
+| Insights.container/nodes| Yes | No | Node isn't a namespace scoped resource |
|Insights.container/pods | Yes | Yes| |
| Insights.container/containers | Yes | Yes | |
| Insights.container/persistentvolumes | Yes | Yes | |
This table outlines the list of the container insights Log Analytics tables for
The default container insights experience is powered through using all the existing data streams. Removing one or more of the default streams renders the container insights experience unavailable.
-[![Screenshot that shows the custom experience.](media/container-insights-cost-config/container-insights-cost-custom.png)](media/container-insights-cost-config/container-insights-cost-custom.png#lightbox)
-If you are currently using the above tables for other custom alerts or charts, then modifying your data collection settings may degrade those experiences. If you are excluding namespaces or reducing data collection frequency, review your existing alerts, dashboards, and workbooks using this data.
+If you're currently using the above tables for other custom alerts or charts, then modifying your data collection settings may degrade those experiences. If you're excluding namespaces or reducing data collection frequency, review your existing alerts, dashboards, and workbooks using this data.
To scan for alerts that may be referencing these tables, run the following Azure Resource Graph query:
Cost presets and collection settings are available for selection in the Azure po
| Cost-optimized | 5 m | Excludes kube-system, gatekeeper-system, azure-arc | Not enabled |
| Syslog | 1 m | None | Enabled by default |
-[![Screenshot that shows the cost presets.](media/container-insights-cost-config/cost-profiles-options.png)](media/container-insights-cost-config/cost-profiles-options.png#lightbox)
## Custom data collection

Container insights Collected Data can be customized through the Azure portal, using the following options. Selecting any options other than **All (Default)** leads to the container insights experience becoming unavailable.
Container insights Collected Data can be customized through the Azure portal, us
| | | |
| All (Default) | All standard container insights tables | Required for enabling the default container insights visualizations |
| Performance | Perf, InsightsMetrics | |
-| Logs and events | ContainerLog or ContainerLogV2, KubeEvents, KubePodInventory | Recommended if you have enabled managed Prometheus metrics |
+| Logs and events | ContainerLog or ContainerLogV2, KubeEvents, KubePodInventory | Recommended if you enabled managed Prometheus metrics |
| Workloads, Deployments, and HPAs | InsightsMetrics, KubePodInventory, KubeEvents, ContainerInventory, ContainerNodeInventory, KubeNodeInventory, KubeServices | |
| Persistent Volumes | InsightsMetrics, KubePVInventory | |
-[![Screenshot that shows the collected data options.](media/container-insights-cost-config/collected-data-options.png)](media/container-insights-cost-config/collected-data-options.png#lightbox)
## Configuring AKS data collection settings using Azure CLI
az aks enable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName> --
## [Azure portal](#tab/create-portal) 1. In the Azure portal, select the AKS cluster that you wish to monitor. 2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
-3. If you have not previously configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar.
-4. If you are configuring Container Insights for the first time or have not migrated to using [managed identity authentication](../containers/container-insights-onboard.md#authentication), select the "Use managed identity" checkbox.
-[![Screenshot that shows the onboarding options.](media/container-insights-cost-config/cost-settings-onboarding.png)](media/container-insights-cost-config/cost-settings-onboarding.png#lightbox)
+3. If you have not configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar.
+4. If you're configuring Container Insights for the first time or have not migrated to using [managed identity authentication](../containers/container-insights-onboard.md#authentication), select the "Use managed identity" checkbox.
5. Using the dropdown, choose one of the "Cost presets". For more configuration, you can select "Edit collection settings".
-[![Screenshot that shows the collection settings.](media/container-insights-cost-config/advanced-collection-settings.png)](media/container-insights-cost-config/advanced-collection-settings.png#lightbox)
6. Click the blue "Configure" button to finish.
The collection settings can be modified through the input of the `dataCollection
## [Azure portal](#tab/create-portal) 1. In the Azure portal, select the AKS hybrid cluster that you wish to monitor. 2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
-3. If you have not previously configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar.
-[![Screenshot that shows the onboarding options.](media/container-insights-cost-config/cost-settings-onboarding.png)](media/container-insights-cost-config/cost-settings-onboarding.png#lightbox)
-4. Using the dropdown, choose one of the "Cost presets", for more configuration, you may select the "Edit collection settings"
-[![Screenshot that shows the collection settings.](media/container-insights-cost-config/advanced-collection-settings.png)](media/container-insights-cost-config/advanced-collection-settings.png#lightbox).
+3. If you have not configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar.
+4. Using the dropdown, choose one of the "Cost presets". For more configuration, you can select "Edit collection settings".
5. Click the blue "Configure" button to finish.
The collection settings can be modified through the input of the `dataCollection
## [Azure portal](#tab/create-portal) 1. In the Azure portal, select the Arc cluster that you wish to monitor. 2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
-3. If you have not previously configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar.
-4. If you are configuring Container Insights for the first time, select the "Use managed identity" checkbox
-[![Screenshot that shows the onboarding options.](media/container-insights-cost-config/cost-settings-onboarding.png)](media/container-insights-cost-config/cost-settings-onboarding.png#lightbox).
-5. Using the dropdown, choose one of the "Cost presets", for more configuration, you may select the "Edit advanced collection settings"
-[![Screenshot that shows the collection settings.](media/container-insights-cost-config/advanced-collection-settings.png)](media/container-insights-cost-config/advanced-collection-settings.png#lightbox).
+3. If you have not configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar.
+4. If you're configuring Container Insights for the first time, select the "Use managed identity" checkbox.
+5. Using the dropdown, choose one of the "Cost presets". For more configuration, you can select "Edit advanced collection settings".
6. Click the blue "Configure" button to finish.
To update your data collection Settings, modify the values in parameter files an
## Limitations -- Recommended alerts will not work as intended if the Data collection interval is configured more than 1-minute interval. To continue using Recommended alerts, please migrate to the [Prometheus metrics addon](../essentials/prometheus-metrics-overview.md)
+- Recommended alerts don't work as intended if the Data collection interval is configured to more than a 1-minute interval. To continue using Recommended alerts, migrate to the [Prometheus metrics addon](../essentials/prometheus-metrics-overview.md)
- There may be gaps in the Trend Line Charts of the Deployments workbook if the configured Data collection interval is greater than the time granularity of the selected Time Range.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following types of data collected from a Kubernetes cluster with Container i
Consider a scenario where your organization's different business units share Kubernetes infrastructure and a Log Analytics workspace. Each business unit is separated by a Kubernetes namespace. You can visualize how much data is ingested in each workspace by using the **Data Usage** runbook. The runbook is available from the **Reports** tab.
-[![Screenshot that shows the View Workbooks dropdown list.](media/container-insights-cost/workbooks-dropdown.png)](media/container-insights-cost/workbooks-dropdown.png#lightbox)
This workbook helps you visualize the source of your data without having to build your own library of queries from what we share in our documentation. In this workbook, you can view charts that present billable data such as the:
This workbook helps you visualize the source of your data without having to buil
- Billable container log data ingested by log source entry. - Billable diagnostic data ingested by diagnostic main node logs.
-[![Screenshot that shows the Data Usage workbook.](media/container-insights-cost/data-usage-workbook.png)](media/container-insights-cost/data-usage-workbook.png#lightbox)
To learn about managing rights and permissions to the workbook, review [Access control](../visualize/workbooks-overview.md#access-control).
Container Insights data primarily consists of metric counters (Perf, Inventory,
By navigating to the By Table section of the Data Usage workbook, you can see the breakdown of table sizes for Container Insights.
-[![Screenshot that shows the By Table breakdown in Data Usage workbook.](media/container-insights-cost/data-usage-workbook-by-table.png)](media/container-insights-cost/data-usage-workbook-by-table.png#lightbox)
If the majority of your data comes from one of these following tables: - Perf
The following list is the cluster inventory data collected by default:
## Next steps

To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md
This feature performs a polling operation against the metrics endpoints includin
The polling interval is configured from the **Set interval** dropdown list. Use this dropdown list to set polling for new data every 1, 5, 15, and 30 seconds.
-![Screenshot that shows the Go Live dropdown polling interval.](./media/container-insights-livedata-metrics/cluster-view-polling-interval-dropdown.png)
>[!IMPORTANT] >We recommend that you set the polling interval to one second while you troubleshoot an issue for a short period of time. These requests might affect the availability and throttling of the Kubernetes API on your cluster. Afterward, reconfigure to a longer polling interval.
The following metrics are captured and displayed in four performance charts.
### Node CPU utilization % and Node memory utilization %

These two performance charts map to an equivalent of invoking `kubectl top nodes` and capturing the results of the **CPU%** and **MEMORY%** columns to the respective chart.
-![Screenshot that shows the kubectl top nodes example results.](./media/container-insights-livedata-metrics/kubectl-top-nodes-example.png)
-
-![Screenshot that shows the Node CPU utilization percent chart.](./media/container-insights-livedata-metrics/cluster-view-node-cpu-util.png)
-
-![Screenshot that shows the Node memory utilization percent chart.](./media/container-insights-livedata-metrics/cluster-view-node-memory-util.png)
+<!-- convertborder later -->
+<!-- convertborder later -->
+<!-- convertborder later -->
The percentile calculations will function in larger clusters to help identify outlier nodes in your cluster. For example, you can understand if nodes are underutilized for scale-down purposes. By using the **Min** aggregation, you can see which nodes have low utilization in the cluster. To further investigate, select the **Nodes** tab and sort the grid by CPU or memory utilization.
This information also helps you understand which nodes are being pushed to their
### Node count

This performance chart maps to an equivalent of invoking `kubectl get nodes` and mapping the **STATUS** column to a chart grouped by status types.
-![Screenshot that shows the kubectl get nodes example results.](./media/container-insights-livedata-metrics/kubectl-get-nodes-example.png)
-
-![Screenshot that shows the Node count chart.](./media/container-insights-livedata-metrics/cluster-view-node-count-01.png)
+<!-- convertborder later -->
+<!-- convertborder later -->
Nodes are reported either in a **Ready** or **Not Ready** state and they're counted to create a total count. The results of these two aggregations are charted so that, for example, you can understand if your nodes are falling into failed states. By using the **Not Ready** aggregation, you can quickly see the number of nodes in your cluster currently in the **Not Ready** state.
Nodes are reported either in a **Ready** or **Not Ready** state and they're coun
This performance chart maps to an equivalent of invoking `kubectl get pods --all-namespaces` and maps the **STATUS** column to the chart grouped by status types.
-![Screenshot that shows the kubectl get pods example results.](./media/container-insights-livedata-metrics/kubectl-get-pods-example.png)
-
-![Screenshot that shows the Active pod count chart.](./media/container-insights-livedata-metrics/cluster-view-node-pod-count.png)
+<!-- convertborder later -->
>[!NOTE] >Names of status as interpreted by `kubectl` might not exactly match in the chart.
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
InsightsMetrics
```
The output will show results similar to the following example.
-![Screenshot that shows the log query results of data ingestion volume.](media/container-insights-log-query/log-query-example-usage-03.png)
+<!-- convertborder later -->
To estimate each metric's size in GB for a month and understand whether the volume of data ingested in the workspace is high, use the following query.
InsightsMetrics
```
The output will show results similar to the following example.
-![Screenshot that shows log query results of data ingestion volume.](./media/container-insights-log-query/log-query-example-usage-02.png)
+<!-- convertborder later -->
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Follow the instructions to configure an existing ConfigMap or to use a new one.
1. In the Insights section of your Kubernetes cluster, select the **Monitoring Settings** button from the top toolbar
-![Screenshot that shows monitoring settings.](./media/container-insights-logging-v2/container-insights-v2-monitoring-settings.png)
2. Select **Edit collection settings** to open the advanced settings
-![Screenshot that shows advanced collection settings.](./media/container-insights-logging-v2/container-insights-v2-monitoring-settings-open.png)
3. Select the checkbox with **Enable ContainerLogV2** and choose the **Save** button below
-![Screenshot that shows ContainerLogV2 checkbox.](./media/container-insights-logging-v2/container-insights-v2-collection-settings.png)
4. The summary section should display the message "ContainerLogV2 enabled". Click the **Configure** button to complete your configuration change.
-![Screenshot that shows ContainerLogV2 enabled.](./media/container-insights-logging-v2/container-insights-v2-monitoring-settings-configured.png)
## [CLI](#tab/configure-CLI)
Additionally, the feature also adds support for .NET, Go, Python and Java stack
Below are two screenshots which demonstrate Multi-line logging at work for Go exception stack trace:

Multi-line logging disabled scenario:
-![Screenshot that shows Multi-line logging disabled.](./media/container-insights-logging-v2/multi-line-disabled-go.png)
+<!-- convertborder later -->
Multi-line logging enabled scenario:-
-[ ![Screenshot that shows Multi-line enabled.](./media/container-insights-logging-v2/multi-line-enabled-go.png) ](./media/container-insights-logging-v2/multi-line-enabled-go.png#lightbox)
+<!-- convertborder later -->
Similarly, below screenshots depict Multi-line logging enabled scenarios for Java and Python stack traces:

For Java:
-[ ![Screenshot that shows Multi-line enabled for Java](./media/container-insights-logging-v2/multi-line-enabled-java.png) ](./media/container-insights-logging-v2/multi-line-enabled-java.png#lightbox)
For Python:
-[ ![Screenshot that shows Multi-line enabled for Python](./media/container-insights-logging-v2/multi-line-enabled-python.png) ](./media/container-insights-logging-v2/multi-line-enabled-python.png#lightbox)
### Pre-requisites
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
If you choose to use the Azure CLI, you must install and use the CLI locally. Yo
``` 1. Edit the values for **aksResourceId** and **aksResourceLocation** by using the values of the AKS cluster, which you can find on the **Properties** page for the selected cluster.-
- ![Screenshot that shows the Container properties page.](media/container-insights-optout/container-properties-page.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/container-insights-optout/container-properties-page.png" lightbox="media/container-insights-optout/container-properties-page.png" alt-text="Screenshot that shows the Container properties page." border="false":::
While you're on the **Properties** page, also copy the **Workspace Resource ID**. This value is required if you decide you want to delete the Log Analytics workspace later. Deleting the Log Analytics workspace isn't performed as part of this process.
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
Reports in Container insights are recommended out-of-the-box for [Azure workbook
## View workbooks

On the **Azure Monitor** menu in the Azure portal, select **Containers**. In the **Monitoring** section, select **Insights**, choose a particular cluster, and then select the **Reports** tab. You can also view them from the [workbook gallery](../visualize/workbooks-overview.md#the-gallery) in Azure Monitor.
-[![Screenshot that shows the Reports page.](media/container-insights-reports/reports-page.png)](media/container-insights-reports/reports-page.png#lightbox)
+<!-- convertborder later -->
## Cluster Optimization Workbook
The number on each tile represents how far the container limits/requests are fro
## Create a custom workbook

To create a custom workbook based on any of these workbooks, select the **View Workbooks** dropdown list and then select **Go to AKS Gallery** at the bottom of the list. For more information about workbooks and using workbook templates, see [Azure Monitor workbooks](../visualize/workbooks-overview.md).
-[![Screenshot that shows the AKS gallery.](media/container-insights-reports/aks-gallery.png)](media/container-insights-reports/aks-gallery.png#lightbox)
+<!-- convertborder later -->
## Next steps
azure-monitor Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection.md
The diagram below shows data collection for [resource logs](resource-logs.md) us
See [Workspace transformation DCR](data-collection-transformations.md#workspace-transformation-dcr) for details about workspace transformation DCRs and links to walkthroughs for creating them.
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Is there a maximum amount of data that I can collect in Azure Monitor?
+
+There's no limit to the amount of metric data you can collect, but this data is stored for a maximum of 93 days. See [Retention of metrics](./data-platform-metrics.md#retention-of-metrics). There's no limit on the amount of log data that you can collect, but the pricing tier you choose for the Log Analytics workspace might affect the limit. See [Pricing details](https://azure.microsoft.com/pricing/details/monitor/).
+
+### How do I access data collected by Azure Monitor?
+
+Insights and solutions provide a custom experience for working with data stored in Azure Monitor. You can work directly with log data by using a log query written in Kusto Query Language (KQL). In the Azure portal, you can write and run queries and interactively analyze data by using Log Analytics. Analyze metrics in the Azure portal with the metrics explorer. See [Analyze log data in Azure Monitor](../logs/log-query-overview.md) and [Analyze metrics with Azure Monitor metrics explorer](./analyze-metrics.md).
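As a hedged illustration of programmatic access (the workspace ID and query are placeholders, and the example assumes the `Azure.Monitor.Query` and `Azure.Identity` packages), a log query can also be run from code:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Runs a simple KQL query against a Log Analytics workspace over the last 24 hours.
var client = new LogsQueryClient(new DefaultAzureCredential());

Response<LogsQueryResult> result = await client.QueryWorkspaceAsync(
    "<workspace-id>",                               // placeholder workspace ID
    "Heartbeat | summarize count() by Computer",    // placeholder query
    new QueryTimeRange(TimeSpan.FromHours(24)));

foreach (LogsTableRow row in result.Value.Table.Rows)
{
    Console.WriteLine(string.Join(", ", row));
}
```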
+
## Next steps

- Read more about [data collection rules](data-collection-rule-overview.md).
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Previously updated : 10/01/2022 Last updated : 11/02/2023 # Query Basic Logs in Azure Monitor
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
Last updated 6/28/2023
Azure Monitor Logs retains data in two states: * **Interactive retention**: Lets you retain Analytics logs for [interactive queries](../logs/get-started-queries.md) of up to 2 years.
-* **Archive**: Lets you keep older, less used data in your workspace at a reduced cost. You can access data in the archived state by using [search jobs](../logs/search-jobs.md) and [restore](../logs/restore.md). You can currently keep data in archived state for up to 7 years. In the coming months, it will be possible to extend archive to 12 years.
+* **Archive**: Lets you keep older, less used data in your workspace at a reduced cost. You can access data in the archived state by using [search jobs](../logs/search-jobs.md) and [restore](../logs/restore.md). You can keep data in archived state for up to 12 years.
This article describes how to configure data retention and archiving.
To set the default workspace retention:
By default, all tables in your workspace inherit the workspace's interactive retention setting and have no archive. You can modify the retention and archive settings of individual tables, except for workspaces in the legacy Free Trial pricing tier.
-The Analytics log data plan includes 30 days of interactive retention. You can increase the interactive retention period to up to 730 days at an [additional cost](https://azure.microsoft.com/pricing/details/monitor/). If needed, you can reduce the interactive retention period to as little as four days using the API or CLI. However, since 30 days are included in the ingestion price, lowering the retention period below 30 days doesn't reduce costs. You can set the archive period to a total retention time of up to 2,556 days (seven years).
+The Analytics log data plan includes 30 days of interactive retention. You can increase the interactive retention period to up to 730 days at an [additional cost](https://azure.microsoft.com/pricing/details/monitor/). If needed, you can reduce the interactive retention period to as little as four days using the API or CLI. However, since 30 days are included in the ingestion price, lowering the retention period below 30 days doesn't reduce costs. You can set the archive period to a total retention time of up to 4,383 days (12 years).
> [!NOTE]
-> In the coming months, new settings will enable retaining data for up to 12 years.
+> Currently, you can set total retention to up to 12 years through the Azure portal and API. CLI and PowerShell are limited to seven years; support for 12 years will follow.
# [Portal](#tab/portal-1)
To set the retention and archive duration for a table in the Azure portal:
To set the retention and archive duration for a table, call the **Tables - Update** API: ```http
-PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2022-10-01
``` > [!NOTE]
The request body includes the values in the following table.
|Name | Type | Description | | | | | |properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730. <br/>Setting this property to null applies the workspace retention period. For a Basic Logs table, the value is always 8. |
-|properties.totalRetentionInDays | integer | The table's total data retention including archive period. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, or 2556. Set this property to null if you don't want to archive data. |
+|properties.totalRetentionInDays | integer | The table's total data retention including archive period. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, 2556, 2922, 3288, 3653, 4018, or 4383. Set this property to null if you don't want to archive data. |
**Example**
This example sets the table's interactive retention to the workspace default of
**Request** ```http
-PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/CustomLog_CL?api-version=2021-12-01-preview
+PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/CustomLog_CL?api-version=2022-10-01
``` **Request body**
The **Tables** screen shows the interactive retention and archive period for all
To get the retention setting of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API: ```JSON
-GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2022-10-01
``` To get all table-level retention settings in your workspace, don't set a table name.
To get all table-level retention settings in your workspace, don't set a table n
For example: ```JSON
-GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2021-12-01-preview
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2022-10-01
``` # [CLI](#tab/cli-2)
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/get-started-queries.md
description: This article provides a tutorial for getting started writing log qu
+ Last updated 10/31/2023
To make the output clearer, you can select to display it as a time chart, which
![Screenshot that shows the values of a query memory over time.](media/get-started-queries/chart.png)
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Why am I seeing duplicate records in Azure Monitor Logs?
+
+Occasionally, you might notice duplicate records in Azure Monitor Logs. This duplication is typically from one of the following two conditions:
+
+- Components in the pipeline have retries to ensure reliable delivery at the destination. Occasionally, this capability might result in duplicates for a small percentage of telemetry items.
+- If the duplicate records come from a virtual machine, you might have both the Log Analytics agent and Azure Monitor Agent installed. If you still need the Log Analytics agent installed, configure the Log Analytics workspace to no longer collect data that's also being collected by the data collection rule used by Azure Monitor Agent.
++ ## Next steps - To learn more about using string data in a log query, see [Work with strings in Azure Monitor log queries](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#string-operations).
azure-monitor Manage Logs Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md
You must have `microsoft.operationalinsights/workspaces/tables/write` permission
This diagram provides an overview of the table configuration options in Azure Monitor Logs: ### Table type and schema
Your Log Analytics workspace can contain the following types of tables:
### Retention and archive
-Archiving is a low-cost solution for keeping data that you no longer use regularly in your workspace for compliance or occasional investigation. [Set table-level retention policies](../logs/data-retention-archive.md) to override the default workspace retention policy and to archive data within your workspace.
+Archiving is a low-cost solution for keeping data that you no longer use regularly in your workspace for compliance or occasional investigation. [Set table-level retention](../logs/data-retention-archive.md) to override the default workspace retention and to archive data within your workspace.
To access archived data, [run a search job](../logs/search-jobs.md) or [restore data for a specific time range](../logs/restore.md).
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{
|Name | Type | Description |
| | | |
|properties.plan | string | The table plan. Either `Analytics` or `Basic`. |
-|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is eight days, fixed. In `Analytics Logs`, the value is between 7 and 730 days.|
+|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is fixed at eight days. In `Analytics Logs`, the value is between 4 and 730 days.|
|properties.totalRetentionInDays | integer | The table's data retention that also includes the archive period.|
|properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).|
|properties.lastPlanModifiedDate|String|Last time when the plan was set for this table. Null if no change was ever done from the default settings (read-only).|
Learn how to:
- [Set a table's log data plan](../logs/basic-logs-configure.md) - [Add custom tables and columns](../logs/create-custom-table.md)-- [Set retention and archive policies](../logs/data-retention-archive.md)
+- [Set retention and archive](../logs/data-retention-archive.md)
- [Design a workspace architecture](../logs/workspace-design.md)
azure-monitor Vminsights Migrate From Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-from-service-map.md
> [!NOTE] > Service Map will be retired on 30 September 2025. Be sure to migrate to VM insights before this date to continue monitoring processes and dependencies for your virtual machines.
-The map feature of VM insights visualizes virtual machine dependencies by discovering running processes that have active network connection between servers, inbound and outbound connection latency, or ports across any TCP-connected architecture over a specified time range. For more information about the benefits of the VM insights map feature over Service Map, see [How is VM insights Map feature different from Service Map?](/azure/azure-monitor/faq#how-is-the-vm-insights-map-feature-different-from-service-map-).
+The map feature of VM insights visualizes virtual machine dependencies by discovering running processes that have active network connection between servers, inbound and outbound connection latency, or ports across any TCP-connected architecture over a specified time range.
## Enable VM insights using Azure Monitor Agent
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
To access from Azure Monitor:
On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Health or Map.
-By default, the charts show the last 24 hours. By using the **TimeRange** selector, you can query for historical time ranges of up to 30 days to show how performance looked in the past.
+By default, the charts show performance counters for the last hour. By using the **TimeRange** selector, you can query for historical time ranges of up to 30 days to show how performance looked in the past.
Five capacity utilization charts are shown on the page:
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
na Previously updated : 03/08/2023 Last updated : 11/02/2023
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability |
|: |: |: |
| Azure NetApp Files backup | Public preview | No |
-| Azure NetApp Files datastores for AVS | Generally available (GA) | No |
-| Azure NetApp Files customer-managed keys | Public preview | Public preview [(in select regions)](configure-customer-managed-keys.md#supported-regions) |
| Azure NetApp Files large volumes | Public preview | No |
| Edit network features for existing volumes | Public preview | No |
| Standard network features | Generally available (GA) | Public preview [(in select regions)](azure-netapp-files-network-topologies.md#supported-regions) |
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Azure NetApp Files customer-managed keys is supported for the following regions:
* UAE North * UK South * UK West
+* US Gov Arizona
* US Gov Texas * US Gov Virginia * West Europe
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Billing calculation for a Standard capacity pool is at the hot-tier rate for the
### Examples of billing structure
-Assume that you created a 4-TiB Standard capacity pool. The billing structure is at the Standard capacity tier rate for the entire 4 TiB.
+Assume that you created a 4 TiB Standard capacity pool. The billing structure is at the Standard capacity tier rate for the entire 4 TiB.
When you create volumes in the capacity pool and start tiering data to the cool tier, the following scenarios explain the applicable billing structure:

* Assume that you create three volumes with 1 TiB each. You don't enable tiering at the volume level. The billing calculation is as follows:
- * 3-TiB of allocated capacity at the hot tier rate
- * 1-TiB of unallocated capacity at the hot tier rate
+ * 3 TiB of allocated capacity at the hot tier rate
+ * 1 TiB of unallocated capacity at the hot tier rate
    * Zero capacity at the cool tier rate
    * Zero network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers.
* Assume that you create four volumes with 1 TiB each. Each volume has 0.25 TiB of the volume capacity on the hot tier, and 0.75 TiB of the volume capacity in the cool tier. The billing calculation is as follows:
- * 1-TiB capacity at the hot tier rate
- * 3-TiB capacity at the cool tier rate
+ * 1 TiB capacity at the hot tier rate
+ * 3 TiB capacity at the cool tier rate
    * Network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers.
* Assume that you create two volumes with 1 TiB each. Each volume has 0.25 TiB of the volume capacity on the hot tier, and 0.75 TiB of the volume capacity in the cool tier. The billing calculation is as follows:
- * 0.5-TiB capacity at the hot tier rate
- * 1.5-TiB capacity at the cool tier rate
+ * 0.5 TiB capacity at the hot tier rate
+ * 2 TiB of unallocated capacity at the hot tier rate
+ * 1.5 TiB capacity at the cool tier rate
    * Network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers.
* Assume that you create one volume with 1 TiB. The volume has 0.25 TiB of the volume capacity on the hot tier, and 0.75 TiB of the volume capacity in the cool tier. The billing calculation is as follows:
- * 0.25 capacity at the hot tier rate
- * 0.75-TiB capacity at the cool tier rate
+ * 0.25 TiB capacity at the hot tier rate
+ * 0.75 TiB capacity at the cool tier rate
    * Network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers.

### Examples of cost calculations with varying coolness periods
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
na Previously updated : 08/31/2023 Last updated : 11/02/2023 # Requirements and considerations for large volumes (preview)
Support for Azure NetApp Files large volumes is available in the following regio
* Qatar Central * South Africa North * South Central US
+* Southeast Asia
* Switzerland North * UAE North * UK West
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 10/16/2023 Last updated : 11/02/2023
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## November 2023
+
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md#supported-regions) in select US Gov regions
+
+ Azure NetApp Files now supports [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md?tabs=azure-portal) in US Gov Arizona and US Gov Virginia regions. Azure NetApp Files datastores for Azure VMware Solution provide the ability to scale storage independently of compute and can go beyond the limits of the local instance storage provided by vSAN, reducing the total cost of ownership.
+ ## October 2023 * [Standard storage with cool access](cool-access-introduction.md) (Preview)
azure-portal Alerts Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/alerts-notifications.md
+
+ Title: Azure mobile app alerts and notifications
+description: Use Azure mobile app notifications to get up-to-date alerts and information on your resources and services.
Last updated : 11/2/2023+++
+# Azure mobile app alerts and notifications
+
+Azure mobile app notifications let you monitor and manage your Azure resources and services from your mobile device. You can view the status of your resources, get alerts when something needs attention, and take corrective action, with flexibility in how you receive push notifications.
+
+This article describes the options for configuring notifications in the Azure mobile app.
+
+## Enable push notifications for Service Health alerts
+
+To enable push notifications for Service Health on specific subscriptions:
+
+1. Open the Azure mobile app and sign in with your Azure account.
+1. Select the menu icon in the top left corner, then select **Settings**.
+1. Select **Service Health issue alerts**.
+
+ :::image type="content" source="media/alerts-notifications/service-health.png" alt-text="Screenshot showing the Service Health issue alerts section of the Settings page in the Azure mobile app.":::
+
+1. Use the toggle switches to select subscriptions for which you want to receive push notifications.
+1. Select **Save** to confirm your changes.
+
+## Enable push notifications for custom alerts
+
+You can enable push notifications in the Azure mobile app for custom alerts that you define. To do so, you first [create a new alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric) in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using the same Azure account information that you're using in the Azure mobile app.
+1. In the Azure portal, open **Azure Monitor**.
+1. Select **Alerts**.
+1. Select **Create alert rule** and select the target resource that you want to monitor.
+1. Configure the condition, severity, and action group for your alert rule. You can use an existing [action group](/azure/azure-monitor/alerts/action-groups), or create a new one.
+1. In the action group, make sure to add a notification type of **Push Notification** and select the Azure mobile app as the destination. This enables notifications in your Azure mobile app.
+1. Select **Create alert rule** to save your changes.
+
+## View alerts
+
+There are several ways to view current alerts on the Azure mobile app.
+
+### Notifications list view
+
+Select the **Notifications** icon on the bottom toolbar to see a list view of all current alerts.
++
+In the list view, you can search for specific alerts or use the filter option in the top right of the screen to filter by subscription.
++
+When you select a specific alert, an alert details page provides more information, including:
+
+- Severity
+- Fired time
+- App Service plan
+- Alert condition
+- User response
+- Why the alert fired
+- Additional details
+ - Description
+ - Monitor service
+ - AlertID
+ - Suppression status
+ - Target resource type
+ - Signal type
+
+You can change the user response by selecting the edit option (pencil icon) next to the current response. Select either **New**, **Acknowledged**, or **Closed**, and then select **Done** in the top right corner. You can also select **History** near the top of the screen to view the timeline of events for the alert.
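If you want to cross-check or update the same alerts from a shell, a hedged Azure PowerShell sketch, assuming the `Az.AlertsManagement` module is installed, might look like this (the alert ID is a placeholder):

```azurepowershell
# List fired alerts that are still in the New state.
Get-AzAlert -State New

# Acknowledge a specific alert; replace the placeholder GUID with the alert ID.
Update-AzAlertState -AlertId "00000000-0000-0000-0000-000000000000" -State "Acknowledged"
```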
++
+### Alerts card on Home view
+
+You can also view alerts on the **Alerts** tile on your [Azure mobile app **Home**](home.md).
+
+The **Alerts** tile includes two viewing options: **List** or **Chart**.
+
+The **List** view shows your latest alerts along with top-level information, including:
+
+- Title
+- Alert state
+- Severity
+- Time
+
+You can select **See All** to display the notifications list view showing all of your alerts.
++
+Alternatively, you can select the **Chart** view to see the severity of the latest alerts on a bar chart.
++
+## Next steps
+
+- Learn more about the [Azure mobile app](overview.md).
+- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
azure-portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/overview.md
Title: What is the Azure mobile app? description: The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device. Previously updated : 10/16/2023 Last updated : 11/02/2023
You can download the Azure mobile app today for free from the [Apple App Store](
## Next steps - Learn about [Azure mobile app **Home**](home.md) and how to customize it.
+- Learn about [alerts and notifications](alerts-notifications.md) in the Azure mobile app.
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md
The deployment can take a few minutes to complete. When it finishes, you see a m
## Deploy remote Bicep file
-Currently, Azure CLI doesn't support deploying remote Bicep files. You can use [Bicep CLI](./install.md#visual-studio-code-and-bicep-extension) to [build](/cli/azure/bicep) the Bicep file to a JSON template, and then load the JSON file to the remote location.
+Currently, Azure CLI doesn't support deploying remote Bicep files. You can use [Bicep CLI](./install.md#visual-studio-code-and-bicep-extension) to [build](/cli/azure/bicep) the Bicep file to a JSON template, and then load the JSON file to the remote location. For more information, see [Deploy remote ARM JSON templates](../templates/deploy-cli.md#deploy-remote-template).
## Parameters
-To pass parameter values, you can use either inline parameters or a parameters file. The parameter file can be either a [Bicep parameters file](#bicep-parameter-files) or a [JSON parameters file](#json-parameter-files).
+To pass parameter values, you can use either inline parameters or a parameters file. The parameters file can be either a [Bicep parameters file](#bicep-parameter-files) or a [JSON parameters file](#json-parameter-files).
### Inline parameters
The _arrayContent.json_ format is:
```json [
- "value1",
- "value2"
+ "value1",
+ "value2"
] ``` To pass in an object, for example, to set tags, use JSON. For example, your Bicep file might include a parameter like this one: ```json
- "resourceTags": {
- "type": "object",
- "defaultValue": {
- "Cost Center": "IT Department"
- }
- }
+"resourceTags": {
+ "type": "object",
+ "defaultValue": {
+ "Cost Center": "IT Department"
+ }
+}
``` In this case, you can pass in a JSON string to set the parameter as shown in the following Bash script:
az deployment group create \
However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, set the variable to a JSON string. Escape the quotation marks: `$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'`.
-The evaluation of parameters follows a sequential order, meaning that if a value is assigned multiple times, only the last assigned value is used. To ensure proper parameter assignment, it is advised to provide your parameters file initially and selectively override specific parameters using the _KEY=VALUE_ syntax. It's important to mention that if you are supplying a `bicepparam` parameters file, you can use this argument only once.
+The evaluation of parameters follows a sequential order, meaning that if a value is assigned multiple times, only the last assigned value is used. To ensure proper parameter assignment, it's advised to provide your parameters file initially and selectively override specific parameters using the _KEY=VALUE_ syntax. It's important to mention that if you're supplying a `bicepparam` parameters file, you can use this argument only once.
-### JSON parameter files
+### Bicep parameter files
-Rather than passing parameters as inline values in your script, you might find it easier to use a parameters file, either a `.bicepparam` file or a JSON parameters file, that contains the parameter values. The parameters file must be a local file. External parameters files aren't supported with Azure CLI.
+Rather than passing parameters as inline values in your script, you might find it easier to use a parameters file, either a [Bicep parameters file](#bicep-parameter-files) or a [JSON parameters file](#json-parameter-files), that contains the parameter values. The parameters file must be a local file. External parameters files aren't supported with Azure CLI. For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
-The following example shows a parameters file named _storage.parameters.json_. The file is in the same directory where the command is run.
+With Azure CLI version 2.53.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by using a Bicep parameter file. With the `using` statement within the Bicep parameters file, there's no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch results in an "Only a .bicep template is allowed with a .bicepparam file" error.
+
+The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run.
```azurecli-interactive az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \
- --template-file storage.bicep \
- --parameters '@storage.parameters.json'
+ --parameters storage.bicepparam
```
-For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
-
-### Bicep parameter files
-
-With Azure CLI version 2.53.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there is no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch will result in an "Only a .bicep template is allowed with a .bicepparam file" error.
+### JSON parameter files
-The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run.
+The following example shows a parameters file named _storage.parameters.json_. The file is in the same directory where the command is run.
```azurecli-interactive az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \
- --parameters storage.bicepparam
+ --template-file storage.bicep \
+ --parameters '@storage.parameters.json'
```
-The parameters file must be a local file. External parameters files aren't supported with Azure CLI. For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
+For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
## Preview changes
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md
Currently, Azure PowerShell doesn't support deploying remote Bicep files. Use [B
## Parameters
-To pass parameter values, you can use either inline parameters or a parameters file.
+To pass parameter values, you can use either inline parameters or a parameters file. The parameters file can be either a [Bicep parameters file](#bicep-parameter-files) or a [JSON parameters file](#json-parameters-files).
### Inline parameters
New-AzResourceGroupDeployment -ResourceGroupName testgroup `
-exampleArray $subnetArray ```
-### Parameters files
+### Bicep parameter files
-Rather than passing parameters as inline values in your script, you may find it easier to use a `.bicepparam` file or a JSON file that contains the parameter values. The parameters file can be a local file or an external file with an accessible URI.
+Rather than passing parameters as inline values in your script, you might find it easier to use a parameters file, either a `.bicepparam` file or a JSON parameters file, that contains the parameter values. The Bicep parameters file must be a local file.
-For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
+With Azure PowerShell version 10.4.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by using a Bicep parameter file. With the `using` statement within the Bicep parameters file, there's no need to provide the `-TemplateFile` switch when specifying a Bicep parameter file for the `-TemplateParameterFile` switch.
-To pass a local parameters file, use the `TemplateParameterFile` parameter with a `.bicepparam` file:
+The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run.
```powershell
-New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
- -TemplateFile c:\BicepFiles\storage.bicep `
- -TemplateParameterFile c:\BicepFiles\storage.bicepparam
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
+ -TemplateParameterFile storage.bicepparam
```
-To pass a local parameters file, use the `TemplateParameterFile` parameter with a JSON parameters file:
+For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
+
+### JSON parameters files
+
+The JSON parameters file can be a local file or an external file with an accessible URI. For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
+
+To pass a local parameters file, use the `TemplateParameterFile` switch with a JSON parameters file:
```powershell
-New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
-TemplateFile c:\BicepFiles\storage.bicep ` -TemplateParameterFile c:\BicepFiles\storage.parameters.json ```
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example
To pass an external parameters file, use the `TemplateParameterUri` parameter: ```powershell
-New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
-TemplateFile c:\BicepFiles\storage.bicep ` -TemplateParameterUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.parameters.json ```
azure-resource-manager Deployment Script Bicep Configure Dev https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep-configure-dev.md
+
+ Title: Configure development environment for deployment scripts in Bicep | Microsoft Docs
+description: Configure development environment for deployment scripts in Bicep.
+ Last updated : 11/02/2023+
+ms.devlang: azurecli
++
+# Configure development environment for deployment scripts in Bicep files
+
+Learn how to create a development environment for developing and testing deployment scripts with a deployment script image. You can either create an [Azure container instance](../../container-instances/container-instances-overview.md) or use [Docker](https://docs.docker.com/get-docker/). Both options are covered in this article.
+
+## Prerequisites
+
+### Azure PowerShell container
+
+If you don't have an Azure PowerShell deployment script, you can create a *hello.ps1* file by using the following content:
+
+```powershell
+param([string] $name)
+$output = 'Hello {0}' -f $name
+Write-Output $output
+$DeploymentScriptOutputs = @{}
+$DeploymentScriptOutputs['text'] = $output
+```
+
+The following variation of the script signs in to Azure, retrieves the key vaults in the specified subscription, and returns multiple output values:
+```powershell
+param([string] $name, [string] $subscription)
+$output = 'Hello {0}' -f $name
+#Write-Output $output
+
+Connect-AzAccount -UseDeviceAuthentication
+Set-AzContext -subscription $subscription
+
+$kv = Get-AzKeyVault
+#Write-Output $kv
+
+$DeploymentScriptOutputs = @{}
+$DeploymentScriptOutputs['greeting'] = $output
+$DeploymentScriptOutputs['kv'] = $kv.resourceId
+Write-Output $DeploymentScriptOutputs
+```
+
+In an Azure PowerShell deployment script, the variable `$DeploymentScriptOutputs` is used to store the output values. For more information about working with Azure PowerShell outputs, see [Work with outputs from PowerShell scripts](./deployment-script-bicep.md#work-with-outputs-from-powershell-scripts).
+
+### Azure CLI container
+
+For an Azure CLI container image, you can create a *hello.sh* file by using the following content:
+
+```bash
+FIRSTNAME=$1
+LASTNAME=$2
+OUTPUT="{\"name\":{\"displayName\":\"$FIRSTNAME $LASTNAME\",\"firstName\":\"$FIRSTNAME\",\"lastName\":\"$LASTNAME\"}}"
+echo -n "Hello "
+echo $OUTPUT | jq -r '.name.displayName'
+```
+
+In an Azure CLI deployment script, an environment variable called `AZ_SCRIPTS_OUTPUT_PATH` stores the location of the script output file. The environment variable isn't available in the development environment container. For more information about working with Azure CLI outputs, see [Work with outputs from CLI scripts](deployment-script-bicep.md#work-with-outputs-from-cli-scripts).
+
+## Use Azure PowerShell container instance
+
+To author Azure PowerShell scripts on your computer, create a storage account and mount it to the container instance so that you can upload your script to the storage account and run the script on the container instance. The storage account that you create to test your script isn't the same storage account that the deployment script service uses to execute the script. The deployment script service creates a file share with a unique name on every execution.
+
+### Create an Azure PowerShell container instance
+
+The following Bicep file creates a container instance and a file share, and then mounts the file share to the container image.
+
+```bicep
+@description('Specify a project name that is used for generating resource names.')
+param projectName string
+
+@description('Specify the resource location.')
+param location string = resourceGroup().location
+
+@description('Specify the container image.')
+param containerImage string = 'mcr.microsoft.com/azuredeploymentscripts-powershell:az9.7'
+
+@description('Specify the mount path.')
+param mountPath string = '/mnt/azscripts/azscriptinput'
+
+var storageAccountName = toLower('${projectName}store')
+var fileShareName = '${projectName}share'
+var containerGroupName = '${projectName}cg'
+var containerName = '${projectName}container'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ accessTier: 'Hot'
+ }
+}
+
+resource fileShare 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-01-01' = {
+ name: '${storageAccountName}/default/${fileShareName}'
+ dependsOn: [
+ storageAccount
+ ]
+}
+
+resource containerGroup 'Microsoft.ContainerInstance/containerGroups@2023-05-01' = {
+ name: containerGroupName
+ location: location
+ properties: {
+ containers: [
+ {
+ name: containerName
+ properties: {
+ image: containerImage
+ resources: {
+ requests: {
+ cpu: 1
+ memoryInGB: json('1.5')
+ }
+ }
+ ports: [
+ {
+ protocol: 'TCP'
+ port: 80
+ }
+ ]
+ volumeMounts: [
+ {
+ name: 'filesharevolume'
+ mountPath: mountPath
+ }
+ ]
+ command: [
+ '/bin/sh'
+ '-c'
+ 'pwsh -c \'Start-Sleep -Seconds 1800\''
+ ]
+ }
+ }
+ ]
+ osType: 'Linux'
+ volumes: [
+ {
+ name: 'filesharevolume'
+ azureFile: {
+ readOnly: false
+ shareName: fileShareName
+ storageAccountName: storageAccountName
+ storageAccountKey: storageAccount.listKeys().keys[0].value
+ }
+ }
+ ]
+ }
+}
+```
+
+The default value for the mount path is `/mnt/azscripts/azscriptinput`. This is the path in the container instance where the file share is mounted.
+
+The default container image specified in the Bicep file is **mcr.microsoft.com/azuredeploymentscripts-powershell:az9.7**. See a list of all [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list).
+
+The Bicep file suspends the container instance after 1,800 seconds. You have 30 minutes before the container instance goes into a terminated state and the session ends.
+
+Use the following script to deploy the Bicep file:
+
+```azurepowershell
+$projectName = Read-Host -Prompt "Enter a project name that is used to generate resource names"
+$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
+$templateFile = Read-Host -Prompt "Enter the Bicep file path and file name"
+$resourceGroupName = "${projectName}rg"
+
+New-AzResourceGroup -Location $location -name $resourceGroupName
+New-AzResourceGroupDeployment -resourceGroupName $resourceGroupName -TemplateFile $templatefile -projectName $projectName
+```
+
+### Upload the deployment script
+
+Upload your deployment script to the storage account. Here's an example of a PowerShell script:
+
+```azurepowershell
+$projectName = Read-Host -Prompt "Enter the same project name that you used earlier"
+$fileName = Read-Host -Prompt "Enter the deployment script file name with the path"
+
+$resourceGroupName = "${projectName}rg"
+$storageAccountName = "${projectName}store"
+$fileShareName = "${projectName}share"
+
+$context = (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).Context
+Set-AzStorageFileContent -Context $context -ShareName $fileShareName -Source $fileName -Force
+```
+
+You can also upload the file by using the Azure portal or the Azure CLI.
+
+### Test the deployment script
+
+1. In the Azure portal, open the resource group where you deployed the container instance and the storage account.
+2. Open the container group. The default container group name is the project name appended with *cg*. The container instance is in the **Running** state.
+3. In the resource menu, select **Containers**. The container instance name is the project name appended with *container*.
+
+ :::image type="content" source="./media/deployment-script-bicep-configure-dev/deployment-script-container-instance-connect.png" alt-text="Screenshot of the deployment script connect container instance option in the Azure portal.":::
+
+4. Select **Connect**, and then select **Connect**. If you can't connect to the container instance, restart the container group and try again.
+5. In the console pane, run the following commands:
+
+ ```console
+ cd /mnt/azscripts/azscriptinput
+ ls
+ pwsh ./hello.ps1 "John Dole"
+ ```
+
+ The output is **Hello John Dole**.
+
+ :::image type="content" source="./media/deployment-script-bicep-configure-dev/deployment-script-container-instance-test.png" alt-text="Screenshot of the deployment script connect container instance test output displayed in the console.":::
+
+## Use an Azure CLI container instance
+
+To author Azure CLI scripts on your computer, create a storage account and mount the storage account to the container instance. Then, you can upload your script to the storage account and run the script on the container instance. The storage account that you create to test your script isn't the same storage account that the deployment script service uses to execute the script. The deployment script service creates a file share with a unique name on every execution.
+
+### Create an Azure CLI container instance
+
+The following Bicep file creates a container instance and a file share, and then mounts the file share to the container image:
+
+```bicep
+@description('Specify a project name that is used for generating resource names.')
+param projectName string
+
+@description('Specify the resource location.')
+param location string = resourceGroup().location
+
+@description('Specify the container image.')
+param containerImage string = 'mcr.microsoft.com/azure-cli:2.9.1'
+
+@description('Specify the mount path.')
+param mountPath string = '/mnt/azscripts/azscriptinput'
+
+var storageAccountName = toLower('${projectName}store')
+var fileShareName = '${projectName}share'
+var containerGroupName = '${projectName}cg'
+var containerName = '${projectName}container'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ accessTier: 'Hot'
+ }
+}
+
+resource fileshare 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-01-01' = {
+ name: '${storageAccountName}/default/${fileShareName}'
+ dependsOn: [
+ storageAccount
+ ]
+}
+
+resource containerGroup 'Microsoft.ContainerInstance/containerGroups@2023-05-01' = {
+ name: containerGroupName
+ location: location
+ properties: {
+ containers: [
+ {
+ name: containerName
+ properties: {
+ image: containerImage
+ resources: {
+ requests: {
+ cpu: 1
+ memoryInGB: json('1.5')
+ }
+ }
+ ports: [
+ {
+ protocol: 'TCP'
+ port: 80
+ }
+ ]
+ volumeMounts: [
+ {
+ name: 'filesharevolume'
+ mountPath: mountPath
+ }
+ ]
+ command: [
+ '/bin/bash'
+ '-c'
+ 'echo hello; sleep 1800'
+ ]
+ }
+ }
+ ]
+ osType: 'Linux'
+ volumes: [
+ {
+ name: 'filesharevolume'
+ azureFile: {
+ readOnly: false
+ shareName: fileShareName
+ storageAccountName: storageAccountName
+ storageAccountKey: storageAccount.listKeys().keys[0].value
+ }
+ }
+ ]
+ }
+}
+```
+
+The default value for the mount path is `/mnt/azscripts/azscriptinput`. This is the path in the container instance where the file share is mounted.
+
+The default container image specified in the Bicep file is **mcr.microsoft.com/azure-cli:2.9.1**. See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list). The deployment script uses the available CLI images from Microsoft Container Registry (MCR). It takes about one month to certify a CLI image for a deployment script. Don't use CLI versions that were released within the last 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli). If you use an unsupported version, the error message lists the supported versions.
+
+The Bicep file suspends the container instance after 1,800 seconds. You have 30 minutes before the container instance goes into a terminated state and the session ends.
+
+To deploy the Bicep file:
+
+```azurepowershell
+$projectName = Read-Host -Prompt "Enter a project name that is used to generate resource names"
+$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
+$templateFile = Read-Host -Prompt "Enter the Bicep file path and file name"
+$resourceGroupName = "${projectName}rg"
+
+New-AzResourceGroup -Location $location -name $resourceGroupName
+New-AzResourceGroupDeployment -resourceGroupName $resourceGroupName -TemplateFile $templatefile -projectName $projectName
+```
+
+### Upload the deployment script
+
+Upload your deployment script to the storage account. The following is a PowerShell example:
+
+```azurepowershell
+$projectName = Read-Host -Prompt "Enter the same project name that you used earlier"
+$fileName = Read-Host -Prompt "Enter the deployment script file name with the path"
+
+$resourceGroupName = "${projectName}rg"
+$storageAccountName = "${projectName}store"
+$fileShareName = "${projectName}share"
+
+$context = (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).Context
+Set-AzStorageFileContent -Context $context -ShareName $fileShareName -Source $fileName -Force
+```
+
+You also can upload the file by using the Azure portal or the Azure CLI.
+
+### Test the deployment script
+
+1. In the Azure portal, open the resource group where you deployed the container instance and the storage account.
+1. Open the container group. The default container group name is the project name appended with *cg*. The container instance is shown in the **Running** state.
+1. In the resource menu, select **Containers**. The container instance name is the project name appended with *container*.
+
+ :::image type="content" source="./media/deployment-script-bicep-configure-dev/deployment-script-container-instance-connect.png" alt-text="Screenshot of the deployment script connect container instance option in the Azure portal.":::
+
+1. Select **Connect**, and then select **Connect**. If you can't connect to the container instance, restart the container group and try again.
+1. In the console pane, run the following commands:
+
+ ```console
+ cd /mnt/azscripts/azscriptinput
+ ls
+ ./hello.sh John Dole
+ ```
+
+ The output is **Hello John Dole**.
+
+ :::image type="content" source="./media/deployment-script-bicep-configure-dev/deployment-script-container-instance-test-cli.png" alt-text="Screenshot of the deployment script container instance test output displayed in the console.":::
+
+## Use Docker
+
+You can use a pre-configured Docker container image as your deployment script development environment. To install Docker, see [Get Docker](https://docs.docker.com/get-docker/).
+You also need to configure file sharing to mount the directory that contains the deployment scripts into the Docker container.
+
+1. Pull the deployment script container image to the local computer:
+
+ ```command
+ docker pull mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3
+ ```
+
+ The example uses Azure PowerShell version 4.3.0.
+
+ To pull an Azure CLI image from MCR:
+
+ ```command
+ docker pull mcr.microsoft.com/azure-cli:2.0.80
+ ```
+
+ This example uses Azure CLI version 2.0.80. The deployment script uses the default CLI container images found [here](https://hub.docker.com/_/microsoft-azure-cli).
+
+1. Run the Docker image locally.
+
+ ```command
+ docker run -v <host drive letter>:/<host directory name>:/data -it mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3
+ ```
+
+ Replace **&lt;host drive letter>** and **&lt;host directory name>** with an existing folder on the shared drive. It maps the folder to the _/data_ folder in the container. For example, to map _D:\docker_:
+
+ ```command
+ docker run -v d:/docker:/data -it mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3
+ ```
+
+ The **-it** flag keeps the container running interactively.
+
+ A CLI example:
+
+ ```command
+ docker run -v d:/docker:/data -it mcr.microsoft.com/azure-cli:2.0.80
+ ```
+
+1. The following screenshot shows how to run a PowerShell script, given that you have a *helloworld.ps1* file in the shared drive.
+
+ :::image type="content" source="./medi.png" alt-text="Screenshot of the Resource Manager template deployment script using Docker command.":::
+
+After the script is tested successfully, you can use it as a deployment script in your Bicep files.
+
+## Next steps
+
+In this article, you learned how to use deployment scripts. To walk through a deployment script tutorial:
+
+> [!div class="nextstepaction"]
+> [Use deployment scripts in Bicep](./deployment-script-bicep.md)
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Title: Use deployment scripts in Bicep | Microsoft Docs
description: use deployment scripts in Bicep. Previously updated : 10/04/2023 Last updated : 11/02/2023 # Use deployment scripts in Bicep
For deployment script API version 2020-10-01 or later, there are two principals
- **Deployment script principal**: This principal is only required if the deployment script needs to authenticate to Azure and call Azure CLI/PowerShell. There are two ways to specify the deployment script principal:
- - Specify a [user-assigned managed identity]() in the `identity` property (see [Sample Bicep files](#sample-bicep-files)). When specified, the script service calls `Connect-AzAccount -Identity` before invoking the deployment script. The managed identity must have the required access to complete the operation in the script. Currently, only user-assigned managed identity is supported for the `identity` property. To login with a different identity, use the second method in this list.
+ - Specify a [user-assigned managed identity]() in the `identity` property (see [Sample Bicep files](#sample-bicep-files)). When specified, the script service calls `Connect-AzAccount -Identity` before invoking the deployment script. The managed identity must have the required access to complete the operation in the script. Currently, only user-assigned managed identity is supported for the `identity` property. To log in with a different identity, use the second method in this list.
- Pass the service principal credentials as secure environment variables, and then can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) or [az login](/cli/azure/reference-index#az-login) in the deployment script.
- If a managed identity is used, the deployment principal needs the **Managed Identity Operator** role (a built-in role) assigned to the managed identity resource.
+ If a managed identity is used, the deployment principal needs the **Managed Identity Operator** role (a built-in role) assigned to the managed identity resource.
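For the second option, a hedged sketch of the sign-in portion of a PowerShell deployment script might look like the following; the environment variable names are placeholders and must match the secure environment variables you define on the `deploymentScripts` resource:

```azurepowershell
# Hypothetical secure environment variables: ServicePrincipalId, ServicePrincipalSecret, TenantId.
$secret = ConvertTo-SecureString -String $env:ServicePrincipalSecret -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($env:ServicePrincipalId, $secret)

# Sign in as the service principal before calling other Az cmdlets in the script.
Connect-AzAccount -ServicePrincipal -Credential $credential -Tenant $env:TenantId
```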
## Sample Bicep files
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
Property value details: -- <a id='identity'></a>`identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script or running deployment script in private network. For more information, see [Access private virtual network](#access-private-virtual-network). For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To login with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
+- <a id='identity'></a>`identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script or running deployment script in private network. For more information, see [Access private virtual network](#access-private-virtual-network). For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To log in with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
- `tags`: Deployment script tags. If the deployment script service generates a storage account and a container instance, the tags are passed to both resources, which can be used to identify them. Another way to identify these resources is through their suffixes, which contain "azscripts". For more information, see [Monitor and troubleshoot deployment scripts](#monitor-and-troubleshoot-deployment-scripts). - `kind`: Specify the type of script. Currently, Azure PowerShell and Azure CLI scripts are supported. The values are **AzurePowerShell** and **AzureCLI**. - `forceUpdateTag`: Changing this value between Bicep file deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once).
Supporting script files can be called from both inline scripts and primary scrip
The supporting files are copied to `azscripts/azscriptinput` at the runtime. Use relative path to reference the supporting files from inline scripts and primary script files.
-## Work with outputs from PowerShell script
+## Work with outputs from PowerShell scripts
The following Bicep file shows how to pass values between two `deploymentScripts` resources:
The following Bicep file shows how to pass values between two `deploymentScripts
In the first resource, you define a variable called `$DeploymentScriptOutputs`, and use it to store the output values. Use resource symbolic name to access the output values.
-## Work with outputs from CLI script
+## Work with outputs from CLI scripts
-Different from the PowerShell deployment script, CLI/bash support doesn't expose a common variable to store script outputs, instead, there's an environment variable called `AZ_SCRIPTS_OUTPUT_PATH` that stores the location where the script outputs file resides. If a deployment script is run from a Bicep file, this environment variable is set automatically for you by the Bash shell. The value of `AZ_SCRIPTS_OUTPUT_PATH` is */mnt/azscripts/azscriptoutput/scriptoutputs.json*.
+In contrast to the Azure PowerShell deployment scripts, CLI/bash doesn't expose a common variable for storing script outputs. Instead, it uses an environment variable named `AZ_SCRIPTS_OUTPUT_PATH` to indicate the location of the script outputs file. When executing a deployment script within a Bicep file, the Bash shell automatically configures this environment variable for you. Its predefined value is set as */mnt/azscripts/azscriptoutput/scriptoutputs.json*. The outputs are required to conform to a valid JSON string object structure. The file's contents should be formatted as a key-value pair. For instance, an array of strings should be saved as `{ "MyResult": [ "foo", "bar"] }`. Storing only the array results, such as `[ "foo", "bar" ]`, is considered invalid.
-Deployment script outputs must be saved in the `AZ_SCRIPTS_OUTPUT_PATH` location, and the outputs must be a valid JSON string object. The contents of the file must be saved as a key-value pair. For example, an array of strings is stored as `{ "MyResult": [ "foo", "bar"] }`. Storing just the array results, for example `[ "foo", "bar" ]`, is invalid.
- [jq](https://stedolan.github.io/jq/) is used in the previous sample. It comes with the container images. See [Configure development environment](#configure-development-environment).
+In the preceding Bicep sample, a storage account is created and configured to be used by the deployment script. This is necessary for storing the script output. An alternative solution, without specifying your own storage account, involves setting `cleanupPreference` to `OnExpiration` and configuring `retentionInterval` for a duration that allows ample time for reviewing the outputs before the storage account is removed.
+ ## Use existing storage account Two supporting resources, a storage account and a container instance, are needed for script execution and troubleshooting. You have the options to specify an existing storage account, otherwise the storage account along with the container instance are automatically created by the script service. The requirements for using an existing storage account:
Deployment script uses these environment variables:
|AZ_SCRIPTS_PATH_EXECUTION_RESULTS_FILE_NAME|executionresult.json|Y| |AZ_SCRIPTS_USER_ASSIGNED_IDENTITY|/subscriptions/|N|
-For more information about using `AZ_SCRIPTS_OUTPUT_PATH`, see [Work with outputs from CLI script](#work-with-outputs-from-cli-script).
+For more information about using `AZ_SCRIPTS_OUTPUT_PATH`, see [Work with outputs from CLI script](#work-with-outputs-from-cli-scripts).
### Pass secured strings to deployment script
The list command output is similar to:
```json [ {
- "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "9.7",
- "cleanupPreference": "OnSuccess",
+ "arguments": "'foo' 'bar'",
+ "azCliVersion": "2.40.0",
+ "cleanupPreference": "OnExpiration",
"containerSettings": { "containerGroupName": null }, "environmentVariables": null,
- "forceUpdateTag": "20220625T025902Z",
- "id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runPowerShellInlineWithOutput",
+ "forceUpdateTag": "20231101T163748Z",
+ "id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runBashWithOutputs",
"identity": { "tenantId": "01234567-89AB-CDEF-0123-456789ABCDEF", "type": "userAssigned", "userAssignedIdentities": {
- "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myidentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami": {
+ "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourcegroups/myidentity/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami": {
"clientId": "01234567-89AB-CDEF-0123-456789ABCDEF", "principalId": "01234567-89AB-CDEF-0123-456789ABCDEF" } } },
- "kind": "AzurePowerShell",
+ "kind": "AzureCLI",
"location": "centralus",
- "name": "runPowerShellInlineWithOutput",
+ "name": "runBashWithOutputs",
"outputs": {
- "text": "Hello John Dole"
+ "Result": [
+ {
+ "id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mytest/providers/Microsoft.KeyVault/vaults/mykv1027",
+ "resourceGroup": "mytest"
+ }
+ ]
}, "primaryScriptUri": null, "provisioningState": "Succeeded",
- "resourceGroup": "myds0624rg",
+ "resourceGroup": "mytest",
"retentionInterval": "1 day, 0:00:00",
- "scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ",
+ "scriptContent": "result=$(az keyvault list); echo \"arg1 is: $1\"; echo $result | jq -c '{Result: map({id: .id})}' > $AZ_SCRIPTS_OUTPUT_PATH",
"status": {
- "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2023-05-11T03:00:16.796923+00:00",
+ "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mytest/providers/Microsoft.ContainerInstance/containerGroups/eg6n7wvuyxn7iazscripts",
+ "endTime": "2023-11-01T16:39:12.080950+00:00",
"error": null,
- "expirationTime": "2023-05-12T03:00:16.796923+00:00",
- "startTime": "2023-05-11T02:59:07.595140+00:00",
- "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts"
+ "expirationTime": "2023-11-02T16:39:12.080950+00:00",
+ "startTime": "2023-11-01T16:37:53.139700+00:00",
+ "storageAccountId": null
+ },
+ "storageAccountSettings": {
+ "storageAccountKey": null,
+ "storageAccountName": "dsfruro267qwb4i"
},
- "storageAccountSettings": null,
"supportingScriptUris": null, "systemData": {
- "createdAt": "2023-05-11T02:59:04.750195+00:00",
+ "createdAt": "2023-10-31T19:06:57.060909+00:00",
"createdBy": "someone@contoso.com", "createdByType": "User",
- "lastModifiedAt": "2023-05-11T02:59:04.750195+00:00",
+ "lastModifiedAt": "2023-11-01T16:37:51.859570+00:00",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User" }, "tags": null,
- "timeout": "1:00:00",
+ "timeout": "0:30:00",
"type": "Microsoft.Resources/deploymentScripts" } ]
The two automatically created supporting resources can never outlive the `deploy
- `cleanupPreference`: Specify the clean-up preference of the two supporting resources when the script execution gets in a terminal state. The supported values are:
- - **Always**: Delete the two supporting resources once script execution gets in a terminal state. If an existing storage account is used, the script service deletes the file share created by the service. Because the `deploymentScripts` resource may still be present after the supporting resources are cleaned up, the script service persists the script execution results, for example, stdout, outputs, and return value before the resources are deleted.
+ - **Always**: Delete the two supporting resources once script execution gets in a terminal state. If an existing storage account is used, the script service deletes the file share created by the service. Because the `deploymentScripts` resource might still be present after the supporting resources are cleaned up, the script service persists the script execution results, for example, stdout, outputs, and return value before the resources are deleted.
- **OnSuccess**: Delete the two supporting resources only when the script execution is successful. If an existing storage account is used, the script service removes the file share only when the script execution is successful. If the script execution is not successful, the script service waits until the `retentionInterval` expires before it cleans up the supporting resources and then the deployment script resource.
azure-resource-manager User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-functions.md
Title: User-defined functions in Bicep
description: Describes how to define and use user-defined functions in Bicep. Previously updated : 09/13/2023 Last updated : 11/02/2023 # User-defined functions in Bicep (Preview)
To enable this preview, modify your project's [bicepconfig.json](./bicep-config.
} ```
+## Limitations
+
+When defining a user function, there are some restrictions:
+
+* The function can't access variables.
+* The function can only use parameters that are defined in the function.
+* The function can't use the [reference](bicep-functions-resource.md#reference) function or any of the [list](bicep-functions-resource.md#list) functions.
+* Parameters for the function can't have default values.
+ ## Define the function Use the `func` statement to define user-defined functions.
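For example, a function that takes a name and returns a greeting can be defined in a single expression. This sketch mirrors the `sayHelloString` function referenced later in this article; the greeting text is illustrative:

```bicep
// A user-defined function: name, typed parameters, return type, and a single expression.
func sayHelloString(name string) string => 'Hi ${name}!'
```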
The outputs from the preceding examples are:
| nameArray | Array | ["John"] | | addNameArray | Array | ["Mary","Bob","John"] |
+With [Bicep version 0.23 or newer](./install.md), you can invoke another user-defined function from within a user-defined function. Given the preceding definition of `sayHelloString`, you can redefine the `sayHelloObject` function as:
+
+```bicep
+func sayHelloObject(name string) object => {
+ hello: sayHelloString(name)
+}
+```
+ User-defined functions support using [user-defined data types](./user-defined-data-types.md). For example: ```bicep
The output from the preceding example is:
| - | - | -- | | elements | positiveInt | 3 |
-## Limitations
-
-When defining a user function, there are some restrictions:
-
-* The function can't access variables.
-* The function can only use parameters that are defined in the function.
-* The function can't use the [reference](bicep-functions-resource.md#reference) function or any of the [list](bicep-functions-resource.md#list) functions.
-* Parameters for the function can't have default values.
- ## Next steps * To learn about the Bicep file structure and syntax, see [Understand the structure and syntax of Bicep files](./file.md).
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-cli.md
For more information, see [Azure Resource Manager template specs](template-specs
## Preview changes
-Before deploying your ARM template, you can preview the changes the template will make to your environment. Use the [what-if operation](./deploy-what-if.md) to verify that the template makes the changes that you expect. What-if also validates the template for errors.
+Before deploying your ARM template, you can preview the changes the template makes to your environment. Use the [what-if operation](./deploy-what-if.md) to verify that the template makes the changes that you expect. What-if also validates the template for errors.
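For example, a what-if run against a local template might look like the following sketch; the resource group, template file, and parameter names are placeholders:

```azurecli-interactive
az deployment group what-if \
  --resource-group ExampleGroup \
  --template-file storage.json \
  --parameters storageAccountType=Standard_GRS
```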
## Parameters
The _arrayContent.json_ format is:
```json [
- "value1",
- "value2"
+ "value1",
+ "value2"
] ``` To pass in an object, for example, to set tags, use JSON. For example, your template might include a parameter like this one: ```json
- "resourceTags": {
- "type": "object",
- "defaultValue": {
- "Cost Center": "IT Department"
- }
- }
+"resourceTags": {
+ "type": "object",
+ "defaultValue": {
+ "Cost Center": "IT Department"
+ }
+}
``` In this case, you can pass in a JSON string to set the parameter as shown in the following Bash script:
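The original snippet is elided here; a representative sketch (placeholder deployment, resource group, and template names, reusing the `Cost Center` tag from the parameter definition above) might look like:

```azurecli-interactive
tags='{"Cost Center":"IT Department"}'

az deployment group create \
  --name ExampleDeployment \
  --resource-group ExampleGroup \
  --template-file storage.json \
  --parameters resourceTags="$tags"
```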
However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShe
Rather than passing parameters as inline values in your script, you might find it easier to use a parameters file, either a `.bicepparam` file or a JSON parameters file, that contains the parameter values. The parameters file must be a local file. External parameters files aren't supported with Azure CLI.
-To pass a local parameter file, use `@` to specify a local file named _storage.parameters.json_.
- ```azurecli-interactive az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \ --template-file storage.json \
- --parameters '@storage.parameters.json'
+ --parameters 'storage.parameters.json'
``` For more information about the parameter file, see [Create Resource Manager parameter file](./parameter-files.md). ### Bicep parameter files
-With Azure CLI version 2.53.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there is no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch will result in an "Only a .bicep template is allowed with a .bicepparam file" error.
+With Azure CLI version 2.53.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by using a Bicep parameter file. With the `using` statement within the Bicep parameters file, there's no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch results in an "Only a .bicep template is allowed with a .bicepparam file" error.
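As a hedged sketch, a Bicep parameters file with the `using` statement might look like the following; the template path and parameter names are hypothetical:

```bicep
// The `using` statement points at the template to deploy, so --template-file isn't needed.
using './storage.bicep'

param storagePrefix = 'stg'
param storageAccountType = 'Standard_LRS'
```

The deployment command then only needs the parameters file: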
```azurecli-interactive az deployment group create \
az deployment group create \
The parameters file must be a local file. External parameters files aren't supported with Azure CLI. For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md). - ## Comments and the extended JSON format You can include `//` style comments in your parameter file, but you must name the file with a `.jsonc` extension.
az deployment group create \
--template-file storage.json \ --parameters '@storage.parameters.jsonc' ```
-For more details about comments and metadata see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata).
+For more details about comments and metadata, see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata).
If you are using Azure CLI with version 2.3.0 or older, you can deploy a template with multi-line strings or comments using the `--handle-extended-json-format` switch. For example:
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-powershell.md
Before deploying your template, you can preview the changes the template will ma
## Pass parameter values
-To pass parameter values, you can use either inline parameters or a parameter file.
+To pass parameter values, you can use either inline parameters or a parameters file. The parameter file can be either a [Bicep parameters file](#bicep-parameter-files) or a [JSON parameters file](#json-parameter-files).
### Inline parameters
New-AzResourceGroupDeployment -ResourceGroupName testgroup `
-exampleArray $subnetArray ```
-### Parameter files
+### JSON parameter files
Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file can be a local file or an external file with an accessible URI.
For more information about the parameter file, see [Create Resource Manager para
To pass a local parameter file, use the `TemplateParameterFile` parameter: ```powershell
-New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
-TemplateFile <path-to-template> ` -TemplateParameterFile c:\MyTemplates\storage.parameters.json ```
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example
To pass an external parameter file, use the `TemplateParameterUri` parameter: ```powershell
-New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
-TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json ` -TemplateParameterUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.parameters.json ```
+For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
+
+### Bicep parameter files
+
+With Azure PowerShell version 10.4.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy an ARM template file by using a [Bicep parameter file](../bicep/parameter-files.md). With the `using` statement within the Bicep parameters file, there's no need to provide the `-TemplateFile` switch when specifying a Bicep parameter file for the `-TemplateParameterFile` switch.
+
+The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run.
+
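As a hedged sketch (the template path and parameter name are hypothetical), _storage.bicepparam_ might contain:

```bicep
// Hypothetical contents: the `using` statement points at the ARM template to deploy,
// so -TemplateFile isn't required on the command.
using './storage.json'

param storageAccountType = 'Standard_LRS'
```

The deployment command then references only the parameters file: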
+```powershell
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
+ -TemplateParameterFile storage.bicepparam
+```
+
+For more information about Bicep parameters files, see [Bicep parameters file](../bicep/parameter-files.md).
+ ## Next steps - To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](rollback-on-error.md).
azure-resource-manager Deployment Script Template Configure Dev https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template-configure-dev.md
echo $OUTPUT | jq -r '.name.displayName'
``` > [!NOTE]
-> When you run an Azure CLI deployment script, an environment variable called `AZ_SCRIPTS_OUTPUT_PATH` stores the location of the script output file. The environment variable isn't available in the development environment container. For more information about working with Azure CLI outputs, see [Work with outputs from CLI script](deployment-script-template.md#work-with-outputs-from-cli-script).
+> When you run an Azure CLI deployment script, an environment variable called `AZ_SCRIPTS_OUTPUT_PATH` stores the location of the script output file. The environment variable isn't available in the development environment container. For more information about working with Azure CLI outputs, see [Work with outputs from CLI scripts](deployment-script-template.md#work-with-outputs-from-cli-scripts).
## Use Azure PowerShell container instance
The following Azure Resource Manager template (ARM template) creates a container
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2022-09-01",
+ "apiVersion": "2023-01-01",
"name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "sku": {
The following Azure Resource Manager template (ARM template) creates a container
}, { "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
- "apiVersion": "2022-09-01",
+ "apiVersion": "2023-01-01",
"name": "[format('{0}/default/{1}', variables('storageAccountName'), variables('fileShareName'))]", "dependsOn": [ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
The following Azure Resource Manager template (ARM template) creates a container
"readOnly": false, "shareName": "[variables('fileShareName')]", "storageAccountName": "[variables('storageAccountName')]",
- "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2022-09-01').keys[0].value]"
+ "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2023-01-01').keys[0].value]"
} } ] }, "dependsOn": [
- "storageAccount"
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
] } ]
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Supporting script files can be called from both inline scripts and primary scrip
The supporting files are copied to `azscripts/azscriptinput` at runtime. Use relative paths to reference the supporting files from inline scripts and primary script files.
-## Work with outputs from PowerShell script
+## Work with outputs from PowerShell scripts
The following template shows how to pass values between two `deploymentScripts` resources:
In the first resource, you define a variable called `$DeploymentScriptOutputs`,
reference('<ResourceName>').outputs.text ```
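As a minimal sketch (the greeting text is illustrative), the script in the first resource populates the hashtable like this:

```powershell
# $DeploymentScriptOutputs is a hashtable; each key becomes an output of the
# deploymentScripts resource that later resources can read, such as outputs.text.
$output = 'Hello world'
$DeploymentScriptOutputs = @{}
$DeploymentScriptOutputs['text'] = $output
```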
-## Work with outputs from CLI script
+## Work with outputs from CLI scripts
-Different from the PowerShell deployment script, CLI/bash support doesn't expose a common variable to store script outputs, instead, there's an environment variable called `AZ_SCRIPTS_OUTPUT_PATH` that stores the location where the script outputs file resides. If a deployment script is run from a Resource Manager template, this environment variable is set automatically for you by the Bash shell. The value of `AZ_SCRIPTS_OUTPUT_PATH` is */mnt/azscripts/azscriptoutput/scriptoutputs.json*.
-
-Deployment script outputs must be saved in the `AZ_SCRIPTS_OUTPUT_PATH` location, and the outputs must be a valid JSON string object. The contents of the file must be saved as a key-value pair. For example, an array of strings is stored as `{ "MyResult": [ "foo", "bar"] }`. Storing just the array results, for example `[ "foo", "bar" ]`, is invalid.
+In contrast to the Azure PowerShell deployment scripts, CLI/bash doesn't expose a common variable for storing script outputs. Instead, it uses an environment variable named `AZ_SCRIPTS_OUTPUT_PATH` to indicate the location of the script outputs file. When a deployment script runs from a Resource Manager template, the Bash shell configures this environment variable for you automatically. Its predefined value is */mnt/azscripts/azscriptoutput/scriptoutputs.json*. The outputs must be a valid JSON string object, and the file's contents must be saved as key-value pairs. For instance, an array of strings is stored as `{ "MyResult": [ "foo", "bar"] }`. Storing only the array results, such as `[ "foo", "bar" ]`, is invalid.
:::code language="json" source="~/resourcemanager-templates/deployment-script/deploymentscript-basic-cli.json" range="1-44" highlight="32":::
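As a hedged sketch (the command and the `MyResult` key are illustrative), a CLI deployment script can write its outputs like this:

```bash
# Write a JSON object of key-value pairs to the path the service provides
# in AZ_SCRIPTS_OUTPUT_PATH.
names=$(az keyvault list --query "[].name" --output json)
echo "{ \"MyResult\": $names }" > "$AZ_SCRIPTS_OUTPUT_PATH"
```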
Deployment script uses these environment variables:
|AZ_SCRIPTS_PATH_EXECUTION_RESULTS_FILE_NAME|executionresult.json|Y| |AZ_SCRIPTS_USER_ASSIGNED_IDENTITY|/subscriptions/|N|
-For more information about using `AZ_SCRIPTS_OUTPUT_PATH`, see [Work with outputs from CLI script](#work-with-outputs-from-cli-script).
+For more information about using `AZ_SCRIPTS_OUTPUT_PATH`, see [Work with outputs from CLI scripts](#work-with-outputs-from-cli-scripts).
### Pass secured strings to deployment script
The list command output is similar to:
```json [ {
- "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "9.7",
- "cleanupPreference": "OnSuccess",
+ "arguments": "'foo' 'bar'",
+ "azCliVersion": "2.40.0",
+ "cleanupPreference": "OnExpiration",
"containerSettings": { "containerGroupName": null }, "environmentVariables": null,
- "forceUpdateTag": "20230511T025902Z",
- "id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runPowerShellInlineWithOutput",
+ "forceUpdateTag": "20231101T163748Z",
+ "id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runBashWithOutputs",
"identity": { "tenantId": "01234567-89AB-CDEF-0123-456789ABCDEF", "type": "userAssigned", "userAssignedIdentities": {
- "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myidentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami": {
+ "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourcegroups/myidentity/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami": {
"clientId": "01234567-89AB-CDEF-0123-456789ABCDEF", "principalId": "01234567-89AB-CDEF-0123-456789ABCDEF" } } },
- "kind": "AzurePowerShell",
+ "kind": "AzureCLI",
"location": "centralus",
- "name": "runPowerShellInlineWithOutput",
+ "name": "runBashWithOutputs",
"outputs": {
- "text": "Hello John Dole"
+ "Result": [
+ {
+ "id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mytest/providers/Microsoft.KeyVault/vaults/mykv1027",
+ "resourceGroup": "mytest"
+ }
+ ]
}, "primaryScriptUri": null, "provisioningState": "Succeeded",
- "resourceGroup": "myds0624rg",
+ "resourceGroup": "mytest",
"retentionInterval": "1 day, 0:00:00",
- "scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ",
+ "scriptContent": "result=$(az keyvault list); echo \"arg1 is: $1\"; echo $result | jq -c '{Result: map({id: .id})}' > $AZ_SCRIPTS_OUTPUT_PATH",
"status": {
- "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2023-05-11T03:00:16.796923+00:00",
+ "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mytest/providers/Microsoft.ContainerInstance/containerGroups/eg6n7wvuyxn7iazscripts",
+ "endTime": "2023-11-01T16:39:12.080950+00:00",
"error": null,
- "expirationTime": "2023-05-12T03:00:16.796923+00:00",
- "startTime": "2023-05-11T02:59:07.595140+00:00",
- "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts"
+ "expirationTime": "2023-11-02T16:39:12.080950+00:00",
+ "startTime": "2023-11-01T16:37:53.139700+00:00",
+ "storageAccountId": null
+ },
+ "storageAccountSettings": {
+ "storageAccountKey": null,
+ "storageAccountName": "dsfruro267qwb4i"
},
- "storageAccountSettings": null,
"supportingScriptUris": null, "systemData": {
- "createdAt": "2023-05-11T02:59:04.750195+00:00",
+ "createdAt": "2023-10-31T19:06:57.060909+00:00",
"createdBy": "someone@contoso.com", "createdByType": "User",
- "lastModifiedAt": "2023-05-11T02:59:04.750195+00:00",
+ "lastModifiedAt": "2023-11-01T16:37:51.859570+00:00",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User" }, "tags": null,
- "timeout": "1:00:00",
+ "timeout": "0:30:00",
"type": "Microsoft.Resources/deploymentScripts" } ]
The two automatically created supporting resources can never outlive the `deploy
- `cleanupPreference`: Specify the clean-up preference of the two supporting resources when the script execution gets in a terminal state. The supported values are:
- - **Always**: Delete the two supporting resources once script execution gets in a terminal state. If an existing storage account is used, the script service deletes the file share created by the service. Because the `deploymentScripts` resource may still be present after the supporting resources are cleaned up, the script service persists the script execution results, for example, stdout, outputs, and return value before the resources are deleted.
+ - **Always**: Delete the two supporting resources once script execution gets in a terminal state. If an existing storage account is used, the script service deletes the file share created by the service. Because the `deploymentScripts` resource might still be present after the supporting resources are cleaned up, the script service persists the script execution results, for example, stdout, outputs, and return value before the resources are deleted.
- **OnSuccess**: Delete the two supporting resources only when the script execution is successful. If an existing storage account is used, the script service removes the file share only when the script execution is successful. If the script execution isn't successful, the script service waits until the `retentionInterval` expires before it cleans up the supporting resources and then the deployment script resource.
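As a hedged fragment (the API version and script body are illustrative, and the identity and other properties shown earlier are omitted), `cleanupPreference` is set in the `properties` of the `deploymentScripts` resource:

```json
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "runBashWithOutputs",
  "location": "[resourceGroup().location]",
  "kind": "AzureCLI",
  "properties": {
    "azCliVersion": "2.40.0",
    "scriptContent": "echo '{ \"MyResult\": \"done\" }' > $AZ_SCRIPTS_OUTPUT_PATH",
    "retentionInterval": "P1D",
    "cleanupPreference": "OnSuccess"
  }
}
```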
azure-resource-manager Template Tutorial Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
Title: Use template deployment scripts | Microsoft Docs description: Learn how to use deployment scripts in Azure Resource Manager templates (ARM templates).----- Previously updated : 09/28/2022 - Last updated : 09/28/2022 # Tutorial: Use deployment scripts to create a self-signed certificate
The deployment script adds a certificate to the key vault. Configure the key vau
* `timeout`: Specify the maximum allowed script execution time specified in the [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). Default value is **P1D**. * `arguments`: Specify the parameter values. The values are separated by spaces. * `scriptContent`: Specify the script content. To run an external script, use `primaryScriptURI` instead. For more information, see [Use external script](./deployment-script-template.md#use-external-scripts).
- Declaring `$DeploymentScriptOutputs` is only required when testing the script on a local machine. Declaring the variable allows the script to be run on a local machine and in a `deploymentScript` resource without having to make changes. The value assigned to `$DeploymentScriptOutputs` is available as outputs in the deployments. For more information, see [Work with outputs from PowerShell deployment scripts](./deployment-script-template.md#work-with-outputs-from-powershell-script) or [Work with outputs from CLI deployment scripts](./deployment-script-template.md#work-with-outputs-from-cli-script).
+ Declaring `$DeploymentScriptOutputs` is only required when testing the script on a local machine. Declaring the variable allows the script to be run on a local machine and in a `deploymentScript` resource without having to make changes. The value assigned to `$DeploymentScriptOutputs` is available as outputs in the deployments. For more information, see [Work with outputs from PowerShell deployment scripts](./deployment-script-template.md#work-with-outputs-from-powershell-scripts) or [Work with outputs from CLI deployment scripts](./deployment-script-template.md#work-with-outputs-from-cli-scripts).
* `cleanupPreference`: Specify the preference on when to delete the deployment script resources. The default value is **Always**, which means the deployment script resources are deleted despite the terminal state (Succeeded, Failed, Canceled). In this tutorial, **OnSuccess** is used so that you get a chance to view the script execution results. * `retentionInterval`: Specify the interval for which the service retains the script resources after it reaches a terminal state. Resources will be deleted when this duration expires. Duration is based on ISO 8601 pattern. This tutorial uses **P1D**, which means one day. This property is used when `cleanupPreference` is set to **OnExpiration**. This property isn't enabled currently. The deployment script takes three parameters: `keyVaultName`, `certificateName`, and `subjectName`. It creates a certificate, and then adds the certificate to the key vault.
- `$DeploymentScriptOutputs` is used to store output value. To learn more, see [Work with outputs from PowerShell deployment scripts](./deployment-script-template.md#work-with-outputs-from-powershell-script) or [Work with outputs from CLI deployment scripts](./deployment-script-template.md#work-with-outputs-from-cli-script).
+ `$DeploymentScriptOutputs` is used to store output value. To learn more, see [Work with outputs from PowerShell deployment scripts](./deployment-script-template.md#work-with-outputs-from-powershell-scripts) or [Work with outputs from CLI deployment scripts](./deployment-script-template.md#work-with-outputs-from-cli-scripts).
The completed template can be found [here](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault.json).
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Azure NetApp Files datastores for Azure VMware Solution are currently supported
* Switzerland West * UK South * UK West
+* US Gov Arizona
+* US Gov Virginia
* West Europe * West US * West US 2
backup Sap Hana Database Instance Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instance-troubleshoot.md
Title: Troubleshoot SAP HANA databases instance backup errors description: This article describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA database instances. Previously updated : 10/05/2022 Last updated : 11/02/2023
Azure VM and retry the operation. For more information, see the [Azure workload
**Recommended action**: Upgrade the VM or use a compatible target VM for restore. For more information, see the [SAP HANA database backup troubleshooting article](https://aka.ms/HANASnapshotTSGuide).
+### UserErrorSnasphotRestoreContextMissingForDBRecovery
+
+**Error message**: Snapshot based point in time restore operation could not be started because one of the previous restore steps is not complete
+
+**Cause**: Snapshot attach and mount or SystemDB recovery isn't done on the target VM.
+
+**Recommended action**: Retry the operation after completing a snapshot attach and mount operation on the target machine.
+
+### UserErrorInvalidScenarioForSnapshotPointInTimeRecovery
+
+**Cause**: The snapshot point-in-time restore operation failed as the underlying database on the target machine is protected with Azure Backup.
+
+**Recommended action**: Retry the restore operation after you stop protection of the databases on the target machine and ensure that the *Backint path is empty*. [Learn more about Backint path](https://aka.ms/HANABackupConfigurations).
+ ## Appendix **Perform restoration actions in SAP HANA studio**
backup Sap Hana Database Instances Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-backup.md
This article describes how to back up SAP HANA database instances that are runni
Azure Backup now performs an SAP HANA storage snapshot-based backup of an entire database instance. Backup combines an Azure managed disk full or incremental snapshot with HANA snapshot commands to provide instant HANA backup and restore. +
+>[!Note]
+>- Currently, the snapshots are stored in your storage account (operational tier) and aren't stored in the Recovery Services vault. Thus, vault features, such as cross-region restore, cross-subscription restore, and security capabilities, aren't supported.
+>- Original Location Restore (OLR) isn't supported.
+>- HANA System Replication (HSR) isn't supported.
+>- For pricing, per SAP advisory, you must run a weekly full backup plus log streaming/Backint-based backups, so the existing protected instance fee and storage cost still apply. For snapshot backup, the snapshot data created by Azure Backup is saved in your storage account and incurs snapshot storage charges. So, in addition to streaming/Backint backup charges, you're charged per GB of data stored in your snapshots, which is billed separately. Learn more about [Snapshot pricing](https://azure.microsoft.com/pricing/details/managed-disks/) and [Streaming/Backint based backup pricing](https://azure.microsoft.com/pricing/details/backup/?ef_id=_k_CjwKCAjwp8OpBhAFEiwAG7NaEsaFZUxIBD-FH1IUIfF-7yZRWAYJSMHP67InGf0drY0X2Km71KOKDBoCktgQAvD_BwE_k_&OCID=AIDcmmf1elj9v5_SEM__k_CjwKCAjwp8OpBhAFEiwAG7NaEsaFZUxIBD-FH1IUIfF-7yZRWAYJSMHP67InGf0drY0X2Km71KOKDBoCktgQAvD_BwE_k_&gclid=CjwKCAjwp8OpBhAFEiwAG7NaEsaFZUxIBD-FH1IUIfF-7yZRWAYJSMHP67InGf0drY0X2Km71KOKDBoCktgQAvD_BwE).
++ For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). ## Before you start
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 10/12/2023 Last updated : 11/02/2023
When scheduling a task on Batch nodes, you can choose whether to run it with tas
A [compute node](nodes-and-pools.md#nodes) is an Azure virtual machine (VM) or cloud service VM that is dedicated to processing a portion of your application's workload. Follow these guidelines when working with nodes.
-### Idempotent start tasks
+### Start tasks: lifetime and idempotency
-As with other tasks, the node [start task](jobs-and-tasks.md#start-task) should be idempotent, as it will be rerun every time the node boots. An idempotent task is simply one that produces a consistent result when run multiple times.
+As with other tasks, the node [start task](jobs-and-tasks.md#start-task) should be idempotent. Start tasks are rerun when the compute node
+restarts or when the Batch agent restarts. An idempotent task is simply one that produces a consistent result when run multiple times.
-### Isolated nodes
+Start tasks shouldn't be long-running or be coupled to the lifetime of the compute node. If you need to start programs that are services or
+service-like in nature, construct a start task that enables these programs to be started and managed by operating system facilities such as
+`systemd` on Linux or Windows Services. The start task should still be constructed as idempotent such that subsequent execution of the
+start task is handled properly if these programs were previously installed as services.
-Consider using isolated VM sizes for workloads with compliance or regulatory requirements. Supported isolated sizes in virtual machine configuration mode include `Standard_E80ids_v4`, `Standard_M128ms`, `Standard_F72s_v2`, `Standard_G5`, `Standard_GS5`, and `Standard_E64i_v3`. For more information about isolated VM sizes, see [Virtual machine isolation in Azure](../virtual-machines/isolation.md).
+> [!TIP]
+> When Batch reruns your start task, it will attempt to delete the start task directory and create it again. If Batch fails to
+> recreate the start task directory, then the compute node will fail to launch the start task.
-### Manage long-running services via the operating system services interface
+These services must not take file locks on any files in Batch-managed directories on the node, because otherwise Batch is unable to delete
+those directories due to the file locks. For example, instead of configuring launch of the service directly from the start task working
+directory, copy the files elsewhere in an idempotent fashion. Then install the service from that location using the operating system
+facilities.
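For instance, a minimal sketch of an idempotent Linux start task that stages service files outside Batch-managed directories and registers a systemd unit might look like the following; the agent name, paths, and unit file are hypothetical:

```bash
#!/usr/bin/env bash
# Hedged sketch: install a monitoring agent as a systemd service from a start task.
set -euo pipefail

INSTALL_DIR=/opt/myagent

# Copy files out of the Batch-managed working directory so Batch can later delete
# that directory without hitting file locks. Skip the copy if it already ran.
if [ ! -f "$INSTALL_DIR/myagent" ]; then
  mkdir -p "$INSTALL_DIR"
  cp ./myagent "$INSTALL_DIR/"
  cp ./myagent.service /etc/systemd/system/myagent.service
fi

# Enabling and starting are safe to repeat, keeping the start task idempotent.
systemctl daemon-reload
systemctl enable --now myagent.service
```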
-Sometimes there's a need to run another agent alongside the Batch agent in the node. For example, you may want to gather data from the node and report it. We recommend that these agents be deployed as OS services, such as a Windows service or a Linux `systemd` service.
+### Isolated nodes
-These services must not take file locks on any files in Batch-managed directories on the node, because otherwise Batch will be unable to delete those directories due to the file locks. For example, if installing a Windows service in a start task, instead of launching the service directly from the start task working directory, copy the files elsewhere (or if the files exist just skip the copy). Then install the service from that location. When Batch reruns your start task, it will delete the start task working directory and create it again.
+Consider using isolated VM sizes for workloads with compliance or regulatory requirements. Supported isolated sizes in virtual machine configuration mode include `Standard_E80ids_v4`, `Standard_M128ms`, `Standard_F72s_v2`, `Standard_G5`, `Standard_GS5`, and `Standard_E64i_v3`. For more information about isolated VM sizes, see [Virtual machine isolation in Azure](../virtual-machines/isolation.md).
### Avoid creating directory junctions in Windows
section about attaching and preparing data disks for compute nodes.
### Attaching and preparing data disks
-Each individual compute node will have the exact same data disk specification attached if specified as part of the Batch pool instance. Only
+Each individual compute node has the exact same data disk specification attached if specified as part of the Batch pool instance. Only
new data disks may be attached to Batch pools. These data disks attached to compute nodes aren't automatically partitioned, formatted or mounted. It's your responsibility to perform these operations as part of your [start task](jobs-and-tasks.md#start-task). These start tasks
-must be crafted to be idempotent. A re-execution of the start task after the compute node has been provisioned is possible. If the start
+must be crafted to be idempotent. Re-execution of the start tasks on compute nodes is possible. If the start
task isn't idempotent, potential data loss can occur on the data disks. > [!TIP]
note before promoting your method into production use.
#### Preparing data disks in Windows Batch pools
-Azure data disks attached to Batch Windows compute nodes are presented unpartitioned and unformatted. You'll need to enumerate disks
+Azure data disks attached to Batch Windows compute nodes are presented unpartitioned and unformatted. You need to enumerate disks
with `RAW` partitions for actioning as part of your start task. This information can be retrieved using the `Get-Disk` PowerShell cmdlet. As an example, you could potentially see:
For user subscription mode Batch accounts, automated OS upgrades can interrupt t
For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Allowing automatic updates is recommended, but you can set this value to `false` if you need to ensure that an OS update doesn't happen unexpectedly.
+## Batch API
+
+### Timeout failures
+
+Timeout failures don't necessarily indicate that the service failed to process the request. When a timeout failure occurs,
+you should either retry the operation or retrieve the state of the resource, as appropriate for the situation, to verify
+whether the operation succeeded or failed.
+ ## Connectivity Review the following guidance related to connectivity in your Batch solutions.
Linux:
- A user named **_azbatch**
+> [!TIP]
+> The naming of these users or groups is an implementation artifact and is subject to change at any time.
+ ### File cleanup Batch actively tries to clean up the working directory that tasks are run in, once their retention time expires. Any files written outside of this directory are [your responsibility to clean up](#manage-task-lifetime) to avoid filling up disk space.
chaos-studio Chaos Studio Target Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-target-selection.md
Title: Target selection in Azure Chaos Studio
-description: Understand two different ways to select experiment targets in Azure Chaos Studio.
+description: Understand two different ways to select experiment targets and target scoping in Azure Chaos Studio.
# Target selection in Azure Chaos Studio
-Every chaos experiment is made up of a different combination of faults and targets, building up to a unique outage scenario to test your system's resilience against. You may want to select a fixed set of targets for your chaos experiment, or provide a rule in which all matching fault-onboarded resources are included as targets in your experiment. Chaos Studio enables you to do both by providing both manual and query-based target selection.
+Every chaos experiment is made up of a different combination of faults and targets, building up to a unique outage scenario against which to test your system's resilience. You can select a fixed set of targets for your chaos experiment, or provide a rule so that all matching fault-onboarded resources are included as targets in your experiment. Chaos Studio supports both approaches through manual and query-based target selection.
## List-based manual target selection
-List-based manual target selection allows you to select a fixed set of onboarded targets for a particular fault in your chaos experiment. Depending on the selected fault, you may select one or more onboarded resources to target. The aforementioned resources are added to the experiment upon creation time. In order to modify the list, you must navigate to the experiment's page and add or remove fault targets manually. An example of manual target selection is shown below.
+List-based manual target selection allows you to select a fixed set of onboarded targets for a particular fault in your chaos experiment. Depending on the selected fault, you can select one or more onboarded resources to target. These resources are added to the experiment at creation time. To modify the list, go to the experiment's page and add or remove fault targets manually. An example of manual target selection is shown below.
[ ![Screenshot that shows the list-based manual target selection option in the Azure portal.](images/manual-target-selection.png) ](images/manual-target-selection.png#lightbox) ## Query-based dynamic target selection
-Query-based dynamic target selection allows you to input a KQL query that will select all onboarded targets that match the query result set. Using your query, you may filter targets based on common Azure resource parameters including type, region, name, and more. Upon experiment creation time, only the query itself will be added to your chaos experiment.
+Query-based dynamic target selection allows you to input a KQL query that selects all onboarded targets that match the query result set. Using your query, you can filter targets based on common Azure resource parameters including type, region, name, and more. At experiment creation time, only the query itself is added to your chaos experiment.
-The inputted query will run and add onboarded targets that match its result set upon experiment execution time. Thus, any resources onboarded to Chaos Studio after experiment creation time that match the query result set upon experiment execution time will be targeted by your experiment. You may your query's result set when adding it to your experiment, but be aware that it may not match the result set at experiment execution time. An example of a possible dynamic target query is shown below.
+The query runs at experiment execution time and adds the onboarded targets that match its result set at that point. As a result, any resources onboarded to Chaos Studio after experiment creation that match the query at execution time are targeted by your experiment. You can preview your query's result set when adding it to your experiment, but the preview may not match the result set at execution time. An example of a possible dynamic target query is shown below.
[ ![Screenshot that shows the query-based dynamic target selection option in the Azure portal.](images/dynamic-target-selection-preview.png) ](images/dynamic-target-selection-preview.png#lightbox)
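As an illustration only (Chaos Studio may expect a slightly different query shape, and the resource type, region, and name prefix here are examples), a KQL filter over onboarded targets could look like:

```kusto
// Hypothetical filter: virtual machines in East US whose names start with "chaos-".
where type =~ 'microsoft.compute/virtualmachines'
| where location == 'eastus'
| where name startswith 'chaos-'
```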
+## Target scoping
+
+Certain faults in Chaos Studio allow you to further target specific functionality within your Azure resources. If scope selection is available for a target but isn't configured, the selected fault targets the entire resource. An example of scope selection on a Virtual Machine Scale Sets instance targeted by the **VMSS Shutdown (version 2.0)** fault is shown below.
+
+[ ![Screenshot that shows scope selection being done on a target.](images/tutorial-dynamic-targets-fault-zones.png) ](images/tutorial-dynamic-targets-fault-zones.png#lightbox)
+ ## Next steps Now that you understand both ways to select targets within a chaos experiment, you're ready to:
chaos-studio Chaos Studio Targets Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-targets-capabilities.md
Returns the following JSON:
## Next steps Now that you understand what targets and capabilities are, you're ready to: - [Learn about faults and actions](chaos-studio-faults-actions.md)
+- [Learn about target selection and scoping](chaos-studio-target-selection.md)
- [Create and run your first experiment](chaos-studio-tutorial-service-direct-portal.md)
communication-services Chat Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/chat-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal. > [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
## Resource log categories
communication-services Network Traversal Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/network-traversal-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal. > [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
## Resource log categories
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/recording-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. You configure these capabilities through the Azure portal.
-The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
+The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
## Resource log categories
communication-services Rooms Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/rooms-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal. > [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
## Pre-requisites
Communication Services offers the following types of logs that you can enable:
] ```
- (See also [FAQ](../../../../azure-monitor/faq.yml)).
+ (See also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)).
communication-services Router Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/router-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. You configure these capabilities through the Azure portal.
-The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
+The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
## Resource log categories
communication-services Sms Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/sms-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal. > [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
## Pre-requisites
Communication Services offers the following types of logs that you can enable:
```
- (see also [FAQ](../../../../azure-monitor/faq.yml)).
+ (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)).
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. You configure these capabilities through the Azure portal.
-The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
+The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
## Data concepts
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
When sending or receiving a high volume of messages, you might receive a ```429`
Rate Limits for SMS:
-|Operation|Scope|Timeframe (seconds)| Limit (number of requests) | Message units per minute|
-||--|-|-|-|
-|Send Message|Per Number|60|200|200|
+|Operation|Number type|Scope|Timeframe (seconds)| Limit (number of requests) | Message units per minute|
+|---|---|---|---|---|---|
+|Send Message|Toll-Free|Per Number|60|200|200|
+|Send Message|Short Code |Per Number|60|6000|6000|
+|Send Message|Alphanumeric Sender ID |Per resource|60|600|600|
### Action to take
-If you require to send a volume of messages that exceed the rate limits, email us at phone@microsoft.com.
+If you have requirements that exceed the rate limits, submit [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to enable higher throughput.
+ For more information on the SMS SDK and service, see the [SMS SDK overview](./sms/sdk-features.md) page or the [SMS FAQ](./sms/sms-faq.md) page.
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
The Azure Communication Services SMS SDK uses the following error codes to help
| 4006 | The Destination/To number isn't reachable| Try resending the message at a later time | | 4007 | The Destination/To number has opted out of receiving messages from you| Mark the Destination/To number as opted out so that no further message attempts are made to the number| | 4008 | You've exceeded the maximum number of messages allowed for your profile| Ensure you aren't exceeding the maximum number of messages allowed for your number or use queues to batch the messages |
-| 4009 | Message is rejected by Microsoft Entitlement System| Most often it happens if fraudulent activity is detected. Please contact support for more details |
+| 4009 | Message is rejected by Microsoft Entitlement System| Most often this happens if fraudulent activity is detected. Please contact support for more details |
| 4010 | Message was blocked due to the toll-free number not being verified | [Review unverified sending limits](./sms/sms-faq.md#toll-free-verification) and submit toll-free verification as soon as possible | | 5000 | Message failed to deliver. Please reach out Microsoft support team for more details| File a support request through the Azure portal | | 5001 | Message failed to deliver due to temporary unavailability of application/system| |
-| 5002 | Message Delivery Timeout| Try resending the message |
+| 5002 | Carrier doesn't support delivery reports | Most often this happens if a carrier doesn't support delivery reports. No action is required; the message may have already been delivered. |
| 9999 | Message failed to deliver due to unknown error/failure| Try resending the message |
connectors Connectors Run 3270 Apps Ibm Mainframe Create Api 3270 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-run-3270-apps-ibm-mainframe-create-api-3270.md
- Title: Connect to 3270 apps on IBM mainframes
-description: Integrate and automate 3270 screen-driven apps with Azure by using Azure Logic Apps and IBM 3270 connector
----- Previously updated : 02/03/2021
-tags: connectors
--
-# Integrate 3270 screen-driven apps on IBM mainframes with Azure by using Azure Logic Apps and IBM 3270 connector
-
-With Azure Logic Apps and the IBM 3270 connector, you can access and run IBM mainframe apps that you usually drive by navigating through 3270 emulator screens. That way, you can integrate your IBM mainframe apps with Azure, Microsoft, and other apps, services, and systems by creating automated workflows with Azure Logic Apps. The connector communicates with IBM mainframes by using the TN3270 protocol and is available in all Azure Logic Apps regions except for Azure Government and Microsoft Azure operated by 21Vianet. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-
-This article describes these aspects for using the 3270 connector:
-
-* Why use the IBM 3270 connector in Azure Logic Apps
-and how the connector runs 3270 screen-driven apps
-
-* The prerequisites and setup for using the 3270 connector
-
-* The steps for adding 3270 connector actions to your logic app
-
-## Why use this connector?
-
-To access apps on IBM mainframes, you typically use a
-3270 terminal emulator, often called a "green screen".
-This method is a time-tested way but has limitations.
-Although Host Integration Server (HIS) helps you work
-directly with these apps, sometimes, separating the
-screen and business logic might not be possible. Or,
-maybe you no longer have information for how the host
-applications work.
-
-To extend these scenarios, the IBM 3270 connector in
-Azure Logic Apps works with the 3270 Design Tool,
-which you use to record, or "capture", the host screens
-used for a specific task, define the navigation flow for
-that task through your mainframe app, and define the methods
-with input and output parameters for that task. The design
-tool converts that information into metadata that the 3270
-connector uses when calling an action that represents that
-task from your logic app.
-
-After you generate the metadata file from the design tool,
-you add that file to an integration account in Azure. That way,
-your logic app can access your app's metadata when you add a
-3270 connector action. The connector reads the metadata file
-from your integration account, handles navigation through the
-3270 screens, and dynamically presents the parameters for
-the 3270 connector action. You can then provide data to the
-host application, and the connector returns the results to
-your logic app. That way, you can integrate your legacy apps
-with Azure, Microsoft, and other apps, services, and systems
-that Azure Logic Apps supports.
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* Basic knowledge about [logic app workflows](../logic-apps/logic-apps-overview.md)
-
-* Recommended: An [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment.md)
-
- You can select this environment as the location for creating and
- running your logic app. An ISE provides access from your logic app
- to resources that are protected inside Azure virtual networks.
-
-* The logic app to use for automating and running your 3270 screen-driven app
-
- The IBM 3270 connector doesn't have triggers, so use another trigger to start your logic app, such as the **Recurrence** trigger. You can then add 3270 connector actions. To get started, create a blank logic app workflow. If you use an ISE, select that ISE as your logic app's location.
-
-* [Download and install the 3270 Design Tool](https://aka.ms/3270-design-tool-download).
-The only prerequisite is [Microsoft .NET Framework 4.8](https://aka.ms/net-framework-download).
-
- This tool helps you record the screens, navigation paths,
- methods, and parameters for the tasks in your app that you
- add and run as 3270 connector actions. The tool generates
- a Host Integration Designer XML (HIDX) file that provides
- the necessary metadata for the connector to use for driving
- your mainframe app.
-
- After downloading and installing this tool,
- follow these steps for connecting to your host:
-
- 1. Open the 3270 Design Tool. From the
- **Session** menu, select **Host Sessions**.
-
- 1. Provide your TN3270 host server information.
-
-* An [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md),
-which is the place where you store your HIDX file as a map so your
-logic app can access the metadata and method definitions in that file.
-
- Make sure your integration account is linked to the logic app
- you're using. Also, if you use an ISE, make sure your integration
- account's location is the same ISE that your logic app uses.
-
-* Access to the TN3270 server that hosts your mainframe app
-
-<a name="define-app-metadata"></a>
-
-## Create metadata overview
-
-In a 3270 screen-driven app, the screens and data fields are unique
-to your scenarios, so the 3270 connector needs this information about
-your app, which you can provide as metadata. This metadata helps
-your logic app identify and recognize screens, and describes how
-to navigate between screens, where to input data, and where to
-expect results. To specify and generate this metadata, you use
-the 3270 Design Tool, which walks you through these specific
-*modes*, or stages, described in more detail later in this topic:
-
-* **Capture**: In this mode, you record the screens required for completing
-a specific task with your mainframe app, for example, getting a bank balance.
-
-* **Navigation**: In this mode, you specify the plan or path for how
-to navigate through your mainframe app's screens for the specific task.
-
-* **Methods**: In this mode, you define the method, for example,
-`GetBalance`, that describes the screen navigation path. You also
-select the fields on each screen that become the method's input
-and output parameters.
-
-### Unsupported elements
-
-The design tool doesn't support these elements:
-
-* Partial IBM Basic Mapping Support (BMS) maps: If you import a BMS map, the design tool ignores partial screen definitions.
-
-* Menu processing
-
-<a name="capture-screens"></a>
-
-## Capture screens
-
-In this mode, you mark an item on each 3270 screen that
-uniquely identifies that screen. For example, you might
-specify a line of text or a more complex set of conditions,
-such as specific text and a non-empty field. You can either
-record these screens over a live connection to the host server,
-or import this information from an IBM Basic Mapping Support
-(BMS) map. The live connection uses a TN3270 emulator for
-connecting to the host. Each connector action must map to
-a single task that starts with connecting to your session
-and ends with disconnecting from your session.
-
-1. If you haven't already, open the 3270 Design Tool. On the toolbar, select **Capture** so that you enter Capture mode.
-
-1. From the **Session** menu, select **Connect**.
-
-1. To start recording, from the **Recording** menu, select **Start Recording**. (Keyboard: Ctrl + E)
-
-1. In the **Capture** pane, starting from the
-first screen in your app, step through your app
-for the specific task that you're recording.
-
-1. After you finish the task, sign out from your app as you usually do.
-
-1. From the **Session** menu, select **Disconnect**.
-
-1. To stop recording, from the **Recording** menu, select **Stop Recording**. (Keyboard: Ctrl + Shift + E)
-
- After you capture the screens for a task, the designer tool
- shows thumbnails that represent those screens. Some notes
- about these thumbnails:
-
-    * Your captured screens include
-    a screen that's named "Empty".
-
- When you first connect to
- [CICS](https://www.ibm.com/it-infrastructure/z/cics),
- you must send the "Clear" key before you can enter the name
- for the transaction you want to run. The screen where you
- send the "Clear" key doesn't have any *recognition attributes*,
- such as a screen title, which you can add by using the Screen
-    Recognition editor. To represent this screen, the thumbnails
-    include a screen named "Empty". You can later use this screen
- for representing the screen where you enter the transaction name.
-
- * By default, the name for a captured screen uses the first word on
- the screen. If that name already exists, the design tool appends
- the name with an underscore and a number, for example, "WBGB" and "WBGB_1".
-
-1. To give a more meaningful name to a
-captured screen, follow these steps:
-
- 1. In the **Host Screens** pane, select the
- screen you want to rename.
-
-    1. In the same pane, near the bottom,
-    find the **Screen Name** property.
-
- 1. Change the current screen name to a more descriptive name.
-
-1. Now specify the fields for identifying each screen.
-
- With the 3270 data stream, screens don't have default identifiers,
- so you need to select unique text on each screen. For complex scenarios,
- you can specify multiple conditions, for example, unique text and a
- field with a specific condition.
-
-After you finish selecting the recognition fields,
-move to the next mode.
-
-### Conditions for identifying repeated screens
-
-For the connector to navigate and differentiate between screens,
-you usually find unique text on a screen that you can use as an
-identifier among the captured screens. For repeated screens,
-you might need more identification methods. For example, suppose
-you have two screens that look the same except one screen returns
-a valid value, while the other screen returns an error message.
-
-In the design tool, you can add *recognition attributes*,
-for example, a screen title such as "Get Account Balance",
-by using the Screen Recognition editor. If you have a forked
-path and both branches return the same screen but with
-different results, you need other recognition attributes.
-At run time, the connector uses these attributes for
-determining the current branch and fork. Here are the
-conditions you can use:
-
-* Specific value: This value matches the
-specified string at the specified location.
-* NOT a specific value: This value doesn't match
-the specified string at the specified location.
-* Empty: This field is empty.
-* NOT empty: This field isn't empty.
-
-To learn more, see the [Example navigation plan](#example-plan)
-later in this topic.
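-
-Conceptually, the connector checks the text found at specific screen positions against these conditions. The following Python sketch only illustrates that idea and isn't the connector's implementation; the positions, operators, and screen values are hypothetical:
-
-```python
-def matches(screen_text: dict, rules: list) -> bool:
-    """screen_text maps a (row, column) position to the text found there;
-    each rule is (position, operator, expected_value)."""
-    for position, op, expected in rules:
-        found = screen_text.get(position, "")
-        if op == "equals" and found != expected:
-            return False
-        if op == "not_equals" and found == expected:
-            return False
-        if op == "empty" and found.strip():
-            return False
-        if op == "not_empty" and not found.strip():
-            return False
-    return True
-
-# Two screens that share a title but differ in whether the balance field is filled.
-balance_screen = {(1, 1): "Get Account Balance", (10, 20): "100.35"}
-error_screen = {(1, 1): "Get Account Balance", (10, 20): ""}
-
-rules = [((1, 1), "equals", "Get Account Balance"), ((10, 20), "not_empty", None)]
-print(matches(balance_screen, rules))  # True
-print(matches(error_screen, rules))    # False
-```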
-
-<a name="define-navigation"></a>
-
-## Define navigation plans
-
-In this mode, you define the flow or steps for navigating
-through your mainframe app's screens for your specific task.
-For example, sometimes, you might have more than one path that
-your app can take where one path produces the correct result,
-while the other path produces an error. For each screen, specify the
-keystrokes necessary for moving to the next screen, such as `CICSPROD <enter>`.
-
-> [!TIP]
-> If you're automating several tasks that use the same connect
-> and disconnect screens, the design tool provides special
-> Connect and Disconnect plan types. When you define these plans,
-> you can add them to your navigation plan's beginning and end.
-
-### Guidelines for plan definitions
-
-* Include all screens, starting with
-connecting and ending with disconnecting.
-
-* You can create a standalone plan or use the
-Connect and Disconnect plans, which let you reuse
-a series of screens common to all your transactions.
-
- * The last screen in your Connect plan must be the
- same screen as the first screen in your navigation plan.
-
-    * The first screen in your Disconnect plan must be
-    the same screen as the last screen in your navigation plan.
-
-* Your captured screens might contain many repeated screens,
-so select and use only one instance of any repeated screens
-in your plan. Here are some examples of repeated screens:
-
- * The sign-in screen, for example, the **MSG-10** screen
- * The welcome screen for CICS
- * The "Clear" or **Empty** screen
-
-<a name="create-plans"></a>
-
-### Create plans
-
-1. On the 3270 Design Tool's toolbar, select **Navigation** so that you enter Navigation mode.
-
-1. To start your plan, in the **Navigation** pane, select **New Plan**.
-
-1. Under **Choose New Plan Name**, enter a name for
-your plan. From the **Type** list, select the plan type:
-
- | Plan type | Description |
- |--|-|
- | **Process** | For standalone or combined plans |
- | **Connect** | For Connect plans |
- | **Disconnect** | For Disconnect plans |
- |||
-
-1. From the **Host Screens** pane, drag the captured thumbnails
-to the navigation plan surface in the **Navigation** pane.
-
- To represent the blank screen where you enter
- the transaction name, use the "Empty" screen.
-
-1. Arrange the screens in the order that
-describes the task that you're defining.
-
-1. To define the flow path between screens, including forks
-and joins, on the design tool's toolbar, select **Flow**.
-
-1. Choose the first screen in the flow. Drag and
-draw a connection to the next screen in the flow.
-
-1. For each screen, provide the values for the **AID Key**
-property (Attention Identifier) and for the **Fixed Text**
-property, which moves the flow to the next screen.
-
- You might have just the AID key,
- or both the AID key and fixed text.
-
-After you finish your navigation plan,
-you can [define methods in the next mode](#define-method).
-
-<a name="example-plan"></a>
-
-### Example
-
-In this example, suppose you run a CICS
-transaction named "WBGB" that has these steps:
-
-* On the first screen, you enter a name and an account.
-* On the second screen, you get the account balance.
-* You exit to the "Empty" screen.
-* You sign out from CICS to the "MSG-10" screen.
-
-Also suppose that you repeat these steps, but you enter incorrect
-data so you can capture the screen that shows the error. Here are
-the screens you capture:
-
-* MSG-10
-* CICS Welcome
-* Empty
-* WBGB_1 (input)
-* WBGB_2 (error)
-* Empty_1
-* MSG-10_1
-
-Although many screens here get unique names, some screens are the same screen,
-for example, "MSG-10" and "Empty". For a repeated screen, use only one instance
-for that screen in your plan. Here are examples that show how a standalone plan,
-Connect plan, Disconnect plan, and a combined plan might look:
-
-* Standalone plan
-
- ![Standalone navigation plan](./media/connectors-create-api-3270/standalone-plan.png)
-
-* Connect plan
-
- ![Connect plan](./media/connectors-create-api-3270/connect-plan.png)
-
-* Disconnect plan
-
- ![Disconnect plan](./media/connectors-create-api-3270/disconnect-plan.png)
-
-* Combined plan
-
- ![Combined plan](./media/connectors-create-api-3270/combined-plan.png)
-
-#### Example: Identify repeated screens
-
-For the connector to navigate and differentiate screens,
-you usually find unique text on a screen that you can use as
-an identifier across the captured screens. For repeated screens,
-you might need more identification methods. The example plan has a
-fork where you can get screens that look similar. One screen returns
-an account balance, while the other screen returns an error message.
-
-The design tool lets you add recognition attributes, for example,
-a screen title named "Get Account Balance", by using the Screen
-Recognition editor. In the case with similar screens, you need
-other attributes. At run time, the connector uses these attributes
-for determining the branch and fork.
-
-* In the branch that returns valid input, which is
-the screen with the account balance, you can add a
-field that has a "not empty" condition.
-
-* In the branch that returns with an error, you can
-add a field that has an "empty" condition.
-
-<a name="define-method"></a>
-
-## Define methods
-
-In this mode, you define a method that's associated with your navigation plan.
-For each method parameter, you specify the data type, such as a string, integer,
-date or time, and so on. When you're done, you can test your method on the
-live host and confirm that the method works as expected. You then generate
-the metadata file, or Host Integration Designer XML (HIDX) file, which now
-has the method definitions to use for creating and running an action for
-the IBM 3270 connector.
-
-1. On the 3270 Design Tool's toolbar, select
-**Methods** so that you enter Methods mode.
-
-1. In the **Navigation** pane, select the
-screen that has the input fields you want.
-
-1. To add the first input parameter for your method,
-follow these steps:
-
- 1. In the **Capture** pane, on the 3270 emulator screen,
- select the whole field, not just text inside the field,
- that you want as the first input.
-
- > [!TIP]
- > To display all the fields and make sure
- > that you select the complete field,
- > on the **View** menu, select **All Fields**.
-
- 1. On the design tool's toolbar, select **Input Field**.
-
- To add more input parameters,
- repeat the previous steps for each parameter.
-
-1. To add the first output parameter for your method,
-follow these steps:
-
- 1. In the **Capture** pane, on the 3270 emulator screen,
- select the whole field, not just text inside the field,
- that you want as the first output.
-
- > [!TIP]
- > To display all the fields and make sure
- > that you select the complete field,
- > on the **View** menu, select **All Fields**.
-
- 1. On the design tool's toolbar, select **Output Field**.
-
- To add more output parameters,
- repeat the previous steps for each parameter.
-
-1. After you add all your method's parameters,
-define these properties for each parameter:
-
- | Property name | Possible values |
- ||--|
- | **Data Type** | Byte, Date Time, Decimal, Int, Long, Short, String |
-   | **Field Fill Technique** | Parameters support these fill types: <p><p>- **Type**: Enter characters sequentially into the field. <p>- **Fill**: Replace the field's contents with characters, filling with blanks if necessary. <p>- **EraseEofType**: Clear the field, and then enter characters sequentially into the field. |
- | **Format String** | Some parameter data types use a format string, which informs the 3270 connector how to convert text from the screen into a .NET data type: <p><p>- **DateTime**: The DateTime format string follows the [.NET custom date and time format strings](/dotnet/standard/base-types/custom-date-and-time-format-strings). For example, the date `06/30/2019` uses the format string `MM/dd/yyyy`. <p>- **Decimal**: The decimal format string uses the [COBOL Picture clause](https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/rzasb/picture.htm). For example, the number `100.35` uses the format string `999V99`. |
- |||
-
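-To see how such format strings map screen text onto typed values, here's a small Python sketch. It only illustrates the idea and isn't the connector's conversion code; in COBOL picture notation, `V` marks an implied decimal point, so a `999V99` field that displays the digits `10035` represents the value `100.35`:
-
-```python
-from datetime import datetime
-from decimal import Decimal
-
-def parse_picture(text: str, picture: str) -> Decimal:
-    """Interpret a numeric screen value against a simple COBOL PICTURE
-    clause such as 999V99, where V marks an implied decimal point."""
-    if "V" in picture:
-        decimals = len(picture) - picture.index("V") - 1
-        return Decimal(text) / (10 ** decimals)
-    return Decimal(text)
-
-print(parse_picture("10035", "999V99"))  # 100.35
-
-# The .NET format string MM/dd/yyyy corresponds to %m/%d/%Y in Python.
-print(datetime.strptime("06/30/2019", "%m/%d/%Y").date())  # 2019-06-30
-```
-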
-## Save and view metadata
-
-After you define your method, but before you test your method,
-save all the information that you defined so far to a RAP (.rap) file.
-You can save to this RAP file at any time during any mode. The design
-tool also includes a sample RAP file that you can open and review by
-browsing to the design tool's installation folder at this location
-and opening the "WoodgroveBank.rap" file:
-
-`..\Program Files\Microsoft Host Integration Server - 3270 Design Tool\SDK\WoodgroveBank.rap`
-
-However, if you try saving changes to the sample RAP file or
-generating an HIDX file from the sample RAP file while the file
-stays in the design tool's installation folder, you might get an
-"access denied" error. By default, the design tool installs in
-your Program Files folder without elevated permissions. If you
-get an error, try one of these solutions:
-
-* Copy the sample file to a different location.
-* Run the design tool as an administrator.
-* Make yourself the owner for the SDK folder.
-
-## Test your method
-
-1. To run your method against the live host, while still in Methods mode, press the F5 key, or from the design tool's toolbar, select **Test**.
-
- > [!TIP]
- > You can change modes at any time. On the **File** menu, select **Mode**, and then select the mode you want.
-
-1. Enter your parameters' values, and select **OK**.
-
-1. To continue to the next screen, select **Next**.
-
-1. When you're finished, select **Done**, which shows your output parameter values.
-
-<a name="add-metadata-integration-account"></a>
-
-## Generate and upload HIDX file
-
-When you're ready, generate the HIDX file so you
-can upload it to your integration account. The 3270
-Design Tool creates the HIDX file in a new
-subfolder where you saved your RAP file.
-
-1. In the 3270 Design Tool, from the **Tools** menu, select **Generate Definitions**. (Keyboard: F6)
-
-1. Go to the folder that contains your RAP file, and open the
-subfolder that the tool created after generating your HIDX file.
-Confirm that the tool created the HIDX file.
-
-1. Sign in to the [Azure portal](https://portal.azure.com),
-and find your integration account.
-
-1. Add your HIDX file as a map to your integration account
-by [following these similar steps for adding maps](../logic-apps/logic-apps-enterprise-integration-liquid-transform.md),
-but when you select the map type, select **HIDX**.
-
-Later in this topic, when you add an IBM 3270 action to your
-logic app for the first time, you're prompted to create a
-connection between your logic app and the host server by
-providing connection information, such as the names for
-your integration account and host server. After you create
-the connection, you can select your previously added
-HIDX file, the method to run, and the parameters to use.
-
-When you finish all these steps, you can use the action that
-you create in your logic app for connecting to your IBM mainframe,
-drive screens for your app, enter data, return results, and so on.
-You can also continue adding other actions to your logic app for
-integrating with other apps, services, and systems.
-
-<a name="run-action"></a>
-
-## Run IBM 3270 actions
--
-1. Sign in to the [Azure portal](https://portal.azure.com),
-and open your logic app in Logic App Designer, if not open already.
-
-1. Under the last step where you want to add an action, select **New step** **>** **Add an action**.
-
-1. Under the search box, select **Enterprise**. In the search box, enter `3270` as your filter. From the actions list, select the action named
-**Runs a mainframe program over a TN3270 connection**.
-
- ![Select 3270 action](./media/connectors-create-api-3270/select-3270-action.png)
-
- To add an action between steps, move your pointer over the arrow between steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
-1. If no connection exists yet, provide the necessary information for your connection, and select **Create**.
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection Name** | Yes | <*connection-name*> | The name for your connection |
- | **Integration Account ID** | Yes | <*integration-account-name*> | Your integration account's name |
- | **Integration Account SAS URL** | Yes | <*integration-account-SAS-URL*> | Your integration account's Shared Access Signature (SAS) URL, which you can generate from your integration account's settings in the Azure portal. <p>1. On your integration account menu, under **Settings**, select **Callback URL**. <br>2. In the right-hand pane, copy the **Generated Callback URL** value. |
- | **Server** | Yes | <*TN3270-server-name*> | The server name for your TN3270 service |
- | **Port** | No | <*TN3270-server-port*> | The port used by your TN3270 server. If left blank, the connector uses `23` as the default value. |
- | **Device Type** | No | <*IBM-terminal-model*> | The model name or number for the IBM terminal to emulate. If left blank, the connector uses default values. |
- | **Code Page** | No | <*code-page-number*> | The code page number for the host. If left blank, the connector uses `37` as the default value. |
- | **Logical Unit Name** | No | <*logical-unit-name*> | The specific logical unit name to request from the host |
- | **Enable SSL?** | No | On or off | Turn on or turn off TLS encryption. |
- | **Validate host ssl certificate?** | No | On or off | Turn on or turn off validation for the server's certificate. |
- ||||
-
- For example:
-
- ![Connection properties](./media/connectors-create-api-3270/connection-properties.png)
-
-1. Provide the necessary information for the action:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Hidx Name** | Yes | <*HIDX-file-name*> | Select the 3270 HIDX file that you want to use. |
- | **Method Name** | Yes | <*method-name*> | Select the method in the HIDX file that you want to use. After you select a method, the **Add new parameter** list appears so you can select parameters to use with that method. |
- ||||
-
- For example:
-
- **Select the HIDX file**
-
- ![Select HIDX file](./media/connectors-create-api-3270/select-hidx-file.png)
-
- **Select the method**
-
- ![Select method](./media/connectors-create-api-3270/select-method.png)
-
- **Select the parameters**
-
- ![Select parameters](./media/connectors-create-api-3270/add-parameters.png)
-
-1. When you're done, save and run your logic app.
-
- After your logic app finishes running, the steps from the run appear.
- Successful steps show check marks, while unsuccessful steps show the letter "X".
-
-1. To review the inputs and outputs for each step, expand that step.
-
-1. To review the outputs, select **See raw outputs**.
-
-## Connector reference
-
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/si3270/).
-
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
-
-## Next steps
-
-* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
-* [Built-in connectors for Azure Logic Apps](built-in.md)
connectors Integrate 3270 Apps Ibm Mainframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/integrate-3270-apps-ibm-mainframe.md
+
+ Title: Connect to 3270 apps on IBM mainframes
+description: Integrate 3270 screen-driven apps with workflows in Azure Logic Apps using the IBM 3270 connector.
+
+ms.suite: integration
++++ Last updated : 11/02/2023
+tags: connectors
++
+# Integrate 3270 screen-driven apps on IBM mainframes with Azure using Azure Logic Apps and IBM 3270 connector
++
+To access and run IBM mainframe apps, which you usually execute by navigating through 3270 emulator screens, from Consumption and Standard workflows in Azure Logic Apps, you can use the **IBM 3270** connector. That way, you can integrate your IBM mainframe apps with Azure, Microsoft, and other apps, services, and systems by creating automated workflows with Azure Logic Apps. The connector communicates with IBM mainframes by using the TN3270 protocol. The **IBM 3270** connector is available in all Azure Logic Apps regions except for Azure Government and Microsoft Azure operated by 21Vianet.
+
+This how-to guide describes the following aspects about the **IBM 3270** connector:
+
+- Why use the IBM 3270 connector in Azure Logic Apps
+
+- How the IBM 3270 connector runs 3270 screen-driven apps
+
+- Prerequisites and setup for using the IBM 3270 connector
+
+- Steps for adding IBM 3270 connector actions to your workflow
+
+## Why use this connector?
+
+To access apps on IBM mainframes, you typically use a 3270 terminal emulator, often called a "green screen". Although time-tested, this method has limitations. Host Integration Server (HIS) helps you work
+directly with these apps, but sometimes separating the screen and business logic might not be possible. Or, you might no longer have information about how the host applications work.
+
+To extend these scenarios, the **IBM 3270** connector in Azure Logic Apps works with the [3270 Design Tool](/host-integration-server/core/application-integration-3270designer-1), which you use to record, or "capture", the host screens used for a specific task, define the navigation flow for that task through your mainframe app, and define the methods with input and output parameters for that task. The design tool converts that information into metadata that the 3270 connector uses when running an action in your workflow.
+
+After you generate the metadata file from the 3270 Design Tool, you add that file as a map artifact either to your Standard logic app resource or to your linked integration account for a Consumption logic app in Azure Logic Apps. That way, your workflow can access your app's metadata when you add an **IBM 3270** connector action. The connector reads the metadata file from your logic app resource (Standard) or your integration account (Consumption), handles navigation through the 3270 screens, and dynamically presents the parameters to use with the 3270 connector in your workflow. You can then provide data to the host application, and the connector returns the results to your workflow. As a result, you can integrate your legacy apps with Azure, Microsoft, and other apps, services, and systems that Azure Logic Apps supports.
+
+## Connector technical reference
+
+The IBM 3270 connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
+
+| Logic app | Environment | Connection version |
+|--|-|--|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. This connector provides only a single action and no triggers. For more information, see [IBM 3270 managed connector reference](/connectors/si3270). |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (ASE v3 with Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in, [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation) connector, which appears in the connector gallery under **Runtime** > **In-App**. The built-in version differs in the following ways: <br><br>- The built-in connector requires that you upload your HIDX file to your Standard logic app resource, not an integration account. <br><br>- The built-in connector can directly connect to a 3270 server and access Azure virtual networks using a connection string. <br><br>- The built-in version supports server authentication with TLS (SSL) encryption for data in transit, message encoding for its operation, and Azure virtual network integration. <br><br>For more information, see the following documentation: <br><br>- [IBM 3270 managed connector reference](/connectors/si3270) <br>- [IBM 3270 built-in connector reference](#built-in-reference) |
+
+<a name="built-in-reference"></a>
+
+### Built-in connector reference
+
+The following section describes the operations for the IBM 3270 connector, which currently includes only the following action:
+
+### Execute a navigation plan
+
+| Parameter | Required | Type | Description |
+|--|-|-|-|
+| **HIDX Name** | Yes | String | Select the 3270 HIDX file that you want to use. |
+| **Method Name** | Yes | String | Select the method in the HIDX file that you want to use. |
+| **Advanced parameters** | No | Varies | This list appears after you select a method so that you can add other parameters to use with the selected method. The available parameters vary based on your HIDX file and the method that you select. |
+
+This operation also includes advanced parameters, which appear after you select a method, for you to select and use with the selected method. These parameters vary based on your HIDX file and the method that you select.
+
+## Prerequisites
+
+- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Access to the TN3270 server that hosts your 3270 screen-driven app. For a quick way to confirm basic network reachability from your environment, see the sketch after these prerequisites.
+
+- The Host Integration Designer XML (HIDX) file that provides the necessary metadata for the **IBM 3270** connector to run your 3270 screen-driven app.
+
+ To create this HIDX file, [download and install the 3270 Design Tool](https://aka.ms/3270-design-tool-download). The only prerequisite is [Microsoft .NET Framework 4.8](https://aka.ms/net-framework-download).
+
+ This tool helps you record the screens, navigation paths, methods, and parameters for the tasks in your app that you add and run as 3270 connector actions. The tool generates a Host Integration Designer XML (HIDX) file that provides the necessary metadata for the connector to run your 3270 screen-driven app.
+
+ After you download and install this tool, [follow these steps to connect with your TN3270 host server, design the required metadata artifact, and generate the HIDX file](/host-integration-server/core/application-integration-la3270apps).
+
+- The Standard or Consumption logic app resource and workflow where you want to run your 3270 screen-driven app
+
+ The IBM 3270 connector doesn't have triggers, so use any trigger to start your workflow, such as the **Recurrence** trigger or **Request** trigger. You can then add the 3270 connector actions.
+
+- An [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md), which might be required, depending on the 3270 connector version that you use. An integration account is an Azure resource where you can centrally store B2B artifacts such as trading partners, agreements, maps, schemas, and certificates to use with specific workflow actions.
+
+ | Workflow | Description |
+ |-|-|
+ | Standard | - 3270 built-in connector: Upload HIDX file to Standard logic app resource. <br><br>- 3270 managed connector: Upload HIDX file to your Standard logic app resource or your [linked integration account](../logic-apps/enterprise-integration/create-integration-account.md?tabs=standard#link-to-logic-app). |
+ | Consumption | 3270 managed connector: Upload HIDX file to your [linked integration account](../logic-apps/enterprise-integration/create-integration-account.md?tabs=consumption#link-to-logic-app). |
+
+ For more information, see [Upload the HIDX file](#upload-hidx-file).
+
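+Because the connector must reach your TN3270 host server over the network, a quick reachability check can save troubleshooting time before you configure the connection. The following Python sketch only confirms that a TCP connection can be opened; the host name is a placeholder, and port 23 is the connector's default port:
+
+```python
+import socket
+
+# Hypothetical TN3270 host; replace with your server. Port 23 is the connector's default.
+host, port = "tn3270.example.com", 23
+
+with socket.create_connection((host, port), timeout=10) as connection:
+    print(f"Reached {host}:{port} from this network")
+```
+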
+<a name="upload-hidx-file"></a>
+
+## Upload the HIDX file
+
+For your workflow to use the HIDX file, follow these steps:
+
+### [Standard](#tab/standard)
+
+1. Go to the folder where you saved your HIDX file, and copy the file.
+
+1. In the [Azure portal](https://portal.azure.com), follow the steps for the connector version that you use:
+
+ - 3270 built-in connector: [Upload your HIDX file to your Standard logic app resource](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=standard#add-map-to-standard-logic-app-resource).
+
+   - 3270 managed connector:
+
+ - [Upload your HIDX file to a linked integration account](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=standard#add-map-to-integration-account). Make sure that you select **HIDX** as the **Map type**.
+
+ - [Upload your HIDX file to your Standard logic app resource](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=standard#add-map-to-standard-logic-app-resource).
+
+1. Now, [add an IBM 3270 action to your workflow](#add-ibm-3270-action).
+
+### [Consumption](#tab/consumption)
+
+1. Go to the folder where you saved your HIDX file, and copy the file.
+
+1. In the [Azure portal](https://portal.azure.com), [upload the HIDX file as a map artifact to your linked integration account](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=consumption#add-map-to-integration-account). Make sure that you select **HIDX** as the **Map type**.
+
+1. Now, [add an IBM 3270 action to your workflow](#add-ibm-3270-action).
+++
+Later in this guide, when you add an **IBM 3270** connector action to your workflow for the first time, you're prompted to create a connection between your workflow and the mainframe system. After you create the connection, you can select your previously added HIDX file, the method to run, and the parameters to use.
+
+<a name="add-ibm-3270-action"></a>
+
+## Add an IBM 3270 action
+
+A Standard logic app workflow can use the IBM 3270 managed connector and the IBM 3270 built-in connector. However, a Consumption logic app workflow can use only the IBM 3270 managed connector. Each version has different actions. Based on whether you have a Consumption or Standard logic app workflow, follow the corresponding steps:
+
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and workflow where you've already added a trigger.
+
+1. If you haven't already added a trigger, [follow these general steps to add the trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
+
+1. [Follow these general steps to add the **IBM 3270** built-in connector action named **Execute a navigation plan**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. When the connection information box appears, provide the following necessary parameter values:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection Name** | Yes | <*connection-name*> | A name for your connection |
+ | **Code Page** | No | <*code-page*> | The code page number for the host to use for converting text. If left blank, the connector uses `37` as the default value. |
+ | **Device Type** | No | <*IBM-terminal-model*> | The model name or number for the IBM terminal to emulate. If left blank, the connector uses default values. |
+ | **Log Exception Screens** | No | True or false | Log the host screen if an error occurs during screen navigation. |
+ | **Logical Unit Name** | No | <*logical-unit-name*> | The specific logical unit name to request from the host |
+ | **Port Number** | No | <*TN3270-server-port*> | The port used by your TN3270 server. If left blank, the connector uses `23` as the default value. |
+ | **Server** | Yes | <*TN3270-server-name*> | The server name for your TN3270 service |
+ | **Timeout** | No | <*timeout-seconds*> | The timeout duration in seconds while waiting for screens |
+ | **Use TLS** | No | On or off | Turn on or turn off TLS encryption. |
+ | **Validate TN3270 Server Certificate** | No | On or off | Turn on or turn off validation for the server's certificate. |
+
+ For example:
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/connection-properties-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and IBM 3270 connection properties." lightbox="./media/integrate-3270-apps-ibm-mainframe/connection-properties-standard.png":::
+
+1. When you're done, select **Create New**.
+
+1. When the action information box appears, provide the necessary parameter values:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **HIDX Name** | Yes | <*HIDX-file-name*> | Select the 3270 HIDX file that you want to use. |
+ | **Method Name** | Yes | <*method-name*> | Select the method in the HIDX file that you want to use. After you select a method, the **Add new parameter** list appears so you can select parameters to use with that method. |
+ | **Advanced parameters** | No | Varies | This list appears after you select a method so that you can add other parameters to use with the selected method. The available parameters vary based on your HIDX file and the method that you select. |
+
+ For example:
+
+ **Select the HIDX file**
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/select-hidx-file-standard.png" alt-text="Screenshot shows Standard workflow designer, 3270 action, and selected HIDX file." lightbox="./media/integrate-3270-apps-ibm-mainframe/select-hidx-file-standard.png":::
+
+ **Select the method**
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/select-method-standard.png" alt-text="Screenshot shows Standard workflow designer, 3270 action, and selected method." lightbox="./media/integrate-3270-apps-ibm-mainframe/select-method-standard.png":::
+
+ **Select the parameters**
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/add-parameters-standard.png" alt-text="Screenshot shows Standard workflow designer, 3270 action, and more parameters." lightbox="./media/integrate-3270-apps-ibm-mainframe/add-parameters-standard.png":::
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app resource and workflow where you've already added a trigger.
+
+1. If you haven't already added a trigger, [follow these general steps to add the trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
+
+1. [Follow these general steps to add the **IBM 3270** managed connector action named **Run a mainframe program over a TN3270 connection**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). You can find the connector under the **Enterprise** category.
+
+1. When the connection information box appears, provide the following necessary parameter values:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | A name for your connection |
+ | **Integration Account ID** | Yes | <*integration-account-name*> | Your integration account's name |
+ | **Integration Account SAS URL** | Yes | <*integration-account-SAS-URL*> | Your integration account's Shared Access Signature (SAS) URL, which you can generate from your integration account's settings in the Azure portal. <p>1. On your integration account menu, under **Settings**, select **Callback URL**. <br>2. In the right-hand pane, copy the **Generated Callback URL** value. |
+ | **Server** | Yes | <*TN3270-server-name*> | The server name for your TN3270 service |
+ | **Port** | No | <*TN3270-server-port*> | The port used by your TN3270 server. If left blank, the connector uses `23` as the default value. |
+ | **Device Type** | No | <*IBM-terminal-model*> | The model name or number for the IBM terminal to emulate. If left blank, the connector uses default values. |
+ | **Code Page** | No | <*code-page-number*> | The code page number for the host. If left blank, the connector uses `37` as the default value. |
+ | **Logical Unit Name** | No | <*logical-unit-name*> | The specific logical unit name to request from the host |
+ | **Enable SSL?** | No | On or off | Turn on or turn off TLS encryption. |
+ | **Validate host ssl certificate?** | No | On or off | Turn on or turn off validation for the server's certificate. |
+
+ For example:
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/connection-properties-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and IBM 3270 connection properties." lightbox="./media/integrate-3270-apps-ibm-mainframe/connection-properties-consumption.png":::
+
+1. When you're done, select **Create**.
+
+1. When the action information box appears, provide the necessary parameter values:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **HIDX Name** | Yes | <*HIDX-file-name*> | Select the 3270 HIDX file that you want to use. |
+ | **Method Name** | Yes | <*method-name*> | Select the method in the HIDX file that you want to use. After you select a method, the **Add new parameter** list appears so you can select parameters to use with that method. |
+ | **Add new parameter** | No | Varies | This list appears after you select a method so that you can add other parameters to use with the selected method. The available parameters vary based on your HIDX file and the method that you select. |
+
+ For example:
+
+ **Select the HIDX file**
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/select-hidx-file-consumption.png" alt-text="Screenshot shows Consumption workflow designer, 3270 action, and selected HIDX file." lightbox="./media/integrate-3270-apps-ibm-mainframe/select-hidx-file-consumption.png":::
+
+ **Select the method**
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/select-method-consumption.png" alt-text="Screenshot shows Consumption workflow designer, 3270 action, and selected method." lightbox="./media/integrate-3270-apps-ibm-mainframe/select-method-consumption.png":::
+
+ **Select the parameters**
+
+ :::image type="content" source="./media/integrate-3270-apps-ibm-mainframe/add-parameters-consumption.png" alt-text="Screenshot shows Consumption workflow designer, 3270 action, and selected parameters." lightbox="./media/integrate-3270-apps-ibm-mainframe/add-parameters-consumption.png":::
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+++
+## Test your workflow
+
+### [Standard](#tab/standard)
+
+1. To run your workflow, on the workflow menu, select **Overview**. On the **Overview** toolbar, select **Run** > **Run**. If your workflow starts with the **Request** trigger, you can also send a request directly to the trigger's callback URL, as shown in the sketch after these steps.
+
+ After your workflow finishes running, your workflow's run history appears. Successful steps show check marks, while unsuccessful steps show an exclamation point (**!**).
+
+1. To review the inputs and outputs for each step, expand that step.
+
+1. To review the outputs, select **See raw outputs**.
+
+### [Consumption](#tab/consumption)
+
+1. To run your workflow, on the designer toolbar, select **Run Trigger** > **Run**.
+
+ After your workflow finishes running, your workflow's run history appears. Successful steps show check marks, while unsuccessful steps show an exclamation point (**!**).
+
+1. To review the inputs and outputs for each step, expand that step.
+
+1. To review the outputs, select **See raw outputs**.
+++
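+
+If your workflow starts with the **Request** trigger named **When a HTTP request is received**, you can also start a run by sending a request to the trigger's callback URL, which you can copy from the trigger in the designer. The following Python sketch assumes a hypothetical callback URL and a hypothetical JSON payload that your workflow maps to the 3270 method's parameters:
+
+```python
+import requests
+
+# Placeholder callback URL copied from the Request trigger in your workflow.
+callback_url = "https://<your-workflow-trigger-callback-url>"
+
+# Hypothetical input that your workflow passes to the 3270 action's parameters.
+payload = {"accountName": "Contoso", "accountNumber": "12345"}
+
+response = requests.post(callback_url, json=payload, timeout=60)
+print(response.status_code)
+print(response.text)
+```
+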
+## Next steps
+
+- [Monitor workflow run status, review trigger and workflow run history, and set up alerts in Azure Logic Apps](../logic-apps/monitor-logic-apps.md?tabs=standard)
+- [View metrics for workflow health and performance in Azure Logic Apps](../logic-apps/view-workflow-metrics.md?tabs=standard)
+- [Monitor and collect diagnostic data for workflows in Azure Logic Apps](../logic-apps/monitor-workflows-collect-diagnostic-data.md?tabs=standard)
+- [Enable and view enhanced telemetry in Application Insights for Standard workflows in Azure Logic Apps](../logic-apps/enable-enhanced-telemetry-standard-workflows.md)
connectors Integrate Cics Apps Ibm Mainframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/integrate-cics-apps-ibm-mainframe.md
+
+ Title: Connect to CICS programs on IBM mainframes
+description: Integrate CICS programs with Standard workflows in Azure Logic Apps using the IBM CICS connector.
+
+ms.suite: integration
++++ Last updated : 11/01/2023++
+# Integrate CICS programs on IBM mainframes with Standard workflows in Azure Logic Apps
++
+To access and run IBM mainframe apps on Customer Information Control System (CICS) systems from Standard workflows in Azure Logic Apps, you can use the **CICS Program Call** built-in, service provider-based connector. CICS provides a Transaction Program (TP) Monitor with an integrated Transaction Manager (TM). The connector communicates with IBM CICS transaction programs by using TCP/IP. The CICS connector is available in all Azure Logic Apps regions except for Azure Government and Microsoft Azure operated by 21Vianet.
+
+This how-to guide describes the following aspects about the CICS connector:
+
+* Why use the CICS connector in Azure Logic Apps
+
+* Prerequisites and setup for using the CICS connector
+
+* Steps for adding CICS connector actions to your Standard logic app workflow
+
+## Why use this connector?
+
+CICS systems were among the first mission-critical systems to run on mainframes. Microsoft [Host Integration Server (HIS)](/host-integration-server/what-is-his) provides connectivity to CICS systems using TCP/IP, HTTP, and APPC LU6.2. Customers have used the HIS Transaction Integrator (TI) for many years to integrate CICS systems with on-premises Windows environments. The **CICS Program Call** connector uses TCP/IP and HTTP [programming models](/host-integration-server/core/choosing-the-appropriate-programming-model1) to interact with CICS transaction programs.
+
+The following diagram shows how the CICS connector interacts with an IBM mainframe system:
++
+To extend these hybrid cloud scenarios, the CICS connector in a Standard workflow works with the [HIS Designer for Logic Apps](/host-integration-server/core/application-integration-ladesigner-2), which you can use to create a *program definition* or *program map* of the mainframe transaction program. For this task, the HIS Designer uses a [programming model](/host-integration-server/core/choosing-the-appropriate-programming-model1) that determines the characteristics of the data exchange between the mainframe and the workflow. The HIS Designer converts that information into metadata that the CICS connector uses when running an action in your workflow.
+
+After you generate the metadata file as a Host Integration Designer XML (HIDX) file from the HIS Designer, you can add that file as a map artifact to your Standard logic app resource. That way, your workflow can access your app's metadata when you add a CICS connector action. The connector reads the metadata file from your logic app resource, and dynamically presents parameters to use with the CICS connector in your workflow. You can then provide parameters to the host application, and the connector returns the results to your workflow. As a result, you can integrate your legacy apps with Azure, Microsoft, and other apps, services, and systems that Azure Logic Apps supports.
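+
+As an illustration of the kind of data conversion involved, mainframe programs often expose numeric parameters as packed-decimal (COMP-3) fields. The connector and the HIDX metadata handle such conversions for you; the following Python sketch only shows the underlying idea and isn't the connector's implementation:
+
+```python
+from decimal import Decimal
+
+def unpack_comp3(data: bytes, scale: int) -> Decimal:
+    """Decode an IBM packed-decimal (COMP-3) field: two binary-coded decimal
+    digits per byte, with the final nibble holding the sign."""
+    nibbles = []
+    for byte in data:
+        nibbles.append(byte >> 4)
+        nibbles.append(byte & 0x0F)
+    sign = nibbles.pop()
+    value = Decimal(int("".join(str(n) for n in nibbles))) / (10 ** scale)
+    return -value if sign == 0x0D else value
+
+# A PIC S9(3)V99 COMP-3 value of 100.35 is stored as the bytes 10 03 5C.
+print(unpack_comp3(bytes([0x10, 0x03, 0x5C]), scale=2))  # 100.35
+```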
+
+## Connector technical reference
+
+The following section describes the operations for the CICS connector, which currently includes only the following action:
+
+### Call a CICS program
+
+| Parameter | Required | Type | Description |
+|--|-|-|-|
+| **HIDX Name** | Yes | String | Select the CICS HIDX file that you want to use. |
+| **Method Name** | Yes | String | Select the method in the HIDX file that you want to use. |
+| **Advanced parameters** | No | Varies | This list appears after you select a method so that you can add other parameters to use with the selected method. The available parameters vary based on your HIDX file and the method that you select. |
+
+This operation also includes advanced parameters, which appear after you select a method, for you to select and use with the selected method. These parameters vary based on your HIDX file and the method that you select.
+
+## Limitations
+
+Currently, this connector requires that you upload your HIDX file directly to your Standard logic app resource, not an integration account.
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* Access to the mainframe that hosts the CICS system
+
+* The Host Integration Designer XML (HIDX) file that provides the necessary metadata for the **CICS Program Call** connector to execute your mainframe program.
+
+ To create this HIDX file, [download and install the HIS Designer for Azure Logic Apps](https://aka.ms/his-designer-logicapps-download). The only prerequisite is [Microsoft .NET Framework 4.8](https://aka.ms/net-framework-download).
+
+ To invoke a mainframe program, your workflow needs to understand the mainframe program's type, parameters, and return values. The CICS connector manages this process and data conversions, which are required for providing input data from the workflow to the mainframe program and for sending any output data generated from the mainframe program to the workflow. The connector also provides tabular data definition and code page translation. For this process, Azure Logic Apps requires that you provide this information as metadata.
+
+ To create this metadata, use the [HIS Designer for Logic Apps](/host-integration-server/core/application-integration-ladesigner-2). With this tool, you can manually create the methods, parameters, and return values that you use in your workflow. You can also import COBOL or RPG program definitions (copybooks) that provide this information.
+
+ The tool generates a Host Integration Designer XML (HIDX) file that provides the necessary metadata for the connector. If you're using HIS, you can use the TI Designer to create the HIDX file.
+
+* The Standard logic app workflow where you want to integrate with the CICS system
+
+ The CICS connector doesn't have triggers, so use any trigger to start your workflow, such as the **Recurrence** trigger or **Request** trigger. You can then add the CICS connector actions. To get started, create a blank workflow in your Standard logic app resource.
+
+<a name="define-generate-app-metadata"></a>
+
+## Define and generate metadata
+
+After you download and install the HIS Designer for Azure Logic Apps, follow [these steps to generate the HIDX file from the metadata artifact](/host-integration-server/core/application-integration-lahostapps).
+
+<a name="upload-hidx-file"></a>
+
+## Upload the HIDX file
+
+For your workflow to use the HIDX file, follow these steps:
+
+1. Go to the folder where you saved your HIDX file, and copy the file.
+
+1. In the [Azure portal](https://portal.azure.com), [upload the HIDX file as a map to your Standard logic app resource](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=standard#add-map-to-standard-logic-app-resource).
+
+1. Now, [add a CICS action to your workflow](#add-cics-action).
+
+Later in this guide, when you add a **CICS Program Call** connector action to your workflow for the first time, you're prompted to create a connection between your workflow and the mainframe system. After you create the connection, you can select your previously added HIDX file, the method to run, and the parameters to use.
+
+<a name="add-cics-action"></a>
+
+## Add a CICS action
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and workflow in the designer.
+
+1. If you haven't already added a trigger to start your workflow, [follow these general steps to add the trigger that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
+
+ :::image type="content" source="media/integrate-cics-apps-ibm-mainframe/request-trigger.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and Request trigger." lightbox="media/integrate-cics-apps-ibm-mainframe/request-trigger.png":::
+
+1. To add a CICS connector action, [follow these general steps to add the **CICS Program Call** built-in connector action named **Call a CICS Program**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. After the connection details pane appears, provide the following information, such as the host server name and CICS system configuration information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Connection Name** | Yes | <*connection-name*> | The name for your connection |
+| **Programming Model** | Yes | <*CICS-programming-model*> | The selected CICS programming model. For more information, see [Programming Models](/host-integration-server/core/programming-models2) and [Choosing the Appropriate Programming Model](/host-integration-server/core/choosing-the-appropriate-programming-model1). |
+ | **Code Page** | No | <*code-page*> | The code page number to use for converting text |
+ | **Password** | No | <*password*> | The optional user password for connection authentication |
+| **Port Number** | Yes | <*port-number*> | The port number to use for the connection |
+ | **Server Name** | Yes | <*server-name*> | The server name |
+ | **Timeout** | No | <*time-out*> | The timeout period in seconds while waiting for responses from the server |
+ | **User Name** | No | <*user-Name*> | The optional username for connection authentication |
+| **Use TLS** | No | True or false | Secure the connection with Transport Layer Security (TLS). |
+ | **Validate Server certificate** | No | True or false | Validate the server's certificate. |
+| **Server certificate common name** | No | <*server-cert-common-name*> | The name of the Transport Layer Security (TLS) certificate to use |
+ | **Use IBM Request Header Format** | No | True or false | The server expects ELM or TRM headers in the IBM format |
+
+ For example:
+
+ :::image type="content" source="./media/integrate-cics-apps-ibm-mainframe/cics-connection.png" alt-text="Screenshot shows CICS action's connection properties." lightbox="./media/integrate-cics-apps-ibm-mainframe/cics-connection.png":::
+
+1. When you're done, select **Create New**.
+
+1. After the action details pane appears, in the **Parameters** section, provide the required information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **HIDX Name** | Yes | <*HIDX-file-name*> | Select the CICS HIDX file that you want to use. |
+ | **Method Name** | Yes | <*method-name*> | Select the method in the HIDX file that you want to use. |
+ | **Advanced parameters** | No | Varies | This list appears after you select a method so that you can add other parameters to use with the selected method. The available parameters vary based on your HIDX file and the method that you select. |
+
+ For example:
+
+ **Select HIDX file and method**
+
+ :::image type="content" source="./media/integrate-cics-apps-ibm-mainframe/action-parameters.png" alt-text="Screenshot shows CICS action with selected HIDX file and method.":::
+
+ **Select advanced parameters**
+
+ :::image type="content" source="./media/integrate-cics-apps-ibm-mainframe/action-advanced-parameters.png" alt-text="Screenshot shows CICS action with all parameters." lightbox="./media/integrate-cics-apps-ibm-mainframe/action-advanced-parameters.png":::
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+
+## Test your workflow
+
+1. To run your workflow, on the workflow menu, select **Overview**. On the **Overview** toolbar, select **Run** > **Run**.
+
+ After your workflow finishes running, your workflow's run history appears. Successful steps show check marks, while unsuccessful steps show an exclamation point (**!**).
+
+1. To review the inputs and outputs for each step, expand that step.
+
+1. To review the outputs, select **See raw outputs**.
+
+## Next steps
+
+* [Monitor workflow run status, review trigger and workflow run history, and set up alerts in Azure Logic Apps](../logic-apps/monitor-logic-apps.md?tabs=standard)
+* [View metrics for workflow health and performance in Azure Logic Apps](../logic-apps/view-workflow-metrics.md?tabs=standard)
+* [Monitor and collect diagnostic data for workflows in Azure Logic Apps](../logic-apps/monitor-workflows-collect-diagnostic-data.md?tabs=standard)
+* [Enable and view enhanced telemetry in Application Insights for Standard workflows in Azure Logic Apps](../logic-apps/enable-enhanced-telemetry-standard-workflows.md)
connectors Integrate Host Files Ibm Mainframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/integrate-host-files-ibm-mainframe.md
+
+ Title: Parse and generate IBM host files
+description: Learn how to parse and generate offline IBM host files.
+
+ms.suite: integration
++++ Last updated : 11/02/2023++
+# Parse and generate host files from IBM mainframes for Standard workflows in Azure Logic Apps
++
+To parse and generate new IBM host files and iSeries physical files from Standard workflows in Azure Logic Apps, you can use the **IBM Host File** built-in, service provider-based connector. Since the introduction of mainframe systems, host files have been used to store vast amounts of data for mission-critical systems. Although this connector doesn't require access to an IBM mainframe or midrange system, you must make the host file available to a Standard workflow by using other mechanisms such as FTP, blob storage, Host Integration Server, or a partner software appliance. The **IBM Host File** connector is available in all Azure Logic Apps regions except for Azure Government and Microsoft Azure operated by 21Vianet.
+
+This how-to guide describes the following aspects about the **IBM Host File** connector:
+
+* Why use the **IBM Host File** connector in Azure Logic Apps
+
+* Prerequisites and setup for using the **IBM Host File** connector
+
+* Steps for adding the **IBM Host File** connector actions to your Standard logic app workflow
+
+## Why use this connector?
+
+On IBM mainframes, *access methods*, which are special components in the operating system, handle file processing. In the 1970s, Virtual Storage Access Method (VSAM) was built and became the most widely used access method on IBM mainframes. VSAM provides the following types of files: entry-sequenced datasets, key-sequenced datasets, and relative record datasets.
+
+Today, the market has multiple solutions that directly connect to host files and run data operations. Many solutions require that you install software on the mainframe system. Although this option works well for some customers, others want to avoid growing the footprint in their mainframe systems.
+
+[Microsoft Host Integration Server (HIS)](/host-integration-server/what-is-his) provides a managed adapter for host files and doesn't require installing software on the mainframe. However, HIS requires that you enable the [IBM Distributed File Manager (DFM)](https://www.ibm.com/docs/en/zos/2.2.0?topic=management-distributed-file-manager) mainframe subsystem, which requires LU 6.2. This managed provider also requires you to configure an HIS System Network Architecture (SNA) gateway that provides access to the DFM.
+
+In most ways, the managed provider operates as a normal data provider: you can connect to a host file system, execute commands, and retrieve data. Although the managed provider is a great alternative for some customers, the **IBM Host File** connector instead requires that you make IBM host files available in binary format to Standard workflows in Azure Logic Apps. This requirement reduces the complexity of the solution and lets you use your choice of tools to access and manage data in host files. After you make the host file available in a place where the Standard workflow can use a trigger to read the file, the **IBM Host File** connector operation can parse that file.
+
+For customers interested in accessing and using databases, such as SQL Server or Azure Cosmos DB, alongside their mainframe environments, the **IBM Host File** connector can convert host file data to and from JSON format. That way, you can work with the data in your cloud database of choice, and send the data back as a host file to your mainframe or midrange environments.
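+
+For illustration, here's a minimal sketch of what such a JSON rows payload might look like before you hand it to the **Generate Host File Contents** action's **Rows** parameter. The field names and values are hypothetical; the real fields must match the schema defined in your own HIDX file.
+
+```python
+import json
+
+# Hypothetical records pulled from a cloud database. The field names must
+# match the schema in your HIDX file, not these made-up examples.
+rows = [
+    {"CUSTOMER_ID": 1001, "CUSTOMER_NAME": "CONTOSO LTD", "BALANCE": 2500.75},
+    {"CUSTOMER_ID": 1002, "CUSTOMER_NAME": "FABRIKAM INC", "BALANCE": 310.00},
+]
+
+# The action's Rows parameter accepts an array of records in JSON format.
+print(json.dumps(rows, indent=2))
+```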
+
+The following diagram shows how the **IBM Host File** connector in Azure Logic Apps interacts with other systems:
++
+To extend hybrid cloud scenarios, the **IBM Host File** connector works with the [HIS Designer for Logic Apps](/host-integration-server/core/application-integration-ladesigner-2), which you can use to create a *data definition* or *data map* of the mainframe host file. For this task, the HIS Designer converts that data into metadata that the **IBM Host File** connector uses when running an action in your workflow. The connector performs the data type conversions, which are required to receive input from preceding workflow operations and to send output for use by subsequent workflow actions. The connector also provides tabular data definition and code page translation.
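+
+To make code page translation concrete, the following is a minimal sketch, outside of the connector, that decodes a hypothetical fixed-length EBCDIC record with Python's built-in `cp037` codec. The record layout and code page are assumptions for illustration only; the connector performs this conversion for you based on the HIDX metadata.
+
+```python
+# A hypothetical 30-byte fixed-length record: a 10-byte ID followed by a
+# 20-byte name, encoded in EBCDIC code page 037.
+record = "0000001001CONTOSO LTD         ".encode("cp037")
+
+customer_id = record[:10].decode("cp037").strip()
+customer_name = record[10:30].decode("cp037").strip()
+print(customer_id, customer_name)
+```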
+
+After you generate the metadata file as a Host Integration Designer XML (HIDX) file from the HIS Designer, you can add that file as a map artifact to your Standard logic app resource. That way, your workflow can access your app's metadata when you add an **IBM Host File** connector action. The connector reads the metadata file from your logic app resource, and dynamically presents the binary file's structure to use with the **IBM Host File** connector actions in your workflow.
+
+## Connector technical reference
+
+The following sections describe the operations for the **IBM Host File** connector, which currently provides only the following actions:
+
+### Parse Host File Contents action
+
+| Parameter | Required | Type | Description |
+|--|-|-|-|
+| **HIDX Name** | Yes | String | Select the mainframe host file HIDX file that you want to use. |
+| **Schema Name** | Yes | String | Select the host file schema in the HIDX file that you want to use. |
+| **Binary contents** | Yes | Binary | Select the binary data with a fixed length record extracted from the mainframe. |
+
+### Generate Host File Contents action
+
+| Parameter | Required | Type | Description |
+|--|-|-|-|
+| **HIDX Name** | Yes | String | Select the mainframe host file HIDX file that you want to use. |
+| **Schema Name** | Yes | String | Select the host file schema in the HIDX file that you want to use. |
+| **Rows** | Yes | JSON | Select the array of rows or individual rows. To enter an entire data object in JSON format, you can select the **Switch to input entire array** option. |
+
+## Limitations
+
+Currently, this connector requires that you upload your HIDX file directly to your Standard logic app resource, not an integration account.
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* The Host Integration Designer XML (HIDX) file that provides the necessary metadata for the **IBM Host File** connector to recognize the host file data structure.
+
+ To create this HIDX file, [download and install the HIS Designer for Azure Logic Apps](https://aka.ms/his-designer-logicapps-download). The only prerequisite is [Microsoft .NET Framework 4.8](https://aka.ms/net-framework-download).
+
+ To effectively parse and generate host files, your workflow needs to understand the host file metadata. However, as a key difference between a host file and a database table, the host file doesn't have the metadata that describes the data structure. To create this metadata, use the [HIS Designer for Logic Apps](/host-integration-server/core/application-integration-ladesigner-2). With this tool, you can manually create the host file structure that your workflow uses. You can also import COBOL definitions (copybooks) that provide these data structures.
+
+ The tool generates a Host Integration Designer XML (HIDX) file that provides the necessary metadata for the connector to recognize the host file data structure. If you're using HIS, you can use the TI Designer to create the HIDX file.
+
+* The Standard logic app workflow where you want to parse or generate the host file.
+
+ The **IBM Host File** connector doesn't have triggers, so use any trigger to start your workflow, such as the **Recurrence** trigger or **Azure Blob Storage** trigger. You can then add the **IBM Host File** connector actions. To get started, create a blank workflow in your Standard logic app resource.
+
+<a name="define-generate-hostfile-metadata"></a>
+
+## Define and generate metadata
+
+After you download and install the HIS Designer for Azure Logic Apps, follow [these steps to generate the HIDX file from the metadata artifact](/host-integration-server/core/application-integration-lahostfiles).
+
+<a name="upload-hidx-file"></a>
+
+## Upload the HIDX file
+
+For your workflow to use the HIDX file, follow these steps:
+
+1. Go to the folder where you saved your HIDX file, and copy the file.
+
+1. In the [Azure portal](https://portal.azure.com), [upload the HIDX file as a map to your Standard logic app resource](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=standard#add-map-to-standard-logic-app-resource).
+
+1. Now, [add an **IBM Host File** action to your workflow](#add-host-files-action).
+
+Later in this guide, when you add the **Parse Host File Contents** action to your workflow for the first time, you're prompted to create a connection. After you create the connection, you can select your previously added HIDX file, the schema, and the parameters to use.
+
+<a name="add-host-files-action"></a>
+
+## Add a Parse Host File Contents action
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and workflow in the designer.
+
+1. If you haven't already added a trigger to start your workflow, [follow these general steps to add the trigger that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ This example continues with the **Azure Blob Storage** built-in, service provider-based trigger named **When a blob is added or updated**.
+
+ :::image type="content" source="media/integrate-host-files-ibm-mainframe/blob-storage-trigger.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and Azure Blob Storage trigger." lightbox="media/integrate-host-files-ibm-mainframe/blob-storage-trigger.png":::
+
+1. To get the content from the added or updated blob, [follow these general steps to add the **Azure Blob Storage** built-in connector action named **Read blob content**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. [Follow these general steps to add the **IBM Host File** built-in connector action named **Parse Host File Contents**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. After the connection details pane appears, provide the following information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Connection Name** | Yes | <*connection-name*> | The name for your connection |
+ | **Code Page** | No | <*code-page*> | The code page number to use for converting text |
+ | **From iSeries** | No | <*mf-iseries*> | Whether the file originates from an i Series server |
+
+ For example:
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/parse-host-file-contents-connection.png" alt-text="Screenshot showing the Parse Host File Contents action's connection properties." lightbox="./media/integrate-host-files-ibm-mainframe/parse-host-file-contents-connection.png":::
+
+1. When you're done, select **Create New**.
+
+1. After the action details pane appears, in the **Parameters** section, provide the required information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **HIDX Name** | Yes | <*HIDX-file-name*> | Select the mainframe host file HIDX file that you want to use. |
+ | **Schema Name** | Yes | <*schema-name*> | Select the schema in the HIDX file that you want to use. |
+ | **Binary Contents** | Yes | <*binary-contents*> | Select the binary data with a fixed length record extracted from the host. |
+
+ For example, the following image shows Visual Studio with a sample host file HIDX file with a **CUSTOMER** table and **CUSTOMER_RECORD** schema in the HIS Designer for Logic Apps:
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/visual-studio-customers-hidx.png" alt-text="Screenshot shows Visual Studio and the host file schema in the HIDX file." lightbox="./media/integrate-host-files-ibm-mainframe/visual-studio-customers-hidx.png":::
+
+ **Provide HIDX file and schema**
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/parse-host-file-contents-parameters.png" alt-text="Screenshot shows the Parse Host File Contents action with selected HIDX file and schema.":::
+
+ **Select binary data to read from blob**
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/parse-host-file-contents-binary.png" alt-text="Screenshot shows the Parse Host File Contents action, dynamic content list, and selecting binary data to read from JSON file in Blob Storage account." lightbox="./media/integrate-host-files-ibm-mainframe/parse-host-file-contents-binary.png":::
+
+ When you're done, the **Parse Host File Contents** action looks like the following example with a subsequent action that creates a file on an SFTP server:
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/parse-host-file-contents-complete.png" alt-text="Screenshot shows the completed Parse Host File Contents action." lightbox="./media/integrate-host-files-ibm-mainframe/parse-host-file-contents-complete.png":::
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+
+## Add a Generate Host File Contents action
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and workflow in the designer.
+
+1. If you haven't already added a trigger to start your workflow, [follow these general steps to add the trigger that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ This example continues with the **Azure Blob Storage** built-in, service provider-based trigger named **When a blob is added or updated**.
+
+ :::image type="content" source="media/integrate-host-files-ibm-mainframe/blob-storage-trigger.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and Azure Blob Storage trigger." lightbox="media/integrate-host-files-ibm-mainframe/blob-storage-trigger.png":::
+
+1. To get the content from the added or updated blob, [follow these general steps to add the **Azure Blob Storage** built-in connector action named **Read blob content**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. [Follow these general steps to add the **IBM Host File** built-in connector action named **Generate Host File Contents**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. After the connection details pane appears, provide the following information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Connection Name** | Yes | <*connection-name*> | The name for your connection |
+ | **Code Page** | No | <*code-page*> | The code page number to use for converting text |
+ | **From iSeries** | No | <*mf-iseries*> | Whether the file originates from an i Series server |
+
+ For example:
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/generate-host-file-contents-connection.png" alt-text="Screenshot showing Generate Host File Contents action's connection properties." lightbox="./media/integrate-host-files-ibm-mainframe/generate-host-file-contents-connection.png":::
+
+1. When you're done, select **Create New**.
+
+1. After the action details pane appears, in the **Parameters** section, provide the required information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **HIDX Name** | Yes | <*HIDX-file-name*> | Provide the name for the mainframe host file HIDX file that you want to use. |
+   | **Schema Name** | Yes | <*schema-name*> | Provide the name for the schema in the HIDX file that you want to use. |
+ | **Rows** | Yes | <*rows*> | Provide an array of records to convert to IBM format. To select the output from a preceding workflow operation, follow these steps: <br><br>1. Select inside the **Rows** box, and then select the dynamic content option (lightning bolt). <br><br>2. From the dynamic content list, select the output from a preceding action. For example, from the **Read blob content** section, select **Response from read blob action Content**. <br><br>**Tip**: To enter an entire data object in JSON format, select the **Switch to input entire array** option. |
+
+ For example, the following image shows Visual Studio with a sample HIDX file in the HIS Designer for Logic Apps:
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/visual-studio-customers-hidx.png" alt-text="Screenshot shows the host file schema in the HIDX file." lightbox="./media/integrate-host-files-ibm-mainframe/visual-studio-customers-hidx.png":::
+
+ **Provide HIDX file and schema**
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/generate-host-file-contents-parameters.png" alt-text="Screenshot shows the Generate Host File Contents action with selected HIDX file and schema." lightbox="./media/integrate-host-files-ibm-mainframe/generate-host-file-contents-parameters.png":::
+
+ **Select rows from blob to read and convert**
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/generate-host-file-contents-rows.png" alt-text="Screenshot shows the Generate Host File Contents action, dynamic content list, and selecting rows to read and convert from JSON file in Blob Storage account.":::
+
+ When you're done, the **Generate Host File Contents** action looks like the following example with a subsequent action that creates a file on an SFTP server:
+
+ :::image type="content" source="./media/integrate-host-files-ibm-mainframe/generate-host-file-contents-complete.png" alt-text="Screenshot shows the completed Generate Host File Contents action." lightbox="./media/integrate-host-files-ibm-mainframe/generate-host-file-contents-complete.png":::
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+
+## Test your workflow
+
+1. To run your workflow, on the workflow menu, select **Overview**. On the **Overview** toolbar, select **Run** > **Run**. Because the example workflows in this guide start with the **When a blob is added or updated** trigger, you can also trigger a run by uploading a test blob, as shown in the sketch after these steps.
+
+ After your workflow finishes running, your workflow's run history appears. Successful steps show check marks, while unsuccessful steps show an exclamation point (**!**).
+
+1. To review the inputs and outputs for each step, expand that step.
+
+1. To review the outputs, select **See raw outputs**.
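+
+The following is a minimal sketch of uploading a test blob with the `azure-storage-blob` Python package; the connection string, container name, and file name are placeholders, and the container must be the one that your trigger monitors.
+
+```python
+from azure.storage.blob import BlobServiceClient
+
+# Placeholders: use your own storage connection string and container name.
+service = BlobServiceClient.from_connection_string("<storage-connection-string>")
+blob = service.get_blob_client(container="<container-name>", blob="customers.bin")
+
+# Upload a local binary host file so the blob trigger fires the workflow.
+with open("customers.bin", "rb") as data:
+    blob.upload_blob(data, overwrite=True)
+```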
+
+## Next steps
+
+* [Monitor workflow run status, review trigger and workflow run history, and set up alerts in Azure Logic Apps](../logic-apps/monitor-logic-apps.md?tabs=standard)
+* [View metrics for workflow health and performance in Azure Logic Apps](../logic-apps/view-workflow-metrics.md?tabs=standard)
+* [Monitor and collect diagnostic data for workflows in Azure Logic Apps](../logic-apps/monitor-workflows-collect-diagnostic-data.md?tabs=standard)
+* [Enable and view enhanced telemetry in Application Insights for Standard workflows in Azure Logic Apps](../logic-apps/enable-enhanced-telemetry-standard-workflows.md)
connectors Integrate Ims Apps Ibm Mainframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/integrate-ims-apps-ibm-mainframe.md
+
+ Title: Connect to IMS programs on IBM mainframes
+description: Integrate IMS programs with Standard workflows in Azure Logic Apps using the IBM IMS connector.
+
+ms.suite: integration
++++ Last updated : 10/30/2023++
+# Integrate IMS programs on IBM mainframes with Standard workflows in Azure Logic Apps
++
+To access and run IBM mainframe apps on Information Management System (IMS) systems from Standard workflows in Azure Logic Apps, you can use the **IMS Program Call** built-in, service provider-based connector. IMS provides a transaction processing (TP) monitor with an integrated Transaction Manager (TM) and hierarchical database. The connector communicates with IBM IMS transaction programs by using IMS Connect, which is an IMS TM network component. This component provides high-performance communications between one or more TCP/IP clients and one or more IMS systems. The IMS connector is available in all Azure Logic Apps regions except for Azure Government and Microsoft Azure operated by 21Vianet.
+
+This how-to guide describes the following aspects about the IMS connector:
+
+* Why use the IMS connector in Azure Logic Apps
+
+* Prerequisites and setup for using the IMS connector
+
+* Steps for adding IMS connector actions to your Standard logic app workflow
+
+## Why use this connector?
+
+IMS was one of the first mission-critical systems to run on mainframes. Microsoft [Host Integration Server (HIS)](/host-integration-server/what-is-his) provides connectivity to IMS systems through two models: IMS Connect and APPC LU 6.2. For many years, customers have used the HIS Transaction Integrator (TI) to integrate their IMS systems with on-premises Windows environments. The **IMS Program Call** connector uses the IMS Connect model to interact with IMS transaction programs through TCP/IP.
+
+The following diagram shows how the IMS connector interacts with an IBM mainframe system:
++
+To extend these hybrid cloud scenarios, the IMS connector in a Standard workflow works with the [HIS Designer for Logic Apps](/host-integration-server/core/application-integration-ladesigner-2), which you can use to create a *program definition* or *program map* of the mainframe transaction program. For this task, the HIS Designer converts that information into metadata that the IMS connector uses when running an action in your workflow.
+
+After you generate the metadata file as a Host Integration Designer XML (HIDX) file from the HIS Designer, you can add that file as a map artifact to your Standard logic app resource. That way, your workflow can access your app's metadata when you add an IMS connector action. The connector reads the metadata file from your logic app resource, and dynamically presents the parameters to use with the IMS connector in your workflow. You can then provide parameters to the host application, and the connector returns the results to your workflow. As a result, you can integrate your legacy apps with Azure, Microsoft, other apps, services, and systems that Azure Logic Apps supports.
+
+## Connector technical reference
+
+The following section describes the operations for the IMS connector, which currently provides only the following action:
+
+### Call an IMS program
+
+| Parameter | Required | Type | Description |
+|--|-|-|-|
+| **HIDX Name** | Yes | String | Select the IMS HIDX file that you want to use. |
+| **Method Name** | Yes | String | Select the method in the HIDX file that you want to use. |
+| **Advanced parameters** | No | Varies | This list appears after you select a method so that you can add other parameters to use with the selected method. The available parameters vary based on your HIDX file and the method that you select. |
+
+This operation also includes advanced parameters, which appear after you select a method, for you to select and use with the selected method. These parameters vary based on your HIDX file and the method that you select.
+
+## Limitations
+
+Currently, this connector requires that you upload your HIDX file directly to your Standard logic app resource, not an integration account.
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* Access to the mainframe that hosts the IMS system
+
+* The Host Integration Designer XML (HIDX) file that provides the necessary metadata for the **IMS Program Call** connector to execute your mainframe program.
+
+ To create this HIDX file, [download and install the HIS Designer for Azure Logic Apps](https://aka.ms/his-designer-logicapps-download). The only prerequisite is [Microsoft .NET Framework 4.8](https://aka.ms/net-framework-download).
+
+ To invoke a mainframe program, your workflow needs to understand the mainframe program's type, parameters, and return values. The IMS connector manages the process and data conversions, which are required for providing input data from the workflow to the mainframe program and for sending any output data generated from the mainframe program to the workflow. The connector also provides tabular data definition and code page translation. For this process, Azure Logic Apps requires that you provide this information as metadata.
+
+ To create this metadata, use the [HIS Designer for Logic Apps](/host-integration-server/core/application-integration-ladesigner-2). With this tool, you can manually create the methods, parameters, and return values that you can use in your workflow. The tool also allows you to import COBOL or RPG program definitions (copybooks) that provide this information.
+
+ The tool generates a Host Integration Designer XML (HIDX) file that provides the necessary metadata for the connector. If you're using HIS, you can use the TI Designer to create the HIDX file.
+
+* The Standard logic app workflow to use for integrating with the IMS system
+
+ The IMS connector doesn't have triggers, so use any trigger to start your workflow, such as the **Recurrence** trigger or **Request** trigger. You can then add the IMS connector actions. To get started, create a blank workflow in your Standard logic app resource.
+
+<a name="define-generate-app-metadata"></a>
+
+## Define and generate metadata
+
+After you download and install the HIS Designer for Azure Logic Apps, follow [these steps to generate the HIDX file from the metadata artifact](/host-integration-server/core/application-integration-lahostapps).
+
+<a name="upload-hidx-file"></a>
+
+## Upload the HIDX file
+
+For your workflow to use the HIDX file, follow these steps:
+
+1. Go to the folder where you saved your HIDX file, and copy the file.
+
+1. In the [Azure portal](https://portal.azure.com), [upload the HIDX file as a map to your Standard logic app resource](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=standard#add-map-to-standard-logic-app-resource).
+
+1. Now, [add an IMS action to your workflow](#add-ims-action).
+
+Later in this guide, when you add an **IMS Program Call** connector action to your workflow for the first time, you're prompted to create a connection between your workflow and the mainframe system. After you create the connection, you can select your previously added HIDX file, the method to run, and the parameters to use.
+
+<a name="add-ims-action"></a>
+
+## Add an IMS action
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and workflow in the designer.
+
+1. If you haven't already added a trigger to start your workflow, [follow these general steps to add the trigger that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
+
+ :::image type="content" source="media/integrate-ims-apps-ibm-mainframe/request-trigger.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and Request trigger.":::
+
+1. To add an IMS connector action, [follow these general steps to add the **IMS Program Call** built-in connector action named **Call an IMS Program**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. After the connection details pane appears, provide the following information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Connection Name** | Yes | <*connection-name*> | The name for your connection |
+ | **The IMS System ID** | Yes | <*IMS-system-ID*> | The name of the IMS system where the IMS Connect model directs incoming requests |
+ | **ITOC Exit Name** | No | <*ITOC-exit-name*> | The name for the exit routine that IMS uses to handle incoming requests |
+ | **MFS Mod Name** | No | <*MFS-Mod-Name*> | The name associated with the outbound IMS message output descriptor |
+ | **Use the HWSO1 Security Exit** | No | True or false | The server uses the HWSO1 security exit. |
+   | **Server certificate common name** | No | <*server-cert-common-name*> | The name of the Transport Layer Security (TLS) certificate to use |
+ | **Code Page** | No | <*code-page*> | The code page number to use for converting text |
+ | **Password** | No | <*password*> | The optional user password for connection authentication |
+   | **Port Number** | Yes | <*port-number*> | The port number to use for the connection |
+ | **Server Name** | Yes | <*server-name*> | The server name |
+ | **Timeout** | No | <*time-out*> | The timeout period in seconds while waiting for responses from the server |
+ | **User Name** | No | <*user-Name*> | The optional username for connection authentication |
+   | **Use TLS** | No | True or false | Secure the connection with Transport Layer Security (TLS). |
+ | **Validate Server certificate** | No | True or false | Validate the server's certificate. |
+
+ For example:
+
+ :::image type="content" source="./media/integrate-ims-apps-ibm-mainframe/ims-connection.png" alt-text="Screenshot shows IMS action's connection properties." lightbox="./media/integrate-ims-apps-ibm-mainframe/ims-connection.png":::
+
+1. When you're done, select **Create New**.
+
+1. After the action details pane appears, in the **Parameters** section, provide the required information:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **HIDX Name** | Yes | <*HIDX-file-name*> | Select the IMS HIDX file that you want to use. |
+ | **Method Name** | Yes | <*method-name*> | Select the method in the HIDX file that you want to use. |
+ | **Advanced parameters** | No | Varies | This list appears after you select a method so that you can add other parameters to use with the selected method. The available parameters vary based on your HIDX file and the method that you select. |
+
+ For example:
+
+ **Select HIDX file and method**
+
+ :::image type="content" source="./media/integrate-ims-apps-ibm-mainframe/action-parameters.png" alt-text="Screenshot shows IMS action with selected HIDX file and method." lightbox="./media/integrate-ims-apps-ibm-mainframe/action-parameters.png":::
+
+ **Select advanced parameters**
+
+ :::image type="content" source="./media/integrate-ims-apps-ibm-mainframe/action-advanced-parameters.png" alt-text="Screenshot shows IMS action with all parameters." lightbox="./media/integrate-ims-apps-ibm-mainframe/action-advanced-parameters.png":::
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+
+## Test your workflow
+
+1. To run your workflow, on the workflow menu, select **Overview**. On the **Overview** toolbar, select **Run** > **Run**. Because this example starts with the Request trigger, you can also trigger a run by sending an HTTP request to the trigger's callback URL, as shown in the sketch after these steps.
+
+ After your workflow finishes running, your workflow's run history appears. Successful steps show check marks, while unsuccessful steps show an exclamation point (**!**).
+
+1. To review the inputs and outputs for each step, expand that step.
+
+1. To review the outputs, select **See raw outputs**.
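+
+The following is a minimal sketch that sends a test request with the Python `requests` package; the URL and payload are placeholders, and the real callback URL is shown on the Request trigger in the workflow designer.
+
+```python
+import requests
+
+# Placeholder callback URL; copy the real URL from the Request trigger.
+url = "https://<your-workflow-callback-url>"
+
+# Hypothetical payload; shape it to match what your workflow expects.
+response = requests.post(url, json={"CustomerId": "1001"})
+print(response.status_code, response.text)
+```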
+
+## Next steps
+
+* [Monitor workflow run status, review trigger and workflow run history, and set up alerts in Azure Logic Apps](../logic-apps/monitor-logic-apps.md?tabs=standard)
+* [View metrics for workflow health and performance in Azure Logic Apps](../logic-apps/view-workflow-metrics.md?tabs=standard)
+* [Monitor and collect diagnostic data for workflows in Azure Logic Apps](../logic-apps/monitor-workflows-collect-diagnostic-data.md?tabs=standard)
+* [Enable and view enhanced telemetry in Application Insights for Standard workflows in Azure Logic Apps](../logic-apps/enable-enhanced-telemetry-standard-workflows.md)
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following tables describe how to configure a collection of NSG allow rules.
### Inbound
-# [Workload profiles environment](#tab/workload-profiles-env)
+# [Workload profiles environment](#tab/workload-profiles)
>[!Note]
-> When using workload profiles, inbound NSG rules only apply for traffic going through your virtual network. If your container apps are set to accept traffic from the public internet, incoming traffic will go through the public endpoint instead of the virtual network.
+> When using workload profiles, inbound NSG rules only apply for traffic going through your virtual network. If your container apps are set to accept traffic from the public internet, incoming traffic goes through the public endpoint instead of the virtual network.
-| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+| Protocol | Source | Source ports | Destination | Destination ports | Description |
|--|--|--|--|--|--|
-| TCP | Your Client IPs | \* | Your container app's subnet<sup>1</sup> | `443`, `30,000-32,676`<sup>2</sup> | Allow your Client IPs to access Azure Container Apps. |
+| TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `443`, `30,000-32,676`<sup>2</sup> | Allow your client IPs to access Azure Container Apps. |
| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
-# [Consumption only environment](#tab/consumption-only-env)
+# [Consumption only environment](#tab/consumption-only)
-| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+| Protocol | Source | Source ports | Destination | Destination ports | Description |
|--|--|--|--|--|--|
-| TCP | Your Client IPs | \* | Your container app's subnet<sup>1</sup> | `443` | Allow your Client IPs to access Azure Container Apps. |
+| TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `443` | Allow your client IPs to access Azure Container Apps. |
+| TCP | Your client IPs | \* | The `staticIP` of your container app environment | `443` | Allow your client IPs to access Azure Container Apps. |
| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
+| TCP | Your container app's subnet | \* | Your container app's subnet | \* | Required to allow the container app Envoy sidecar to connect to the Envoy service. |
<sup>1</sup> This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`.
-<sup>2</sup> The full range is required when creating your Azure Container Apps as a port within the range will by dynamically allocated. Once created, the required ports are 2 immutable, static values, and you can update your NSG rules.
+<sup>2</sup> The full range is required when creating your Azure Container Apps, because a port within the range will be dynamically allocated. Once created, the required ports are two immutable, static values, and you can update your NSG rules.
### Outbound
-# [Workload profiles environment](#tab/workload-profiles-env)
+# [Workload profiles environment](#tab/workload-profiles)
-| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+| Protocol | Source | Source ports | Destination | Destination ports | Description |
|--|--|--|--|--|--|
| TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> |
-| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Allows outbound calls to Azure Monitor. |
| TCP | Your container app's subnet | \* | `MicrosoftContainerRegistry` | `443` | This is the service tag for Microsoft container registry for system containers. |
| TCP | Your container app's subnet | \* | `AzureFrontDoor.FirstParty` | `443` | This is a dependency of the `MicrosoftContainerRegistry` service tag. |
-| UDP | Your container app's subnet | \* | \* | `123` | NTP server. |
| Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. |
| TCP | Your container app's subnet | \* | `AzureActiveDirectory` | `443` | If you're using managed identity, this is required. |
+| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Only required when using Azure Monitor. Allows outbound calls to Azure Monitor. |
-# [Consumption only environment](#tab/consumption-only-env)
+# [Consumption only environment](#tab/consumption-only)
-| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+| Protocol | Source | Source ports | Destination | Destination ports | Description |
|--|--|--|--|--|--|
| TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> |
| UDP | Your container app's subnet | \* | `AzureCloud.<REGION>` | `1194` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
| TCP | Your container app's subnet | \* | `AzureCloud.<REGION>` | `9000` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
-| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Allows outbound calls to Azure Monitor. |
| TCP | Your container app's subnet | \* | `AzureCloud` | `443` | Allowing all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. |
| UDP | Your container app's subnet | \* | \* | `123` | NTP server. |
-| TCP | Your container app's subnet | \* | \* | `5671` | Container Apps control plane. |
-| TCP | Your container app's subnet | \* | \* | `5672` | Container Apps control plane. |
| Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. |
+| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Only required when using Azure Monitor. Allows outbound calls to Azure Monitor. |
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Title: Compatibility and feature support
-description: Review Azure Cosmos DB for MongoDB vCore supported features and syntax including; commands, query support, datatypes, aggregation, and operators.
---
+description: Review Azure Cosmos DB for MongoDB vCore supported features and syntax, including commands, query support, datatypes, aggregation, operators, and indexes.
+++ Previously updated : 08/28/2023 Last updated : 10/21/2023 # MongoDB compatibility and feature support with Azure Cosmos DB for MongoDB vCore [!INCLUDE[MongoDB vCore](../../includes/appliesto-mongodb-vcore.md)]
-Azure Cosmos DB is Microsoft's fully managed NoSQL and relational database, offering [multiple database APIs](../../choose-api.md). You can communicate with Azure Cosmos DB for MongoDB using the MongoDB drivers, SDKs and tools you're already familiar with. Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB wire protocol.
+Azure Cosmos DB for MongoDB vCore lets you enjoy the familiar advantages of MongoDB while accessing the enhanced enterprise features that Azure Cosmos DB offers. It ensures compatibility by following the MongoDB wire protocol, so you can keep using the existing client drivers, SDKs, and tools you're already familiar with.
+
-By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides.
-
-## Protocol Support
+## Protocol support
The supported operators and any limitations or exceptions are listed here. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for MongoDB. When you create Azure Cosmos DB for MongoDB vCore clusters, the endpoint is in the format `*.mongocluster.cosmos.azure.com`.
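+
+For example, here's a minimal sketch of connecting with the PyMongo driver. The connection string is a placeholder; copy the real one, including its options, from your vCore cluster in the Azure portal.
+
+```python
+from pymongo import MongoClient
+
+# Placeholder connection string; the host ends in .mongocluster.cosmos.azure.com.
+client = MongoClient(
+    "mongodb+srv://<user>:<password>@<cluster-name>.mongocluster.cosmos.azure.com/?tls=true"
+)
+print(client.admin.command("ping"))
+```
+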
-> [!NOTE]
-> This article only lists the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with the Azure Cosmos DB for MongoDB.
## Query language support
-Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+Azure Cosmos DB for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported database commands, operators, stages, and options.
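+
+For example, the following minimal sketch runs an aggregation pipeline that uses supported stages such as `$match`, `$group`, and `$sort` through the PyMongo driver. The connection string, database, collection, and field names are placeholders.
+
+```python
+from pymongo import MongoClient
+
+client = MongoClient("<your-connection-string>")  # placeholder
+orders = client["storedb"]["orders"]              # hypothetical database and collection
+
+pipeline = [
+    {"$match": {"status": "shipped"}},
+    {"$group": {"_id": "$customerId", "total": {"$sum": "$amount"}}},
+    {"$sort": {"total": -1}},
+]
+for doc in orders.aggregate(pipeline):
+    print(doc)
+```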
+
+> [!NOTE]
+> This article only lists the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with the Azure Cosmos DB for MongoDB.
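+
+For example, here's a minimal sketch showing that the PyMongo client-side wrappers `update_many()` and `delete_many()` are carried by the supported `update` and `delete` server commands, and that supported commands can also be issued directly with `command()`. The connection string, database, collection, and field names are placeholders.
+
+```python
+from pymongo import MongoClient
+
+client = MongoClient("<your-connection-string>")  # placeholder
+people = client["appdb"]["people"]                # hypothetical database and collection
+
+# Client-side wrappers; the driver sends these as the `update` and `delete` commands.
+people.update_many({"city": "Seattle"}, {"$set": {"region": "WA"}})
+people.delete_many({"inactive": True})
+
+# Supported server commands can also be run directly.
+print(client["appdb"].command("count", "people"))
+```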
+
+## Database commands
+
+Azure Cosmos DB for MongoDB vCore supports the following database commands:
-### Query and write operation commands
-
-| Command | Supported |
-|||
-| `change streams` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `delete` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `find` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `findAndModify` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `getLastError` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getMore` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Partial |
-| `insert` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `resetError` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `update` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Session commands
-
-| Command | Supported |
-|||
-| `abortTransaction` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `commitTransaction` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `endSessions` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `killAllSessions` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `killAllSessionsByPattern` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `killSessions` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `refreshSessions` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `startSession` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Authentication commands
-
-| Command | Supported |
-|||
-| `authenticate` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getnonce` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `logout` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Geospatial command
-
-| Command | Supported |
-|||
-| `geoSearch` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Query Plan Cache commands
-
-| Command | Supported |
-|||
-| `planCacheClear` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `planCacheClearFilters` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `planCacheListFilters` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `planCacheSetFilter` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Administration commands
-
-| Command | Supported |
-|||
-| `cloneCollectionAsCapped` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `collMod` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Partial |
-| `compact` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `connPoolSync` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `convertToCapped` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `create` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Partial |
-| `createIndexes` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `currentOp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `drop` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `dropDatabase` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `dropConnections` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dropIndexes` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `filemd5` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `fsync` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `fsyncUnlock` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getDefaultRWConcern` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getClusterParameter` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getParameter` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `killCursors` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `killOp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `listCollections` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `listDatabases` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `listIndexes` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `logRotate` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `reIndex` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `renameCollection` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `rotateCertificates` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `setFeatureCompatibilityVersion` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `setIndexCommitQuorum` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `setParameter` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `setDefaultRWConcern` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `shutdown` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### User Management commands
-
-| Command | Supported |
-|||
-| `createUser` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dropAllUsersFromDatabase` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dropUser` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `grantRolesToUser` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `revokeRolesFromUser` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `updateUser` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `usersInfo` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Role Management commands
-
-| Command | Supported |
-|||
-| `createRole` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dropRole` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dropAllRolesFromDatabase` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `grantPrivilegesToRole` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `grantRolesToRole` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `invalidateUserCache` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `revokePrivilegesFromRole` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `revokeRolesFromRole` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `rolesInfo` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `updateRole` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Replication commands
-
-| Command | Supported |
-|||
-| `applyOps` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `hello` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `replSetAbortPrimaryCatchUp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetFreeze` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetGetConfig` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetGetStatus` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetInitiate` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetMaintenance` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetReconfig` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetResizeOplog` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetStepDown` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `replSetSyncFrom` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Sharding commands
-
-| Command | Supported |
-|||
-| `abortReshardCollection` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `addShard` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `addShardToZone` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `balancerCollectionStatus` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `balancerStart` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `balancerStatus` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `balancerStop` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `checkShardingIndex` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `clearJumboFlag` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `cleanupOrphaned` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `cleanupReshardCollection` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `commitReshardCollection` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `configureCollectionBalancing` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `enableSharding` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `flushRouterConfig` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getShardMap` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getShardVersion` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `isdbgrid` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `listShards` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `medianKey` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `moveChunk` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `movePrimary` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `mergeChunks` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `refineCollectionShardKey` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `removeShard` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `removeShardFromZone` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `reshardCollection` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `setShardVersion` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `shardCollection` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `shardingState` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `split` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `splitVector` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `unsetSharding` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `updateZoneKeyRange` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Diagnostics commands
-
-| Command | Supported |
-|||
-| `availableQueryOptions` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `buildInfo`| :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `collStats` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `connPoolStats` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `connectionStatus` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Partial |
-| `cursorInfo` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dataSize` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dbHash` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `dbStats` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `driverOIDTest` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `explain`| :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `features` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getCmdLineOpts` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `getLog`| :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `hostInfo` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Partial |
-| `_isSelf` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `listCommands` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `lockInfo` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `netstat` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `ping`| :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `profile` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `serverStatus` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `shardConnPoolStats` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `top` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `validate`| :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `whatsmyuri`| :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Free Monitoring Commands
-
-| Command | Supported |
-|||
-| `getFreeMonitoringStatus` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `setFreeMonitoring` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Auditing command
-
-| Command | Supported |
-|||
-| `logApplicationMessage` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-## Aggregation pipeline
-
-Azure Cosmos DB for MongoDB vCore supports the following aggregation pipeline features:
-
-### Aggregation commands
-
-| Command | Supported |
-|||
-| `aggregate` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `count` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `distinct` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `mapReduce` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Aggregation stages
-
-| Command | Supported |
-|||
-| `$addFields` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$bucket` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$bucketAuto` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$changeStream` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$count` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$currentOp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$facet` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$geoNear` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$graphLookup` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$group` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$indexStats` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$limit` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$listLocalSessions` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$listSessions` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$lookup` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$match` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$merge` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$out` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$planCacheStats` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$project` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$redact` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$regexFind` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$regexFindAll` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$regexMatch` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$replaceRoot` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$replaceWith` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$sample` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$search` | :::image type="icon" source="medi) |
-| `$set` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$skip` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$sort` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$sortByCount` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$unset` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$unwind` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
+<table>
+<tr><td><b>Category</b></td><td><b>Command</b></td><td><b>Supported</b></td></tr>
+<tr><td rowspan="4">Aggregation Commands</td><td><code>aggregate</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>count</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>distinct</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>mapReduce</code></td><td>Deprecated</td></tr>
+
+<tr><td rowspan="3">Authentication Commands</td><td><code>authenticate</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>getnonce</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>logout</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="1">Geospatial Commands</td><td><code>geoSearch</code></td><td>Deprecated</td></tr>
+
+<tr><td rowspan="1">Query Plan Cache Commands</td><td></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="32">Administrative Commands</td><td><code>cloneCollectionAsCapped</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No. Capped collections are currently not supported.</td></tr>
+<tr><td><code>collMod</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Partial</td></tr>
+<tr><td><code>compact</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>connPoolSync</code></td><td>Deprecated</td></tr>
+<tr><td><code>convertToCapped</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No. Capped collections are currently not supported.</td></tr>
+<tr><td><code>create</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Partial</td></tr>
+<tr><td><code>createIndexes</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>currentOp</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>drop</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>dropDatabase</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>dropConnections</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>dropIndexes</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>filemd5</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>fsync</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>fsyncUnlock</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>getDefaultRWConcern</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>getClusterParameter</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>getParameter</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>killCursors</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>killOp</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>listCollections</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>listDatabases</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>listIndexes</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>logRotate</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>reIndex</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>renameCollection</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>rotateCertificates</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>setFeatureCompatibilityVersion</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>setIndexCommitQuorum</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>setParameter</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Partial</td></tr>
+<tr><td><code>setDefaultRWConcern</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>shutdown</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+
+<tr><td rowspan="1">User & Role Management Commands</td><td></td><td>Not supported today, but will be made available through Azure Active Directory in the future.</td></tr>
+
+<tr><td rowspan="1">Replication Commands</td><td></td><td>Azure manages replication, removing the necessity for customers to replicate manually.</td></tr>
+
+<tr><td rowspan="35">Sharding Commands</td><td><code>enableSharding</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>isdbgrid</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>reshardCollection</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>shardCollection</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>unsetSharding</code></td><td>Deprecated</td></tr>
+<tr><td><code>addShard</code></td><td rowspan="29">As a Platform-as-a-Service (PaaS) offering, Azure handles shard placement and rebalancing. Users only need to specify the sharding strategy for their collections, and Azure takes care of the rest (see the sketch after this table).</td></tr>
+<tr><td><code>addShardToZone</code></td></tr>
+<tr><td><code>clearJumboFlag</code></td></tr>
+<tr><td><code>cleanupOrphaned</code></td></tr>
+<tr><td><code>removeShard</code></td></tr>
+<tr><td><code>removeShardFromZone</code></td></tr>
+<tr><td><code>setShardVersion</code></td></tr>
+<tr><td><code>mergeChunks</code></td></tr>
+<tr><td><code>checkShardingIndex</code></td></tr>
+<tr><td><code>getShardMap</code></td></tr>
+<tr><td><code>getShardVersion</code></td></tr>
+<tr><td><code>medianKey</code></td></tr>
+<tr><td><code>splitVector</code></td></tr>
+<tr><td><code>shardingState</code></td></tr>
+<tr><td><code>cleanupReshardCollection</code></td></tr>
+<tr><td><code>flushRouterConfig</code></td></tr>
+<tr><td><code>balancerCollectionStatus</code></td></tr>
+<tr><td><code>balancerStart</code></td></tr>
+<tr><td><code>balancerStatus</code></td></tr>
+<tr><td><code>balancerStop</code></td></tr>
+<tr><td><code>configureCollectionBalancing</code></td></tr>
+<tr><td><code>listShards</code></td></tr>
+<tr><td><code>split</code></td></tr>
+<tr><td><code>moveChunk</code></td></tr>
+<tr><td><code>updateZoneKeyRange</code></td></tr>
+<tr><td><code>movePrimary</code></td></tr>
+<tr><td><code>abortReshardCollection</code></td></tr>
+<tr><td><code>commitReshardCollection</code></td></tr>
+<tr><td><code>refineCollectionShardKey</code></td></tr>
+<tr><td><code>reshardCollection</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="9">Query and Write Operation Commands</td><td><code>change streams</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>delete</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>find</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>findAndModify</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>getLastError</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>getMore</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Partial</td></tr>
+<tr><td><code>insert</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>resetError</code></td><td>Deprecated</td></tr>
+<tr><td><code>update</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="8">Session Commands</td><td><code>abortTransaction</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>commitTransaction</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>endSessions</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>killAllSessions</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>killAllSessionsByPattern</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>killSessions</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>refreshSessions</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>startSession</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="25">Diagnostic Commands</td><td><code>availableQueryOptions</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>buildInfo</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>collStats</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>connPoolStats</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>connectionStatus</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Partial</td></tr>
+<tr><td><code>dataSize</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>dbHash</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>dbStats</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>driverOIDTest</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>explain</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>features</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>getCmdLineOpts</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>getLog</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>hostInfo</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Partial</td></tr>
+<tr><td><code>_isSelf</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>listCommands</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>lockInfo</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>netstat</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>ping</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>profile</code></td><td>Managed by Azure as part of the PaaS service.</td></tr>
+<tr><td><code>serverStatus</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>shardConnPoolStats</code></td><td>Deprecated</td></tr>
+<tr><td><code>top</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>validate</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>whatsmyuri</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="1">System Events Auditing Commands</td><td><code>logApplicationMessage</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+</table>
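
For example, the following is a minimal mongosh sketch of specifying a sharding strategy with the supported `shardCollection` command; the database, collection, and shard key names (`salesdb.orders`, `customerId`) are hypothetical placeholders, and Azure handles shard placement and rebalancing once the strategy is declared.

```javascript
// Minimal sketch: declare the sharding strategy for a (hypothetical)
// salesdb.orders collection; Azure manages placement and rebalancing.
db.adminCommand({
  shardCollection: "salesdb.orders",   // <database>.<collection> to shard
  key: { customerId: "hashed" }        // hashed shard key on customerId
})
```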
+
+## Operators
+
+The following operators are currently supported on Azure Cosmos DB for MongoDB vCore:
+
+<table>
+<tr><td><b>Category</b></td><td><b>Operator</b></td><td><b>Supported</b></td></tr>
+<tr><td rowspan="8">Comparison Query Operators</td><td><code>$eq</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$gt</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$gte</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$in</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$lt</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$lte</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$ne</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$nin</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="4">Logical Query Operators</td><td><code>$and</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$not</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$nor</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$or</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="2">Element Query Operators</td><td><code>$exists</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$type</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="6">Evaluation Query Operators</td><td><code>$expr</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$jsonSchema</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$mod</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$regex</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$text</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$where</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="1">Geospatial Operators</td><td></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="3">Array Query Operators</td><td><code>$all</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$elemMatch</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$size</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="4">Bitwise Query Operators</td><td><code>$bitsAllClear</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bitsAllSet</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bitsAnyClear</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bitsAnySet</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="4">Projection Operators</td><td><code>$</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$elemMatch</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$meta</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$slice</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="3">Miscellaneous Query Operators</td><td><code>$comment</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$rand</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$natural</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="9">Field Update Operators</td><td><code>$currentDate</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$inc</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$min</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$max</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$mul</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$rename</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$set</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setOnInsert</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$unset</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="12">Array Update Operators</td><td><code>$</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$[]</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$[identifier]</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$addToSet</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$pop</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$pull</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$push</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$pullAll</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$each</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$position</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$slice</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$sort</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="1">Bitwise Update Operators</td><td><code>$bit</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="16">Arithmetic Expression Operators</td><td><code>$abs</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$add</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$ceil</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$divide</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$exp</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$floor</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$ln</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$log</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$log10</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$mod</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$multiply</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$pow</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$round</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$sqrt</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$subtract</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$trunc</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="20">Array Expression Operators</td><td><code>$arrayElemAt</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$arrayToObject</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$concatArrays</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$filter</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$firstN</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$in</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$indexOfArray</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$isArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$lastN</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$map</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$maxN</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$minN</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$objectToArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$range</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$reduce</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$reverseArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$size</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$slice</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$sortArray</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$zip</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="4">Bitwise Operators</td><td><code>$bitAnd</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bitNot</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bitOr</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bitXor</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="3">Boolean Expression Operators</td><td><code>$and</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$not</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$or</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="7">Comparison Expression Operators</td><td><code>$cmp</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$eq</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$gt</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$gte</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$lt</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$lte</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$ne</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="1">Custom Aggregation Expression Operators</td><td colspan="2">Not supported.</td></tr>
+
+<tr><td rowspan="2">Data Size Operators</td><td><code>$bsonSize</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$binarySize</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="22">Date Expression Operators</td><td><code>$dateAdd</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$dateDiff</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$dateFromParts</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$dateFromString</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$dateSubtract</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$dateToParts</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$dateToString</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$dateTrunc</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$dayOfMonth</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$dayOfWeek</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$dayOfYear</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$hour</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$isoDayOfWeek</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$isoWeek</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$isoWeekYear</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$millisecond</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$minute</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$month</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$second</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toDate</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$week</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$year</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="1">Literal Expression Operator</td><td><code>$literal</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="3">Miscellaneous Operators</td><td><code>$getField</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$rand</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$sampleRate</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="3">Object Expression Operators</td><td><code>$mergeObjects</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$objectToArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setField</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="7">Set Expression Operators</td><td><code>$allElementsTrue</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$anyElementTrue</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setDifference</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setEquals</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setIntersection</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setIsSubset</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setUnion</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="23">String Expression Operators</td><td><code>$concat</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$dateFromString</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$dateToString</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$indexOfBytes</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$indexOfCP</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$ltrim</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$regexFind</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$regexFindAll</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$regexMatch</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$replaceOne</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$replaceAll</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$rtrim</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$split</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$strLenBytes</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$strLenCP</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$strcasecmp</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$substr</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$substrBytes</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$substrCP</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toLower</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toString</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$trim</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toUpper</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="1">Text Expression Operator</td><td><code>$meta</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="1">Timestamp Expression Operators</td><td colspan="2">Not supported.</td></tr>
+
+<tr><td rowspan="1">Trigonometry Expression Operators</td><td colspan="2">Not supported.</td></tr>
+
+<tr><td rowspan="11">Type Expression Operators</td><td><code>$convert</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$isNumber</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toBool</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toDate</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toDecimal</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toDouble</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toInt</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toLong</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toObjectId</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$toString</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$type</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="22">Accumulators ($group, $bucket, $bucketAuto, $setWindowFields)</td><td><code>$accumulator</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$addToSet</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$avg</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bottom</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$bottomN</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$count</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$first</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$firstN</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$last</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$lastN</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$max</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$maxN</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$median</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$mergeObjects</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$min</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$percentile</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$push</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$stdDevPop</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$stdDevSamp</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$sum</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$top</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$topN</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="10">Accumulators (in Other Stages)</td><td><code>$avg</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$first</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$last</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$max</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$median</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$min</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$percentile</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$stdDevPop</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$stdDevSamp</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$sum</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+
+<tr><td rowspan="1">Variable Expression Operators</td><td colspan="2">Not supported.</td></tr>
+
+<tr><td rowspan="1">Window Operators</td><td colspan="2">Not supported.</td></tr>
+
+<tr><td rowspan="3">Conditional Expression Operators</td><td><code>$cond</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$ifNull</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$switch</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+
+<tr><td rowspan="44">Aggregation Pipeline Stages</td><td><code>$addFields</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$bucket</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$bucketAuto</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$changeStream</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$changeStreamSplitLargeEvent</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$collStats</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$count</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$densify</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$documents</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$facet</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$fill</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$geoNear</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$graphLookup</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$group</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$indexStats</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$limit</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$listSampledQueries</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$listSearchIndexes</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$listSessions</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$lookup</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$match</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$merge</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$out</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$planCacheStats</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$project</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$redact</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$replaceRoot</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$replaceWith</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$sample</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$search</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$searchMeta</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$set</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$setWindowFields</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$skip</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$sort</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$sortByCount</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$unionWith</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$unset</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$unwind</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$shardedDataDistribution</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$changeStream</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$currentOp</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td><code>$listLocalSessions</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$documents</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
> [!NOTE]
> The `$lookup` aggregation stage doesn't yet support variable expressions with `let`.
+> The `avgObjSize` and `size` values in `collStats` work only with documents smaller than 2 KB.
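
As a sketch of the supported form, the `localField`/`foreignField` variant of `$lookup` can be used as usual; the collection and field names below (`orders`, `customers`, `customerId`) are hypothetical.

```javascript
// Supported: $lookup with localField/foreignField (placeholder names).
// The let/pipeline variant that binds variables isn't available yet.
db.orders.aggregate([
  {
    $lookup: {
      from: "customers",         // collection to join with
      localField: "customerId",  // field in the orders documents
      foreignField: "_id",       // field in the customers documents
      as: "customer"             // name of the output array field
    }
  }
])
```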
-### Boolean expressions
-
-| Command | Supported |
-|||
-| `and` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `not` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `or` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Type expressions
-
-| Command | Supported |
-|||
-| `$type` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toLong` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toString` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$convert` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toDate` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toDecimal` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toObjectId` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toDouble` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toBool` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toInt` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$isNumber` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Set expressions
-
-| Command | Supported |
-|||
-| `$anyElementTrue` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$setUnion` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$allElementsTrue` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$setIntersection` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$setDifference` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$setEquals` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$setIsSubset` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Comparison expressions
-
-| Command | Supported |
-|||
-| `$ne` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$lte` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$gt` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$gte` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$lt` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$eq` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$cmp` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Custom Aggregation expressions
-
-| Command | Supported |
-|||
-| `$accumulator` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$function` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Data size Operators
-
-| Command | Supported |
-|||
-| `$binarySize` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$bsonSize` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Arithmetic expressions
-
-| Command | Supported |
-|||
-| `$add` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$multiply` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$subtract` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$divide` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$ceil` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$floor` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$trunc` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$abs` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$mod` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$pow` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$sqrt` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$exp` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$ln` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$log` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$log10` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$round` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Timestamp expressions
-
-| Command | Supported |
-|||
-| `$tsIncrement` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$tsSecond` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Trigonometry expressions
-
-| Command | Supported |
-|||
-| `$sin` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$cos` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$tan` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$asin` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$acos` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$atan` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$atan2` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$asinh` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$acosh` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$atanh` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$sinh` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$cosh` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$tanh` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$degreesToRadians` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$radiansToDegrees` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### String expressions
-
-| Command | Supported |
-|||
-| `$concat` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$dateToString` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toLower` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$toString` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$substr` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$split` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$strLenCP` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$toUpper` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$indexOfCP` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$substrCP` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$ltrim` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$substrBytes` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$indexOfBytes` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$trim` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$strLenBytes` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$dateFromString` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$regexFind` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$regexFindAll` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$regexMatch` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$replaceOne` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$replaceAll` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$rtrim` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$strcasecmp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Text expression Operator
-
-| Command | Supported |
-|||
-| `$meta` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Array expressions
-
-| Command | Supported |
-|||
-| `$in` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$size` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$arrayElemAt` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$slice` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$filter` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$map` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$objectToArray` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$arrayToObject` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$reduce` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$indexOfArray` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$concatArrays` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$isArray` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$zip` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$reverseArray` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$range` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$first` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$firstN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$last` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$lastN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$maxN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$minN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$sortArray` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Variable operator
-
-| Command | Supported |
-|||
-| `$let` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### System variables
-
-| Command | Supported |
-|||
-| `$$CLUSTERTIME` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$$CURRENT` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$$DESCEND` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$$KEEP` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$$NOW` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$$PRUNE` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$$REMOVE` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$$ROOT` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Window operators
-
-| Command | Supported |
-|||
-| `$sum` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$push` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$addToSet` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$count` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$max` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$min` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$avg` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$stdDevPop` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$bottom` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$bottomN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$covariancePop` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$covarianceSamp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$denseRank` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$derivative` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$documentNumber` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$expMovingAvg` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$first` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$integral` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$last` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$linearFill` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$locf` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$minN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$rank` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$shift` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$stdDevSamp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$top` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$topN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Literal operator
-
-| Command | Supported |
-|||
-| `$literal` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Date expressions
-
-| Command | Supported |
-|||
-| `$dateToString` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$month` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$year` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$hour` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$minute` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$second` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$dayOfMonth` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$week` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$millisecond` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$toDate` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$dateToParts` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$dayOfWeek` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$dayOfYear` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$isoWeek` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$isoWeekYear` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$isoDayOfWeek` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$dateAdd` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$dateDiff` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$dateFromParts` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$dateFromString` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$dateSubtract` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$dateTrunc` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Conditional expressions
-
-| Command | Supported |
-|||
-| `$cond` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$ifNull` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$switch` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Accumulator expressions
-
-| Command | Supported |
-|||
-| `$accumulator` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$addToSet` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$avg` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$bottom` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$bottomN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$count` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$first` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$firstN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$last` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$lastN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$max` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$maxN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$mergeObjects` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$min` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$push` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$stdDevPop` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$stdDevSamp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$sum` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$top` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$topN` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$stdDevPop` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$stdDevSamp` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$sum` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Miscellaneous operators
-
-| Command | Supported |
-|||
-| `$getField` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$rand` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$sampleRate` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Object expressions
-
-| Command | Supported |
-|||
-| `$mergeObjects` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$objectToArray` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$setField` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-## Data types
-
-Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format.
-
-| Command | Supported |
-|||
-| `Double` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `String` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Object` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Array` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Binary Data` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `ObjectId` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Boolean` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Date` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Null` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `32-bit Integer (int)` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Timestamp` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `64-bit Integer (long)` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `MinKey` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `MaxKey` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Decimal128` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Regular Expression` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `JavaScript` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `JavaScript (with scope)` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Undefined` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
+</table>
## Indexes and index properties
Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope
### Indexes
-| Command | Supported |
-|||
-| `Single Field Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Compound Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Multikey Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Text Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Geospatial Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `Hashed Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Vector Index (only available in Cosmos DB)` | :::image type="icon" source="medi) |
+<table>
+<tr><td>Command</td><td>Supported</td></tr>
+<tr><td>Single Field Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Compound Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Multikey Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Text Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Geospatial Index</td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td>Hashed Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Vector Index (only available in Cosmos DB)</td><td><img src="medi)</td></tr>
+</table>
+ ### Index properties
-| Command | Supported |
-|||
-| `TTL` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Unique` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Partial` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Case Insensitive` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `Sparse` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Background` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
+<table>
+<tr><td>Command</td><td>Supported</td></tr>
+<tr><td>TTL</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Unique</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Partial</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Case Insensitive</td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td>Sparse</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+<tr><td>Background</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
+</table>
-## Operators
-Azure Cosmos DB for MongoDB vCore supports the following operators:
-
-### Comparison Query operators
-
-| Command | Supported |
-|||
-| `$eq` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$gt` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$gte` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$in` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$lt` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$lte` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$ne` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$nin` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Logical operators
-
-| Command | Supported |
-|||
-| `$or` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$and` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$not` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$nor` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Element operators
-
-| Command | Supported |
-|||
-| `$exists` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$type` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Evaluation query operators
-
-| Command | Supported |
-|||
-| `$expr` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$jsonSchema` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$mod` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$regex` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$text` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$where` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Array operators
-
-| Command | Supported |
-|||
-| `$all` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$elemMatch` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$size` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Bitwise Query operators
-
-| Command | Supported |
-|||
-| `$bitsAllClear` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$bitsAllSet` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$bitsAnyClear` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$bitsAnySet` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Miscellaneous operators
-
-| Command | Supported |
-|||
-| `$comment` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$rand` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$natural` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-
-### Projection operators
-
-| Command | Supported |
-|||
-| `$` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$elemMatch` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$slice` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Update operators
-
-Azure Cosmos DB for MongoDB vCore supports the following update operators:
-
-#### Field update operators
-
-| Command | Supported |
-|||
-| `$currentDate` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$inc` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$min` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$max` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$mul` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$rename` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$set` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$setOnInsert` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$unset` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-#### Array update operators
-
-| Command | Supported |
-|||
-| `$` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$[]` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$[<identifier>]` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$addToSet` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$pop` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$pull` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$push` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$pullAll` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-#### Update modifiers
-
-| Command | Supported |
-|||
-| `$each` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$position` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$slice` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$sort` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-#### Bitwise update operator
-
-| Command | Supported |
-|||
-| `$bit` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-
-### Geospatial operators
-
-| Operator | Supported |
-| | |
-| `$geoIntersects` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$geoWithin` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$near` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$nearSphere` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$box` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$center` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$centerSphere` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$geometry` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$maxDistance` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$minDistance` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$polygon` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `$uniqueDocs` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
## Next steps
databox Data Box Disk Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-portal-customer-managed-shipping.md
Last updated 06/07/2022 + # Use self-managed shipping for Azure Data Box Disk in the Azure portal
This article describes self-managed shipping tasks to order, pick-up, and drop-o
Self-managed shipping is available as an option when you [Order Azure Data Box Disk](data-box-disk-deploy-ordered.md). Self-managed shipping is only available in the following regions: * US Government
+* United States
* United Kingdom * Western Europe * Australia
Self-managed shipping is available as an option when you [Order Azure Data Box D
* Singapore * South Korea * South Africa
-* India (Preview)
+* India
* Brazil ## Use self-managed shipping
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
Title: Deploy the Azure Monitor Agent
+ Title: Azure Monitor Agent in Defender for Cloud
description: Learn how to deploy the Azure Monitor Agent on your Azure, multicloud, and on-premises servers to support Microsoft Defender for Cloud protections. Previously updated : 06/18/2023 Last updated : 11/02/2023
-# Deploy the Azure Monitor Agent to protect your servers with Microsoft Defender for Cloud
+# Azure Monitor Agent in Defender for Cloud
-To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can quietly deploy the Azure Monitor Agent on your servers when you enable Defender for Servers.
+To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis.
+
+In this article, we give an overview of AMA preferences for when you deploy Defender for SQL servers on machines.
> [!NOTE]
-> As part of the Defender for Cloud updated strategy, Azure Monitor Agent will no longer be required for the Defender for Servers offering. However, it will still be required for Defender for SQL Server on machines. As a result, the autoprovisioning process for both agents will be adjusted accordingly. For more information about this change, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+> As part of the Defender for Cloud updated strategy, Azure Monitor Agent will no longer be required for the Defender for Servers offering. However, it will still be required for Defender for SQL Server on machines. As a result, the previous autoprovisioning process for both agents has been adjusted accordingly. Learn more about [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+
+## Azure Monitor Agent in Defender for Servers
-In this article, we're going to show you how to deploy the agent so that you can protect your servers.
+Azure Monitor Agent (AMA) is still available for deployment on your servers but isn't required to receive Defender for Servers features and capabilities. To ensure your servers are secured and receive all the security content of Defender for Servers, verify that [Defender for Endpoint (MDE) integration](integration-defender-for-endpoint.md) and [agentless disk scanning](concept-agentless-data-collection.md) are enabled on your subscriptions. This ensures you seamlessly stay up to date and receive all the alternative deliverables once they're provided.
## Availability
+The following information on availability is relevant for the [Defender for SQL](defender-for-sql-introduction.md) plan only.
+ [!INCLUDE [azure-monitor-agent-availability](includes/azure-monitor-agent-availability.md)] ## Prerequisites
Before you deploy AMA with Defender for Cloud, you must have the following prere
- On-premises machines - [Install Azure Arc](../azure-arc/servers/learn/quick-enable-hybrid-vm.md). - Make sure the Defender plans that you want the Azure Monitor Agent to support are enabled:
- - [Enable Defender for Servers Plan 2 on Azure and on-premises VMs](enable-enhanced-security.md)
+ - [Enable Defender for SQL servers on machines](defender-for-sql-usage.md)
- [Enable Defender plans on the subscriptions for your AWS VMs](quickstart-onboard-aws.md) - [Enable Defender plans on the subscriptions for your GCP VMs](quickstart-onboard-gcp.md)
-## Deploy the Azure Monitor Agent with Defender for Cloud
-
-To deploy the Azure Monitor Agent with Defender for Cloud:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the relevant subscription.
-1. In the Monitoring coverage column of the Defender for Server plan, select **Settings**.
-
- :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-server-setting.png" alt-text="Screenshot showing selecting settings for server service plan." lightbox="media/auto-deploy-azure-monitoring-agent/select-server-setting.png":::
-
-1. Enable deployment of the Azure Monitor Agent:
-
- 1. For the **Log Analytics agent/Azure Monitor Agent**, select the **On** status.
- :::image type="content" source="media/auto-deploy-azure-monitoring-agent/turn-on-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing turning on status for Log Analytics/Azure Monitor Agent." lightbox="media/auto-deploy-azure-monitoring-agent/turn-on-azure-monitor-agent-auto-provision.png":::
-
- In the Configuration column, you can see the enabled agent type. When you enable Defender plans, Defender for Cloud decides which agent to provision based on your environment. In most cases, the default is the Log Analytics agent.
-
- 1. For the **Log Analytics agent/Azure Monitor Agent**, select **Edit configuration**.
+## Deploy the SQL server-targeted AMA autoprovisioning process
- 1. For the Autoprovisioning configuration agent type, select **Azure Monitor Agent**.
-
- :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing selecting Azure Monitor Agent for autoprovisioning." lightbox="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png":::
-
- By default:
-
- - The Azure Monitor Agent is installed on all existing machines in the selected subscription, and on all new machines created in the subscription.
- - The Log Analytics agent isn't uninstalled from machines that already have it installed. You can [leave the Log Analytics agent](#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) on the machine, or you can manually [remove the Log Analytics agent](../azure-monitor/agents/azure-monitor-agent-migration.md) if you don't require it for other protections.
- - The agent sends data to the default workspace for the subscription. You can also [configure a custom workspace](#configure-custom-destination-log-analytics-workspace) to send data to.
- - You can't enable [collection of other security events](#other-security-events-collection).
+Deploying Azure Monitor Agent with Defender for Cloud is available for SQL servers on machines as detailed [here](defender-for-sql-autoprovisioning.md#migrate-to-the-sql-server-targeted-ama-autoprovisioning-process).
## Impact of running with both the Log Analytics and Azure Monitor Agents
If you configure a custom Log Analytics workspace:
- Defender for Cloud only configures the data collection rules and other extensions for the custom workspace. You have to configure the workspace solution on the custom workspace. - Machines with Log Analytics agent that reports to a Log Analytics workspace with the security solution are billed even when the Defender for Servers plan isn't enabled. Machines with the Azure Monitor Agent are billed only when the plan is enabled on the subscription. The security solution is still required on the workspace to work with the plans features and to be eligible for the 500-MB benefit.
-To configure a custom destination workspace for the Azure Monitor Agent:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the relevant subscription.
-1. In the Monitoring coverage column of the Defender for Server plan, select **Settings**.
-
- :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-server-setting.png" alt-text="Screenshot showing selecting settings in Monitoring coverage column." lightbox="media/auto-deploy-azure-monitoring-agent/select-server-setting.png":::
-
-1. For the **Log Analytics agent/Azure Monitor Agent**, select **Edit configuration**.
-
- :::image type="content" source="media/auto-deploy-azure-monitoring-agent/configure-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing where to select edit configuration for Log Analytics agent/Azure Monitor Agent." lightbox="media/auto-deploy-azure-monitoring-agent/configure-azure-monitor-agent-auto-provision.png":::
-
-1. Select **Custom workspace**, and select the workspace that you want to send data to.
-
- :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision-custom.png" alt-text="screenshot showing selection of custom workspace." lightbox="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision-custom.png":::
- ### Log analytics workspace solutions The Azure Monitor Agent requires Log analytics workspace solutions. These solutions are automatically installed when you autoprovision the Azure Monitor Agent with the default workspace. The required [Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions) for the data that you're collecting are: -- Security posture management (CSPM) ΓÇô **SecurityCenterFree solution**
+- Cloud security posture management (CSPM) – **SecurityCenterFree solution**
- Defender for Servers Plan 2 – **Security solution** ### Other extensions for Defender for Cloud
When you autoprovision the Log Analytics agent in Defender for Cloud, you can ch
If you want to collect security events when you autoprovision the Azure Monitor Agent, you can create a [Data Collection Rule](../azure-monitor/essentials/data-collection-rule-overview.md) to collect the required events. Learn [how to do it with PowerShell or with Azure Policy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-configure-security-events-collection-with-azure-monitor/ba-p/3770719).
-Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500 MB of free data](faq-defender-for-servers.yml) daily on defined data types that include security events.
+As in Log Analytics workspaces, Defender for Cloud users are eligible for [500 MB of free data](faq-defender-for-servers.yml) daily on defined data types that include security events.
## Next steps
Now that you enabled the Azure Monitor Agent, check out the features that are su
- [Endpoint protection assessment](endpoint-protection-recommendations-technical.md) - [Adaptive application controls](adaptive-application-controls.md) - [Fileless attack detection](defender-for-servers-introduction.md#plan-features)-- [File Integrity Monitoring](file-integrity-monitoring-enable-ama.md)
+- [File integrity monitoring](file-integrity-monitoring-enable-ama.md)
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
The following table shows an overview of the Defender for Servers deployment pro
| Enable Defender for Servers | ΓÇó When you enable a paid plan, Defender for Cloud enables the *Security* solution on its default workspace.<br /><br />ΓÇó Enable Defender for Servers Plan 1 (subscription only) or Plan 2 (subscription and workspace).<br /><br />ΓÇó After enabling a plan, decide how you want to install agents and extensions on Azure VMs in the subscription or workgroup.<br /><br />ΓÇóBy default, auto-provisioning is enabled for some extensions. | | Protect AWS/GCP machines | ΓÇó For a Defender for Servers deployment, you set up a connector, turn off plans you don't need, configure auto-provisioning settings, authenticate to AWS/GCP, and deploy the settings.<br /><br />ΓÇó Auto-provisioning includes the agents used by Defender for Cloud and the Azure Connected Machine agent for onboarding to Azure with Azure Arc.<br /><br />ΓÇó AWS uses a CloudFormation template.<br /><br />ΓÇó GCP uses a Cloud Shell template.<br /><br />ΓÇó Recommendations start appearing in the portal. | | Protect on-premises servers | ΓÇó Onboard them as Azure Arc machines and deploy agents with automation provisioning. |
-| Foundational CSPM | ΓÇó There are no charges when you use foundational CSPM with no plans enabled.<br /><br />ΓÇó AWS/GCP machines don't need to be set up with Azure Arc for foundational CSPM. On-premises machines do.<br /><br />ΓÇó Some foundational recommendations rely only agents: Antimalware / endpoint protection (Log Analytics agent or Azure Monitor agent) \| OS baselines recommendations (Log Analytics agent or Azure Monitor agent and Guest Configuration extension) \| System updates recommendation (Log Analytics agent) |
+| Foundational CSPM | • There are no charges when you use foundational CSPM with no plans enabled.<br /><br />• AWS/GCP machines don't need to be set up with Azure Arc for foundational CSPM. On-premises machines do.<br /><br />• Some foundational recommendations rely only on agents: Antimalware / endpoint protection (Log Analytics agent or Azure Monitor agent) \| OS baselines recommendations (Log Analytics agent or Azure Monitor agent and Guest Configuration extension) |
- Learn more about [foundational cloud security posture management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options). - Learn more about [Azure Arc](../azure-arc/index.yml) onboarding.
education-hub Deploy Resources Azure For Students https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/deploy-resources-azure-for-students.md
+
+ Title: Deploy Resources with Azure for Students
+description: Learn how to deploy resources with your Azure for Students subscription
++++ Last updated : 11/1/2023+++
+# Tutorial: Deploy Resources with Azure for Students
+
+With Azure for Students, you have access to the entire Azure platform and all its services. These tutorials will show you everything you need to know about some popular services that can be deployed using your Azure for Students subscription.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Deploy an App on Azure
+> * Deploy a Virtual Machine
+> * Deploy a SQL Database
+> * Deploy Azure AI Speech-to-Text
+> * Deploy Azure AI Custom Vision Service
+
+## Prerequisites
+
+You must have an Azure for Students account.
+
+## Follow the tutorials to deploy resources
+
+- [Deploy an App on Azure](https://learn.microsoft.com/azure/app-service/)
+- [Deploy a Virtual Machine](https://learn.microsoft.com/azure/virtual-machines/)
+- [Deploy a SQL Database](https://learn.microsoft.com/azure/azure-sql/?view=azuresql)
+- [Deploy Azure AI Speech-to-Text](https://learn.microsoft.com/azure/ai-services/speech-service/index-speech-to-text)
+- [Deploy Azure AI Custom Vision Service](https://learn.microsoft.com/azure/ai-services/custom-vision-service/)
+
+## Next steps
+
+- [Learn about the education hub](about-education-hub.md)
+
+- [Support options](educator-service-desk.md)
education-hub Navigate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/navigate-costs.md
Additionally, you can 'View cost details', which will send you into Microsof
## Create Budgets to help conserve your Azure for Students credit
-[![Budget](https://markdown-videos-api.jorgenkh.no/url?url=https%3A%2F%2Fyoutu.be%2FUrkHiUx19Po)](https://youtu.be/UrkHiUx19Po)
+<iframe width="560" height="315" src="https://www.youtube.com/embed/UrkHiUx19Po?si=EREdwKeBAGnlOeSS" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
Read more about this tutorial [Create and Manage Budgets](https://learn.microsoft.com/azure/cost-management-billing/costs/tutorial-acm-create-budgets)
frontdoor Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/domain.md
To use the WAF with a custom domain, use an Azure Front Door security policy res
## Next steps
-To learn how to add a custom domain to your Azure Front Door profile, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+* To learn how to add a custom domain to your Azure Front Door profile, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+* Learn more about [end-to-end TLS with Azure Front Door](end-to-end-tls.md).
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
Azure Front Door doesn't support disabling or configuring specific cipher suit
::: zone pivot="front-door-standard-premium" * [Understand custom domains](domain.md) on Azure Front Door.
-* [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+* [Configure a custom domain](standard-premium/how-to-add-custom-domain.md) on Azure Front Door using the Azure portal.
+* Learn about [End-to-end TLS with Azure Front Door](end-to-end-tls.md).
::: zone-end
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
Lastly, validate that your application content is getting served using a browser
## Next steps
-Learn how to [enable HTTPS for your custom domain](how-to-configure-https-custom-domain.md).
+* Learn how to [enable HTTPS for your custom domain](how-to-configure-https-custom-domain.md).
+* Learn more about [custom domains in Azure Front Door](../domain.md).
+* Learn about [End-to-end TLS with Azure Front Door](../end-to-end-tls.md).
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
You can change a domain between using an Azure Front Door-managed certificate an
## Next steps
-Learn about [caching with Azure Front Door Standard/Premium](../front-door-caching.md).
+* Learn about [caching with Azure Front Door Standard/Premium](../front-door-caching.md).
+* [Understand custom domains](../domain.md) on Azure Front Door.
+* Learn about [End-to-end TLS with Azure Front Door](../end-to-end-tls.md).
governance How To Create Policy Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-create-policy-definition.md
Title: How to create custom machine configuration policy definitions description: Learn how to create a machine configuration policy. Previously updated : 04/18/2023 Last updated : 11/02/2023 # How to create custom machine configuration policy definitions
feature means that the values in the MOF file in the package don't have to be co
The override values are provided through Azure Policy and don't change how the DSC Configurations are authored or compiled.
+Machine configuration supports the following value types for parameters:
+
+- String
+- Boolean
+- Double
+- Float
+ The cmdlets `New-GuestConfigurationPolicy` and `Get-GuestConfigurationPackageComplianceStatus` include a parameter named **Parameter**. This parameter takes a hash table definition including all details about each parameter and creates the required sections of each file used for the Azure
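
For illustration, here's a hedged sketch of what that hash table and the call to `New-GuestConfigurationPolicy` might look like. All names, the content URI, and the values are hypothetical, and the exact key set accepted by the **Parameter** hash table can vary between versions of the GuestConfiguration module.

```powershell
# Hypothetical example: expose the audited Windows service name as an Azure Policy parameter.
# The hash table keys describe how the policy parameter maps onto the DSC resource.
$policyParameters = @(
    @{
        Name                 = 'ServiceName'        # parameter name surfaced in Azure Policy
        DisplayName          = 'Windows service name'
        Description          = 'Name of the Windows service to audit.'
        ResourceType         = 'Service'            # DSC resource type in the configuration
        ResourceId           = 'windowsService'     # instance name of that resource
        ResourcePropertyName = 'Name'               # property the parameter overrides
        DefaultValue         = 'winrm'
        AllowedValues        = @('winrm', 'wuauserv', 'TermService')
    }
)

New-GuestConfigurationPolicy `
    -PolicyId    (New-Guid) `
    -ContentUri  'https://contoso.blob.core.windows.net/packages/AuditWindowsService.zip' `
    -DisplayName 'Audit a Windows service' `
    -Description 'Audits that the specified Windows service is present.' `
    -Path        './policies' `
    -Platform    'Windows' `
    -Version     '1.0.0' `
    -Parameter   $policyParameters `
    -Mode        'ApplyAndAutoCorrect'
```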
governance Evaluate Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/evaluate-impact.md
compliant resources are incorrectly included (known as a _false positive_) in th
The recommended approach to validating a new policy definition is by following these steps: - Tightly define your policy-- Audit your existing resources
+- Test your policy's effectiveness
- Audit new or updated resource requests - Deploy your policy to resources - Continuous monitoring
impacts resources that are used by other services.
For this reason, your policy definitions should be as tightly defined and focused on the resources and the properties you need to evaluate for compliance as possible.
-## Audit existing resources
+
+## Test your policy's effectiveness
Before looking to manage new or updated resources with your new policy definition, it's best to see
-how it evaluates a limited subset of existing resources, such as a test resource group. Use the
+how it evaluates a limited subset of existing resources, such as a test resource group. The [Azure Policy VS Code extension](../how-to/extension-for-vscode.md#on-demand-evaluation-scan) allows for isolated testing of definitions against existing Azure resources using the on-demand evaluation scan.
+You may also assign the definition in a _Dev_ environment using the
[enforcement mode](./assignment-structure.md#enforcement-mode) _Disabled_ (DoNotEnforce) on your policy assignment to prevent the [effect](./effects.md) from triggering or activity log entries from being created. This step gives you a chance to evaluate the compliance results of the new policy on existing
-resources without impacting work flow. Check that no compliant resources are marked as non-compliant
+resources without impacting workflow. Check that no compliant resources show as non-compliant
(_false positive_) and that all the resources you expect to be non-compliant are marked correctly.
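
If you script this validation step, a minimal sketch with the Az PowerShell module might look like the following; the definition name, resource group, and assignment name are placeholders.

```powershell
# Hypothetical sketch: assign a custom definition to a test resource group with
# enforcement disabled, so compliance is evaluated but the effect never triggers.
$definition = Get-AzPolicyDefinition -Name 'audit-storage-https'          # placeholder definition name
$testScope  = (Get-AzResourceGroup -Name 'rg-policy-test').ResourceId     # placeholder test scope

New-AzPolicyAssignment `
    -Name             'audit-storage-https-test' `
    -DisplayName      'Audit storage HTTPS (test ring)' `
    -Scope            $testScope `
    -PolicyDefinition $definition `
    -EnforcementMode  DoNotEnforce    # evaluate compliance without enforcing the effect
```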
-After the initial subset of resources validates as expected, slowly expand the evaluation to all
-existing resources.
+After the initial subset of resources validates as expected, slowly expand the evaluation to more
+existing resources and more scopes.
Evaluating existing resources in this way also provides an opportunity to remediate non-compliant resources before full implementation of the new policy. This cleanup can be done manually or through a [remediation task](../how-to/remediate-resources.md) if the policy definition effect is
-_DeployIfNotExists_.
+_DeployIfNotExists_ or _Modify_.
+
+Policy definitions with a _DeployIfNotExists_ effect should leverage the [Azure Resource Manager template what-if operation](../../../azure-resource-manager/templates/deploy-what-if.md) to validate and test the changes that happen when deploying the ARM template.
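
For example, here's a hedged sketch of previewing such a template against a test resource group; the resource group and template file names are placeholders, and you'd first extract the template from the policy's deployment details into its own file:

```powershell
# Hypothetical: report the adds, changes, and deletes the policy's embedded template
# would make, without actually deploying anything.
New-AzResourceGroupDeployment `
    -ResourceGroupName 'rg-policy-test' `
    -TemplateFile      './templates/dine-deployment.json' `
    -WhatIf
```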
## Audit new or updated resources Once you've validated your new policy definition is reporting correctly on existing resources, it's time to look at the impact of the policy when resources get created or updated. If the policy
-definition supports effect parameterization, use [Audit](./effects.md#audit). This configuration
+definition supports effect parameterization, use [Audit](./effects.md#audit) or [AuditIfNotExists](./effects.md#auditifnotexists). This configuration
allows you to monitor the creation and updating of resources to see whether the new policy definition triggers an entry in Azure Activity log for a resource that is non-compliant without impacting existing work or requests. It's recommended to both update and create new resources that match your policy definition to see
-that the _Audit_ effect is correctly being triggered when expected. Be on the lookout for resource
-requests that shouldn't be affected by the new policy definition that trigger the _Audit_ effect.
+that the _Audit_ or _AuditIfNotExists_ effect is correctly triggered when expected. Be on the lookout for resource
+requests that trigger the _Audit_ or _AuditIfNotExists_ effect but that shouldn't be affected by the new policy definition.
These affected resources are another example of _false positives_ and must be fixed in the policy definition before full implementation.
existing resources.
After completing validation of your new policy definition with both existing resources and new or updated resource requests, you begin the process of implementing the policy. It's recommended to create the policy assignment for the new policy definition to a subset of all resources first, such
-as a resource group. After validating initial deployment, extend the scope of the policy to broader
-and broader levels, such as subscriptions and management groups. This expansion is achieved by
-removing the assignment and creating a new one at the target scopes until it's assigned to the full
-scope of resources intended to be covered by your new policy definition.
+as a resource group. You can further filter by resource type or location using the [`resourceSelectors`](./assignment-structure.md#resource-selectors-preview) property within the policy assignment. After validating the initial deployment, expand the impact of the policy by adjusting the resourceSelector filters to target more locations or resource types, or by removing the assignment and replacing it with a new one at broader scopes like subscriptions and management groups. Continue this gradual rollout until it's assigned to the full scope of resources intended to be covered by your new policy definition.
During rollout, if resources are located that should be exempt from your new policy definition, address them in one of the following ways:
governance Policy As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-as-code.md
# Design Azure Policy as Code workflows As you progress on your journey with Cloud Governance, you'll want to shift from manually managing
-each policy definition in the Azure portal or through the various SDKs to something more manageable
-and repeatable at enterprise scale. Two of the predominant approaches to managing systems at scale
+each policy assignment in the Azure portal or through the various SDKs to something more manageable
+and repeatable at an enterprise scale. Two of the predominant approaches to managing systems at scale
in the cloud are: - Infrastructure as Code: The practice of treating the content that defines your environments,
before it's too late and they're attempting to deploy in production.
## Definitions and foundational information
-Before getting into the details of Azure Policy as Code workflow, it's important to understand how to author policy definitions and initiative definitions:
+Before getting into the details of Azure Policy as Code workflow, it's important to understand some fundamental concepts, like how to author policy definitions and initiative definitions, and how to leverage exemptions on assignments of those definitions:
- [Policy definition](./definition-structure.md) - [Initiative definition](./initiative-definition-structure.md)
+- [Policy exemption](./exemption-structure.md)
-The file names correspond with certain portions of policy or initiative definitions:
+The file names correspond with certain portions of policy or initiative definitions and other policy resources:
| File format | File contents | | :-- | :-- |
The file names correspond with certain portions of policy or initiative definiti
| `policyset.parameters.json` | The `properties.parameters` portion of the initiative definition | | `policy.rules.json` | The `properties.policyRule` portion of the policy definition | | `policyset.definitions.json` | The `properties.policyDefinitions` portion of the initiative definition |
+| `exemptionName.json` | The policy exemption that targets a particular resource or scope |
Examples of these file formats are available in the
-[Azure Policy GitHub Repo](https://github.com/Azure/azure-policy/):
+[Azure Policy GitHub Repo](https://github.com/Azure/azure-policy/).
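
The exemption files listed above carry the same properties you would supply when creating an exemption directly against an assignment. As a point of reference, here's a hedged PowerShell sketch of creating one; the assignment name, scope, and expiry are placeholders, and the category must be either `Waiver` or `Mitigated`:

```powershell
# Hypothetical: exempt a single resource group from an existing assignment for 90 days.
$assignment = Get-AzPolicyAssignment -Name 'audit-storage-https-test'

New-AzPolicyExemption `
    -Name              'exempt-legacy-storage' `
    -PolicyAssignment  $assignment `
    -Scope             '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-legacy' `
    -ExemptionCategory 'Waiver' `
    -ExpiresOn         (Get-Date).AddDays(90) `
    -Description       'Temporary waiver while the legacy app is migrated.'
```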
-- Policy definition: [Add a tag to resources](https://github.com/Azure/azure-policy/tree/master/samples/Tags/add-tag)-- Initiative definition: [Billing Tags](https://github.com/Azure/azure-policy/tree/master/samples/PolicyInitiatives/multiple-billing-tags) ## Workflow overview
The recommended general workflow of Azure Policy as Code looks like this diagram
### Source control
-Existing policy and initiative definitions can be exported through PowerShell, CLI, or [Azure Resource Graph (ARG)](../../resource-graph/overview.md) queries. The source control management environment of choice to store these definitions can be one of many options, including a [GitHub](https://www.github.com) or [Azure DevOps](/azure/devops/user-guide/what-is-azure-devops).
+Existing [policy and initiative definitions can be exported](../how-to/export-resources.md) in different ways, such as through PowerShell, CLI, or [Azure Resource Graph (ARG)](../../resource-graph/overview.md) queries. The source control management environment of choice to store these definitions can be one of many options, including a [GitHub](https://www.github.com) repository or [Azure DevOps](/azure/devops/user-guide/what-is-azure-devops).
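
As one example of the PowerShell route, the following hedged sketch exports every custom policy definition in the current subscription to one JSON file per definition, ready to commit. The output folder is a placeholder, and the exact shape of the serialized object can differ between Az module versions.

```powershell
# Hypothetical: dump custom policy definitions to ./policies/export, one file each.
$outputRoot = './policies/export'
New-Item -ItemType Directory -Path $outputRoot -Force | Out-Null

Get-AzPolicyDefinition -Custom | ForEach-Object {
    $file = Join-Path $outputRoot ("{0}.json" -f $_.Name)
    $_ | ConvertTo-Json -Depth 20 | Set-Content -Path $file
}
```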
### Create and update policy definitions The policy definitions are created using JSON, and stored in source control. Each policy has its
-own set of files, such as the parameters, rules, and environment parameters, that should be stored
+own set of files, such as the parameters, rules, and environment parameters that should be stored
in the same folder. The following structure is a recommended way of keeping your policy definitions in source control.
in source control.
| |- policy.rules.json ___________ # Policy rule | |- assign.<name1>.json _________ # Assignment 1 for this policy definition | |- assign.<name2>.json _________ # Assignment 2 for this policy definition
+| |- exemptions.<name1>/________ # Subfolder for exemptions on assignment 1
+| |  |- exemptionName.json _____ # Exemption for this particular assignment
+| |- exemptions.<name2>/________ # Subfolder for exemptions on assignment 2
+| |  |- exemptionName.json _____ # Exemption for this particular assignment
+|
| |- policy2/ ______________________ # Subfolder for a policy | |- policy.json _________________ # Policy definition | |- policy.parameters.json ______ # Policy definition of parameters | |- policy.rules.json ___________ # Policy rule | |- assign.<name1>.json _________ # Assignment 1 for this policy definition
-| |- assign.<name2>.json _________ # Assignment 2 for this policy definition
+| |- exemptions.<name1>/________ # Subfolder for exemptions on assignment 1
+| |  |- exemptionName.json _____ # Exemption for this particular assignment
| ```
definitions in source control:
| |- policyset.parameters.json ___ # Initiative definition of parameters | |- assign.<name1>.json _________ # Assignment 1 for this policy initiative | |- assign.<name2>.json _________ # Assignment 2 for this policy initiative
+| |- exemptions.<name1>/________ # Subfolder for exemptions on assignment 1
+| |  |- exemptionName.json _____ # Exemption for this particular assignment
+| |- exemptions.<name2>/________ # Subfolder for exemptions on assignment 2
+| |  |- exemptionName.json _____ # Exemption for this particular assignment
| | |- init2/ _________________________ # Subfolder for an initiative | |- policyset.json ______________ # Initiative definition | |- policyset.definitions.json __ # Initiative list of policies | |- policyset.parameters.json ___ # Initiative definition of parameters | |- assign.<name1>.json _________ # Assignment 1 for this policy initiative
-| |- assign.<name2>.json _________ # Assignment 2 for this policy initiative
+| |- exemptions.<name1>/________ # Subfolder for exemptions on assignment 1
+| |  |- exemptionName.json _____ # Exemption for this particular assignment
| ```
definition comes in a later step.
> [!NOTE] > It's recommended to use a centralized deployment mechanism like GitHub workflows or Azure > Pipelines to deploy policies. This helps to ensure only reviewed policy resources are deployed
-> to your environment and that a central deployment mechanism is used. _Write_ permissions
+> to your environment and that a gradual and central deployment mechanism is used. _Write_ permissions
> to policy resources can be restricted to the identity used in the deployment. ### Test and validate the updated definition
the update to the object in Azure, it's time to test the changes that were made.
or the initiative(s) it's part of should then be assigned to resources in the environment farthest from production. This environment is typically _Dev_.
+>[!NOTE]
+> In this step, we're conducting integration testing of the policy definition within your Azure environment. This is separate from [verifying the functionality of the policy definition](./evaluate-impact.md#test-your-policys-effectiveness), which should occur during the definition creation process.
+ The assignment should use [enforcementMode](./assignment-structure.md#enforcement-mode) of _disabled_ so that resource creation and updates aren't blocked, but that existing resources are still audited for compliance to the updated policy definition. Even with enforcementMode, it's
compliance change as expected.
After all validation gates have completed, update the assignment to use **enforcementMode** of _enabled_. It's recommended to make this change initially in the same environment far from
-production. Once that environment is validated as working as expected, the change should then be
+production. Validate that the desired effects are applied during resource creation and resource update. Once that environment is validated as working as expected, the change should then be
scoped to include the next environment, and so on, until the policy is deployed to production resources.
supports scripted steps and automation based on triggers.
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
-
+- Understand how to [follow policy safe deployment practices](../how-to/policy-safe-deployment-practices.md)
+
governance Policy Safe Deployment Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/policy-safe-deployment-practices.md
the safe deployment practices (SDP) framework. The
safe deployment of Azure Policy definitions and assignments helps limiting the impact of unintended behaviors of policy resources.
-The high-level approach of implementing SDP with Azure Policy is to roll out policy assignments
+The high-level approach of implementing SDP with Azure Policy is to gradually roll out policy assignments
by rings to detect policy changes that affect the environment in early stages before it affects the critical cloud infrastructure.
Policy assignments that use the `deny` or `append` policy effects.
Flowchart step numbers:
-1. Begin the release by creating a policy definition at the highest designated Azure management scope. We recommend storing Azure Policy definitions at the management group scope for maximum flexibility.
-
-2. Once you've created your policy definition, assign the policy at the highest-level scope inclusive
+1. Once you've selected your policy definition, assign the policy at the highest-level scope inclusive
of all deployment rings. Apply _resource selectors_ to narrow the applicability to the least critical ring by using the `"kind": "resource location"` property. Configure the `audit` effect type by using _assignment overrides_. Sample selector with `eastUS` location and effect as `audit`:
by using _assignment overrides_. Sample selector with `eastUS` location and effe
}] ```
-3. Once the assignment is deployed and the initial compliance scan has completed,
+2. Once the assignment is deployed and the initial compliance scan has completed,
validate that the compliance result is as expected. You should also configure automated tests that run compliance checks. A compliance check should
validate that the compliance result is as expected.
and impact of the policy. If the results aren't as expected due to application configuration, refactor the application as appropriate.
-4. Repeat by expanding the resource selector property values to include the next rings'
+3. Repeat by expanding the resource selector property values to include the next ring's
locations and validating the expected compliance results and application health. Example selector with an added location value: ```json
locations and validating the expected compliance results and application health.
}] ```
-5. Once you have successfully assigned the policy to all rings using `audit` mode,
+4. Once you have successfully assigned the policy to all rings using `audit` mode,
the pipeline should trigger a task that changes the policy effect to `deny` and reset the resource selectors to the location associated with _Ring 0_. Example selector with one region and effect set to deny:
the resource selectors to the location associated with _Ring 0_. Example selecto
}] ```
-6. Once the effect is changed, automated tests should check whether enforcement is taking place as
+5. Once the effect is changed, automated tests should check whether enforcement is taking place as
expected.
-7. Repeat by including more rings in your resource selector configuration.
+6. Repeat by including more rings in your resource selector configuration.
-8. Repeat this process for all production rings.
+7. Repeat this process for all production rings.
## Steps for safe deployment of Azure Policy assignments with modify or deployIfNotExists effects
-Steps 1-4 for policies using the `modify` or `deployIfNotExists` effects are the same as steps previously explained.
+The steps for policies using the `modify` or `deployIfNotExists` effects are similar to the steps previously explained, with the additional actions of using _enforcement mode_ and triggering a remediation task.
Review the following flowchart with modified steps 5-9: :::image type="content" source="../media/policy-safe-deployment-practices/safe-deployment-practices-flowchart-2.png" alt-text="Flowchart showing steps 5 through 9 in the Azure Policy safe deployment practices workflow." border="true"::: Flowchart step numbers:
-5. Once you've assigned the policy to all rings using `audit` mode, the pipeline should trigger
-a task that changes the policy effect to `modify` or `deployIfNotExists` and resets
-the resource selectors to _Ring 0_.
+1. Once you've selected your policy definition, assign the policy at the highest-level scope inclusive
+of all deployment rings. Apply _resource selectors_ to narrow the applicability to the least
+critical ring by using the `"kind": "resource location"` property. Configure the _enforcement mode_ of the assignment to _DoNotEnforce_. Sample selector with `eastUS` location and _enforcementMode_ as _DoNotEnforce_:
-6. Automated tests should then check whether the enforcement works as expected.
+ ```json
+ "resourceSelectors": [{
+ "name": "SDPRegions",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastUS" ]
+ }]
+ }],
+ "enforcementMode": "DoNotEnforce"
+ ```
+
+2. Once the assignment is deployed and the initial compliance scan has completed,
+validate that the compliance result is as expected.
-7. The pipeline should trigger a remediation task that corrects existing resources in that given ring.
+ You should also configure automated tests that run compliance checks. A compliance check should
+ encompass the following logic:
+
+ - Gather compliance results
+ - If compliance results are as expected, the pipeline should continue
+ - If compliance results aren't as expected, the pipeline should fail and you should start debugging
+
+ You can configure the compliance check by using other tools within
+ your continuous integration/continuous deployment (CI/CD) pipeline.
+
+ At each rollout stage, the application health checks should confirm the stability of the service
+ and impact of the policy. If the results aren't as expected due to application configuration,
+ refactor the application as appropriate.
-8. After the remediation task is complete, automated tests should verify the remediation works
-as expected using compliance and application health checks.
+ You may also [trigger remediation tasks](../how-to/remediate-resources.md) to remediate existing non-compliant resources; a PowerShell sketch of this check-and-remediate step appears after these numbered steps. Ensure the remediation tasks are bringing resources into compliance as expected.
-9. Repeat by including more locations in your resource selector configuration. Then repeat all for production rings.
+3. Repeat by expanding the resource selector property values to include the next ring's
+locations and validating the expected compliance results and application health. Example selector with an added location value:
-> [!NOTE]
-> For more information on Azure policy remediation tasks, read [Remediate non-compliant resources with Azure Policy](./remediate-resources.md).
+ ```json
+ "resourceSelectors": [{
+ "name": "SDPRegions",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastUS", "westUS"]
+ }]
+ }]
+ ```
+
+4. Once you have successfully assigned the policy to all rings using _DoNotEnforce_ mode,
+the pipeline should trigger a task that changes the policy assignment's `enforcementMode` to _Default_ and resets
+the resource selectors to the location associated with _Ring 0_. Example selector with one region and _enforcementMode_ set to _Default_:
+
+ ```json
+ "resourceSelectors": [{
+ "name": "SDPRegions",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastUS" ]
+ }]
+ }],
+ "enforcementMode": "Default"
+ ```
+
+5. Once the enforcement mode is changed, automated tests should check whether enforcement is taking place as
+expected.
+
+6. Repeat by including more rings in your resource selector configuration.
+
+7. Repeat this process for all production rings.
## Next steps - Learn how to [programmatically create policies](./programmatically-create.md). - Review [Azure Policy as code workflows](../concepts/policy-as-code.md).-- Study Microsoft's guidance concerning [safe deployment practices](/devops/operate/safe-deployment-practices).
+- Study Microsoft's guidance concerning [safe deployment practices](/devops/operate/safe-deployment-practices).
+- Review [Remediate non-compliant resources with Azure Policy](./remediate-resources.md).
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
For workload specific versions, see
* To improve the overall security posture of HDInsight clusters, clusters using custom VNETs need to ensure that the user has the `Microsoft.Network/virtualNetworks/subnets/join/action` permission to perform create operations. Customers might face creation failures if this check is not enabled. * Non-ESP ABFS clusters [Cluster Permissions for World Readable]
- * Non-ESP ABFS clusters restrict non-Hadoop group users from executing Hadoop commands for storage operations. This change improves cluster security posture.ΓÇ»
+ * Non-ESP ABFS clusters restrict non-Hadoop group users from executing Hadoop commands for storage operations. This change improves cluster security posture.
+
+* In-line quota update.
+ * You can now request a quota increase directly from the My Quota page. The request is made through a direct API call, which is much faster. If the API call fails, you can create a new support request for the quota increase.
## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon * The maximum cluster name length will be changed from 59 to 45 characters to improve the security posture of clusters. This change will be rolled out to all regions starting with the upcoming release.
-* In-line quota update.
- * Request quotas increase directly from the My Quota page, which will be a direct API call, which is faster. If the APdI call fails, then customers need to create a new support request for quota increase.
- * Basic and Standard A-series VMs Retirement. * On August 31, 2024, we will retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
Previously updated : 06/20/2023 Last updated : 10/31/2023 # Migrate IoT Hub resources to a new TLS certificate root
You should start planning now for the effects of migrating your IoT hubs to the
## Timeline
-The IoT Hub team will begin migrating IoT hubs by region on **February 15, 2023** and completing by October 15, 2023. After all IoT hubs have migrated, then DPS will perform its migration between January 15 and February 15, 2024.
+The IoT Hub team began migrating IoT hubs in February 2023, and all IoT hubs have been migrated except for those that were approved for a later migration.
+
+After all IoT hubs have migrated, DPS will perform its migration between January 15 and February 15, 2024.
For each IoT hub, you can expect the following:
For each IoT hub, you can expect the following:
### Request an extension
-This TLS certificate migration is critical for the security of our customers and Microsoft's infrastructure, and is time-bound by the expiration of the Baltimore CyberTrust Root certificate. Therefore, there's little extra time that we can provide for customers that don't think their devices will be ready by the migration deadlines.
-
-As of June 2023 the extension request process is closed for IoT Hub customers.
-
-IoT Central applications are scheduled for migration between June 15th and October 15th, 2023. For IoT Central customers who absolutely can't have their devices ready for migration by June 2023, [fill out this form](https://aka.ms/BaltimoreAllowCentral) before August 15, 2023 with the details of your extension request, and then [email us](mailto:iot-ca-updates@microsoft.com?subject=Requesting%20extension%20for%20Baltimore%20migration) with a message that indicates you've completed the form, along with your company name. We can flag the specific IoT Central apps to be migrated on the requested extension date.
-
-> [!NOTE]
-> We are collecting this information to help with the Baltimore migration. We will hold onto this information until October 15th, 2023, when this migration is slated to complete. If you would like us to delete this information, please [email us](mailto:iot-ca-updates@microsoft.com) and we can assist you. For any additional questions about the Microsoft privacy policy, see the [Microsoft Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=521839).
+As of August 2023, the extension request process is closed for IoT Hub and IoT Central.
## Required steps
-To prepare for the migration, take the following steps before February 2023:
+To prepare for the migration, take the following steps:
1. Keep the Baltimore CyberTrust Root in your devices' trusted root store. Add the DigiCert Global Root G2 and the Microsoft RSA Root Certificate Authority 2017 certificates to your devices. You can download all of these certificates from the [Azure Certificate Authority details](../security/fundamentals/azure-CA-details.md).
To prepare for the migration, take the following steps before February 2023:
For more information about how to test whether your devices are ready for the TLS certificate migration, see the blog post [Azure IoT TLS: Critical changes are almost here](https://techcommunity.microsoft.com/t5/internet-of-things-blog/azure-iot-tls-critical-changes-are-almost-here-and-why-you/ba-p/2393169).
-## Optional manual IoT hub migration
-
-If you've prepared your devices and are ready for the TLS certificate migration, you can manually migrate your IoT hub root certificates yourself.
-
-After you migrate to the new root certificate, it will take about 45 minutes for all devices to disconnect and reconnect with the new certificate. This timing is because the Azure IoT SDKs are programmed to reverify their connection every 45 minutes. If you've implemented a different pattern in your solution, then your experience may vary.
-
->[!NOTE]
->There is no manual migration option for Device Provisioning Service instances. That migration will happen automatically once all IoT hub instances have migrated. No additional action is required from you beyond having the new root certificate on your devices.
-
-# [Azure portal](#tab/portal)
-
-1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
-
-1. Select **Certificates** in the **Security settings** section of the navigation menu.
-
-1. Select the **TLS certificate** tab and select **Migrate to DigiCert Global G2**.
-
- :::image type="content" source="./media/migrate-tls-certificate/migrate-to-digicert-global-g2.png" alt-text="Screenshot of the TLS certificate tab, select 'Migrate to DigiCert Global G2.'":::
-
-1. A series of checkboxes asks you to verify that you've prepared your devices for the migration. Check each box, confirming that your IoT solution is ready for the migration. Then, select **Update**.
-
-1. Use the **Connected Devices** metric to verify that your devices are successfully reconnecting with the new certificate.
-
-For more information about monitoring your devices, see [Monitoring IoT Hub](monitor-iot-hub.md).
-
-If you encounter any issues, you can undo the migration and revert to the Baltimore CyberTrust Root certificate.
-
-1. Select **Revert to Baltimore root** to undo the migration.
-
-1. Again, a series of checkboxes asks you to verify that you understand how reverting to the Baltimore CyberTrust Root will affect your devices. Check each box, then select **Update**.
-
-# [Azure CLI](#tab/cli)
-
-Use the [az extension update](/cli/azure/extension#az-extension-update) command to make sure you have the latest version of the `azure-iot` extension.
-
-```azurecli-interactive
-az extension update --name azure-iot
-```
-
-Use the [az iot hub certificate root-authority show](/cli/azure/iot/hub/certificate/root-authority#az-iot-hub-certificate-root-authority-show) command to view the current certificate root-authority for your IoT hub.
-
-```azurecli-interactive
-az iot hub certificate root-authority show --hub-name <iothub_name>
-```
-
->[!TIP]
->In the Azure CLI, the existing Baltimore CyberTrust Root certificate is referred to as `v1`, and the new DigiCert Global Root G2 certificate is referred to as `v2`.
-
-Use the [az iot hub certificate root-authority set](/cli/azure/iot/hub/certificate/root-authority#az-iot-hub-certificate-root-authority-set) command to migrate your IoT hub to the new DigiCert Global Root G2 certificate.
-
-```azurecli-interactive
-az iot hub certificate root-authority set --hub-name <iothub_name> --certificate-authority v2
-```
-
-Verify that your migration was successful. We recommend using the **connected devices** metric to view devices disconnecting and reconnecting post-migration.
-
-For more information about monitoring your devices, see [Monitoring IoT Hub](monitor-iot-hub.md).
-
-If you encounter any issues, you can undo the migration and revert to the Baltimore CyberTrust Root certificate by running the previous command again with `--certificate authority v1`.
--- ## Check the migration status of an IoT hub To know whether an IoT hub has been migrated or not, check the active certificate root for the hub.
You can migrate your application from the Baltimore CyberTrust Root to the DigiC
Several factors can affect device reconnection behavior.
-Devices are configured to reverify their connection at a specific interval. The default in the Azure IoT SDKs is to reverify every 45 minutes. If you've implemented a different pattern in your solution, then your experience may vary.
+Devices are configured to reverify their connection at a specific interval. The default in the Azure IoT SDKs is to reverify every 45 minutes. If you've implemented a different pattern in your solution, then your experience might vary.
-Also, as part of the migration, your IoT hub may get a new IP address. If your devices use a DNS server to connect to IoT hub, it can take up to an hour for DNS servers to refresh with the new address. For more information, see [IoT Hub IP addresses](iot-hub-understand-ip-address.md).
+Also, as part of the migration, your IoT hub might get a new IP address. If your devices use a DNS server to connect to IoT hub, it can take up to an hour for DNS servers to refresh with the new address. For more information, see [IoT Hub IP addresses](iot-hub-understand-ip-address.md).
### When can I remove the Baltimore Cybertrust Root from my devices?
You can remove the Baltimore root certificate once all stages of the migration a
## Troubleshoot
-### Troubleshoot the self-migration tool
-
-If you're using the CLI commands to migrate to a new root certificate and receive an error that `root-authority` isn't a valid command, make sure that you're running the latest version of the **azure-iot** extension.
-
-1. Use `az extension list` to verify that you have the correct extension installed.
-
- ```azurecli
- az extension list
- ```
-
- This article uses the newest version of the Azure IoT extension, called `azure-iot`. The legacy version is called `azure-cli-iot-ext`. You should only have one version installed at a time.
-
- Use `az extension remove --name azure-cli-iot-ext` to remove the legacy version of the extension.
-
- Use `az extension add --name azure-iot` to add the new version of the extension.
-
-1. Use `az extension update` to install the latest version of the **azure-iot** extension.
-
- ```azurecli
- az extension update --name azure-iot
- ```
-
-### Troubleshoot device reconnection
- If you're experiencing general connectivity issues with IoT Hub, check out these troubleshooting resources: * [Connection and retry patterns with device SDKs](../iot-develop/concepts-manage-device-reconnections.md#connection-and-retry).
logic-apps Mainframe Modernization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/mainframe-modernization-overview.md
+
+ Title: Mainframe and midrange integration workflows
+description: Learn about building mainframe and midrange system integration solutions in Azure Logic Apps using mainframe and midrange connectors.
++++ Last updated : 11/02/2023+
+#CustomerIntent: As an integration developer, I need to learn about mainframe and midrange system integration with Standard workflows in Azure Logic Apps.
++
+# Mainframe and midrange modernization with Azure Logic Apps
+
+This guide describes how your organization can increase business value and agility by extending your mainframe and midrange system workloads to Azure using workflows in Azure Logic Apps. The current business world is experiencing an era of hyper innovation and is on a permanent quest to obtain enterprise efficiencies, cost reduction, growth, and business alignment. Organizations are looking for ways to modernize, and one effective strategy is to increase and augment business value.
+
+For organizations with investments in mainframe and midrange systems, this means making the best use of platforms that sent humans to the moon or helped build current financial markets, and extending their value by using the cloud and artificial intelligence. This scenario is where Azure Logic Apps and its native capabilities for integrating with mainframe and midrange systems come into play. Among other features, this Azure cloud service incorporates the core capabilities of Host Integration Server (HIS), which has been used at the core of Microsoft's most strategic customers' operations for more than 30 years.
+
+When enterprise developers build integration workflows with Azure Logic Apps, they can more quickly deliver new applications using little to no code or less custom code. Developers who use Visual Studio Code and Visual Studio can be more productive than those who use IBM mainframe development tools and technologies because they don't require knowledge about mainframe systems and infrastructure. Azure Logic Apps empowers business analysts and decision makers to more quickly analyze and report vital legacy information. They can directly access data in mainframe data sources, which removes the need to have mainframe developers create programs that extract and convert complex mainframe structures.
+
+## Cloud native capabilities for mainframe and midrange system integration
+
+Since 1990, Microsoft has provided integration with mainframe and midrange systems through Microsoft Communications Server. Further evolution of Microsoft Communications Server created Host Integration Server (HIS) in 2000. While HIS started as a System Network Architecture (SNA) Gateway, HIS expanded to include IBM data stores (DB2, VSAM, and Informix), IBM transaction systems (CICS, IMS, and IBMi), and IBM messaging (MQ Series). Microsoft's strategic customers have used these technologies for more than 20 years. To empower customers that run applications and data on Azure to continue using these technologies, Azure Logic Apps and Visual Studio have gradually incorporated these capabilities. For example, Visual Studio includes the following designers: HIS Designer for Logic Apps and the 3270 Design Tool.
++
+For more information about Microsoft's capabilities for mainframe and midrange integration, continue to the following sections.
+
+### HIS Designer for Logic Apps
+
+This tool creates mainframe and midrange system metadata artifacts for Azure Logic Apps and works with Microsoft Visual Studio by providing a graphical designer so that you can create, view, edit, and map metadata objects to mainframe artifacts. Azure Logic Apps uses these maps to mirror the programs and data in mainframe and midrange systems. For more information, see [HIS Designer for Logic Apps](/host-integration-server/core/application-integration-ladesigner-2).
+
+### Microsoft 3270 Design Tool
+
+This tool records screens, navigation paths, methods, and parameters for the tasks in your application so that you can add and run those tasks as 3270 connector actions. While the HIS Designer for Logic Apps targets transactional systems and data, the 3270 Design Tool targets 3270 applications. For more information, see [3270 Design Tool](/host-integration-server/core/application-integration-3270designer-2).
+
+### Azure Logic Apps connectors for IBM mainframe and midrange systems
+
+The following sections describe the [built-in, service provider-based connectors](custom-connector-overview.md#service-provider-interface-implementation) that you can use to access and interact with IBM mainframe and midrange systems when you create Standard workflows in Azure Logic Apps.
+
+> [!NOTE]
+>
+> Although some of the following connectors are available as "shared" connectors that run
+> in global Azure, this guide is focused on the built-in, service provider-based connectors,
+> which are available only when you create Standard workflows in Azure Logic Apps.
+
+#### IBM 3270
+
+This Azure Logic Apps connector for 3270 allows Standard workflows to access and run IBM mainframe applications that you usually drive by navigating through 3270 emulator screens. The connector uses the TN3270 stream. For more information, see [Integrate 3270 screen-driven apps on IBM mainframes with Azure by using Azure Logic Apps and IBM 3270 connector](../connectors/integrate-3270-apps-ibm-mainframe.md).
+
+#### IBM Customer Information Control System (CICS)
+
+This Azure Logic Apps connector for CICS provides multiple protocols, including TCP/IP and HTTP, for Standard workflows to interact and integrate with CICS programs. If you need APPC support, the connector provides access to CICS transactions using LU6.2, which is available only in Host Integration Server (HIS). For more information, see [Integrate CICS programs on IBM mainframes with Standard workflows in Azure Logic Apps using the IBM CICS connector](../connectors/integrate-cics-apps-ibm-mainframe.md).
+
+#### IBM DB2
+
+This Azure Logic Apps connector for DB2 enables connections between Standard workflows and DB2 databases that are either on premises or in Azure. The connector offers enterprise IT professionals and developers direct access to vital information stored in DB2 database management systems. For more information, see [Access and manage IBM DB2 resources using Azure Logic Apps](../connectors/connectors-create-api-db2.md).
+
+#### IBM Host Files
+
+This Azure Logic Apps "connector" for Host Files provides a thin wrapper around the "Flat File Parser" feature in Host Integration Server. This offline "connector" provides operations that parse or generate binary data to and from host files. These operations require this data to come from any trigger or another action that produces binary data. For more information, see [Parse and generate IBM host files using Azure Logic Apps](../connectors/integrate-host-files-ibm-mainframe.md).
+
+#### IBM Information Management System (IMS)
+
+This Azure Logic Apps connector for IMS uses the IBM IMS Connect component, which provides high performance access from Standard workflows to IMS transactions using TCP/IP. This model uses the IMS message queue for processing data. For more information, see [Integrate IMS programs on IBM mainframes with Standard workflows in Azure Logic Apps using the IBM IMS connector](../connectors/integrate-ims-apps-ibm-mainframe.md).
+
+#### IBM MQ
+
+This Azure Logic Apps connector for MQ enables connections between Standard workflows and an MQ server on premises or in Azure. We also provide MQ Integration capabilities with Host Integration Server and BizTalk Server. For more information, see [Connect to an IBM MQ server from a workflow in Azure Logic Apps](../connectors/connectors-create-api-mq.md).
+
+## How to modernize mainframe workloads with Azure Logic Apps
+
+While multiple approaches for modernization exist, Microsoft recommends modernizing mainframe applications by following an iterative, agile-based model. Mainframes host multiple environments with applications and data. A successful modernization strategy includes ways to handle the following tasks:
+
+- Maintain the current service level indicators and objectives.
+- Manage coexistence between legacy data along with migrated data.
+- Manage application interdependencies.
+- Define the future of the scheduler and jobs.
+- Define a strategy for replacing non-Microsoft tools.
+- Conduct hybrid functional and nonfunctional testing activities.
+- Maintain external dependencies or interfaces.
+
+The following paths are the most common ways to modernize mainframe applications:
+
+- Big bang
+
+ This approach is largely based on the waterfall software delivery model but with iterations in phases.
+
+- Agile waves
+
+ This approach follows the Agile principles of software engineering.
+
+The choice between these paths depends on your organization's needs and scenarios. Each path has benefits and drawbacks to consider. The following sections provide more information about these modernization approaches.
+
+### Big bang or waterfall
+
+A big bang migration typically has the following phases:
++
+1. **Envisioning**: Kickoff
+
+1. **Planning**: Identify and prepare planning deliverables, such as scope, time, and resources.
+
+1. **Building**: Begins after planning deliverables are approved
+
+ This phase also expects that all the work for dependencies has been identified, and then migration activities can begin. Multiple iterations occur to complete the migration work.
+
+1. **Stabilizing or testing**: Begins when the migrated environment, dependencies, and applications are tested against the test regions in the mainframe environment.
+
+1. **Deploy**: After everything is approved, the migration goes live into production.
+
+Organizations that choose this approach typically focus on locking in the timeline, migration scope, and resources. This path sounds like a positive choice, but it includes the following risks:
+
+- Migrations can take months or even years.
+
+- The analysis that you perform at the start of the migration journey or during planning is usually outdated by the time execution begins, so decisions are based on stale information.
+
+- Organizations typically focus on having comprehensive documentation to reduce delivery risks.
+
+ However, the time spent on providing planning artifacts causes exactly the opposite effect. Focusing on planning more than executing tends to create execution delays, which cause increased costs in the long run.
+
+### Agile waves
+
+An Agile approach is results-oriented and focused on building software rather than planning deliverables. The first stages of an Agile delivery might be chaotic and complex because of the organizational barriers that need to be broken down. However, when the migration team matures after several sprints of execution, the journey becomes smoother. The goal is to frequently release features to production and to provide business value sooner than with a big bang approach.
+
+An Agile waves migration typically has the following sprints:
++
+- Sprint zero (0)
+
+ - Define the team, an initial work backlog, and the core dependencies.
+ - Identify the features and a Minimum Viable Product (MVP) to deliver.
+ - Kick off mainframe readiness with a selected set of work items or user stories to begin the work.
+
+- Sprint 1, 2, ..., *N*
+
+ Each sprint has a goal where the team maintains a shipping mindset, meaning that they focus on completing migration goals and releasing deliverables to production. The team can use a group of sprints to deliver a specific feature or a wave of features. Each feature includes slices of integration workloads.
++
+Shared elements, such as jobs and interdependencies, exist and have an impact across the entire environment. A successful strategy focuses on partially enabling jobs, redesigning applications for modernization, and leaving the systems with the most interdependencies until the end. This approach reduces the amount of migration work up front and completes the scope of the modernization effort later.
+
+## Modernization patterns
+
+Good design includes factors such as consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability that allows other applications and scenarios to reuse components and subsystems. For cloud-hosted applications and services, decisions made during the design and implementation phase have a huge impact on quality and the total cost of ownership.
+
+The Azure Architecture Center provides tested [design and implementation patterns](/azure/architecture/patterns/category/design-implementation) that describe the problem that they address, considerations for applying the pattern, and an example based on Microsoft Azure. While multiple design and implementation patterns exist, the two most relevant patterns for mainframe modernization include the "Anti-corruption Layer" and "Strangler Fig" patterns.
+
+### Anti-corruption Layer pattern
+
+Regardless of which modernization approach you select, you need to implement an "anti-corruption layer" using Azure Logic Apps. This service becomes the façade or adapter layer between the legacy mainframe system and Azure. For an effective approach, identify the mainframe workloads to integrate or coexist with as mainframe integration workloads. Create a strategy for each integration workload, which is the set of interfaces that you need to enable for migrating a mainframe application.
++
+For more information, see [Anti-corruption Layer](/azure/architecture/patterns/anti-corruption-layer).
+
+### Strangler Fig pattern
+
+After you implement the anti-corruption layer, modernization progressively happens. For this phase, you need to use the "Strangler Fig" pattern where you identify mainframe workloads or features that you can incrementally modernize. For example, if you choose to modernize a CICS application, you have to modernize not only the CICS programs, but most likely the 3270 applications along with their corresponding external dependencies, data, and jobs.
+
+Eventually, after you replace all the workloads or features in the mainframe system with your new system, you'll finish the migration process, which means that you can decommission your legacy system.
++
+For more information, see [Strangler Fig pattern](/azure/architecture/patterns/strangler-fig).
+
+## Next step
+
+- [Azure Architecture Center for mainframes and midrange systems](/azure/architecture/browse/?terms=mainframe)
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
If you plan to use __Azure Machine Learning batch endpoints__ for deployment, ad
* Private endpoint to Azure AI Services * Private endpoint to Azure Cognitive Search
+### Scenario: Use HuggingFace models
+
+If you plan to use __HuggingFace models__ with Azure Machine Learning, add outbound _FQDN_ rules to allow traffic to the following hosts (a sample rule sketch follows the list):
+
+> [!WARNING]
+> FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. For more information, see [Pricing](#pricing).
+
+* `docker.io`
+* `*.docker.io`
+* `*.docker.com`
+* `production.cloudflare.docker.com`
+* `cdn.auth0.com`
+* `cdn-lfs.huggingface.co`
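+
+A minimal sketch of one such rule as it might appear in the workspace's managed network configuration in YAML (the rule name is a placeholder, and the property names assume the managed virtual network outbound rule schema):
+
+```yaml
+managed_network:
+  isolation_mode: allow_only_approved_outbound
+  outbound_rules:
+  - name: allow-huggingface-lfs
+    type: fqdn
+    destination: 'cdn-lfs.huggingface.co'
+```
+
+Repeat the rule for each host in the list.
+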
+ ## Private endpoints Private endpoints are currently supported for the following Azure
machine-learning How To Package Models App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-package-models-app-service.md
Follow these steps to prepare your system.
- This article uses the example in the folder **endpoints/online/deploy-packages/mlflow-model**.
+ This article uses the example in the folder **endpoints/online/deploy-with-packages/mlflow-model**.
1. Connect to the Azure Machine Learning workspace where you'll do your work.
Follow these steps to prepare your system.
# [Azure CLI](#tab/cli)
- ```azurecli
- MODEL_NAME='heart-classifier-mlflow'
- MODEL_PATH='model'
- az ml model create --name $MODEL_NAME --path $MODEL_PATH --type mlflow_model
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/deploy-with-packages/mlflow-model/deploy.sh" ID="register_model" :::
# [Python](#tab/sdk)
- ```python
- model_name = "heart-classifier-mlflow"
- model_path = "model"
- model = ml_client.models.create_or_update(
- Model(name=model_name, path=model_path, type=AssetTypes.MLFLOW_MODEL)
- )
- ```
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/deploy-with-packages/mlflow-model/sdk-deploy-and-test.ipynb?name=register_model)]
## Deploy a model package to the Azure App Service
In this section, you package the previously registered MLflow model and deploy i
__package-external.yml__
- ```yml
- $schema: http://azureml/sdk-2-0/ModelVersionPackage.json
- target_environment_name: heart-classifier-mlflow-pkg
- inferencing_server:
- type: azureml_online
- model_configuration:
- mode: copy
- ```
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/deploy-with-packages/mlflow-model/package-external.yml" :::
# [Python](#tab/sdk)
- ```python
- package_config = ModelPackage(
- target_environment_name="heart-classifier-mlflow-pkg",
- inferencing_server=AzureMLOnlineInferencingServer(),
- model_configuration=ModelConfiguration(
- mode="copy"
- )
- )
- ```
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/deploy-with-packages/mlflow-model/sdk-deploy-and-test.ipynb?name=configure_package_copy)]
> [!TIP]
In this section, you package the previously registered MLflow model and deploy i
# [Azure CLI](#tab/cli)
- ```azurecli
- az ml model package -n $MODEL_NAME -l latest --file package-external.yml
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/deploy-with-packages/mlflow-model/deploy.sh" ID="build_package_copy" :::
# [Python](#tab/sdk)
- ```python
- model_package = ml_client.models.begin_package(model_name, model.version, package_config)
- ```
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/deploy-with-packages/mlflow-model/sdk-deploy-and-test.ipynb?name=build_package_copy)]
1. The result of the package operation is an environment in Azure Machine Learning. The advantage of having this environment is that each environment has a corresponding docker image that you can use in an external deployment. Images are hosted in the Azure Container Registry. The following steps show how you get the name of the generated image:
In this section, you package the previously registered MLflow model and deploy i
1. Select **Create**. The model is now deployed in the App Service you created. 1. The way you invoke and get predictions depends on the inference server you used. In this example, you used the Azure Machine Learning inferencing server, which creates predictions under the route `/score`. For more information about the input formats and features, see the details of the package [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/).
+
+ 1. Prepare the request payload. The format for an MLflow model deployed with Azure Machine Learning inferencing server is as follows:
+
+ __sample-request.json__
+
+ :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/deploy-with-packages/mlflow-model/sample-request.json" :::
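+
+    For reference, the payload generally wraps a serialized data frame in an `input_data` object. A minimal illustrative example (the column names and values are placeholders; the sample file in the repository is authoritative):
+
+    ```json
+    {
+      "input_data": {
+        "columns": ["age", "sex", "cp"],
+        "index": [0],
+        "data": [[63, 1, 3]]
+      }
+    }
+    ```
+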
1. Test the model deployment to see if it works.
In this section, you package the previously registered MLflow model and deploy i
## Next step > [!div class="nextstepaction"]
-> [Model packages for deployment (preview)](concept-package-models.md)
+> [Model packages for deployment (preview)](concept-package-models.md)
machine-learning How To Registry Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-registry-network-isolation.md
This section describes the scenarios and required network configuration if you h
The identity (for example, a Data Scientist's Microsoft Entra user identity) used to create assets in the registry must be assigned the __AzureML Registry User__, __owner__, or __contributor__ role in Azure role-based access control. For more information, see the [Manage access to Azure Machine Learning](how-to-assign-roles.md) article. ### Share assets from workspace to registry
+> [!NOTE]
+> Sharing a component from Azure Machine Learning workspace to Azure Machine Learning registry is not supported currently.
-Due to data exfiltration protection, it isn't possible to share an asset from secure workspace to a public registry if the storage account containing the asset has public access disabled.
-
-### Use assets from registry in workspace
+Due to data exfiltration protection, it isn't possible to share an asset from a secure workspace to a public registry if the storage account containing the asset has public access disabled. To enable asset sharing from the workspace to the registry (a CLI sketch follows this list):
+* Go to the **Networking** blade of the storage account attached to the workspace from which you want to share assets to the registry.
+* Set __Public network access__ to **Enabled from selected virtual networks and IP addresses**.
+* Scroll down to the __Resource instances__ section. Set __Resource type__ to **Microsoft.MachineLearningServices/registries** and set __Instance name__ to the name of the Azure Machine Learning registry that you want to share assets with.
+* Review the rest of the settings according to your network configuration.
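+
+As a sketch, the same resource instance rule can be added with the Azure CLI (the names and IDs are placeholders):
+
+```azurecli
+az storage account network-rule add \
+  --resource-group <storageResourceGroup> \
+  --account-name <storageAccountName> \
+  --resource-id "/subscriptions/<subscriptionId>/resourceGroups/<registryResourceGroup>/providers/Microsoft.MachineLearningServices/registries/<registryName>" \
+  --tenant-id <tenantId>
+```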
+
+### Use assets from registry in workspace
Example operations: * Submit a job that uses an asset from registry.
To connect to a registry that's secured behind a VNet, use one of the following
### Share assets from workspace to registry > [!NOTE]
-> Currently, sharing an asset from secure workspace to a Azure machine learning registry is not supported if the storage account containing the asset has public access disabled.
+> Sharing a component from Azure Machine Learning workspace to Azure Machine Learning registry is not supported currently.
-Create a private endpoint to the registry, storage and ACR from the VNet of the workspace. If you're trying to connect to multiple registries, create private endpoint for each registry and associated storage and ACRs. For more information, see the [How to create a private endpoint](#how-to-create-a-private-endpoint) section.
+Due to data exfiltration protection, it isn't possible to share an asset from a secure workspace to a private registry if the storage account containing the asset has public access disabled. To enable asset sharing from the workspace to the registry:
+* Go to the **Networking** blade of the storage account attached to the workspace from which you want to share assets to the registry.
+* Set __Public network access__ to **Enabled from selected virtual networks and IP addresses**.
+* Scroll down to the __Resource instances__ section. Set __Resource type__ to **Microsoft.MachineLearningServices/registries** and set __Instance name__ to the name of the Azure Machine Learning registry that you want to share assets with.
+* Review the rest of the settings according to your network configuration.
### Use assets from registry in workspace
machine-learning How To Share Models Pipelines Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md
Previously updated : 05/23/2022 Last updated : 11/02/2023
The code examples in this article are based on the `nyc_taxi_data_regression` sa
```bash git clone https://github.com/Azure/azureml-examples cd azureml-examples
-# changing branch is temporary until samples merge to main
-git checkout mabables/registry
``` # [Azure CLI](#tab/cli)
managed-ccf Application Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/application-scenarios.md
+
+ Title: Azure Managed Confidential Consortium Framework application scenarios
+description: An overview of the application scenarios enabled by Azure Managed CCF.
++++ Last updated : 09/28/2023+++
+# Azure Managed Confidential Consortium Framework application scenarios
+
+The Confidential Consortium Framework (CCF) is an open-source framework for building secure, highly available, and performant applications that focus on multi-party compute and data. CCF uses the power of trusted execution environments (TEE, or enclave), decentralized systems concepts, and cryptography to enable enterprise-ready multiparty systems. CCF is based on industry-standard web technologies that allow clients to interact with CCF-aware applications over HTTPS.
+
+The framework runs exclusively in hardware-backed secure enclaves, which provide a heavily monitored and isolated runtime environment that keeps potential attacks at bay. It also relies on a minimalistic Trusted Computing Base (TCB) and limits the operator's role.
+
+Following are a few example application scenarios that CCF enables.
+
+## Decentralized Role-Based Access Control (RBAC)
+
+A CCF application enables its members to run a confidential system with attested components to propose and approve changes without a single party holding all the power. Organizations have invested time and resources to build and operate Line of Business (LOB) applications that are critical to the smooth operation of their business. Many organizations want to implement confidentiality and decentralized governance in their LOB applications but are at an impasse when deciding between funding day-to-day operations and funding new research and development.
+
+A recommended approach is to deploy the LOB application on an Azure Confidential Compute offering like an [Azure Confidential VM](../confidential-computing/confidential-vm-overview.md) or [Azure Confidential Containers](../confidential-computing/confidential-containers.md), which requires minimal to no changes. Specific parts of the application requiring multi-party governance can be offloaded to Managed CCF.
+
+Due to several recent breaches in the supply chain industry, organizations are exploring ways to increase visibility and auditability into their manufacturing process. On the other hand, consumer awareness of unfair manufacturing processes and mistreatment of the workforce has increased. In this example, we describe a scenario that tracks the life of coffee beans from the farm to the cup. Fabrikam is a coffee bean roaster and retailer. It hosts an existing LOB web application that is used by different personas like farmers, distributors, Fabrikam's procurement team, and the end consumers. To improve security and auditability, Fabrikam deploys the web application to an Azure Confidential VM and uses decentralized RBAC managed in Managed CCF by a consortium of members.
+
+A sample [decentralized RBAC application](https://github.com/microsoft/ccf-app-samples/tree/main/decentralize-rbac-app) is published in GitHub for reference.
+
+## Data for Purpose
+
+A CCF application enables multiple participants to share data confidentially for specific purposes by trusting only the hardware (TEEs). It can selectively reveal aggregated information or raw data to authorized parties (for example, a regulator).
+
+A use case that requires aggregation tied with confidentiality is data reconciliation. It's a common and frequent action in the financial services, healthcare, and insurance domains. In this example, we target the healthcare industry. Patient data is generated, consumed, and saved across multiple providers like physicians' offices, hospitals, and insurance providers. It would be prudent to reconcile the patient data from the different sources to derive useful insights that could shed light on the effectiveness of the prescribed drugs and alter course if needed. However, due to industry and governmental regulations and privacy concerns, it's hard to share data in the clear.
+
+A CCF application is a good fit for this scenario as it guarantees confidentiality and auditability of the transactions. A sample [data reconciliation application](https://github.com/microsoft/ccf-app-samples/tree/main/data-reconciliation-app) is published in GitHub. It ingests data from three sources, performs aggregation inside a TEE and produces a report that summarizes the similarities and the differences in the data.
+
+## Transparent System Operation
+
+CCF enables organizations to operate a system where users can independently confirm that it is run correctly. Organizations can share the CCF source code for users to audit both the system's governance and application code and verify that their transactions are handled according to expectations.
+
+Auditability is one of the core tenets of the financial services industry. The various government and industry standards emphasize periodic audits of the processes, practices, and services to ensure that customer data is handled securely and confidentially at all times. When it comes to implementation, confidentiality and auditability are at odds. CCF applications can play a significant role in breaking this barrier. By using receipts, auditors can independently verify the integrity of transactions without accessing the online service. Refer to the [Audit](https://microsoft.github.io/CCF/main/audit/index.html) section in the CCF documentation to learn more about offline verification.
+
+A sample [banking application](https://github.com/microsoft/ccf-app-samples/tree/main/banking-app) is published in GitHub to demonstrate auditability in CCF applications.
+
+## Next steps
+
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
+- [Quickstart: Azure CLI](quickstart-python.md)
+- [FAQ](faq.yml)
managed-ccf Confidential Consortium Framework Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/confidential-consortium-framework-overview.md
+
+ Title: Learn about the Confidential Consortium Framework
+description: An overview of Confidential Consortium Framework.
++++ Last updated : 09/28/2023++++
+# Overview of Confidential Consortium Framework
+
+The Confidential Consortium Framework (CCF) is an open-source framework for building secure, highly available, and performant applications that focus on multi-party compute and data. CCF leverages the power of trusted execution environments (TEE, or enclave), decentralized systems concepts, and cryptography to enable enterprise-ready multiparty systems. CCF is based on industry-standard web technologies that allow clients to interact with CCF-aware applications over HTTPS.
+
+The following diagram shows a basic CCF network made of three nodes. All nodes run the same application code inside an enclave. The effects of user (business) and member (governance) transactions are eventually committed to a replicated, encrypted ledger. A consortium of members is in charge of governing the network.
+
+## Core Concepts
+
+### Network and Nodes
+
+A CCF network consists of several nodes, each running on top of a Trusted Execution Environment (TEE), such as Intel SGX. A CCF network is decentralized and highly available. Nodes are run and maintained by Operators. However, nodes must be trusted by the consortium of members before participating in a CCF network.
+
+To learn more about the operators, refer to the [Operators](https://microsoft.github.io/CCF/main/operations/index.html) section in the CCF documentation.
+
+### Application
+
+Each node runs the same application, written in JavaScript. An application is a collection of endpoints that can be triggered by trusted Users' HTTP commands over TLS. Each endpoint can mutate or read the in-enclave-memory Key-Value Store that is replicated across all nodes in the network. Changes to the Key-Value Store must be agreed by at least a majority of nodes before being applied.
+
+The Key-Value Store is a collection of maps (associating a key with a value) defined by the application. These maps can be private (encrypted in the ledger) or public (integrity-protected and visible to anyone who has access to the ledger).
+
+As all the nodes in the CCF network can read the content of private maps, the application logic must control the access to such maps. As every application endpoint has access to the user identity in the request, it is easy to implement an authorization policy to restrict access to the maps.
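+
+For example, a minimal sketch of an endpoint that reads a private map and uses the caller's identity for authorization might look like the following, assuming the `@microsoft/ccf-app` package and a user authentication policy declared in the app's metadata (the map name, endpoint name, and value type are illustrative):
+
+```typescript
+import * as ccfapp from "@microsoft/ccf-app";
+
+// A private map (no "public:" prefix), so its values are encrypted in the ledger.
+const balances = ccfapp.typedKv("balances", ccfapp.string, ccfapp.json<number>());
+
+export function getBalance(request: ccfapp.Request): ccfapp.Response {
+  // The authentication policy populates request.caller; use the caller's
+  // identity to restrict which keys this user is allowed to read.
+  const callerId = (request.caller as unknown as { id: string }).id;
+  const balance = balances.get(callerId);
+  if (balance === undefined) {
+    return { statusCode: 404 };
+  }
+  return { statusCode: 200, body: { balance } };
+}
+```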
+
+To learn more about CCF applications and start building one, refer to the [Get Started](get-started.md) page.
+
+### Ledger
+
+All changes to the Key-Value Store are encrypted and recorded to disk by each node of the network, forming a decentralized, auditable ledger. The integrity of the ledger is guaranteed by a Merkle Tree whose root is periodically signed by the current primary or leader node.
+
+Find out how to audit the CCF ledger in the [Audit](https://microsoft.github.io/CCF/main/audit/index.html) section in the CCF documentation.
+
+### Governance
+
+A CCF network is governed by a consortium of members. The scriptable Constitution, recorded in the ledger, defines a set of rules that members must follow.
+
+Members can submit proposals to modify the state of the Key-Value Store. For example, members can vote to allow a new trusted user to issue requests to the application or to add a new member to the consortium.
+
+Proposals are executed only when the conditions defined in the constitution are met (for example, a majority of members have voted favorably for that proposal).
+
+To learn more about the customizable constitution and governance, refer to the [Governance](https://microsoft.github.io/CCF/main/governance/index.html) section in the CCF documentation.
+
+## Next steps
+
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [Quickstart: Azure CLI](quickstart-python.md)
+- [FAQ](faq.yml)
managed-ccf Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/get-started.md
+
+ Title: Get started
+description: Learn to build and deploy a CCF JavaScript application
++ Last updated : 09/30/2023+++++
+# Build, test and deploy a TypeScript and JavaScript application
+
+This guide shows the steps to develop a TypeScript and JavaScript application that targets CCF, debug it locally, and deploy it to a Managed CCF resource in the cloud.
+
+## Prerequisites
+
+- [Install CCF](https://github.com/Microsoft/CCF/releases)
+- Node.js
+- npm
+- [!INCLUDE [Prerequisites](./includes/proposal-prerequisites.md)]
+
+> This guide uses [Visual Studio Code](https://code.visualstudio.com/) as the IDE. However, any IDE that supports Node.js, JavaScript, and TypeScript application development can be used.
+
+## Project set up
+
+1. Follow the [instructions](https://microsoft.github.io/CCF/main/build_apps/js_app_ts.html#conversion-to-an-app-bundle) in the CCF documentation to bootstrap the project and set up the required folder structure.
+
+## Develop the application
+
+1. Develop the TypeScript application by following the documentation [here](https://microsoft.github.io/CCF/main/build_apps/js_app_ts.html). Refer to the [CCF Key-Value store](https://microsoft.github.io/CCF/main/build_apps/kv/index.html) documentation to learn about the naming standards and transaction semantics to use in the code. For examples and best practices, refer to the [sample applications](https://github.com/microsoft/ccf-app-samples) published in GitHub.
+
+## Build the application bundle
+
+1. The native format for JavaScript applications in CCF is a JavaScript application bundle, or app bundle for short. A bundle can be wrapped directly into a governance proposal for deployment. Follow the instructions at [create an application bundle](https://microsoft.github.io/CCF/main/build_apps/js_app_bundle.html) in the CCF documentation to create an app bundle and prepare it for deployment.
+
+2. Build the application. The output is written to the `dist` folder, and the application bundle is wrapped into a proposal file named `set_js_app.json`.
+
+```bash
+npm run build
+
+> build
+> del-cli -f dist/ && rollup --config && cp app.json dist/ && node build_bundle.js dist/
++
+src/endpoints/all.ts → dist/src...
+created dist/src in 1.3s
+Writing bundle containing 8 modules to dist/bundle.json
+ls -ltr dist
+total 40
+drwxr-xr-x 4 settiy settiy 4096 Sep 11 10:20 src
+-rw-r--r-- 1 settiy settiy 3393 Sep 11 10:20 app.json
+-rw-r--r-- 1 settiy settiy 16146 Sep 11 10:20 set_js_app.json
+-rw-r--r-- 1 settiy settiy 16061 Sep 11 10:20 bundle.json
+```
+
+### Logging
+
+1. CCF provides macros to add your own lines to the node's output. Follow the instructions available at [add logging to an application](https://microsoft.github.io/CCF/main/build_apps/logging.html) in the CCF documentation.
+
+## Deploy a 1-node CCF network
+
+1. Run the /opt/ccf_virtual/bin/sandbox.sh script to start a 1-node CCF network and deploy the application bundle.
+
+```bash
+sudo /opt/ccf_virtual/bin/sandbox.sh --js-app-bundle ~/ccf-app-samples/banking-app/dist/
+Setting up Python environment...
+Python environment successfully setup
+[10:40:37.516] Virtual mode enabled
+[10:40:37.517] Starting 1 CCF node...
+[10:40:41.488] Started CCF network with the following nodes:
+[10:40:41.488] Node [0] = https://127.0.0.1:8000
+[10:40:41.489] You can now issue business transactions to the libjs_generic application
+[10:40:41.489] Loaded JS application: /home/demouser/ccf-app-samples/banking-app/dist/
+[10:40:41.489] Keys and certificates have been copied to the common folder: /home/demouser/ccf-app-samples/banking-app/workspace/sandbox_common
+[10:40:41.489] See https://microsoft.github.io/CCF/main/use_apps/issue_commands.html for more information
+[10:40:41.490] Press Ctrl+C to shutdown the network
+```
+
+2. The member certificate and the private key are available at /workspace/sandbox_0. The application log is available at /workspace/sandbox_0/out.
++
+3. At this point, we have created a local CCF network with one member and deployed the application. The network endpoint is `https://127.0.0.1:8000`. The member can participate in governance operations like updating the application or adding more members by submitting a proposal.
+
+```Bash
+curl -k --silent https://127.0.0.1:8000/node/version | jq
+{
+ "ccf_version": "ccf-4.0.7",
+ "quickjs_version": "2021-03-27",
+ "unsafe": false
+}
+```
+
+4. Download the service certificate from the network.
+
+```bash
+curl -k https://127.0.0.1:8000/node/network | jq -r .service_certificate > service_certificate.pem
+```
+
+## Update the application
+
+1. Application development is an iterative process. When new features are added or bugs are fixed, the application must be redeployed to the 1-node network, which can be done with a `set_js_app` proposal.
+
+2. Rebuild the application to create a new set_js_app.json file in the dist folder.
+
+3. Create a proposal to submit the application. After the proposal is accepted, the new application is deployed to the 1-node network.
+
+> [!NOTE]
+> On a local 1-node network, a proposal is immediately accepted after it's submitted. There's no need to submit a vote to accept or reject the proposal. This behavior keeps the development process quick. However, it differs from how governance works in Azure Managed CCF, where members must submit a vote to accept or reject a proposal.
+
+```Bash
+$ ccf_cose_sign1 --content dist/set_js_app.json --signing-cert workspace/sandbox_common/member0_cert.pem --signing-key workspace/sandbox_common/member0_privk.pem --ccf-gov-msg-type proposal --ccf-gov-msg-created_at `date -Is` | curl https://127.0.0.1:8000/gov/proposals -H 'Content-Type: application/cose' --data-binary @- --cacert service_cert.pem
+```
+
+## Deploy the application to a Managed CCF resource
+
+The next step is to [create a Managed CCF resource](quickstart-portal.md) and deploy the application by following the instructions at [deploy a JavaScript application](quickstart-deploy-application.md).
+
+## Next steps
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Create an Azure Managed CCF resource using the Azure portal](quickstart-portal.md)
+- [Quickstart: Deploy a JavaScript application to Azure Managed CCF](quickstart-deploy-application.md)
+- [How to: Update the JavaScript runtime options](how-to-update-javascript-runtime-options.md)
managed-ccf How To Activate Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-activate-members.md
+
+ Title: Activate members in an Azure Managed CCF resource
+description: Learn to activate the members in an Azure Managed CCF resource
++ Last updated : 09/08/2023+++++
+# Activate members in an Azure Managed CCF resource
+
+In this guide, you will learn how to activate the member(s) in an Azure Managed CCF (Managed CCF) resource. This tutorial builds on the Managed CCF resource created in the [Quickstart: Create an Azure Managed CCF resource using the Azure portal](quickstart-portal.md) tutorial.
+
+## Prerequisites
+
+- Python 3+.
+- Install the latest version of the [CCF Python package](https://pypi.org/project/ccf/).
+
+## Download the service identity
++
+## Activate Member(s)
+
+When a member is added to a Managed CCF resource, they are in the accepted state. They cannot participate in governance until they are activated. To do so, the member must acknowledge that they are satisfied with the state of the service (for example, after auditing the current constitution and the nodes currently trusted).
+
+1. The member must update and retrieve the latest state digest. In doing so, the new member confirms that they are satisfied with the current state of the service.
+
+```Bash
+curl https://confidentialbillingapp.confidential-ledger.azure.com/gov/ack/update_state_digest -X POST --cacert service_cert.pem --key member0_privk.pem --cert member0_cert.pem --silent | jq > request.json
+cat request.json
+{
+ "state_digest": <...>
+}
+```
++
+2. The member must sign the state digest using the ccf_cose_sign1 utility. This utility is installed along with the CCF Python package.
+
+```Bash
+ccf_cose_sign1 --ccf-gov-msg-type ack --ccf-gov-msg-created_at `date -Is` --signing-key member0_privk.pem --signing-cert member0_cert.pem --content request.json | \
+ curl https://confidentialbillingapp.confidential-ledger.azure.com/gov/ack --cacert service_cert.pem --data-binary @- -H "content-type: application/cose"
+```
+
+3. After the command completes, the member is active and can participate in governance. The members can be viewed using the following command.
++
+## Next steps
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Create an Azure Managed CCF resource](quickstart-portal.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
managed-ccf How To Backup Restore Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-backup-restore-resource.md
+
+ Title: Backup and restore an Azure Managed CCF resource
+description: Learn to back up and restore an Azure Managed CCF resource
++++ Last updated : 09/07/2023+
+#Customer intent: As a developer, I want to know how to perform a backup and restore of my Managed CCF app so that I can can access backups of my app files and restore my app in another region in the case of a disaster recovery.
++
+# Perform a backup and restore
+
+In this article, you'll learn to perform backup of an Azure Managed CCF (Managed CCF) resource and restore it to create a copy of the original Managed CCF resource. Here are some of the use cases that warrant this capability:
+
+- A Managed CCF resource is an append-only ledger at its core. It is impossible to delete a few erroneous transactions without impacting the integrity of the ledger. To keep the data clean, a business could decide to recreate the resource without the erroneous transactions.
+- A developer could add reference data into a Managed CCF resource and create a backup of it. The developer can use the copy later to create a fresh Managed CCF resource and save time.
+
+This article uses the commands found at the [Managed CCF's REST API Docs](/rest/api/confidentialledger/managed-ccf).
+
+## Prerequisites
+
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
+- An Azure Storage Account.
+
+## Setup
+
+### Generate an access token
+
+An access token is required to use the Managed CCF REST API. Execute the following command to generate an access token.
+
+> [!NOTE]
+> An access token has a finite lifetime after which it is unusable. Generate a new token if the API request fails with an HTTP 401 Unauthorized error.
+
+```bash
+az account get-access-token --subscription <subscription_id>
+```
+
+### Generate a Shared Access Signature token
+
+The backup is stored in an Azure Storage Fileshare that is owned and controlled by you. The backup and restore API requests require a [Shared Access Signature](../storage/common/storage-sas-overview.md) token to grant temporary read and write access to the Fileshare. Follow these steps:
+
+> [!NOTE]
+> A Shared Access Signature (SAS) token has a finite lifetime after which it is unusable. We recommend using short-lived tokens to reduce the risk of a leaked token being misused.
+
+1. Navigate to the Azure Storage Account where the backups will be stored.
+2. Navigate to the `Security + networking` -> `Shared access signature` blade.
+3. Generate a SAS token with the following configuration:
+
+ :::image type="content" source="./media/how-to/cedr-sas-uri.png" lightbox="./media/how-to/cedr-sas-uri.png" alt-text="Screenshot of the Azure portal in a web browser, showing the required SAS Generation configuration.":::
+4. Save the `File service SAS URL`.
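+
+Alternatively, a SAS token can be generated with the Azure CLI. This is a sketch rather than the exact configuration shown above; the storage account name and expiry date are assumptions, and the permissions should be adjusted to your needs:
+
+```bash
+# Generate a short-lived account SAS scoped to the File service with read, write, list, and create permissions
+az storage account generate-sas \
+  --account-name "mystorageaccount" \
+  --services f \
+  --resource-types sco \
+  --permissions rwlc \
+  --expiry "2023-12-31T00:00Z" \
+  --https-only
+```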
+
+## Backup
+
+### Create a backup
+
+Creating a backup of the Managed CCF resource creates a Fileshare in the storage account. This backup can be used to restore the Managed CCF resource at a later time.
+
+Follow these steps to perform a backup.
+
+1. [Generate and save a bearer token](#generate-an-access-token) for the subscription that your Managed CCF resource is located in.
+1. [Generate a SAS token](#generate-a-shared-access-signature-token) for the Storage Account to store the backup.
+1. Execute the following command to trigger a backup. You must supply a few parameters:
+ - **subscription_id**: The subscription where the Managed CCF resource is deployed.
+ - **resource_group**: The resource group name of the Managed CCF resource.
+ - **app_name**: The name of the Managed CCF resource.
+ - **sas_token**: The Shared Access Signature token.
+ - **restore_region**: An optional parameter to indicate a region where the backup would be restored. It can be ignored if you expect to restore the backup in the same region as the Managed CCF resource.
+ ```bash
+ curl --request POST 'https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.ConfidentialLedger/ManagedCCFs/<app_name>/backup?api-version=2023-06-28-preview' \
+ --header 'Authorization: Bearer <bearer_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "uri": "<sas_token>",
+ "restoreRegion": "<restore_region>"
+ }'
+ ```
+1. A Fileshare is created in the Azure Storage Account with the name `<mccf_app_name>-<timestamp>`.
+
+### Explore the backup files
+
+After the backup completes, you can view the files stored in your Azure Storage Fileshare.
++
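+For example, the backup files can be listed with the Azure CLI. This is a sketch; the storage account name is an assumption, and the share name is the one created in the previous step:
+
+```bash
+# List the ledger and snapshot files stored in the backup Fileshare
+az storage file list \
+  --account-name "mystorageaccount" \
+  --share-name "<mccf_app_name>-<timestamp>" \
+  --output table
+```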
+Refer to the following articles to explore the backup files.
+
+- [Understanding your Ledger and Snapshot Files](https://microsoft.github.io/CCF/main/operations/ledger_snapshot.html)
+- [Viewing your Ledger and Snapshot Files](https://microsoft.github.io/CCF/main/audit/python_library.html)
+
+## Restore
+
+### Create a Managed CCF resource using the backup files
+
+This restores the Managed CCF resource using a copy of the files in the backup Fileshare. The resource is restored to the same state and transaction ID it had at the time of the backup.
+
+> [!IMPORTANT]
+> The restore will fail if the backup files are older than 90 days.
+
+> [!NOTE]
+> The original Managed CCF resource must be deleted before a restore is initiated. The restore command will fail if the original instance exists. [Delete your original Managed CCF resource](/cli/azure/confidentialledger/managedccfs?#az-confidentialledger-managedccfs-delete); a sketch of the delete command follows this note.
+>
+> The **app_name** should be the same as the original Managed CCF resource.
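+
+A minimal sketch of the delete command, assuming the resource group and application name used elsewhere in this article:
+
+```bash
+# Delete the original Managed CCF resource before initiating the restore
+az confidentialledger managedccfs delete \
+  --name "confidentialbillingapp" \
+  --resource-group "myResourceGroup"
+```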
+
+Follow these steps to perform a restore.
+
+1. [Generate a Bearer token](#generate-an-access-token) for the subscription that the Managed CCF resource is located in.
+2. [Generate a SAS token](#generate-a-shared-access-signature-token) for the storage account that has the backup files.
+3. Execute the following command to trigger a restore. You must supply a few parameters.
+ - **subscription_id**: The subscription where the Managed CCF resource is deployed.
+ - **resource_group**: The resource group name of the Managed CCF resource.
+ - **app_name**: The name of the Managed CCF resource.
+ - **sas_token**: The Shared Access Signature token.
+ - **restore_region**: An optional parameter to indicate a region where the backup would be restored. It can be ignored if you expect to restore the backup in the same region as the Managed CCF resource.
+ - **fileshare_name**: The name of the Fileshare where the backup files are located.
+
+ ```bash
+ curl --request POST 'https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.ConfidentialLedger/ManagedCCFs/<app_name>/restore?api-version=2023-06-28-preview' \
+ --header 'Authorization: Bearer <bearer_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "uri": "<sas_token>",
+ "restoreRegion": "<restore_region>",
+ "fileShareName": "<fileshare_name>"
+ }'
+ ```
+1. When the command completes, the Managed CCF resource is restored.
+
+## Next steps
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [Quickstart: Python](quickstart-python.md)
managed-ccf How To Enable Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-enable-azure-monitor.md
+
+ Title: View application logs in Azure Monitor
+description: Learn to view the application logs in Azure Monitor
++ Last updated : 09/09/2023+++++
+# View the application logs in Azure Monitor
+
+In this tutorial, you will learn how to view the application logs in Azure Monitor by creating a Log Analytics workspace. This tutorial builds on the Azure Managed CCF (Managed CCF) resource created in the [Quickstart: Create an Azure Managed CCF resource using the Azure portal](quickstart-portal.md) tutorial. Logs are essential pieces of information to understand, analyze and optimize the logic and performance of an application.
+
+The logs from your TypeScript and JavaScript application can be viewed in Azure Monitor by creating a Log Analytics workspace.
+
+## Create the Log Analytics workspace
+
+1. Follow the instructions at [Create a workspace](../azure-monitor/logs/quick-create-workspace.md) to create a workspace.
+2. After the workspace is created, make a note of the Resource ID from the properties page.
+ :::image type="content" source="media/how-to/log-analytics-workspace-properties.png" alt-text="Screenshot that shows the properties of a Log Analytics workspace screen.":::
+1. Navigate to the Managed CCF resource and make a note of the Resource ID from the properties page.
+
+## Link the Log Analytics workspace to the Managed CCF resource
+
+1. After the workspace is created, it must be linked with the Managed CCF resource. It takes a few minutes after linking for the logs to appear in the workspace.
+
+ ```azurecli
+ > az login
+
+ > az monitor diagnostic-settings create --name confidentialbillingapplogs --resource <Resource Id of the Managed CCF resource> --workspace <Resource Id of the workspace> --logs [{\"category\":\"applicationlogs\",\"enabled\":true,\"retentionPolicy\":{\"enabled\":false,\"days\":0}}]
+ ```
+1. Open the Logs page. Navigate to the Queries tab and group the queries by Resource type from the drop-down. Navigate to the 'Azure Managed CCF' resource and run the 'CCF application errors' query. Remove the 'Level' filter to view all the logs.
+
+ :::image type="content" source="media/how-to/log-analytics-logs.png" alt-text="Screenshot that shows the Managed CCF resource query in the Log Analytics screen.":::
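+
+The same logs can also be queried from the command line. This is a sketch that assumes the `log-analytics` Azure CLI extension is installed and that the diagnostic logs land in the `AzureDiagnostics` table; the table name and workspace ID are assumptions, so adjust them to what your workspace shows:
+
+```azurecli
+# Query the application logs (category "applicationlogs") from the linked workspace
+az monitor log-analytics query \
+  --workspace "<workspace customer ID (GUID)>" \
+  --analytics-query "AzureDiagnostics | where Category == 'applicationlogs' | take 50" \
+  --output table
+```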
+
+## Next steps
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
managed-ccf How To Manage Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-manage-members.md
+
+ Title: Quickstart - Add and remove members from a Microsoft Azure Managed CCF resource
+description: Learn to manage the members from a Microsoft Azure Managed CCF resource
++ Last updated : 09/10/2023++++
+# Add and remove members from an Azure Managed CCF resource
+
+Members can be added and removed from an Azure Managed CCF (Managed CCF) resource using governance operations. This tutorial builds on the Managed CCF resource created in the [Quickstart: Create an Azure Managed CCF resource using the Azure portal](quickstart-portal.md) tutorial.
+
+## Prerequisites
++
+## Download the service identity
+++
+## Add a member
++
+1. Submit a proposal to add the member.
+ ```bash
+ $cat set_member.json
+ {
+ "actions": [
+ {
+ "name": "set_member",
+ "args": {
+ "cert": "--BEGIN CERTIFICATE--\nMIIBtDCCATqgAwIBAgIUV...sy93h74oqHk=\n--END CERTIFICATE--",
+ "encryption_pub_key": ""
+ }
+ }
+ ]
+ }
+
+ $ proposal_id=$( (ccf_cose_sign1 --content set_member.json --signing-cert member0_cert.pem --signing-key member0_privk.pem --ccf-gov-msg-type proposal --ccf-gov-msg-created_at `date -Is` | curl https://confidentialbillingapp.confidential-ledger.azure.com/gov/proposals -H 'Content-Type: application/cose' --data-binary @- --cacert service_cert.pem) )
+ ```
+1. Accept the proposal by submitting a vote. Repeat the step for all the members in the resource.
+ [!INCLUDE [Submit a vote](./includes/submit-vote.md)]
+1. When the command completes, the member is added in the Managed CCF resource. But, they cannot participate in the governance operations unless they are activated. Refer to the quickstart tutorial [Activate a member](how-to-activate-members.md) to activate the member.
+1. View the members in the network using the following command.
++
+## Remove a member
+
+1. Submit a proposal to remove the member. The member is identified by their public certificate.
+ ```bash
+ $cat remove_member.json
+ {
+ "actions": [
+ {
+ "name": "remove_member",
+ "args": {
+          "cert": "--BEGIN CERTIFICATE--\nMIIBtDCCATqgAwIBAgIUV...sy93h74oqHk=\n--END CERTIFICATE--"
+ }
+ }
+ ]
+ }
+
+ $ proposal_id=$( (ccf_cose_sign1 --content remove_member.json --signing-cert member0_cert.pem --signing-key member0_privk.pem --ccf-gov-msg-type proposal --ccf-gov-msg-created_at `date -Is` | curl https://confidentialbillingapp.confidential-ledger.azure.com/gov/proposals -H 'Content-Type: application/cose' --data-binary @- --cacert service_cert.pem) )
+ ```
+2. Accept the proposal by submitting a vote. Repeat the step for all the members in the resource.
+ [!INCLUDE [Submit a vote](./includes/submit-vote.md)]
+3. When the command completes, the member is removed from the Managed CCF resource and they can no longer participate in the governance operations.
+4. View the members in the network using the following command.
++
+## Next steps
+
+- [Microsoft Azure Managed CCF overview](overview.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf How To Update Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-update-application.md
+
+ Title: Quickstart - Update the JavaScript application on a Microsoft Azure Managed CCF resource
+description: Learn to update the JavaScript application on a Microsoft Azure Managed CCF resource
++ Last updated : 09/10/2023++++
+# Quickstart: Update the JavaScript application
+
+With Azure Managed CCF (Managed CCF), it is simple and quick to update an application when new functionality is introduced or when bug fixes are available. This tutorial builds on the Managed CCF resource created in the [Quickstart: Create an Azure Managed CCF resource using the Azure portal](quickstart-portal.md) tutorial.
+
+## Prerequisites
++
+## Download the service identity
++
+## Update the application
++
+> [!NOTE]
+> This tutorial assumes that the updated application bundle is created using the instructions available [here](https://microsoft.github.io/CCF/main/build_apps/js_app_bundle.html) and saved to set_js_app.json.
+>
+> Updating an application does not reset the JavaScript runtime options.
++
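+The update is submitted as a governance proposal, following the same `ccf_cose_sign1` pattern used elsewhere in these articles. A minimal sketch, assuming the member credentials and application endpoint used throughout this documentation:
+
+```Bash
+# Submit the set_js_app proposal containing the updated application bundle
+proposal_id=$( (ccf_cose_sign1 --content set_js_app.json --signing-cert member0_cert.pem --signing-key member0_privk.pem --ccf-gov-msg-type proposal --ccf-gov-msg-created_at `date -Is` | curl https://confidentialbillingapp.confidential-ledger.azure.com/gov/proposals -H 'Content-Type: application/cose' --data-binary @- --cacert service_cert.pem | jq -r '.proposal_id') )
+```
+
+Depending on the constitution, the proposal might also need to be accepted by member votes before the update takes effect.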
+When the command completes, the application will be updated and ready to accept user transactions.
+
+## Next steps
+
+- [Microsoft Azure Managed CCF overview](overview.md)
+- [How to: View application logs in Azure Monitor](how-to-enable-azure-monitor.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
managed-ccf How To Update Javascript Runtime Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-update-javascript-runtime-options.md
+
+ Title: Quickstart - Update the JavaScript runtime options on an Azure Managed CCF resource
+description: Learn to update the JavaScript runtime options on an Azure Managed CCF resource
++ Last updated : 09/08/2023+++++
+# Quickstart: Update the runtime options of the JavaScript execution engine on an Azure Managed CCF resource
+
+Sometimes it is necessary to update the runtime options of the CCF JavaScript interpreter to extend the request execution duration or to change the heap or stack allocation size. In this how-to guide, you will learn to update the runtime settings. This tutorial builds on the Azure Managed CCF (Managed CCF) resource created in the [Quickstart: Create an Azure Managed CCF resource using the Azure portal](quickstart-portal.md) tutorial.
+
+## Prerequisites
++
+## Download the service identity
++
+## Update the runtime options
++
+1. Prepare a **set_js_runtime_options.json** file and submit it using this command:
+ ```Bash
+ $ cat set_js_runtime_options.json
+ {
+ "actions": [
+ {
+ "name": "set_js_runtime_options",
+ "args": {
+ "max_heap_bytes": 1024,
+ "max_stack_bytes": 1024,
+        "max_execution_time_ms": 5000,
+ "log_exception_details": false,
+ "return_exception_details": false
+ }
+ }
+ ]
+ }
+
+    $ proposal_id=$( (ccf_cose_sign1 --content set_js_runtime_options.json --signing-cert member0_cert.pem --signing-key member0_privk.pem --ccf-gov-msg-type proposal --ccf-gov-msg-created_at `date -Is` | curl https://confidentialbillingapp.confidential-ledger.azure.com/gov/proposals -H 'Content-Type: application/cose' --data-binary @- --cacert service_cert.pem | jq -r '.proposal_id') )
+ ```
+1. The next step is to accept the proposal by submitting a vote.
+ [!INCLUDE [Submit a vote](./includes/submit-vote.md)]
+1. Repeat the above step for every member in the Managed CCF resource.
+1. After the proposal is accepted, the runtime options will be applied to the subsequent requests.
+
+## Next steps
+
+- [Azure Managed CCF overview](overview.md)
+- [How to: View application logs in Azure Monitor](how-to-enable-azure-monitor.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
managed-ccf How To View Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-view-members.md
+
+ Title: View the members in an Azure Managed CCF resource
+description: Learn to view the members in an Azure Managed CCF resource
++ Last updated : 09/11/2023++++
+# View the members in an Azure Managed CCF resource
+
+The members in an Azure Managed CCF (Managed CCF) resource can be viewed in the Azure portal or with the CLI. This tutorial builds on the Managed CCF resource created in the [Quickstart: Create an Azure Managed CCF resource](quickstart-portal.md) tutorial.
+
+## Download the service identity
++
+## View the members
+
+### Azure portal
+
+1. Navigate to the Managed CCF resource page.
+
+1. Under Operations, select the Members link. This is a view-only page. To manage the members, follow the instructions at [manage members](how-to-manage-members.md).
++
+### Command Line Interface
+
+```bash
+curl --cacert service_cert.pem https://confidentialbillingapp.confidential-ledger.azure.com/gov/members | jq
+{
+ "3d08a5ddcb6fe939088b3f8f55040d069ba2f73e1946739b2a30910d7c60b011": {
+ "cert": "--BEGIN CERTIFICATE--\nMIIBtjCCATyg...zWP\nGeRSybu3EpITPg==\n--END CERTIFICATE--",
+ "member_data": {
+ "group": "IT",
+ "identifier": "member0"
+ },
+ "public_encryption_key": null,
+ "status": "Active"
+ },
+ "9a403f4811f3e3a5eb21528088d6619ad7f6f839405cf737b0e8b83767c59039": {
+ "cert": "--BEGIN CERTIFICATE--\nMIIB9zCCAX2gAwIBAgIQeA...lf8wPx0uzNRc1iGM+mv\n--END CERTIFICATE--",
+ "member_data": {
+ "is_operator": true,
+ "owner": "Microsoft Azure"
+ },
+ "public_encryption_key": "--BEGIN PUBLIC KEY--\nMIIBIjANBgkqhki...DAQAB\n--END PUBLIC KEY--\n",
+ "status": "Active"
+ }
+}
+```
+
+The output shows two active members in the resource. One is an operator member (identified by the is_operator field) and the other was added during deployment. An active member can submit a proposal to add or remove other members. Refer to the [how-to-manage-members](how-to-manage-members.md) guide for the instructions.
+
+## Next steps
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
+- [How to: View application logs in Azure Monitor](how-to-enable-azure-monitor.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/overview.md
+
+ Title: Overview of Azure Managed Confidential Consortium Framework
+description: An overview of Azure Managed Confidential Consortium Framework, a highly secure service for deploying confidential applications.
++++ Last updated : 09/07/2023++++
+# Overview of Azure Managed Confidential Consortium Framework
+
+Azure Managed Confidential Consortium Framework (Managed CCF) is a new and highly secure service for deploying confidential applications. It enables developers to build confidential applications that require programmable confidentiality on data and information that might be needed between multiple parties. Typically, members agree on the constitution (rules that the members set up) of the network, set governance and business transaction rules, and the business transactions take place based on what was defined. If there are changes to the governance that impact the business rules, the consortium needs to approve those changes.
+
+The framework runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment, which helps keep potential attacks at bay. It also runs on a minimalistic Trusted Computing Base (TCB) and limits the operator's role.
+
+As the name suggests, Azure Managed CCF utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the open-source [Confidential Consortium Framework](https://ccf.dev) as the underlying technology to provide a high-integrity platform that is tamper-protected and tamper-evident. A Managed CCF instance spans three or more identical machines, each of which runs in a dedicated, fully attested hardware-backed enclave. The data integrity is maintained through a consensus-based blockchain.
+
+The following diagram shows a high-level overview of the different layers of the Managed CCF platform and where the application code fits into it.
++
+## Key features
+
+A few key features of Managed CCF are confidentiality, customizable governance, high availability, auditability and transparency.
++
+### Confidentiality
+
+The nodes run inside a hardware-based Trusted Execution Environment (TEE), which ensures that the data in use is protected and encrypted while in RAM and during computation. The following diagram shows how the code and data are protected while in use.
++
+### Customizable governance
+
+The participants, called members, share responsibility for operating the network, as established by a constitution. The shared governance model establishes trust and transparency among the members through a voting process that is public and auditable. The constitution is implemented as a set of JavaScript scripts, which can be customized during network creation and updated later. The following diagram shows how the members participate in a governance operation to either accept or reject a proposed action enforced by the constitution.
++
+### High Availability and resiliency
+
+A Managed CCF resource is built on top of a network of distributed nodes that maintains an identical replica of the transactions. The platform is designed and built from the ground up to be tolerant and resilient to network and infrastructure disruptions. The platform guarantees high availability and quick service healing by spreading the nodes across [Azure Availability Zones](../reliability/availability-zones-overview.md). When an unexpected disaster happens, quick recoverability and business continuity are enabled by automatic backup and restore of the ledger files.
+
+### Auditability and transparency
+
+The state of the network is auditable via receipts. A receipt is a signed proof that is associated with a transaction. The receipts are verifiable offline and by third parties, equivalent to "this transaction produced this outcome at this position in the network". Together with a copy of the ledger, or other receipts, they can be used to audit the service and hold the consortium accountable.
+
+The governance operations and the associated public key-value maps are stored in plain text in the ledger. Customers are encouraged to download the ledger and verify its integrity using open-source scripts that are shipped with CCF.
+
+### Developer friendly
+
+Developers can use familiar development tools like Visual Studio Code and programming languages like TypeScript, JavaScript, and C++ combined with Node.js to develop confidential applications targeting the Managed CCF platform. Open-source sample applications are published for reference and continuously updated based on feedback.
+
+## Open-source CCF (IaaS) vs. Azure Managed CCF (PaaS)
+
+Customers can build applications that target the Confidential Consortium Framework (CCF) and host them themselves. But, like any other self-hosted application, this requires regular maintenance and patching (both hardware and software), which consumes time and resources. Azure Managed CCF abstracts away the day-to-day operations, allowing teams to focus on the core business priorities. The following diagram compares and contrasts a self-hosted CCF network with Azure Managed CCF.
++
+## Next steps
+
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [Quickstart: Python](quickstart-python.md)
+- [FAQ](faq.yml)
managed-ccf Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-cli.md
+
+ Title: Quickstart - Create an Azure Managed Confidential Consortium Framework resource with the Azure CLI
+description: Learn to create an Azure Managed Confidential Consortium Framework resource with the Azure CLI
++ Last updated : 09/09/2023+++++
+# Quickstart: Create an Azure Managed CCF resource using Azure CLI
+
+Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md).
++
+Azure CLI is used to create and manage Azure resources using commands or scripts.
++
+- This quickstart requires version 2.51.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
++
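+A minimal sketch of the resource group creation, assuming the group name and region used in the rest of this quickstart:
+
+```azurecli
+az group create --name "myResourceGroup" --location "southcentralus"
+```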
+## Create a member
++
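+Each initial member needs an identity certificate. One way to generate a key pair and self-signed certificate locally is with OpenSSL; this is a sketch and not the only supported method (CCF also ships a key generation script), and the curve choice reflects what CCF commonly uses:
+
+```bash
+# Generate a member identity key pair and a self-signed certificate
+openssl ecparam -out member0_privk.pem -name secp384r1 -genkey
+openssl req -new -key member0_privk.pem -x509 -nodes -days 365 -out member0_cert.pem -subj "/CN=member0"
+```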
+## Create a Managed CCF resource
+
+Use the Azure CLI [az confidentialledger managedccfs create](/cli/azure/confidentialledger/managedccfs#az-confidentialledger-managedccfs-create) command to create a Managed CCF resource in the resource group from the previous step. You must provide some information:
+
+- Managed CCF name: A string of 3 to 32 characters that can contain only numbers (0-9), letters (a-z, A-Z), and hyphens (-)
+
+ > [!Important]
+ > Each Managed CCF resource must have a unique name. Replace \<your-unique-managed-ccf-name\> with the name of your resource in the following examples.
+
+- Resource group name: **myResourceGroup**.
+- Location: southcentralus or westeurope. Default value is southcentralus.
+- Members: A collection of initial members to be added to the resource. A minimum of one member is required.
+- Node count: The number of nodes in the resource. Default value is 3.
+
+```azurecli
+az confidentialledger managedccfs create --name "<your-unique-managed-ccf-name>" --resource-group "myResourceGroup" --location "southcentralus" --members "[{certificate:'c:/certs/member0_cert.pem',identifier:'it-admin',group:'IT'},{certificate:'c:/certs/member1_cert.pem',identifier:'finance-admin',group:'Finance'}]"
+```
+
+To view the previously created resource:
+
+```azurecli
+az confidentialledger managedccfs show --name "<your-unique-managed-ccf-name>" --resource-group "myResourceGroup"
+```
+
+To list the Managed CCF resources in the **myResourceGroup**:
+
+```azurecli
+az confidentialledger managedccfs list --resource-group "myResourceGroup"
+```
+
+To list the Managed CCF resources in a subscription:
+
+```azurecli
+az confidentialledger managedccfs list --subscription <subscription id or subscription name>
+```
+
+## Next steps
+
+In this quickstart, you created a Managed CCF resource by using the Azure CLI. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf Quickstart Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-deploy-application.md
+
+ Title: Quickstart - Deploy a JavaScript application to an Azure Managed CCF resource
+description: Learn to deploy a JavaScript application to an Azure Managed CCF resource
++ Last updated : 09/08/2023+++++
+# Quickstart: Deploy a JavaScript application to an Azure Managed CCF resource
+
+In this quickstart tutorial, you will learn how to deploy an application to an Azure Managed CCF (Managed CCF) resource. This tutorial builds on the Managed CCF resource created in the [Quickstart: Create an Azure Managed CCF resource using the Azure portal](quickstart-portal.md) tutorial.
+
+## Prerequisites
++
+## Download the service identity
++
+## Deploy the application
++
+> [!NOTE]
+> This tutorial assumes that the JavaScript application bundle is created using the instructions available [here](https://microsoft.github.io/CCF/main/build_apps/js_app_bundle.html).
++
+When the command completes, the application is deployed to the Managed CCF resource and is ready to accept transactions.
+
+## Next steps
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Update the JavaScript runtime options](how-to-update-javascript-runtime-options.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
managed-ccf Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-go.md
+
+ Title: Quickstart - Create an Azure Managed CCF resource using the Azure SDK for Go
+description: Learn to use the Azure SDK for Go to create an Azure Managed CCF resource
++ Last updated : 09/11/2023+++++
+# Quickstart: Create an Azure Managed CCF resource using the Azure SDK for Go
+
+Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md).
+
+In this quickstart, you learn how to create a Managed CCF resource using the Azure SDK for Go library.
++
+[API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/confidentialledger/armconfidentialledger@v1.2.0-beta.1#section-documentation) | [Library source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/confidentialledger/armconfidentialledger) | [Package (Go)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/confidentialledger/armconfidentialledger@v1.2.0-beta.1)
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Go 1.18 or higher.
+
+## Setup
+
+### Create a new Go application
+
+1. In a command shell, run the following commands to create a folder named `managedccf-app` and initialize the Go module:
+
+```Bash
+mkdir managedccf-app && cd managedccf-app
+
+go mod init github.com/azure/resourcemanager/confidentialledger
+```
+
+### Install the modules
+
+1. Install the Azure Confidential Ledger module.
+
+```Bash
+go get -u github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/confidentialledger/armconfidentialledger@v1.2.0-beta.1
+```
+
+For this quickstart, you also need to install the [Azure Identity module for Go](/azure/developer/go/azure-sdk-authentication?tabs=bash).
+
+```Bash
+go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
+```
+
+### Create a resource group
++
+### Register the resource provider
++
+### Create members
++
+## Create the Go application
+
+The management plane library allows operations on Managed CCF resources, such as creation and deletion, listing the resources associated with a subscription, and viewing the details of a specific resource. The following piece of code creates and views the properties of a Managed CCF resource.
+
+Add the following directives to the top of *main.go*:
+
+```go
+package main
+
+import (
+ "context"
+ "log"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/confidentialledger/armconfidentialledger"
+)
+```
+
+### Authenticate and create a client factory
+
+In this quickstart, the logged-in user is used to authenticate to Azure Managed CCF, which is the preferred method for local development. This example uses the ['NewDefaultAzureCredential()'](/azure/developer/go/azure-sdk-authentication?tabs=bash#authenticate-to-azure-with-defaultazurecredential) function from the [Azure Identity module](/azure/developer/go/azure-sdk-authentication?tabs=bash), which allows you to use the same code across different environments with different options to provide identity.
+
+```go
+cred, err := azidentity.NewDefaultAzureCredential(nil)
+if err != nil {
+ log.Fatalf("Failed to obtain a credential: %v", err)
+}
+```
+
+Create an Azure Resource Manager client factory and authenticate using the token credential.
+
+```go
+ctx := context.Background()
+clientFactory, err := armconfidentialledger.NewClientFactory("0000000-0000-0000-0000-000000000001", cred, nil)
+
+if err != nil {
+ log.Fatalf("Failed to create client: %v", err)
+}
+```
+
+### Create a Managed CCF resource
+
+```go
+appName := "confidentialbillingapp"
+rgName := "myResourceGroup"
+
+// Create a new resource
+poller, err := clientFactory.NewManagedCCFClient().BeginCreate(ctx, rgName, appName, armconfidentialledger.ManagedCCF{
+ Location: to.Ptr("SouthCentralUS"),
+ Tags: map[string]*string{
+ "Department": to.Ptr("Contoso IT"),
+ },
+ Properties: &armconfidentialledger.ManagedCCFProperties{
+ DeploymentType: &armconfidentialledger.DeploymentType{
+ AppSourceURI: to.Ptr(""),
+ LanguageRuntime: to.Ptr(armconfidentialledger.LanguageRuntimeJS),
+ },
+ MemberIdentityCertificates: []*armconfidentialledger.MemberIdentityCertificate{
+ {
+ Certificate: to.Ptr("--BEGIN CERTIFICATE--\nMIIU4G0d7....1ZtULNWo\n--END CERTIFICATE--"),
+ Encryptionkey: to.Ptr(""),
+ Tags: map[string]any{
+ "owner": "IT Admin1",
+ },
+ }},
+ NodeCount: to.Ptr[int32](3),
+ },
+}, nil)
+
+if err != nil {
+ log.Fatalf("Failed to finish the request: %v", err)
+}
+
+_, err = poller.PollUntilDone(ctx, nil)
+
+if err != nil {
+ log.Fatalf("Failed to pull the result: %v", err)
+}
+```
+
+### Get the properties of the Managed CCF resource
+
+The following piece of code retrieves the Managed CCF resource created in the previous step.
+
+```go
+log.Println("Getting the Managed CCF resource.")
+
+// Get the resource details and print it
+getResponse, err := clientFactory.NewManagedCCFClient().Get(ctx, rgName, appName, nil)
+
+if err != nil {
+ log.Fatalf("Failed to get details of mccf instance: %v", err)
+}
+
+// Print few properties of the Managed CCF resource
+log.Println("Application name:", *getResponse.ManagedCCF.Properties.AppName)
+log.Println("Node Count:", *getResponse.ManagedCCF.Properties.NodeCount)
+```
+
+### List the Managed CCF resources in a Resource Group
+
+The following piece of code retrieves the Managed CCF resources in the resource group.
+
+```go
+pager := clientFactory.NewManagedCCFClient().NewListByResourceGroupPager(rgName, nil)
+
+for pager.More() {
+ page, err := pager.NextPage(ctx)
+ if err != nil {
+ log.Fatalf("Failed to advance page: %v", err)
+ }
+
+ for _, v := range page.Value {
+ log.Println("Application Name:", *v.Name)
+ }
+}
+```
+
+### Delete the Managed CCF resource
+
+The following piece of code deletes the Managed CCF resource.
+
+```go
+deletePoller, err := clientFactory.NewManagedCCFClient().BeginDelete(ctx, rgName, appName, nil)
+
+if err != nil {
+ log.Fatalf("Failed to finish the delete request: %v", err)
+}
+
+_, err = deletePoller.PollUntilDone(ctx, nil)
+
+if err != nil {
+ log.Fatalf("Failed to get the delete result: %v", err)
+}
+```
+
+## Clean up resources
+
+Other Managed CCF articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you might wish to leave these resources in place.
+
+Otherwise, when you're finished with the resources created in this article, use the Azure CLI [az group delete](/cli/azure/group?#az-group-delete) command to delete the resource group and all its contained resources.
+
+```azurecli
+az group delete --resource-group contoso-rg
+```
+
+## Next steps
+
+In this quickstart, you created a Managed CCF resource by using the Azure SDK for Go. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Deploy an Azure Managed CCF application](quickstart-deploy-application.md)
+- [Quickstart: Azure CLI](quickstart-cli.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-java.md
+
+ Title: Quickstart - Azure SDK for Java for Azure Managed Confidential Consortium Framework
+description: Learn to use the Azure SDK for Java library for Azure Managed Confidential Consortium Framework
++ Last updated : 09/11/2023+++++
+# Quickstart: Create an Azure Managed CCF resource using the Azure SDK for Java
+
+Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md).
++
+[API reference documentation](/java/api/com.azure.resourcemanager.confidentialledger) | [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/confidentialledger) | [Package (maven central repository)](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-confidentialledger/1.0.0-beta.3)
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Java Development Kit (JDK) versions that are [supported by the Azure SDK for Java](https://github.com/Azure/azure-sdk-for-jav).
+
+## Setup
+
+This quickstart uses the Azure Identity library, along with the Azure CLI or Azure PowerShell, to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
+
+### Sign in to Azure
++
+### Install the dependencies
+
+```xml
+<dependency>
+ <groupId>com.azure.resourcemanager</groupId>
+ <artifactId>azure-resourcemanager-confidentialledger</artifactId>
+ <version>1.0.0-beta.3</version>
+</dependency>
+```
+
+### Create a resource group
++
+### Register the resource provider
++
+### Create members
++
+## Create the Java application
+
+The Azure SDK for Java library (azure-resourcemanager-confidentialledger) allows operations on Managed CCF resources, such as creation and deletion, listing the resources associated with a subscription, and viewing the details of a specific resource. The following piece of code creates and views the properties of a Managed CCF resource.
+
+```java
+import com.azure.core.management.AzureEnvironment;
+import com.azure.core.management.exception.ManagementException;
+import com.azure.core.management.profile.AzureProfile;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.resourcemanager.confidentialledger.ConfidentialLedgerManager;
+import com.azure.resourcemanager.confidentialledger.fluent.models.ManagedCcfInner;
+import com.azure.resourcemanager.confidentialledger.models.DeploymentType;
+import com.azure.resourcemanager.confidentialledger.models.LanguageRuntime;
+import com.azure.resourcemanager.confidentialledger.models.ManagedCcfProperties;
+import com.azure.resourcemanager.confidentialledger.models.MemberIdentityCertificate;
+import java.util.*;
+
+public class AzureJavaSdkClient {
+ public static void main(String[] args) {
+ try {
+ AzureProfile profile = new AzureProfile("<tenant id>","<subscription id>", AzureEnvironment.AZURE);
+ ConfidentialLedgerManager manager = ConfidentialLedgerManager.authenticate(new DefaultAzureCredentialBuilder().build(), profile);
+
+ MemberIdentityCertificate member0 = new MemberIdentityCertificate()
+ .withCertificate("--BEGIN CERTIFICATE--\nMIIBvjCCAUSgAwIBAgIUA0YHcPpUCtd...0Yet/xU4G0d71ZtULNWo\n--END CERTIFICATE--")
+ .withTags(Map.of("Dept", "IT"));
+ List<MemberIdentityCertificate> members = new ArrayList<MemberIdentityCertificate>();
+ members.add(member0);
+
+ DeploymentType deployment = new DeploymentType().withAppSourceUri("").withLanguageRuntime(LanguageRuntime.JS);
+ ManagedCcfProperties properties = new ManagedCcfProperties()
+ .withDeploymentType(deployment)
+ .withNodeCount(5)
+ .withMemberIdentityCertificates(members);
+
+ ManagedCcfInner inner = new ManagedCcfInner().withProperties(properties).withLocation("southcentralus");
+
+ // Send Create request
+ manager.serviceClient().getManagedCcfs().create("myResourceGroup", "confidentialbillingapp", inner);
+
+ // Print the Managed CCF resource properties
+ ManagedCcfInner app = manager.serviceClient().getManagedCcfs().getByResourceGroup("myResourceGroup", "confidentialbillingapp");
+ printAppInfo(app);
+
+ // Delete the resource
+ manager.serviceClient().getManagedCcfs().delete("myResourceGroup", "confidentialbillingapp");
+ } catch (ManagementException ex) {
+ // The x-ms-correlation-request-id is located in the Header
+ System.out.println(ex.getResponse().getHeaders().toString());
+ System.out.println(ex);
+ }
+ }
+
+ private static void printAppInfo(ManagedCcfInner app) {
+ System.out.println("App Name: " + app.name());
+ System.out.println("App Id: " + app.id());
+ System.out.println("App Location: " + app.location());
+ System.out.println("App type: " + app.type());
+ System.out.println("App Properties Uri: " + app.properties().appUri());
+ System.out.println("App Properties Language Runtime: " + app.properties().deploymentType().languageRuntime());
+ System.out.println("App Properties Source Uri: " + app.properties().deploymentType().appSourceUri());
+ System.out.println("App Properties NodeCount: " + app.properties().nodeCount());
+ System.out.println("App Properties Identity Uri: " + app.properties().identityServiceUri());
+ System.out.println("App Properties Cert 0: " + app.properties().memberIdentityCertificates().get(0).certificate());
+ System.out.println("App Properties Cert tags: " + app.properties().memberIdentityCertificates().get(0).tags());
+ }
+}
+```
+
+## Clean up resources
+
+Other Managed CCF articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you might wish to leave these resources in place.
+
+Otherwise, when you're finished with the resources created in this article, use the Azure CLI [az group delete](/cli/azure/group?#az-group-delete) command to delete the resource group and all its contained resources.
+
+```azurecli
+az group delete --resource-group myResourceGroup
+```
+
+## Next steps
+
+In this quickstart, you created a Managed CCF resource by using the Azure SDK for Java. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [Quickstart: Azure CLI](quickstart-cli.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf Quickstart Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-net.md
+
+ Title: Quickstart - Azure SDK for .NET for Azure Managed Confidential Consortium Framework
+description: Learn to use the Azure SDK for .NET for Azure Managed Confidential Consortium Framework
++ Last updated : 09/11/2023+++++
+# Quickstart: Create an Azure Managed CCF resource using the Azure SDK for .NET
+
+Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Managed CCF, and for example use cases, see [About Azure Managed Confidential Consortium Framework](overview.md).
+
+In this quickstart, you learn how to create a Managed CCF resource using the .NET client management library.
++
+[API reference documentation](/dotnet/api/overview/azure/resourcemanager.confidentialledger-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/confidentialledger/Azure.ResourceManager.ConfidentialLedger) | [Package (NuGet)](https://www.nuget.org/packages/Azure.ResourceManager.ConfidentialLedger/1.1.0-beta.2)
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- .NET versions [supported by the Azure SDK for .NET](https://www.nuget.org/packages/Azure.ResourceManager.ConfidentialLedger/1.1.0-beta.2#dependencies-body-tab).
+
+## Setup
+
+### Create new .NET console app
+
+1. In a command shell, run the following command to create a project named `managedccf-app`:
+
+ ```dotnetcli
+ dotnet new console --name managedccf-app
+ ```
+
+1. Change to the newly created *managedccf-app* directory, and run the following command to build the project:
+
+ ```dotnetcli
+ dotnet build
+ ```
+
+ The build output should contain no warnings or errors.
+
+ ```console
+ Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+ ```
+
+### Install the package
+
+Install the Azure Managed CCF client library for .NET with [NuGet](/nuget/install-nuget-client-tools):
+
+```dotnetcli
+dotnet add package Azure.ResourceManager.ConfidentialLedger --version 1.1.0-beta.2
+```
+
+For this quickstart, you also need to install the Azure SDK client library for Azure Identity:
+
+```dotnetcli
+dotnet add package Azure.Identity
+```
+
+### Create a resource group
++
+### Register the resource provider
++
+### Create members
++
+## Create the .NET application
+
+### Use the Management plane client library
+
+The Azure SDK for .NET (Azure.ResourceManager.ConfidentialLedger) allows operations on Managed CCF resources, such as creation and deletion, listing the resources associated with a subscription, and viewing the details of a specific resource. The following piece of code creates and views the properties of a Managed CCF resource.
+
+Add the following directives to the top of *Program.cs*:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Core;
+using Azure.Identity;
+using Azure.ResourceManager;
+using Azure.ResourceManager.ConfidentialLedger;
+using Azure.ResourceManager.ConfidentialLedger.Models;
+using Azure.ResourceManager.Resources;
+```
+
+### Authenticate and create a client
+
+In this quickstart, the logged-in user is used to authenticate to Azure Managed CCF, which is the preferred method for local development. This example uses the ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from the [Azure Identity library](/dotnet/api/overview/azure/identity-readme), which allows you to use the same code across different environments with different options to provide identity.
+
+```csharp
+// Get your Azure access token. For more details on how the Azure SDK obtains your access token, see https://learn.microsoft.com/en-us/dotnet/azure/sdk/authentication?tabs=command-line
+TokenCredential cred = new DefaultAzureCredential();
+```
+
+Create an Azure Resource Manager client and authenticate using the token credential.
+
+```csharp
+// authenticate your client
+ArmClient client = new ArmClient(cred);
+```
+
+### Create a Managed CCF resource
+
+```csharp
+// this example assumes you already have this ResourceGroupResource created on azure
+// for more information of creating ResourceGroupResource, please refer to the document of ResourceGroupResource
+string subscriptionId = "0000000-0000-0000-0000-000000000001";
+string resourceGroupName = "myResourceGroup";
+ResourceIdentifier resourceGroupResourceId = ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName);
+ResourceGroupResource resourceGroupResource = client.GetResourceGroupResource(resourceGroupResourceId);
+
+// get the collection of this ManagedCcfResource
+ManagedCcfCollection collection = resourceGroupResource.GetManagedCcfs();
+
+// invoke the operation
+string appName = "confidentialbillingapp";
+ManagedCcfData data = new ManagedCcfData(new AzureLocation("SouthCentralUS"))
+{
+ Properties = new ManagedCcfProperties()
+ {
+ MemberIdentityCertificates =
+ {
+ new ConfidentialLedgerMemberIdentityCertificate()
+ {
+ Certificate = "--BEGIN CERTIFICATE--MIIBsjCCATigA...LjYAGDSGi7NJnSkA--END CERTIFICATE--",
+ Encryptionkey = "",
+ Tags = BinaryData.FromObjectAsJson(new Dictionary<string, object>()
+ {
+ ["additionalProps1"] = "additional properties"
+ }),
+ }
+ },
+ DeploymentType = new ConfidentialLedgerDeploymentType()
+ {
+ LanguageRuntime = ConfidentialLedgerLanguageRuntime.JS,
+ AppSourceUri = new Uri(""),
+ },
+ NodeCount = 3,
+ },
+ Tags =
+ {
+ ["additionalProps1"] = "additional properties",
+ },
+};
+
+ArmOperation<ManagedCcfResource> lro = await collection.CreateOrUpdateAsync(WaitUntil.Completed, appName, data);
+ManagedCcfResource result = lro.Value;
+
+// the variable result is a resource, you could call other operations on this instance as well
+// but just for demo, we get its data from this resource instance
+ManagedCcfData resourceData = result.Data;
+// for demo we just print out the id
+Console.WriteLine($"Succeeded on id: {resourceData.Id}");
+```
+
+### View the properties of a Managed CCF resource
+
+The following piece of code retrieves the Managed CCF resource and prints its properties.
+
+```csharp
+// this example assumes you already have this ResourceGroupResource created on azure
+// for more information of creating ResourceGroupResource, please refer to the document of ResourceGroupResource
+string subscriptionId = "0000000-0000-0000-0000-000000000001";
+string resourceGroupName = "myResourceGroup";
+ResourceIdentifier resourceGroupResourceId = ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName);
+ResourceGroupResource resourceGroupResource = client.GetResourceGroupResource(resourceGroupResourceId);
+
+// get the collection of this ManagedCcfResource
+ManagedCcfCollection collection = resourceGroupResource.GetManagedCcfs();
+
+// invoke the operation
+string appName = "confidentialbillingapp";
+ManagedCcfResource result = await collection.GetAsync(appName);
+
+// the variable result is a resource, you could call other operations on this instance as well
+// but just for demo, we get its data from this resource instance
+ManagedCcfData resourceData = result.Data;
+// for demo we just print out the id
+Console.WriteLine($"Succeeded on id: {resourceData.Id}");
+```
+
+### List the Managed CCF resources in a Resource Group
+
+The following piece of code retrieves the Managed CCF resources in a resource group.
+
+```csharp
+// this example assumes you already have this ResourceGroupResource created on azure
+// for more information of creating ResourceGroupResource, please refer to the document of ResourceGroupResource
+string subscriptionId = "0000000-0000-0000-0000-000000000001";
+string resourceGroupName = "myResourceGroup";
+ResourceIdentifier resourceGroupResourceId = ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName);
+ResourceGroupResource resourceGroupResource = client.GetResourceGroupResource(resourceGroupResourceId);
+
+// get the collection of this ManagedCcfResource
+ManagedCcfCollection collection = resourceGroupResource.GetManagedCcfs();
+
+// invoke the operation and iterate over the result
+await foreach (ManagedCcfResource item in collection.GetAllAsync())
+{
+ // the variable item is a resource, you could call other operations on this instance as well
+ // but just for demo, we get its data from this resource instance
+ ManagedCcfData resourceData = item.Data;
+ // for demo we just print out the id
+ Console.WriteLine($"Succeeded on id: {resourceData.Id}");
+}
+
+Console.WriteLine($"Succeeded");
+```
+
+### List the Managed CCF resources in a subscription
+
+The following piece of code retrieves the Managed CCF resources in a subscription.
+
+```csharp
+// this example assumes you already have this SubscriptionResource created on azure
+// for more information of creating SubscriptionResource, please refer to the document of SubscriptionResource
+string subscriptionId = "0000000-0000-0000-0000-000000000001";
+ResourceIdentifier subscriptionResourceId = SubscriptionResource.CreateResourceIdentifier(subscriptionId);
+SubscriptionResource subscriptionResource = client.GetSubscriptionResource(subscriptionResourceId);
+
+// invoke the operation and iterate over the result
+await foreach (ManagedCcfResource item in subscriptionResource.GetManagedCcfsAsync())
+{
+ // the variable item is a resource, you could call other operations on this instance as well
+ // but just for demo, we get its data from this resource instance
+ ManagedCcfData resourceData = item.Data;
+ // for demo we just print out the id
+ Console.WriteLine($"Succeeded on id: {resourceData.Id}");
+}
+
+Console.WriteLine($"Succeeded");
+```
+
+## Clean up resources
+
+Other Managed CCF articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you might wish to leave these resources in place.
+
+Otherwise, when you're finished with the resources created in this article, use the Azure CLI [az group delete](/cli/azure/group?#az-group-delete) command to delete the resource group and all its contained resources.
+
+```azurecli
+az group delete --resource-group myResourceGroup
+```
+
+## Next steps
+
+In this quickstart, you created a Managed CCF resource by using the Azure SDK for .NET. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [Quickstart: Azure CLI](quickstart-cli.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-portal.md
+
+ Title: Quickstart - Microsoft Azure Managed Confidential Consortium Framework with the Azure portal
+description: Learn to deploy a Microsoft Azure Managed Confidential Consortium Framework resource through the Azure portal
++ Last updated : 09/08/2023+++++
+# Quickstart: Create an Azure Managed CCF resource using the Azure portal
+
+Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md).
++
+In this quickstart, you create a Managed CCF resource with the [Azure portal](https://portal.azure.com).
+
+## Prerequisites
+
+- Install [CCF](https://microsoft.github.io/CCF/main/build_apps/install_bin.html).
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+### Register the provider
+
+Register the resource provider in your subscription using the following commands.
++
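+A minimal sketch using the Azure CLI; the provider namespace matches the one used in the REST calls elsewhere in these articles:
+
+```azurecli
+az provider register --namespace "Microsoft.ConfidentialLedger"
+az provider show --namespace "Microsoft.ConfidentialLedger" --query "registrationState"
+```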
+### Create a resource group
++
+### Create members
++
+### Create a Managed CCF resource
+
+1. From the Azure portal menu, or from the Home page, select **Create a resource**.
+
+2. In the Search box, enter "Confidential Ledger", select **Confidential Ledger** from the results, and then choose **Create**.
+
+> [!NOTE]
+> The portal URL should contain the query string `feature.Microsoft_Azure_ConfidentialLedger_managedccf=true` to turn on the Managed CCF feature.
+
+1. On the Create confidential ledger section, provide the following information:
+ - **Subscription**: Choose the desired subscription.
+ - **Resource Group**: Choose the resource group created in the previous step.
+ - **Region**: In the pull-down menu, choose a region.
+ - **Name**: Provide a unique name.
+ - **Account Type**: Choose Custom CCF Application.
+ - **Application Type**: Choose Custom JavaScript Application.
+ - **Network Node Count**: Choose the desired node count.
++
+1. Select the **Security** tab.
+
+1. You must add one or more members to the Managed CCF resource. Select **+ Add Member Identity**.
+ - **Member Identifier**: A unique member name.
+ - **Member Group**: An optional group name.
+ - **Certificate**: Paste the contents of the member0_cert.pem file.
++
+1. Select **Review + Create**. After validation has passed, select **Create**.
++
+When the deployment is complete, select **Go to resource**.
++
+Make a note of the following properties as they are required to activate the member(s).
+
+- **Application endpoint**: In the example, this endpoint is `https://confidentialbillingapp.confidential-ledger.azure.com`.
+- **Identity Service endpoint**: In the example, this endpoint is `https://identity.confidential-ledger.core.azure.com/ledgerIdentity/confidentialbillingapp`.
+
+You will need these values to transact with the confidential ledger from the data plane.
+
+### Clean up resources
+
+Other Azure Managed CCF articles build upon this quickstart. If you plan to continue on to work with subsequent articles, you might wish to leave these resources in place.
+
+When no longer needed, delete the resource group, which deletes the Managed CCF and related resources. To delete the resource group through the portal:
+
+1. Enter the name of your resource group in the Search box at the top of the portal. When you see the resource group used in this quickstart in the search results, select it.
+
+1. Select **Delete resource group**.
+
+1. In the **TYPE THE RESOURCE GROUP NAME:** box, enter the name of the resource group, and select **Delete**.
+
+## Next steps
+
+In this quickstart, you created a Managed CCF resource by using the Azure portal. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Azure CLI](quickstart-cli.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-python.md
+
+ Title: Quickstart – Azure libraries (SDK) for Python for Azure Managed Confidential Consortium Framework
+description: Learn to use the Azure libraries for Python for Azure Managed Confidential Consortium Framework
++ Last updated : 09/11/2023+++++
+# Quickstart: Create an Azure Managed CCF resource using the Azure SDK for Python
+
+Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md).
++
+[API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-confidentialledger/latest/azure.confidentialledger.html) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Python versions supported by the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python#prerequisites).
+
+## Setup
+
+This quickstart uses the Azure Identity library, along with Azure CLI or Azure PowerShell, to authenticate users to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
+
+### Sign in to Azure
++
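+
+For example, sign in with the Azure CLI and select the subscription to use (the subscription ID is a placeholder):
+
+```azurecli
+az login
+az account set --subscription "00000000-0000-0000-0000-000000000001"
+```
+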
+### Install the packages
+
+In a terminal or command prompt, create a suitable project folder, and then create and activate a Python virtual environment as described on [Use Python virtual environments](/azure/developer/python/configure-local-development-environment?tabs=cmd#use-python-virtual-environments).
+
+Install the Azure Active Directory identity client library:
+
+```terminal
+pip install azure-identity
+```
+
+Install the Azure confidential ledger management plane client library.
+
+```terminal
+pip install azure-mgmt-confidentialledger
+```
+
+### Create a resource group
++
+### Register the resource provider
++
+### Create members
++
+## Create the Python application
+
+### Use the Management plane client library
+
+The management plane library (azure.mgmt.confidentialledger) allows operations on Managed CCF resources, such as creation and deletion, listing the resources associated with a subscription, and viewing the details of a specific resource. The following piece of code creates and views the properties of a Managed CCF resource.
+
+```python
+from azure.identity import DefaultAzureCredential
+
+# Import the Azure Managed CCF management plane library
+from azure.mgmt.confidentialledger import ConfidentialLedger
+
+import os
+
+sub_id = "00000000-0000-0000-0000-000000000001"
+client = ConfidentialLedger(credential=DefaultAzureCredential(), subscription_id=sub_id)
+
+# ********** Create a Managed CCF app **********
+app_properties = {
+ "location": "southcentralus",
+ "properties": {
+ "deploymentType": {
+ "appSourceUri": "",
+ "languageRuntime": "JS"
+ },
+ "memberIdentityCertificates": [ # Multiple members can be supplied
+ {
+ "certificate": "--BEGIN CERTIFICATE--\nMIIBvzC...f0ZoeNw==\n--END CERTIFICATE--",
+ "tags": { "owner": "ITAdmin1" }
+ }
+ ],
+ "nodeCount": 3 # Maximum allowed value is 9
+ },
+ "tags": { "costcenter": "12345" }
+}
+
+result = client.managed_ccf.begin_create("myResourceGroup", "confidentialbillingapp", app_properties).result()
+
+# ********** Retrieve the Managed CCF app details **********
+confidential_billing_app = client.managed_ccf.get("myResourceGroup", "confidentialbillingapp")
+
+# ********** Delete the Managed CCF app **********
+result = client.managed_ccf.begin_delete("myResourceGroup", "confidentialbillingapp").result()
+```
+
+## Clean up resources
+
+Other Managed CCF articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you might wish to leave these resources in place.
+
+Otherwise, when you're finished with the resources created in this article, use the Azure CLI [az group delete](/cli/azure/group?#az-group-delete) command to delete the resource group and all its contained resources.
+
+```azurecli
+az group delete --resource-group myResourceGroup
+```
+
+## Next steps
+
+In this quickstart, you created a Managed CCF resource by using the Azure Python SDK for Confidential Ledger. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [Quickstart: Azure CLI](quickstart-cli.md)
+- [How to: Activate members](how-to-activate-members.md)
managed-ccf Quickstart Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-typescript.md
+
+ Title: Quickstart – Azure SDK for JavaScript and TypeScript for Azure Managed Confidential Consortium Framework
+description: Learn to use the Azure SDK for JavaScript and TypeScript library for Azure Managed Confidential Consortium Framework
++ Last updated : 09/11/2023+++++
+# Quickstart: Create an Azure Managed CCF resource using the Azure SDK for JavaScript and TypeScript
+
+Microsoft Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md).
++
+[API reference documentation](/javascript/api/overview/azure/confidential-ledger) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/confidentialledger/arm-confidentialledger) | [Package (npm)](https://www.npmjs.com/package/@azure/arm-confidentialledger)
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Node.js versions supported by the [Azure SDK for JavaScript](/javascript/api/overview/azure/arm-confidentialledger-readme#currently-supported-environments).
+
+## Setup
+
+This quickstart uses the Azure Identity library, along with Azure CLI or Azure PowerShell, to authenticate users to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
+
+### Sign in to Azure
++
+### Install the packages
+
+In a terminal or command prompt, create a suitable project folder, and then initialize a Node.js project (for example, by running `npm init -y`).
+
+Install the Azure Active Directory identity client library.
+
+```terminal
+npm install @azure/identity
+```
+
+Install the Azure Confidential Ledger management plane client library.
+
+```terminal
+npm install @azure/arm-confidentialledger@1.3.0-beta.1
+```
+
+### Create a resource group
++
+### Register the resource provider
++
+### Create members
++
+## Create the JavaScript application
+
+### Use the Management plane client library
+
+The Azure SDK for JavaScript and TypeScript library (azure/arm-confidentialledger) allows operations on Managed CCF resources, such as creation and deletion, listing the resources associated with a subscription, and viewing the details of a specific resource. The following piece of code creates and views the properties of a Managed CCF resource.
+
+```typescript
+import { ConfidentialLedgerClient, ManagedCCFProperties, ManagedCCF, KnownLanguageRuntime, DeploymentType, MemberIdentityCertificate } from "@azure/arm-confidentialledger";
+import { DefaultAzureCredential } from "@azure/identity";
+import { Console } from "console";
+
+const subscriptionId = "00000000-0000-0000-0000-000000000001"; // replace
+const rgName = "myResourceGroup";
+const ledgerId = "confidentialbillingapp";
+
+let client: ConfidentialLedgerClient;
+
+export async function main() {
+ console.log("Creating a new instance.")
+ client = new ConfidentialLedgerClient(new DefaultAzureCredential(), subscriptionId);
+
+ let properties = <ManagedCCFProperties> {
+ deploymentType: <DeploymentType> {
+ appSourceUri: "",
+ languageRuntime: KnownLanguageRuntime.JS
+ },
+ memberIdentityCertificates: [
+ <MemberIdentityCertificate>{
+ certificate: "--BEGIN CERTIFICATE--\nMIIBvjCCAUSgAwIBAg...0d71ZtULNWo\n--END CERTIFICATE--",
+ encryptionkey: "",
+ tags: {
+ "owner":"member0"
+ }
+ },
+ <MemberIdentityCertificate>{
+ certificate: "--BEGIN CERTIFICATE--\nMIIBwDCCAUagAwIBAgI...2FSyKIC+vY=\n--END CERTIFICATE--",
+ encryptionkey: "",
+ tags: {
+ "owner":"member1"
+ }
+ },
+ ],
+ nodeCount: 3,
+ };
+
+ let mccf = <ManagedCCF> {
+ location: "SouthCentralUS",
+ properties: properties,
+ }
+
+ let createResponse = await client.managedCCFOperations.beginCreateAndWait(rgName, ledgerId, mccf);
+ console.log("Created. Instance id: " + createResponse.id);
+
+ // Get details of the instance
+ console.log("Getting instance details.");
+ let getResponse = await client.managedCCFOperations.get(rgName, ledgerId);
+ console.log(getResponse.properties?.identityServiceUri);
+ console.log(getResponse.properties?.nodeCount);
+
+ // List mccf instances in the RG
+ console.log("Listing the instances in the resource group.");
+ let instancePages = await client.managedCCFOperations.listByResourceGroup(rgName).byPage();
+ for await(const page of instancePages){
+ for(const instance of page)
+ {
+ console.log(instance.name + "\t" + instance.location + "\t" + instance.properties?.nodeCount);
+ }
+ }
+
+ console.log("Delete the instance.");
+ await client.managedCCFOperations.beginDeleteAndWait(rgName, ledgerId);
+ console.log("Deleted.");
+}
+
+main().catch((err) => {
+ console.error(err);
+});
+```
+
+## Clean up resources
+
+Other Managed CCF articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you might wish to leave these resources in place.
+
+Otherwise, when you're finished with the resources created in this article, use the Azure CLI [az group delete](/cli/azure/group?#az-group-delete) command to delete the resource group and all its contained resources.
+
+```azurecli
+az group delete --resource-group myResourceGroup
+```
+
+## Next steps
+
+In this quickstart, you created a Managed CCF resource by using the Azure SDK for JavaScript and TypeScript. To learn more about Azure Managed CCF and how to integrate it with your applications, continue on to these articles:
+
+- [Azure Managed CCF overview](overview.md)
+- [Quickstart: Azure portal](quickstart-portal.md)
+- [Quickstart: Azure CLI](quickstart-cli.md)
+- [How to: Activate members](how-to-activate-members.md)
mariadb Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-access-security-private-link.md
Configure [VNet peering](../virtual-network/tutorial-connect-virtual-networks-po
### Connecting from an Azure VM in VNet-to-VNet environment
-Configure [VNet-to-VNet VPN gateway connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to a Azure Database for MariaDB from an Azure VM in a different region or subscription.
+Configure [VNet-to-VNet VPN gateway connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for MariaDB from an Azure VM in a different region or subscription.
### Connecting from an on-premises environment over VPN
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
Make sure the private endpoint is in an approved state.
3. Select the private endpoint you want to diagnose. a. Validate that the connection state is Approved. b. If the connection is in a Pending state, you need to get it approved.
- c. You may also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network.
+ c. You might also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network.
:::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png" alt-text="Screenshot of View Private Endpoint connection.":::
Review the data flow metrics to verify the traffic flow through private endpoint
## Verify DNS resolution
-The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You may require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
+The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You might require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address.
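For example, from the server hosting the appliance, you can check name resolution with `nslookup`; the storage account FQDN shown is illustrative:

```
nslookup contosomigratesa.blob.core.windows.net
```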
If the DNS resolution is incorrect, follow these steps:
## Validate the Private DNS Zone
-If the DNS resolution is not working as described in the previous section, there might be an issue with your Private DNS Zone.
+If the DNS resolution isn't working as described in the previous section, there might be an issue with your Private DNS Zone.
### Confirm that the required Private DNS Zone resource exists
-By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type. The private DNS zone will be created in the same Azure resource group as the private endpoint resource group. The Azure resource group should contain private DNS zone resources with the following format:
+By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type. The private DNS zone is created in the same Azure resource group as the private endpoint resource group. The Azure resource group should contain private DNS zone resources with the following format:
- privatelink.vaultcore.azure.net for the key vault - privatelink.blob.core.windows.net for the storage account - privatelink.siterecovery.windowsazure.com for the recovery services vault (for Hyper-V and agent-based replications)
Azure Migrate automatically creates the private DNS zone (except for the cache/r
[![DNS configuration screenshot](./media/how-to-use-azure-migrate-with-private-endpoints/dns-configuration-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/dns-configuration-expanded.png#lightbox)
-If the DNS zone is not present (as shown below), [create a new Private DNS Zone resource.](../dns/private-dns-getstarted-portal.md)
+If the DNS zone isn't present (as shown below), [create a new Private DNS Zone resource.](../dns/private-dns-getstarted-portal.md)
[![Create a Private DNS Zone](./media/how-to-use-azure-migrate-with-private-endpoints/create-dns-zone-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/create-dns-zone-expanded.png#lightbox) ### Confirm that the Private DNS Zone is linked to the virtual network
-The private DNS zone should be linked to the virtual network that contains the private endpoint for the DNS query to resolve the private IP address of the resource endpoint. If the private DNS zone is not linked to the correct Virtual Network, any DNS resolution from that virtual network will ignore the private DNS zone.
+The private DNS zone should be linked to the virtual network that contains the private endpoint for the DNS query to resolve the private IP address of the resource endpoint. If the private DNS zone isn't linked to the correct Virtual Network, any DNS resolution from that virtual network will ignore the private DNS zone.
Navigate to the private DNS zone resource in the Azure portal and select the virtual network links from the left menu. You should see the virtual networks linked. [![View virtual network links](./media/how-to-use-azure-migrate-with-private-endpoints/virtual-network-links-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/virtual-network-links-expanded.png#lightbox)
-This will show a list of links, each with the name of a virtual network in your subscription. The virtual network that contains the Private Endpoint resource must be listed here. Else, [follow this article](../dns/private-dns-getstarted-portal.md#link-the-virtual-network) to link the private DNS zone to a virtual network.
+This shows a list of links, each with the name of a virtual network in your subscription. The virtual network that contains the Private Endpoint resource must be listed here. Else, [follow this article](../dns/private-dns-getstarted-portal.md#link-the-virtual-network) to link the private DNS zone to a virtual network.
-Once the private DNS zone is linked to the virtual network, DNS requests originating from the virtual network will look for DNS records in the private DNS zone. This is required for correct address resolution to the virtual network where the private endpoint was created.
+Once the private DNS zone is linked to the virtual network, DNS requests originating from the virtual network look for DNS records in the private DNS zone. This is required for correct address resolution to the virtual network where the private endpoint was created.
### Confirm that the private DNS zone contains the right A records
An illustrative example for the Recovery Services vault microservices DNS A reco
[![DNS records for Recovery Services vault](./media/how-to-use-azure-migrate-with-private-endpoints/rsv-a-records-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/rsv-a-records-expanded.png#lightbox) >[!Note]
-> When you remove or modify an A record, the machine may still resolve to the old IP address because the TTL (Time To Live) value might not have expired yet.
+> When you remove or modify an A record, the machine might still resolve to the old IP address because the TTL (Time To Live) value might not have expired yet.
-### Items that may affect private link connectivity
+### Items that might affect private link connectivity
This is a non-exhaustive list of items that can be found in advanced or complex scenarios: - Firewall settings, either the Azure Firewall connected to the Virtual network or a custom firewall solution deploying in the appliance machine. -- Network peering, which may impact which DNS servers are used and how traffic is routed. -- Custom gateway (NAT) solutions may impact how traffic is routed, including traffic from DNS queries.
+- Network peering, which might impact which DNS servers are used and how traffic is routed.
+- Custom gateway (NAT) solutions might impact how traffic is routed, including traffic from DNS queries.
For more information, review the [troubleshooting guide for Private Endpoint connectivity problems.](../private-link/troubleshoot-private-endpoint-connectivity.md) ## Common issues while using Azure Migrate with private endpoints
-In this section, we will list some of the commonly occurring issues and suggest do-it-yourself troubleshooting steps to remediate the problem.
+In this section, we'll list some of the commonly occurring issues and suggest do-it-yourself troubleshooting steps to remediate the problem.
### Appliance registration fails with the error ForbiddenToAccessKeyVault Azure Key Vault create or update operation failed for <_KeyVaultName_> due to the error <_ErrorMessage_>
This issue can occur if the Azure account being used to register the appliance d
**Steps to troubleshoot connectivity issues to the Key Vault:** If you have enabled the appliance for private endpoint connectivity, use the following steps to troubleshoot network connectivity issues:-- Ensure that the appliance is either hosted in the same virtual network or is connected to the target Azure virtual network (where the Key Vault private endpoint has been created) over a private link. The Key Vault private endpoint will be created in the virtual network selected during the project creation experience. You can verify the virtual network details in the **Azure Migrate > Properties** page.
+- Ensure that the appliance is either hosted in the same virtual network or is connected to the target Azure virtual network (where the Key Vault private endpoint has been created) over a private link. The Key Vault private endpoint is created in the virtual network selected during the project creation experience. You can verify the virtual network details in the **Azure Migrate > Properties** page.
![Azure Migrate properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-properties-page.png) - Ensure that the appliance has network connectivity to the Key Vault over a private link. To validate the private link connectivity, perform a DNS resolution of the Key Vault resource endpoint from the on-premises server hosting the appliance and ensure that it resolves to a private IP address.
If the DNS resolution is incorrect, follow these steps:
1. If you use a custom DNS server, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration).
-1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you may need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
+1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you might need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
- - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
+ - If the proxy server is for internet connectivity, you might need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
 - Alternatively, if the proxy server is for all outbound traffic, make sure the proxy server can resolve the private link FQDNs to their respective private IP addresses. For a quick workaround, you can manually update the DNS records on the proxy server with the DNS mappings and the associated private IP addresses, as shown above. This option is recommended for testing. 1. If the issue still persists, [refer to this section](#validate-the-private-dns-zone) for further troubleshooting. After you've verified the connectivity, retry the registration process.
+### Validate private endpoint network connectivity
+You can use the `Test-NetConnection` command in PowerShell to check whether the port is reachable from the appliance to the private endpoint. Ensure that the storage account and the key vault for the Azure Migrate project resolve to their private IP addresses.
+
+![Screenshot of Vault private endpoint connectivity.](./media/troubleshoot-network-connectivity/vault-network-connectivity-test.png)
+
+![Screenshot of storage private endpoint connectivity.](./media/troubleshoot-network-connectivity/storage-network-connectivity-test.png)
+ ### Start Discovery fails with the error AgentNotConnected The appliance could not initiate discovery as the on-premises agent is unable to communicate to the Azure Migrate service endpoint: <_URLname_> in Azure.
If the DNS resolution is incorrect, follow these steps:
1. If you use a custom DNS server, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration).
-1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you may need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
+1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you might need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
- - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
+ - If the proxy server is for internet connectivity, you might need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
- Alternatively, if the proxy server is for all outbound traffic, make sure the proxy server can resolve the private link FQDNs to their respective private IP addresses. For a quick workaround, you can manually update the DNS records on the proxy server with the DNS mappings and the associated private IP addresses, as shown above. This option is recommended for testing. 1. If the issue still persists, [refer to this section](#validate-the-private-dns-zone) for further troubleshooting.
After you've verified the connectivity, retry the discovery process.
The export/import/download report request fails with the error *"403: This request is not authorized to perform this operation"* for projects with private endpoint connectivity. #### Possible causes:
-This error may occur if the export/import/download request was not initiated from an authorized network. This can happen if the import/export/download request was initiated from a client that is not connected to the Azure Migrate service (Azure virtual network) over a private network.
+This error might occur if the export/import/download request was not initiated from an authorized network. This can happen if the import/export/download request was initiated from a client that is not connected to the Azure Migrate service (Azure virtual network) over a private network.
#### Remediation **Option 1** *(recommended)*:
network-watcher Usage Scenarios Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/usage-scenarios-traffic-analytics.md
Title: Usage scenarios of traffic analytics
-description: This article describes the usage scenarios of Azure Network Watcher traffic analytics.
-
+description: Learn about Azure Network Watcher traffic analytics and the insights it can provide in different usage scenarios.
+ - Previously updated : 05/30/2022-- Last updated : 11/02/2023+
-# Usage scenarios of Azure Network Watcher traffic analytics
+# Usage scenarios of traffic analytics
-Some of the insights you might want to gain after Traffic Analytics is fully configured, are as follows:
+In this article, you learn how to get insights about your traffic after configuring traffic analytics in different scenarios.
## Find traffic hotspots
Some of the insights you might want to gain after Traffic Analytics is fully con
Select **See all** under **IP** as shown in the following image:
- ![Screenshot of dashboard showcasing host with most traffic details.](media/traffic-analytics/dashboard-showcasing-host-with-most-traffic-details.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-host-with-most-traffic-details.png" alt-text="Screenshot of dashboard showcasing host with most traffic details.":::
The following image shows time trending for the top five talking hosts and the flow-related details (allowed ΓÇô inbound/outbound and denied - inbound/outbound flows) for a host: Select **See more** under **Details of top 5 talking IPs'** as shown in the following image to get insights about all the hosts:
- ![Screenshot of top five most-talking host trends.](media/traffic-analytics/top-five-most-talking-host-trend.png)
-
+ :::image type="content" source="./media/traffic-analytics/top-five-most-talking-host-trend.png" alt-text="Screenshot of top five most-talking host trends.":::
**Look for**
Some of the insights you might want to gain after Traffic Analytics is fully con
- Are these applications allowed on this network? - Are the applications configured properly? Are they using the appropriate protocol for communication? Select **See all** under **Frequent conversation**, as show in the following image:
- ![Screenshot of dashboard showcasing most frequent conversations.](./media/traffic-analytics/dashboard-showcasing-most-frequent-conversation.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-most-frequent-conversation.png" alt-text="Screenshot of dashboard showcasing most frequent conversations.":::
- The following image shows time trending for the top five conversations and the flow-related details such as allowed and denied inbound and outbound flows for a conversation pair:
- ![Screenshot of top five chatty conversation details and trends.](./media/traffic-analytics/top-five-chatty-conversation-details-and-trend.png)
+ :::image type="content" source="./media/traffic-analytics/top-five-chatty-conversation-details-and-trend.png" alt-text="Screenshot of top five chatty conversation details and trends.":::
**Look for**
Some of the insights you might want to gain after Traffic Analytics is fully con
- Are these applications allowed on this network? - Are the applications configured properly? Are they using the appropriate protocol for communication? Expected behavior is common ports such as 80 and 443. For standard communication, if any unusual ports are displayed, they might require a configuration change. Select **See all** under **Application port**, in the following image:
- ![Screenshot of dashboard showcasing top application protocols.](./media/traffic-analytics/dashboard-showcasing-top-application-protocols.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-top-application-protocols.png" alt-text="Screenshot of dashboard showcasing top application protocols.":::
- The following images show time trending for the top five L7 protocols and the flow-related details (for example, allowed and denied flows) for an L7 protocol:
- ![Screenshot of top five layer 7 protocols details and trends.](./media/traffic-analytics/top-five-layer-seven-protocols-details-and-trend.png)
+ :::image type="content" source="./media/traffic-analytics/top-five-layer-seven-protocols-details-and-trend.png" alt-text="Screenshot of top five layer 7 protocols details and trends.":::
- ![Screenshot of the flow details for application protocol in log search.](./media/traffic-analytics/flow-details-for-application-protocol-in-log-search.png)
+ :::image type="content" source="./media/traffic-analytics/flow-details-for-application-protocol-in-log-search.png" alt-text="Screenshot of the flow details for application protocol in log search.":::
**Look for**
Some of the insights you might want to gain after Traffic Analytics is fully con
- Which are the most conversing hosts, via which VPN gateway, over which port? - Is this pattern normal? Select **See all** under **VPN gateway**, as shown in the following image:
- ![Screenshot of dashboard showcasing top active V P N connections.](./media/traffic-analytics/dashboard-showcasing-top-active-vpn-connections.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-top-active-vpn-connections.png" alt-text="Screenshot of dashboard showcasing top active VPN connections.":::
- The following image shows time trending for capacity utilization of an Azure VPN Gateway and the flow-related details (such as allowed flows and ports):
- ![Screenshot of V P N gateway utilization trend and flow details.](./media/traffic-analytics/vpn-gateway-utilization-trend-and-flow-details.png)
+ :::image type="content" source="./media/traffic-analytics/vpn-gateway-utilization-trend-and-flow-details.png" alt-text="Screenshot of VPN gateway utilization trend and flow details.":::
## Visualize traffic distribution by geography
Some of the insights you might want to gain after Traffic Analytics is fully con
Select **View map** under **Your environment**, as shown in the following image:
- ![Screenshot of dashboard showcasing traffic distribution.](./media/traffic-analytics/dashboard-showcasing-traffic-distribution.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-traffic-distribution.png" alt-text="Screenshot of dashboard showcasing traffic distribution.":::
- The geo-map shows the top ribbon for selection of parameters such as data centers (Deployed/No-deployment/Active/Inactive/Traffic Analytics Enabled/Traffic Analytics Not Enabled) and countries/regions contributing Benign/Malicious traffic to the active deployment:
- ![Screenshot of geo map view showcasing active deployment.](./media/traffic-analytics/geo-map-view-showcasing-active-deployment.png)
+ :::image type="content" source="./media/traffic-analytics/geo-map-view-showcasing-active-deployment.png" alt-text="Screenshot of geo map view showcasing active deployment.":::
- The geo-map shows the traffic distribution to a data center from countries/regions and continents communicating to it in blue (Benign traffic) and red (malicious traffic) colored lines:
- ![Screenshot of geo map view showcasing traffic distribution to countries/regions and continents.](./media/traffic-analytics/geo-map-view-showcasing-traffic-distribution-to-countries-and-continents.png)
+ :::image type="content" source="./media/traffic-analytics/geo-map-view-showcasing-traffic-distribution-to-countries-and-continents.png" alt-text="Screenshot of geo map view showcasing traffic distribution to countries/regions and continents.":::
- ![Screenshot of flow details for traffic distribution in log search.](./media/traffic-analytics/flow-details-for-traffic-distribution-in-log-search.png)
+ :::image type="content" source="./media/traffic-analytics/flow-details-for-traffic-distribution-in-log-search.png" alt-text="Screenshot of flow details for traffic distribution in log search.":::
- The **More Insight** blade of an Azure region also shows the total traffic remaining inside that region (that is, source and destination in same region). It further gives insights about traffic exchanged between availability zones of a datacenter
- ![Screenshot of Inter Zone and Intra region traffic.](./media/traffic-analytics/inter-zone-and-intra-region-traffic.png)
+ :::image type="content" source="./media/traffic-analytics/inter-zone-and-intra-region-traffic.png" alt-text="Screenshot of Inter Zone and Intra region traffic.":::
## Visualize traffic distribution by virtual networks **Look for** - Traffic distribution per virtual network, topology, top sources of traffic to the virtual network, top rogue networks conversing to the virtual network, and top conversing application protocols.
- - Knowing which virtual network is conversing to which virtual network. If the conversation is not expected, it can be corrected.
+ - Knowing which virtual network is conversing to which virtual network. If the conversation isn't expected, it can be corrected.
- If rogue networks are conversing with a virtual network, you can correct NSG rules to block the rogue networks. Select **View VNets** under **Your environment** as shown in the following image:
- ![Screenshot of dashboard showcasing virtual network distribution.](./media/traffic-analytics/dashboard-showcasing-virtual-network-distribution.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-virtual-network-distribution.png" alt-text="Screenshot of dashboard showcasing virtual network distribution.":::
- The Virtual Network Topology shows the top ribbon for selection of parameters like a virtual network's (Inter virtual network Connections/Active/Inactive), External Connections, Active Flows, and Malicious flows of the virtual network. - You can filter the Virtual Network Topology based on subscriptions, workspaces, resource groups and time interval. Extra filters that help you understand the flow are:
Some of the insights you might want to gain after Traffic Analytics is fully con
- You can zoom-in and zoom-out while viewing Virtual Network Topology using mouse scroll wheel. Left-click and moving the mouse lets you drag the topology in desired direction. You can also use keyboard shortcuts to achieve these actions: A (to drag left), D (to drag right), W (to drag up), S (to drag down), + (to zoom in), - (to zoom out), R (to zoom reset). - The Virtual Network Topology shows the traffic distribution to a virtual network to flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and network security groups, for example:
- ![Screenshot of virtual network topology showcasing traffic distribution and flow details.](./media/traffic-analytics/virtual-network-topology-showcasing-traffic-distribution-and-flow-details.png)
+ :::image type="content" source="./media/traffic-analytics/virtual-network-topology-showcasing-traffic-distribution-and-flow-details.png" alt-text="Screenshot of virtual network topology showcasing traffic distribution and flow details.":::
- ![Screenshot of virtual network topology showcasing top level and more filters.](./media/traffic-analytics/virtual-network-filters.png)
+ :::image type="content" source="./media/traffic-analytics/virtual-network-filters.png" alt-text="Screenshot of virtual network topology showcasing top level and more filters.":::
- ![Screenshot of flow details for virtual network traffic distribution in log search.](./media/traffic-analytics/flow-details-for-virtual-network-traffic-distribution-in-log-search.png)
+ :::image type="content" source="./media/traffic-analytics/flow-details-for-virtual-network-traffic-distribution-in-log-search.png" alt-text="Screenshot of flow details for virtual network traffic distribution in log search.":::
**Look for** - Traffic distribution per subnet, topology, top sources of traffic to the subnet, top rogue networks conversing to the subnet, and top conversing application protocols. - Knowing which subnet is conversing to which subnet. If you see unexpected conversations, you can correct your configuration.
- - If rogue networks are conversing with a subnet, you are able to correct it by configuring NSG rules to block the rogue networks.
+ - If rogue networks are conversing with a subnet, you're able to correct it by configuring NSG rules to block the rogue networks.
- The Subnets Topology shows the top ribbon for selection of parameters such as Active/Inactive subnet, External Connections, Active Flows, and Malicious flows of the subnet. - You can zoom-in and zoom-out while viewing Virtual Network Topology using mouse scroll wheel. Left-click and moving the mouse lets you drag the topology in desired direction. You can also use keyboard shortcuts to achieve these actions: A (to drag left), D (to drag right), W (to drag up), S (to drag down), + (to zoom in), - (to zoom out), R (to zoom reset). - The Subnet Topology shows the traffic distribution to a virtual network regarding flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and NSGs, for example:
- ![Screenshot of subnet topology showcasing traffic distribution a virtual network subnet with regards to flows.](./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-virtual-subnet-with-regards-to-flows.png)
+ :::image type="content" source="./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-virtual-subnet-with-regards-to-flows.png" alt-text="Screenshot of subnet topology showcasing traffic distribution to a virtual network subnet with regards to flows.":::
**Look for** Traffic distribution per Application gateway & Load Balancer, topology, top sources of traffic, top rogue networks conversing to the Application gateway & Load Balancer, and top conversing application protocols. - Knowing which subnet is conversing to which Application gateway or Load Balancer. If you observe unexpected conversations, you can correct your configuration.
+ - If rogue networks are conversing with an Application gateway or Load Balancer, you're able to correct it by configuring NSG rules to block the rogue networks.
- ![Screenshot shows a subnet topology with traffic distribution to an application gateway subnet regarding flows.](./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-application-gateway-subnet-with-regards-to-flows.png)
+ :::image type="content" source="./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-application-gateway-subnet-with-regards-to-flows.png" alt-text="Screenshot shows a subnet topology with traffic distribution to an application gateway subnet regarding flows.":::
## View ports and virtual machines receiving traffic from the internet
Traffic distribution per Application gateway & Load Balancer, topology, top sour
- Which open ports are conversing over the internet? - If unexpected ports are found open, you can correct your configuration:
- ![Screenshot of dashboard showcasing ports receiving and sending traffic to the internet.](./media/traffic-analytics/dashboard-showcasing-ports-receiving-and-sending-traffic-to-the-internet.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-ports-receiving-and-sending-traffic-to-the-internet.png" alt-text="Screenshot of dashboard showcasing ports receiving and sending traffic to the internet.":::
- ![Screenshot of Azure destination ports and hosts details.](./media/traffic-analytics/details-of-azure-destination-ports-and-hosts.png)
+ :::image type="content" source="./media/traffic-analytics/details-of-azure-destination-ports-and-hosts.png" alt-text="Screenshot of Azure destination ports and hosts details.":::
**Look for** Do you have malicious traffic in your environment? Where is it originating from? Where is it destined to?
-![Screenshot of malicious traffic flows detail in log search.](./media/traffic-analytics/malicious-traffic-flows-detail-in-log-search.png)
## View information about public IPs' interacting with your deployment
Do you have malicious traffic in your environment? Where is it originating from?
- The Public IP Information section, gives a summary of all types of public IPs' present in your network traffic. Select the public IP type of interest to view details. This [schema document](./traffic-analytics-schema.md#public-ip-details-schema) defines the data fields presented.
- :::image type="content" source="./media/traffic-analytics/public-ip-information.png" alt-text="Screenshot that displays the public I P information." lightbox="./media/traffic-analytics/public-ip-information.png":::
+ :::image type="content" source="./media/traffic-analytics/public-ip-information.png" alt-text="Screenshot that displays the public IP information." lightbox="./media/traffic-analytics/public-ip-information.png":::
- - On the traffic analytics dashboard, click on any IP to view its information
+ - On the traffic analytics dashboard, select any IP to view its information
- :::image type="content" source="./media/traffic-analytics/external-public-ip-details.png" alt-text="Screenshot that displays the external I P information in tool tip." lightbox="./media/traffic-analytics/external-public-ip-details.png":::
+ :::image type="content" source="./media/traffic-analytics/external-public-ip-details.png" alt-text="Screenshot that displays the external IP information in tool tip." lightbox="./media/traffic-analytics/external-public-ip-details.png":::
- :::image type="content" source="./media/traffic-analytics/malicious-ip-details.png" alt-text="Screenshot that displays the malicious I P information in tool tip." lightbox="./media/traffic-analytics/malicious-ip-details.png":::
+ :::image type="content" source="./media/traffic-analytics/malicious-ip-details.png" alt-text="Screenshot that displays the malicious IP information in tool tip." lightbox="./media/traffic-analytics/malicious-ip-details.png":::
## Visualize the trends in NSG/NSG rules hits
Do you have malicious traffic in your environment? Where is it originating from?
- Which NSG/NSG rules have the most hits in comparative chart with flows distribution? - What are the top source and destination conversation pairs per NSG/NSG rules?
- ![Screenshot of dashboard showcasing N S G hits statistics.](./media/traffic-analytics/dashboard-showcasing-nsg-hits-statistics.png)
+ :::image type="content" source="./media/traffic-analytics/dashboard-showcasing-nsg-hits-statistics.png" alt-text="Screenshot of dashboard showcasing NSG hits statistics.":::
- The following images show time trending for hits of NSG rules and source-destination flow details for a network security group:
Do you have malicious traffic in your environment? Where is it originating from?
- Identify which NSG/NSG rules are allowing/blocking significant network traffic - Select top filters for granular inspection of an NSG or NSG rules
- ![Screenshot showcasing time trending for N S G rule hits and top N S G rules.](./media/traffic-analytics/showcasing-time-trending-for-nsg-rule-hits-and-top-nsg-rules.png)
+ :::image type="content" source="./media/traffic-analytics/showcasing-time-trending-for-nsg-rule-hits-and-top-nsg-rules.png" alt-text="Screenshot showcasing time trending for NSG rule hits and top NSG rules.":::
- ![Screenshot of top N S G rules statistics details in log search.](./media/traffic-analytics/top-nsg-rules-statistics-details-in-log-search.png)
+   :::image type="content" source="./media/traffic-analytics/top-nsg-rules-statistics-details-in-log-search.png" alt-text="Screenshot of top NSG rules statistics details in log search.":::
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Previously updated : 10/26/2023 Last updated : 11/01/2023 #Customer intent: I need to understand the Azure Red Hat OpenShift support policies for OpenShift 4.0.
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't circumvent the deny assignment that is configured as part of the service, or perform administrative tasks that are normally prohibited by the deny assignment. * OpenShift relies on the ability to automatically tag Azure resources. If you have configured a tagging policy, do not apply more than 10 user-defined tags to resources in the managed resource group. +
+## Incident management
+
+An incident is an event that results in a degradation or outage of Azure Red Hat OpenShift services. An incident can be raised by a customer or a Customer Experience and Engagement (CEE) member through a [support case](openshift-service-definitions.md#support), directly by the centralized monitoring and alerting system, or directly by a member of the ARO Site Reliability Engineer (SRE) team.
+
+Depending on the impact on the service and customer, the incident is categorized in terms of severity.
+
+The general workflow of how a new incident is managed is described below:
+
+1. An SRE first responder is alerted to a new incident and begins an initial investigation.
+
+1. After the initial investigation, the incident is assigned an incident lead, who coordinates the recovery efforts.
+
+1. The incident lead manages all communication and coordination around recovery, including any relevant notifications or support case updates.
+
+1. The incident is recovered.
+
+1. The incident is documented and a root cause analysis (RCA) is performed within 5 business days of the incident.
+
+1. An RCA draft document is shared with the customer within 7 business days of the incident.
+ ## Supported virtual machine sizes Azure Red Hat OpenShift 4 supports node instances on the following virtual machine sizes:
operator-insights How To Install Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md
+
+ Title: Create and configure MCC EDR Ingestion Agents
+description: Learn how to create and configure MCC EDR Ingestion Agents for Azure Operator Insights
++++ Last updated : 10/31/2023++
+# Create and configure MCC EDR Ingestion Agents for Azure Operator Insights
+
+The MCC EDR agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. The agent receives EDRs from an Affirmed MCC, and forwards them to Azure Operator Insights. 
+
+## Prerequisites
+
+- You must have an Affirmed Networks MCC deployment that generates EDRs.
+- You must have an Azure Operator Insights MCC Data product deployment.
+- You must provide VMs with the following specifications to run the agent:
+ - OS - Red Hat Enterprise Linux 8.6 or later
+ - Minimum hardware - 4 vCPU,  8-GB RAM, 30-GB disk
+ - Network - connectivity from MCCs and to Azure
+ - Software - systemd and logrotate installed
+ - SSH or alternative access to run shell commands
+ - (Preferable) Ability to resolve public DNS.  If not, you need to perform additional manual steps to resolve Azure locations. Refer to [Running without public DNS](#running-without-public-dns) for instructions.
+
+The number of VMs needed depends on the scale and redundancy characteristics of your deployment. Each agent instance must run on its own VM. Talk to the Affirmed Support Team to determine your requirements.
+
+## Deploy the agent on your VMs
+
+To deploy the agent on your VMs, follow the procedures outlined in the following sections.
+
+### Authentication
+
+You must have a service principal with a certificate credential that can access the Azure Key Vault created by the Data Product to retrieve storage credentials. Each agent must also have a copy of a valid certificate and private key for the service principal stored on this virtual machine.
+
+#### Create a service principal
+
+> [!IMPORTANT]
+> You may need a Microsoft Entra tenant administrator in your organization to perform this set up for you.
+
+1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal).
+1. Note the Application (client) ID, and your Microsoft Entra Directory (tenant) ID (these IDs are UUIDs of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, where each character is a hexadecimal digit).
+
+#### Prepare certificates
+
+It's up to you whether you use the same certificate and key for each VM, or a unique certificate and key for each. Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds maintenance and operational complexity.
+
+1. Obtain a certificate. We strongly recommend using trusted certificate(s) from a certificate authority.
+1. Add the certificate(s) as credential(s) to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal).
+1. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key vault.  Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire.  Once a certificate has expired, the agent is unable to authenticate to Azure and no longer uploads data.  For details of this approach see [Renew your Azure Key Vault certificates Azure portal](../key-vault/certificates/overview-renew-certificate.md).
+
+1. Ensure the certificate(s) are available in pkcs12 format, with no passphrase protecting them. On Linux, you can convert a certificate and key from PEM format using openssl:
+
+    `openssl pkcs12 -nodes -export -in $certificate_pem_filename -inkey $key_pem_filename -out $pkcs12_filename`
+
+5. Ensure the certificate(s) are base64 encoded. On Linux, you can base64-encode a pkcs12-formatted certificate by using the following command:
+
+    `base64 -w 0 $pkcs12_filename > $base64filename`
+
+#### Permissions
+
+1. Find the Azure Key Vault that holds the storage credentials for the input storage account. This key vault is in a resource group named *\<data-product-name\>-HostedResources-\<unique-id\>*.
+1. Grant your service principal the Key Vault Secrets User role on this Key Vault. You need Owner level permissions on your Azure subscription. See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure, or use the CLI sketch after this list.
+1. Note the name of the Key Vault.
+
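+As an alternative to the portal, the same role assignment might be made with the Azure CLI along these lines (the application ID and Key Vault name are placeholders):
+
+```azurecli
+az role assignment create \
+    --assignee "<application-client-id>" \
+    --role "Key Vault Secrets User" \
+    --scope "$(az keyvault show --name <data-product-key-vault-name> --query id --output tsv)"
+```
+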
+### Prepare the VMs
+
+Repeat these steps for each VM onto which you want to install the agent:
+
+1. Ensure you have an SSH session open to the VM, and that you have `sudo` permissions.
+1. Verify that the VM has the following ports open:
+ - Port 36001/TCP inbound from the MCCs
+ - Port 443/TCP outbound to Azure
+
+ These ports must be open both in cloud network security groups and in any firewall running on the VM itself (such as firewalld or iptables).
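+
+    For example, if the VM runs firewalld, a sketch for opening the inbound EDR port is (adapt to your firewall solution):
+
+    ```bash
+    sudo firewall-cmd --permanent --add-port=36001/tcp
+    sudo firewall-cmd --reload
+    ```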
+1. Install systemd, logrotate and zip on the VM, if not already present.
+1. Obtain the ingestion agent RPM and copy it to the VM.
+1. Copy the pkcs12-formatted base64-encoded certificate (created in the [Prepare certificates](#prepare-certificates) step) to an accessible location on the VM (such as /etc/az-mcc-edr-uploader).
+
+### Running without public DNS
+
+If your agent VMs don't have access to public DNS, then you need to add entries on each agent VM to map the Azure host names to IP addresses.
+
+This process assumes that you're connecting to Azure over ExpressRoute and are using Private Links and/or Service Endpoints. If you're connecting over public IP addressing, you **cannot** use this workaround and must use public DNS.
+
+Create the following from a virtual network that is peered to your ingestion agents:
+
+- A Service Endpoint to Azure Storage
+- A Private Link or Service Endpoint to the Key Vault created by your Data Product. This is the Key Vault whose name you noted in the [Permissions](#permissions) section.
+
+Steps:
+
+1. Note the IPs of these two connections.
+1. Note the domain of your Azure Operator Insights Input Storage Account.  You can find the domain on your Data Product overview page in the Azure portal, in the form of *\<account name\>.blob.core.windows.net*
+1. Note the domain of the Key Vault. The domain appears as *\<vault name\>.vault.azure.net*
+1. Add a line to */etc/hosts* on the VM linking the two values in this format, for each of the storage and Key Vault:
+
+ *\<Storage private IP\>*   *\<Storage hostname\>*
+
+ *\<Key Vault private IP\>*  *\<Key Vault hostname\>*
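+
+For example, you could append the entries with `tee`. The IP addresses and hostnames below are hypothetical; replace them with the values you noted:
+
+```bash
+# Map the private endpoint/service endpoint IPs to the Azure hostnames.
+echo "10.0.0.4  contosoaoiinput.blob.core.windows.net" | sudo tee -a /etc/hosts
+echo "10.0.0.5  contoso-dp-kv.vault.azure.net" | sudo tee -a /etc/hosts
+```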
+
+### Install agent software
+
+Repeat these steps for each VM onto which you want to install the agent:
+
+1. In an SSH session, change to the directory where the RPM was copied.
+1. Install the RPM: `sudo dnf install *.rpm`. Answer 'y' when prompted. If there are any missing dependencies, the RPM isn't installed.
+1. Change to the configuration directory: `cd /etc/az-mcc-edr-uploader`
+1. Make a copy of the default configuration file: `sudo cp example_config.yaml config.yaml`
+1. Edit the *config.yaml* and fill out the fields.  Most of them are set to default values and do not require input.  The full reference for each parameter is described in [MCC EDR Ingestion Agents configuration reference](mcc-edr-agent-configuration.md). The following parameters must be set:
+
+ 1. **site\_id** should be changed to a unique identifier for your on-premises site – for example, the name of the city or state for this site.  This name becomes searchable metadata in Operator Insights for all EDRs from this agent. 
+ 1. **agent\_id** should be a unique identifier for this agent – for example, the VM hostname.
+ 1. **secret\_providers\[0\].provider.vault\_name** must be the name of the Key Vault for your Data Product.
+ 1. **secret\_providers\[0\].provider.auth** must be filled out with:
+
+ 1. **tenant\_id** as your Microsoft Entra ID tenant.
+
+ 2. **identity\_name** as the application ID of your service principal.
+
+ 3. **cert\_path** as the path on disk to the location of the base64-encoded certificate and private key for the service principal to authenticate with.
+
+ 1. **sink.container\_name** *must be left as "edr".*
+
+1. Start the agent: `sudo systemctl start az-mcc-edr-uploader`
+
+1. Check that the agent is running: `sudo systemctl status az-mcc-edr-uploader`
+
+ 1. If you see any status other than "active (running)", look at the logs as described in the [Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights](troubleshoot-mcc-edr-agent.md) article to understand the error.  It's likely that some configuration is incorrect.
+
+ 2. Once you resolve the issue,  attempt to start the agent again.
+
+ 3. If issues persist, raise a support ticket.
+
+1. Once the agent is running, ensure it will automatically start on a reboot: `sudo systemctl enable az-mcc-edr-uploader.service`
+
+1. Save a copy of the delivered RPM – you'll need it to reinstall or to back out any future upgrades.
+
+### Configure Affirmed MCCs
+
+Once the agents are installed and running, configure the MCCs to send EDRs to them.
+
+1. Follow the steps under "Generating SESSION, BEARER, FLOW, and HTTP Transaction EDRs" in the [Affirmed Networks Active Intelligent vProbe System Administration Guide](https://manuals.metaswitch.com/vProbe/13.1/vProbe_System_Admin/Content/02%20AI-vProbe%20Configuration/Generating_SESSION__BEARER__FLOW__and_HTTP_Transac.htm) (1), making the following changes:
+
+ - Replace the IP addresses of the MSFs in MCC configuration with the IP addresses of the VMs running the ingestion agents.
+
+ - Confirm that the following EDR server parameters are set:
+
+ - port: 36001
+ - encoding: protobuf
+ - keep-alive: 2 seconds
+
+## Important considerations
+
+### Security
+
+The VM used for the MCC EDR agent should be set up following best practice for security. For example:
+
+- Networking - Only allow network traffic on the ports that are required to run the agent and maintain the VM.
+
+- OS version - Keep the OS version up-to-date to avoid known vulnerabilities.
+
+- Access - Limit access to the VM to a minimal set of users, and set up audit logging for their actions. For the MCC EDR agent, we recommend that the following are restricted:
+
+ - Admin access to the VM (for example, to stop/start/install the MCC EDR software)
+
+ - Access to the directory where the logs are stored *(/var/log/az-mcc-edr-uploader/)*
+
+ - Access to the certificate and private key for the service principal
+
+### Deploying for fault tolerance
+
+The MCC EDR agent is designed to be highly reliable and resilient to low levels of network disruption. If an unexpected error occurs, the agent restarts and provides service again as soon as it's running.
+
+The agent doesn't buffer data, so if a persistent error or extended connectivity problems occur, EDRs are dropped.
+
+For additional fault tolerance, you can deploy multiple instances of the MCC EDR agent and configure the MCC to switch to a different instance if the original instance becomes unresponsive, or to share EDR traffic across a pool of agents. For more information, refer to the [Affirmed Networks Active Intelligent vProbe System Administration Guide](https://manuals.metaswitch.com/vProbe/13.1/vProbe_System_Admin/Content/02%20AI-vProbe%20Configuration/Generating_SESSION__BEARER__FLOW__and_HTTP_Transac.htm) (2) or speak to the Affirmed Networks Support Team.
+
+## Related content
+
+[Manage MCC EDR Ingestion Agents for Azure Operator Insights](how-to-manage-mcc-edr-agent.md)
+
+[Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights](troubleshoot-mcc-edr-agent.md)
+
+[1] Only accessible for customers with Affirmed Support
+
+[2] Only accessible for customers with Affirmed Support
operator-insights How To Manage Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-mcc-edr-agent.md
+
+ Title: Manage MCC EDR Ingestion Agents for Azure Operator Insights
+description: Learn how to upgrade, update, roll back and manage MCC EDR Ingestion agents for AOI
++++ Last updated : 11/02/2023++
+# Manage MCC EDR Ingestion Agents for Azure Operator Insights
+
+> [!WARNING]
+> When the agent is restarted, a small number of EDRs being handled may be dropped.  It is not possible to gracefully restart without dropping any data.  For safety, update agents one at a time, only updating the next when you are sure the previous was successful.
+
+## Agent software upgrade
+
+To upgrade to a new release of the agent, repeat the following steps on each VM that has the old agent.
+
+> [!WARNING]
+> When the agent restarts, a small number of EDRs being handled may be dropped.  It is not possible to gracefully upgrade without dropping any data.  For safety, upgrade agents one at a time, only upgrading the next when you are sure the previous was successful.
+
+1. Copy the RPM to the VM.  In an SSH session, change to the directory where the RPM was copied.
+
+1. Save a copy of the existing */etc/az-mcc-edr-uploader/config.yaml* configuration file.
+
+1. Upgrade the RPM: `sudo dnf install *.rpm`. Answer 'y' when prompted.
+
+1. Create a new config file based on the new sample, keeping values from the original. Follow specific instructions in the release notes for the upgrade to ensure the new configuration is generated correctly.
+
+1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
+
+1. Once the agent is running, make sure it will automatically start on a reboot: `sudo systemctl enable az-mcc-edr-uploader.service`
+1. Verify that the agent is running and that EDRs are being routed to it as described in [Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights](troubleshoot-mcc-edr-agent.md).
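+
+As a consolidated sketch, the upgrade on one VM might look like the following. The RPM file name is illustrative, and you must still merge your saved configuration into the new example file as described above:
+
+```bash
+# Back up the current configuration, install the new RPM, then restart and re-enable the agent.
+sudo cp /etc/az-mcc-edr-uploader/config.yaml ~/config.yaml.backup
+sudo dnf install ./az-mcc-edr-uploader-<new-version>.rpm
+sudo systemctl restart az-mcc-edr-uploader.service
+sudo systemctl enable az-mcc-edr-uploader.service
+sudo systemctl status az-mcc-edr-uploader.service
+```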
+
+### Agent configuration update
+
+> [!WARNING]
+> Changing the configuration requires restarting the agent, whereupon a small number of EDRs being handled may be dropped.  It is not possible to gracefully restart without dropping any data.  For safety, update agents one at a time, only updating the next when you are sure the previous was successful.
+
+If you need to change the agent's configuration, perform the following steps:
+
+1. Save a copy of the original configuration file */etc/az-mcc-edr-uploader/config.yaml*
+
+1. Edit the configuration file to change the config values.  
+
+1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
+
+### Rollback
+
+If an upgrade or configuration change fails:
+
+1. Copy the backed-up configuration file from before the change to the */etc/az-mcc-edr-uploader/config.yaml* file.
+
+1. If a software upgrade failed, downgrade back to the original RPM (see the sketch after this list).
+
+1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
+
+1. If this was a software upgrade, make sure it will automatically start on a reboot: `sudo systemctl enable az-mcc-edr-uploader.service`
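+
+One possible way to revert to the previously delivered RPM and restore the saved configuration is sketched below. The file names are illustrative; use the RPM you saved at install time and the configuration backup you took before the change:
+
+```bash
+# Downgrade to the saved RPM, restore the backed-up configuration, and restart the agent.
+sudo rpm -Uvh --oldpackage ./az-mcc-edr-uploader-<previous-version>.rpm
+sudo cp ~/config.yaml.backup /etc/az-mcc-edr-uploader/config.yaml
+sudo systemctl restart az-mcc-edr-uploader.service
+```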
+
+## Certificate rotation
+
+You must refresh your service principal credentials before they expire.
+
+To do so:
+
+1. Create a new certificate, and add it to the service principal. For instructions, refer to [Upload a trusted certificate issued by a certificate authority](/entra/identity-platform/howto-create-service-principal-portal).
+
+1. Obtain the new certificate and private key in the base64-encoded PKCS12 format, as described in [Create and configure MCC EDR Ingestion Agents for Azure Operator Insights](how-to-install-mcc-edr-agent.md).
+
+1. Copy the certificate to the ingestion agent VM.
+
+1. Save the existing certificate file and replace it with the new certificate file (a sketch follows this list).
+
+1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
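+
+For example, the swap might look like the following sketch. The file names are hypothetical; use the path configured as **cert\_path** in */etc/az-mcc-edr-uploader/config.yaml*:
+
+```bash
+# Back up the old certificate, replace it with the new one, then restart the agent.
+sudo cp /etc/az-mcc-edr-uploader/certkey.pkcs /etc/az-mcc-edr-uploader/certkey.pkcs.bak
+sudo cp ~/new-certkey.pkcs /etc/az-mcc-edr-uploader/certkey.pkcs
+sudo systemctl restart az-mcc-edr-uploader.service
+```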
operator-insights Mcc Edr Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/mcc-edr-agent-configuration.md
+
+ Title: MCC EDR Ingestion Agents configuration reference for Azure Operator Insights
+description: This article documents the complete set of configuration for the agent, listing all fields with examples and explanatory comments.
+++ Last updated : 11/02/2023+++
+# MCC EDR Ingestion Agents configuration reference
+
+This reference provides the complete set of configuration for the agent, listing all fields with examples and explanatory comments.
+
+```yaml
+# The name of the site this agent lives in
+site_id: london-lab01
+# The identifier for this agent
+agent_id: mcc-edr-agent01
+# Config for secrets providers. We currently support reading secrets from Azure Key Vault and from the local filesystem.
+# Multiple secret providers can be defined and each must be given
+# a unique name.
+# The name can then be referenced for secrets later in the config.
+secret_providers:
+  - name: dp_keyvault
+    provider:
+      type: key_vault
+      vault_name: contoso-dp-kv
+      auth:
+        tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
+        identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
+        cert_path: /path/to/local/certkey.pkcs
+# Source configuration. This controls how EDRs are ingested from
+# MCC.
+source:
+ # The TCP port to listen on. Must match the port MCC is
+ # configured to send to.
+ listen_port: 36001
+ # The maximum amount of data to buffer in memory before uploading.
+ message_queue_capacity_in_bytes: 33554432
+ # The maximum size of a single blob (file) to store in the input
+ # storage account in Azure.
+ maximum_blob_size_in_bytes: 134217728
+ # Quick check on the maximum RAM that the agent should use.
+ # This is a guide to check the other tuning parameters, rather
+ # than a hard limit.
+ maximum_overall_capacity_in_bytes: 1275068416
+ # The maximum time to wait when no data is received before
+ # uploading pending batched data to Azure.
+ blob_rollover_period_in_seconds: 300
+sink:
+ # The container within the ingestion account. This *must* be in
+ # the format Azure Operator Insights expects. Do not adjust
+ # without consulting your support representative.
+ container_name: edrs
+ auth:
+ type: sas_token
+ # This must reference a secret provider configured above.
+ secret_provider: dp_keyvault
+ # How often to check for a new ADLS token
+ cache_period_hours: 12
+ # The name of a secret in the corresponding provider.
+ # This will be the name of a secret in the Key Vault.
+ # This is created by the Data Product and should not be changed.
+ secret_name: adls-sas-token
+ # The maximum size of each block that is uploaded to Azure.
+ # Each blob is composed of one or more blocks.
+ block_size_in_bytes: 33554432
+```
operator-insights Purview Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/purview-setup.md
 Title: Use Microsoft Purview with an Azure Operator Insights Data Product description: In this article, learn how to set up Microsoft Purview to explore an Azure Operator Insights Data Product.--++ Previously updated : 10/24/2023 Last updated : 11/02/2023 # Use Microsoft Purview with an Azure Operator Insights Data Product
You can access your Purview account through the Azure portal by going to `https:
To begin to catalog a data product in this account, [create a collection](../purview/how-to-create-and-manage-collections.md) to hold the Data Product.
+Provide your user-assigned managed identity (UAMI) with the necessary roles in the Microsoft Purview compliance portal. The UAMI you enter is the one that was set up when creating an AOI Data Product. For information on how to set up this UAMI, refer to [Set up user-assigned managed identity](data-product-create.md#set-up-user-assigned-managed-identity). At the desired collection, assign this UAMI to the **Collection admin**, **Data source admin**, and **Data curator** roles. Alternatively, you can apply the UAMI at the root collection/account level; all collections then inherit these role assignments by default.
++ Assign roles to your users using effective role-based access control (RBAC). There are multiple roles that can be assigned, and assignments can be done on an account root and collection level. For more information, see how to [add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections). [Using the Microsoft Purview compliance portal](../purview/use-microsoft-purview-governance-portal.md) explains how to use the user interface and navigate the service. Microsoft Purview includes options to scan in data sources, but this option isn't required for integrating Azure Operator Insights Data Products with Microsoft Purview. When you complete this procedure, all Azure services and assets are automatically populated to your Purview catalog.
Assign roles to your users using effective role-based access control (RBAC). The
When creating an Azure Operator Insights Data Product, select the **Advanced** tab and enable Purview. Select **Select Purview Account** to provide the required values to populate a Purview collection with data product details. - **Purview account name** - When you select your subscription, all Purview accounts in that subscription are available. Select the account you created. - **Purview collection ID** - The five-character ID visible in the URL of the Purview collection. To find the ID, select your collection and the collection ID is the five characters following `?collection=` in the URL. In the following example, the Investment collection has the collection ID *50h55*. ### Data Product representation in Microsoft Purview
There are relationships between assets where necessary. For example, a Data Prod
When the Data Product creation process is complete, you can see the catalog details of your Data Product in the collection. Select **Data map > Collections** from the left pane and select your collection. > [!NOTE] > The Microsoft Purview integration with Azure Operator Insights Data Products only features the Data catalog and Data map of the Purview portal.
Select **Assets** to view the data product catalog and to list all assets of you
Select **Assets** to view the asset catalog of your data product. You can filter by the data source type for the asset type. For each asset, you can display properties, a list of owners (if applicable), and the related assets. When viewing all assets, filtering by data source type is helpful.
When viewing all assets, filtering by data source type is helpful.
When looking at individual assets, select the **Properties** tab to display properties and related assets for that asset. You can use the Properties tab to find endpoints in AOI Database and AOI Tables.
You can use the Properties tab to find endpoints in AOI Database and AOI Tables.
Select the **Related** tab of an asset to display a visual representation of the existing relationships, summarized and grouped by the asset types. Select an asset type (such as aoi\_database as shown in the example) to view a list of related assets.
Select an asset type (such as aoi\_database as shown in the example) to view a l
The AOI Table and AOI Parquet Details have schemas. Select the **Schema** tab to display the details of each column.
-## Related Content
+## Related content
[Use the Microsoft Purview compliance portal](../purview/use-microsoft-purview-governance-portal.md)
operator-insights Troubleshoot Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/troubleshoot-mcc-edr-agent.md
+
+ Title: Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights
+description: Learn how to monitor MCC EDR Ingestion Agents and troubleshoot common issues
++++ Last updated : 10/30/2023++
+# Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights
+
+## Agent diagnostics overview
+
+Because the ingestion bus agents are software packages, their diagnostics are limited to the functioning of the application. Microsoft doesn't provide OS or resource monitoring. You're encouraged to use standard tooling such as snmpd, Prometheus node exporter, or others to send OS-level data and telemetry to your on-premises monitoring systems.
+
+The diagnostics provided by the MCCs, or by Azure Operator Insights itself in Azure Monitor, are expected to be sufficient for most other use cases.
+
+The agent writes logs and metrics to files under */var/log/az-mcc-edr-uploader/*. If the agent is failing to start for any reason, such as misconfiguration, the stdout.log file contains human-readable logs explaining the issue.
+
+Metrics are reported in a simple human-friendly form. They're provided primarily for Microsoft support to have telemetry for debugging unexpected issues.
+
+## Collecting diagnostics
+
+Microsoft Support may request diagnostic packages when investigating an issue.
+
+To collect a diagnostics package, SSH to the Virtual Machine and run the command `/usr/bin/microsoft/az-ingestion-gather-diags`. This command generates a date-stamped zip file in the current directory that you can copy from the system.
+
+> [!NOTE]
+> Diagnostics packages don't contain any customer data or the value of the Azure Storage connection string.
+
+## Troubleshooting common issues
+
+For most of these troubleshooting techniques, you need an SSH connection to the VM running the agent.
+
+If none of these suggested remediation steps help, or you're unsure how to proceed, collect a diagnostics package and contact your support representative.
+
+### Agent fails to start
+
+Symptoms: `sudo systemctl status az-mcc-edr-uploader` shows that the service is in failed state.
+
+Steps to remediate:
+
+- Ensure the service is running: `sudo systemctl start az-mcc-edr-uploader`.
+
+- Look at the */var/log/az-mcc-edr-uploader/stdout.log* file and check for any reported errors.  Fix any issues with the configuration file and start the agent again.
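+
+For example, you might inspect the most recent output and the systemd journal for the service. This is a sketch only:
+
+```bash
+# Review recent agent output and service logs for configuration errors.
+sudo tail -n 100 /var/log/az-mcc-edr-uploader/stdout.log
+sudo journalctl -u az-mcc-edr-uploader --since "1 hour ago"
+```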
+
+### MCC cannot connect
+
+Symptoms: MCC reports alarms about MSFs being unavailable.
+
+Steps to remediate:
+
+- Check that the agent is running.
+- Ensure that MCC is configured with the correct IP and port.
+
+- Check the logs from the agent and see if it's reporting connections.  If not, check the network connectivity to the agent VM and verify that the firewalls aren't blocking traffic to port 36001.
+
+- Collect a packet capture to see where the connection is failing.
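+
+For example, a capture on the EDR port might look like this sketch (the output file path is arbitrary):
+
+```bash
+# Confirm whether MCC packets reach the VM on the EDR port.
+sudo tcpdump -i any -nn port 36001 -w /tmp/mcc-edr.pcap
+```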
+
+### No EDRs appearing in AOI
+
+Symptoms: no data appears in Azure Data Explorer.
+
+Steps to remediate:
+
+- Check that the MCC is healthy and ingestion bus agents are running.
+
+- Check the logs from the ingestion agent for errors uploading to Azure. If the logs point to an invalid connection string, or connectivity issues, fix the configuration/connection string and restart the agent.
+
+- Check the network connectivity and firewall configuration on the storage account.
+
+### Data missing or incomplete
+
+Symptoms: Azure Monitor shows a lower incoming EDR rate in ADX than expected.
+
+Steps to remediate:
+
+- Check that the agent is running on all VMs and isn't reporting errors in logs.
+
+- Verify that the agent VMs aren't being sent more than the rated load.  
+
+- Check agent metrics for dropped bytes/dropped EDRs.  If the metrics don't show any dropped data, then MCC isn't sending the data to the agent. Check the "received bytes" metrics to see how much data is being received from MCC.
+
+- Check that the agent VM isn't overloaded – monitor CPU and memory usage.   In particular, ensure no other process is taking resources from the VM.
operator-nexus Howto Kubernetes Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-connect.md
When operating in connected mode, it's possible to connect to the cluster's kube
[!INCLUDE [quickstart-cluster-connect](./includes/kubernetes-cluster/cluster-connect.md)]
+### Access to cluster nodes via Azure Arc for Kubernetes
+Once you are connected to a cluster via Arc for Kubernetes, you can connect to an individual Kubernetes node by using the `kubectl debug` command to run a privileged container on your node.
+
+1. List the nodes in your Nexus Kubernetes cluster:
+
+ ```console
+ $> kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ cluster-01-627e99ee-agentpool1-md-chfwd Ready <none> 125m v1.27.1
+ cluster-01-627e99ee-agentpool1-md-kfw4t Ready <none> 125m v1.27.1
+ cluster-01-627e99ee-agentpool1-md-z2n8n Ready <none> 124m v1.27.1
+ cluster-01-627e99ee-control-plane-5scjz Ready control-plane 129m v1.27.1
+ ```
+
+2. Start a privileged container on your node and connect to it:
+
+ ```console
+ $> kubectl debug node/cluster-01-627e99ee-agentpool1-md-chfwd -it --image=mcr.microsoft.com/cbl-mariner/base/core:2.0
+ Creating debugging pod node-debugger-cluster-01-627e99ee-agentpool1-md-chfwd-694gg with container debugger on node cluster-01-627e99ee-agentpool1-md-chfwd.
+ If you don't see a command prompt, try pressing enter.
+ root [ / ]#
+ ```
+
+ This privileged container gives access to the node. Execute commands on the bare-metal host machine by running `chroot /host` at the command line.
+
+3. When you are done with a debugging pod, enter the `exit` command to end the interactive shell session. After exiting the shell, make sure to delete the pod:
+
+ ```bash
+ kubectl delete pod node-debugger-cluster-01-627e99ee-agentpool1-md-chfwd-694gg
+ ```
+ ### Azure Arc for servers The `az ssh arc` command allows users to remotely access a cluster VM that has been connected to Azure Arc. This method is a secure way to SSH into the cluster node directly from the command line, while in connected mode. Once the cluster VM has been registered with Azure Arc, the `az ssh arc` command can be used to manage the machine remotely, making it a quick and efficient method for remote management.
operator-service-manager Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/glossary.md
A Subscription is a billing and management container in Azure that holds resourc
## T ### Tenant
-A Tenant refers to an organization or entity that owns and manages a Microsoft Entra ID instance. Tenants provide a way to manage and control access to Azure resources, ensuring secure and controlled access for users and applications.
+A Tenant refers to an organization or entity that owns and manages a Microsoft Entra ID instance. It serves as a secure container for Azure resources, allowing organizations to control and manage access. Tenants enable organizations to efficiently allocate resources, enforce access policies, and integrate with Azure services, providing a centralized and secure environment for managing their cloud infrastructure.
## U
operator-service-manager Quickstart Containerized Network Function Create Site Network Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-create-site-network-service.md
"customLocationId": "<resource id of your custom location>", "nginx_nfdg_nfd_version": "1.0.0" },
- "managedIdentity": "`<managed-identity-resource-id>"
+ "managedIdentity": "<managed-identity-resource-id>"
} ```
:::image type="content" source="media/site-network-service-preview.png" alt-text="Screenshot shows an overview of the site network service created.":::
-You have successfully created a Site Network Service for a Nginx Container as a CNF in Azure. You can now manage and monitor your CNF through the Azure portal.
+You have successfully created a Site Network Service for a Nginx Container as a CNF in Azure. You can now manage and monitor your CNF through the Azure portal.
operator-service-manager Quickstart Containerized Network Function Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-prerequisites.md
az extension add --name aosm
## Register and verify required resource providers
-Before you begin using the Azure Operator Service Manager, make sure to register the required resource provider. Execute the following commands. This registration process can take up to 5 minutes.
+Before you begin using the Azure Operator Service Manager, execute the following commands to register the required resource provider. This registration process can take up to 5 minutes.
```azurecli # Register Resource Provider
spec:
## Next steps -- [Quickstart: Publish Nginx container as Containerized Network Function (CNF)](quickstart-publish-containerized-network-function-definition.md)
+- [Quickstart: Publish Nginx container as Containerized Network Function (CNF)](quickstart-publish-containerized-network-function-definition.md)
operator-service-manager Quickstart Publish Containerized Network Function Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-publish-containerized-network-function-definition.md
Here's sample input-cnf-nfd.json file:
```json {
- {
"publisher_name": "nginx-publisher", "publisher_resource_group_name": "nginx-publisher-rg", "nf_name": "nginx",
Here's sample input-cnf-nfd.json file:
"helm_packages": [ { "name": "nginxdemo",
- "path_to_chart": "../nginxdemo-0.1.0.tgz",
+ "path_to_chart": "nginxdemo-0.1.0.tgz",
"path_to_mappings": "", "depends_on": [] }
Once the build is complete, examine the generated files to gain a better underst
-|File |Description |
+|Directory/File |Description |
||| |configMappings | Maps the deployment parameters for the Network Function Definition Version (NFDV) to the values required for the helm chart. | |generatedValuesMappings | The yaml output of interactive mode that created configMappings. Edit and rerun the command if necessary. |
When the command completes, inspect the resources within your Publisher Resource
## Next steps -- [Quickstart: Design a Containerized Network Function (CNF) Network Service Design with Nginx](quickstart-containerized-network-function-network-design.md)
+- [Quickstart: Design a Containerized Network Function (CNF) Network Service Design with Nginx](quickstart-containerized-network-function-network-design.md)
operator-service-manager Quickstart Virtualized Network Function Create Site Network Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-create-site-network-service.md
This quickstart assumes you followed the prerequisites in these quickstarts:
}, "ubuntu_vm_nfdg_nfd_version": "1.0.0" },
- "managedIdentity": "`<managed-identity-resource-id>`"
+ "managedIdentity": "<managed-identity-resource-id>"
} ```
Wait for the deployment to reach the 'Succeeded' state. After completion, your V
1. To access your Virtual Network Function (VNF), go to the Site Network Service object in the Azure portal. 1. Select the link under **Current State -> Resources**. The link takes you to the managed resource group created by Azure Operator Service Manager.
-Congratulations! You have successfully created a Site Network Service for Ubuntu Virtual Machine (VM) as a Virtual Network Function (VNF) in Azure. You can now manage and monitor your Virtual Network Function (VNF) through the Azure portal.
+Congratulations! You have successfully created a Site Network Service for Ubuntu Virtual Machine (VM) as a Virtual Network Function (VNF) in Azure. You can now manage and monitor your Virtual Network Function (VNF) through the Azure portal.
operator-service-manager Quickstart Virtualized Network Function Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-operator.md
This quickstart contains the prerequisite tasks for Operator and Virtualized Net
1. **Login to Azure portal**: Open a web browser and sign in to the Azure portal (https://portal.azure.com/) using your Azure account credentials. 1. **Navigate to All Services**: Under *Identity* select *Managed identities*.
-1. **Locate the Managed Identity**: In the list of managed identities, find and select the one named **identity-for-ubuntu-vm-sns**. You should now be on the overview page for that managed identity.
+1. **Locate the Managed Identity**: In the list of managed identities, find and select the one named **identity-for-ubuntu-vm-sns** within your resource group. You should now be on the overview page for that managed identity.
1. **Locate ID**: Select the properties section of the managed identity. You should see various information about the identity. Look for the **ID** field. 1. **Copy to clipboard**: Select the **Copy** button or icon next to the Resource ID. 1. **Save copied Resource ID**: Save the copied Resource ID as this information is required for the **Config Group Values** when creating the Site Network Service.
Completion of all the tasks outlined in this article ensures that the Service Ne
## Next steps -- [Quickstart: Create a Virtualized Network Functions (VNF) Site](quickstart-virtualized-network-function-create-site.md).
+- [Quickstart: Create a Virtualized Network Functions (VNF) Site](quickstart-virtualized-network-function-create-site.md).
orbital About Ground Stations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/about-ground-stations.md
Microsoft owns and operates five ground stations around the world.
:::image type="content" source="./media/ground-station-map.png" alt-text="Diagram shows a world map with the five Azure Orbital Ground Station sites labeled.":::
-Our antennas are 6.1 meters in diameter and support X-band and S-band.
+Our antennas are 6.1 meters in diameter and support the following frequency bands for commercial satellites:
-### X-bank
-| Downlink Frequencies (MHz) | G/T (dB/K) |
-|-||
-| 8000-8400 | 30.0 |
+| Ground Station | X-band Downlink (MHz) | S-band Downlink (MHz) | S-band Uplink (MHz) |
+|-|--|--||
+| Quincy, WA, USA | 8025-8400 | | 2025-2110 |
+| Longovilo, Chile | 8025-8400 | 2200-2290 | 2025-2110 |
+| Singapore | 8025-8400 | 2200-2290 | 2025-2110 |
+| Johannesburg, South Africa | 8025-8400 | 2200-2290 | 2025-2110 |
+| Gavle, Sweden | 8025-8400 | 2200-2290 | 2025-2110 |
-### S-band
-| Uplink Frequencies (MHz) | EIRP (dBW) | Downlink Frequencies (MHz) | G/T (dB/K) |
-|--||-||
-| 2025-2120 | 52.0 | 2200-2300 | 15.0 |
+In addition, we support public satellites for downlink-only operations that utilize frequencies between 7800 and 8025 MHz.
## Partner ground stations Azure Orbital Ground Station offers a common data plane and API to access all antenna in the global network. An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station is required to onboard with a partner.+
+## Next steps
+
+- [Support all mission phases](mission-phases.md)
orbital Initiate Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/initiate-licensing.md
Title: Azure Orbital Ground Station - initiate ground station licensing
+ Title: Azure Orbital Ground Station - Initiate ground station licensing
description: How to initiate ground station licensing
## About satellite and ground station licensing
-Satellites and ground stations require authorizations from federal regulators and other government agencies to operate. If you're contacting [select public satellites supported by Azure Orbital Ground Station](https://learn.microsoft.com/azure/orbital/modem-chain#named-modem-configuration), Microsoft has already completed all regulatory requirements. If you're planning to launch a private satellite, we recommend that you hire outside counsel to assist you in filing with the appropriate regulators. The application process can be lengthy, so we recommend you start one year ahead of launch.
+Both satellites and ground stations require authorizations from federal regulators and other government agencies to operate.
-During the satellite licensing application process, Azure Orbital Ground Station provides the technical information for the ground station portion of your satellite license request. We require information from your satellite license to modify our ground station licenses and authorize your satellite for use with Microsoft ground stations. Similarly, if you plan to use partner network ground stations, work with the partner's regulatory team ensure their ground stations are updated for use with your spacecraft.
+Azure Orbital Ground Station consists of five first-party, Microsoft-owned ground stations and networks of third-party Partner ground stations. Except in South Africa, adding a new satellite point of communication to licensed Microsoft ground stations requires an authorization from the respective federal regulator. While the specifics of obtaining authorization vary by geography, coordination with incumbent users is always required.
-If your spacecraft is already licensed and in orbit, you must still work with Azure Orbital Ground Station and partner teams to update all relevant ground station licenses. Contact Microsoft as soon as you have an idea of which ground stations you might use.
+- If you're interested in contacting [select **public** satellites supported by Azure Orbital Ground Station](https://learn.microsoft.com/azure/orbital/modem-chain#named-modem-configuration), Microsoft has already completed all regulatory requirements to add these satellite points of communication to all Microsoft ground stations.
-## Coordination
+- If you're interested in having your **existing** satellite space station or constellation communicate with one or more Microsoft ground stations, you must modify your authorization for the US market to add each ground station.
-Coordination is required between regulators and outside counsel as well as between regulators and various government entities to avoid interference between radio frequencies. These entities include the International Telecommunication Union (ITU) and armed forces for the relevant country.
+- If you're interested in having your **planned** satellite space station or constellation communicate with one or more Microsoft ground stations, each ground station must be referenced in the technical exhibits accompanying your US license (or market access) application. As the US application process can be lengthy due to the required coordination with federal users in the X- and S-bands, we recommend you start at least one year ahead of launch if possible.
-The license application may need to be resubmitted based on feedback obtained during the coordination phase. If you have to update your satellite license request, you also need to inform the regulatory teams updating the ground station licenses.
+If you are seeking a new or modified satellite license, Azure Orbital Ground Station provides your in-house or outside counsel with geo-coordinates and technical information for each Microsoft ground station. Similarly, we require information from your satellite space station or constellation license application in order to complete our applications to modify our first-party ground station licenses and add new satellite points of communication. If you plan to use Partner network ground stations, work with the Partner's regulatory team to ensure their ground station authorizations are updated for use with your spacecraft.
+
+## Coordination during the authorization process
+
+During the process of licensing new satellite space stations, applications sometimes need to be amended or modified. It's important that the satellite operator keeps Microsoft and Partner ground station operators informed of these changes as soon as possible. Delays in the regulatory review process are more likely if the information in the satellite operator's license application regarding Microsoft and Partner ground stations doesn't match the information in the respective ground station licenses. Likewise, delays can occur if the information in an application to add a new satellite point of communication to Microsoft or Partner ground stations doesn't match the information in the current satellite license application.
## Costs
-Regulators have filing fees for obtaining licenses; usually the authorizations aren't released until payment is made. In addition to the filing fees, there are fees for outside counsel. Satellite operators are responsible for all costs associated with obtaining the satellite licenses.
+Satellite operators are responsible for all costs associated with obtaining satellite space station or constellation licenses.
-The costs associated with ground station license are defined in your agreement with Azure Orbital Ground Station and/or the partner ground station network.
+The costs associated with ground station licenses are defined in your agreement with Azure Orbital Ground Station and/or the Partner ground station network.
## Enforcement
-Satellite operators are responsible for complying with their satellite licenses.
+Satellite operators are responsible for complying with the conditions and restrictions in their satellite licenses.
When you're ready to use Azure Orbital Ground Station, you should [create a spacecraft resource](register-spacecraft.md). Here, you're required to provide your authorizations on a per-link and per-site level.
orbital Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md
Title: Why use Azure Orbital Ground Station?
+ Title: Azure Orbital Ground Station - Overview
description: Azure Orbital Ground Station is a cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
# Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
-# Why use Azure Orbital Ground Station?
+# Azure Orbital Ground Station overview
-Azure Orbital Ground Station is a fully managed cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
+With Azure Orbital Ground Station, your space data is delivered with near-zero latency to your Azure region over the secure and highly available Microsoft network. Azure Orbital Ground Station supports both Microsoft and industry leading Partner ground station networks, ensuring access to the best sites and networks to support your space missions. Deploying and operating a large, globally distributed ground station solution for your space mission can now be done with the reliability and flexibility of the cloud&mdash;at any classification level.
-With Azure Orbital Ground Station, you can focus on your missions by off-loading the responsibility for deployment and maintenance of ground stations.
-Azure Orbital Ground Station uses MicrosoftΓÇÖs global infrastructure and low-latency global network along with an expansive partner ecosystem of ground station networks, cloud modems, and "Telemetry, Tracking, & Control" (TT&C) functions.
+## Product highlights
-
-## Earth Observation with Azure Orbital Ground Station
-
-Schedule contacts with satellites on a pay-as-you-go basis to ingest data from satellites, monitor satellite health and status, or transmit commands to satellites. Incoming data is delivered to your private virtual network allowing it to be processed or stored in Azure.
-
-The fully digitized service allows you to use managed software modems from Kratos to do the modulation / demodulation, and encoding / decoding functions to recover the data. Alternatively, choose to leverage virtual RF and GNU Radio to send raw RF signal directly to your VM for processing.
-
-For a full end-to-end solution to manage fleet operations and "Telemetry, Tracking, & Control" (TT&C) functions, seamlessly integrate your Azure Orbital Ground Station operations with Kubos Major Tom. Lower your operational costs and maximize your capabilities by using Azure Space.
-
- * Spacecraft contact self-service scheduling
- * Direct data ingestion into Azure
- * Marketplace integration with third-party data processing and image calibration services
- * Integrated cloud modems for X and S bands
- * Global reach through first-party and integrated third-party networks
-
+- Self-service scheduling of spacecraft contacts to ingest data, monitor satellite health and status, or transmit commands to satellites.
+- The managed data path provides one-click access to a global network of ground stations and direct data ingestion into your private Azure virtual network.
+- Take advantage of integrated software modems from Kratos for X and S bands, or leverage virtual RF and GNU radio for unrestricted modem implementations.
+- Avoid building and managing ground station infrastructure and instead pay-as-you-go with any antenna in the global Azure Orbital network.
+- Space data are transmitted and managed securely, according to stringent compliance requirements.
+- Integrate command and control software with the Azure Orbital Ground Station API to manage fleet operations.
## Links to learn more - [Overview, features, security, and FAQ](https://azure.microsoft.com/products/orbital/#layout-container-uid189e)-- [Pricing](https://azure.microsoft.com/pricing/details/orbital/)
+- [Pricing](https://azure.microsoft.com/pricing/details/orbital/) and [SLA](https://azure.microsoft.com/support/legal/sla/orbital/)
- [Microsoft Learn training session](/training/modules/introduction-to-ground-station/)-- [Azure Space Blog](https://techcommunity.microsoft.com/t5/azure-space-blog/bg-p/AzureSpaceBlog)
+- [Azure Space tech blog](https://techcommunity.microsoft.com/t5/azure-space-blog/bg-p/AzureSpaceBlog)
- [General Availability press announcement](https://azure.microsoft.com/blog/new-azure-space-products-enable-digital-resiliency-and-empower-the-industry/) ## Next steps--- [Register Spacecraft](register-spacecraft.md)-- [Configure a Contact Profile](contact-profile.md)
+- [About Microsoft and Partner ground stations](about-ground-stations.md)
+- [Get started with Azure Orbital Ground Station](get-started.md)
+- [Support all mission phases](mission-phases.md)
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
psql "host=mydb.postgres... user=user@tenant.onmicrosoft.com dbname=postgres ssl
To connect by using a Microsoft Entra token with PgAdmin, follow these steps:
-1. Clear the **Connect now** option at server creation.
-1. Enter your server details on the **Connection** tab and save.
-1. From the browser menu, select **Connect to Azure Database for PostgreSQL - Flexible Server**.
-1. Enter the Active Directory token password when you're prompted.
+1. Open pgAdmin, select **Register** from the left-hand menu, and then select **Server**.
+2. On the **General** tab, provide a connection name and clear the **Connect now** option.
+3. On the **Connection** tab, provide your flexible server details for **Hostname/address** and **Username**, and save.
+4. From the browser menu, select your Azure Database for PostgreSQL - Flexible Server connection, and select **Connect Server**.
+5. Enter your Microsoft Entra token as the password when you're prompted.
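+
+If you need to acquire a token manually, the Azure CLI can typically provide one. This is a sketch, assuming you're already signed in with `az login`:
+
+```bash
+# Get a Microsoft Entra access token for Azure Database for PostgreSQL.
+az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv
+```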
++ Here are some essential considerations when you're connecting:
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net | | Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net | | Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net |
+| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.us | datafactory.azure.us |
+| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.us | adf.azure.us |
| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.usgovcloudapi.net | vault.usgovcloudapi.net <br> vaultcore.usgovcloudapi.net | | Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.us | search.windows.us | | Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.us </br> {regionName}.privatelink.azurecr.us | azurecr.us </br> {regionName}.azurecr.us |
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Data Factory](../data-factory/concepts-data-redundancy.md?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json)| [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Database for PostgreSQL - Flexible Server](../postgresql/single-server/concepts-high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Data Manager for Energy](../energy-data-services/reliability-energy-data-services.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
+[Azure Database for PostgreSQL - Flexible Server](./reliability-postgresql-flexible-server.md)|
+[Azure Data Manager for Energy](./reliability-energy-data-services.md) |
[Azure DDoS Protection](../ddos-protection/ddos-faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Disk Encryption](../virtual-machines/disks-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure DNS - Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
Microsoft and its customers operate under the [Shared Responsibility Model](./av
For deploying virtual machines, you can use [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode on Virtual Machine Scale Sets. All VM sizes can be used with flexible orchestration mode. Flexible orchestration mode also offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains either within a region or within an availability zone.
-## Additional guidance
+## Next steps
- [Well-Architected Framework for virtual machines](/azure/architecture/framework/services/compute/virtual-machines/virtual-machines-review) - [Azure to Azure disaster recovery architecture](/azure/site-recovery/azure-to-azure-architecture) - [Accelerated networking with Azure VM disaster recovery](/azure/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking) - [Express Route with Azure VM disaster recovery](../site-recovery/azure-vm-disaster-recovery-with-expressroute.md) - [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)-
-## Next steps
-> [!div class="nextstepaction"]
-> [Reliability in Azure](/azure/reliability/availability-zones-overview)
+- [Reliability in Azure](/azure/reliability/availability-zones-overview)
search Vector Search How To Generate Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-generate-embeddings.md
If you want resources in the same region, start with:
The Postman collection assumes that you already have a vector query. Here's some Python code for generating an embedding that you can paste into the "values" property of a vector query. ```python
-!pip install openai
+!pip install openai==0.28.1
import openai
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-name-mapping.md
For more information, see [Connect your external solution using Common Event For
| deviceDirection | <a name="communicationdirection"></a> CommunicationDirection | Any information about the direction the observed communication has taken. Valid values: <br>- `0` = Inbound <br>- `1` = Outbound | | deviceDnsDomain | DeviceDnsDomain | The DNS domain part of the full qualified domain name (FQDN) | |DeviceEventClassID | DeviceEventClassID | String or integer that serves as a unique identifier per event type. |
-| deviceExternalID | DeviceExternalID | A name that uniquely identifies the device generating the event. |
+| deviceExternalId | deviceExternalId | A name that uniquely identifies the device generating the event. |
| deviceFacility | DeviceFacility | The facility generating the event.| | deviceInboundInterface | DeviceInboundInterface |The interface on which the packet or data entered the device. | | deviceNtDomain | DeviceNtDomain | The Windows domain of the device address |
sentinel Cloudwatch Lambda Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cloudwatch-lambda-function.md
Last updated 02/09/2023
# Create a Lambda function to send CloudWatch events to an S3 bucket
-In some cases, your CloudWatch logs may not match the format accepted by Microsoft Sentinel - .csv file in a GZIP format without a header. In this article, you use a [lambda function](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/CloudWatchLanbdaFunction.py) within the Amazon Web Services (AWS) environment to send [CloudWatch events to an S3 bucket](connect-aws.md), and convert the format to the accepted format.
+In some cases, your CloudWatch logs may not match the format accepted by Microsoft Sentinel - .csv file in a GZIP format without a header. In this article, you use a [lambda function](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/CloudWatchLambdaFunction.py) within the Amazon Web Services (AWS) environment to send [CloudWatch events to an S3 bucket](connect-aws.md), and convert the format to the accepted format.
## Create the lambda function
The lambda function uses Python 3.9 runtime and x86_64 architecture.
In this document, you learned how to create a Lambda function to send CloudWatch events to an S3 bucket. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).-- [Use workbooks](monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
Select the machines on which you want to install the AMA. These machines are VMs
```python sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py ```
- The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon.
+ The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the Syslog daemon configuration file according to the daemon type running on the machine:
+ - Rsyslog: `/etc/rsyslog.conf`
+ - Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
> [!NOTE] > To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
sentinel Connect Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog.md
Create the DCR for your Syslog-based logs using the Azure Monitor [guidelines](.
```python sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python3 Forwarder_AMA_installer.py ```
- The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon.
+ The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the Syslog daemon configuration file according to the daemon type running on the machine:
+ - Rsyslog: `/etc/rsyslog.conf`
+ - Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
+1. Create the request URL and header:
sentinel Connect Google Cloud Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-google-cloud-platform.md
Before you begin, verify that you have:
You can set up the GCP environment in one of two ways: - [Create GCP resources via the Terraform API](#create-gcp-resources-via-the-terraform-api): Terraform provides an API for the Identity and Access Management (IAM) that creates the resources: The topic, a subscription for the topic, a workload identity pool, a workload identity provider, a service account, and a role. -- [Set up GCP environment manually](#) via the GCP console.
+- [Set up GCP environment manually](#set-up-the-gcp-environment-manually-via-the-gcp-portal) via the GCP console.
### Create GCP resources via the Terraform API
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
The concept of *service connection* is a key concept in the resource model of Se
| Property | Description | ||-| | Connection Name | The unique name of the service connection. |
-| Source Service Type | Source services are usually Azure compute services. These are the services you can connect to target services. Source services include Azure App Service, Azure Container Apps and Azure Spring Apps. |
+| Source Service Type | Source services are services you can connect to target services. They are usually Azure compute services and they include Azure App Service, Azure Container Apps and Azure Spring Apps. |
| Target Service Type | Target services are backing services or dependency services that your compute services connect to. Service Connector supports various target service types including major databases, storage, real-time services, state, and secret stores. | | Client Type | Client type refers to your compute runtime stack, development framework, or specific type of client library that accepts the specific format of the connection environment variables or properties. | | Authentication Type | The authentication type used for the service connection. It could be a secret/connection string, a managed identity, or a service principal. | Source services and target services support multiple simultaneous service connections, which means that you can connect each resource to multiple resources.
-Service Connector manages connections in the properties of the source instance. Creating, getting, updating, and deleting connections is done directly by opening the source service instance in the Azure portal or by using the CLI commands of the source service.
+Service Connector manages connections in the properties of the source instance. Creating, getting, updating and deleting connections is done directly by opening the source service instance in the Azure portal, or by using the CLI commands of the source service.
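For illustration, here's a hedged Azure CLI sketch that creates a connection from an App Service app to a Blob Storage account with a system-assigned managed identity. All resource names are placeholders, and the exact subcommand and parameters depend on your source and target services:

```azurecli
# Sketch: connect an App Service app to Blob Storage through Service Connector
az webapp connection create storage-blob \
    --resource-group <source-resource-group> \
    --name <app-service-name> \
    --target-resource-group <target-resource-group> \
    --account <storage-account-name> \
    --system-identity \
    --client-type dotnet
```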
Connections can be made across subscriptions or tenants, meaning that source and target services can belong to different subscriptions or tenants. When you create a new service connection, the connection resource is created in the same region as your compute service instance by default.
Connections can be made across subscriptions or tenants, meaning that source and
Service Connector runs multiple tasks while creating or updating service connections, including: -- Configuring the network and firewall settings-- Configuring connection information-- Configuring authentication information
+- Configuring the network and firewall settings.
+ [Learn more](#service-network-solution) about network solutions.
+- Configuring connection information.
+ [Learn more](#connection-configurations) about connection configurations.
+- Configuring authentication information.
+ Service Connector supports all available authentication types between source services and target services.
+ - **System assigned managed identity**. Service Connector enables system assigned managed identity on source services if not enabled yet, then grants RBAC roles of target services to the managed identity. The user can specify the roles to be granted.
+ - **User assigned managed identity**. Service Connector enables user assigned managed identity on source services if not enabled yet, then grants RBAC roles of target services to the managed identity. The user can specify the roles to be granted.
+ - **Connection String**. Service Connector retrieves connection strings from target services such as Storage, Redis Cache etc., or constructs connection strings based on user input, such as Azure database for SQL, PostgreSQL etc.
+ - **Service principal**. Service Connector grants RBAC roles of target services to the managed identity. The user can specify the roles to be granted.
+
+ Service Connector saves corresponding authentication configurations to source services, for example, saving AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_STORAGEACCOUNT_ENDPOINT for Storage with authentication type user assigned managed identity.
- Creating or updating connection rollback if a failure occurs. If a step fails during this process, Service Connector rolls back all previous steps to keep the initial settings in the source and target instances.
az containerapp connection list-configuration --resource-group <source-service-r
## Configuration naming convention
-Service Connector sets the connection configuration when creating a connection. The environment variable key-value pairs are determined by your client type and authentication type. For example, using the Azure SDK with a managed identity requires a client ID, client secret, etc. Using a JDBC driver requires a database connection string. Follow these conventions to name the configurations:
+Service Connector sets the connection configuration when creating a connection. The environment variable key-value pairs are determined based on your client type and authentication type. For example, using the Azure SDK with a managed identity requires a client ID, client secret, etc. Using a JDBC driver requires a database connection string. Follow these conventions to name the configurations:
- Spring Boot client: the Spring Boot library for each target service has its own naming convention. For example, MySQL connection settings would be `spring.datasource.url`, `spring.datasource.username`, `spring.datasource.password`. Kafka connection settings would be `spring.kafka.properties.bootstrap.servers`.
Service Connector sets the connection configuration when creating a connection.
Service Connector offers three network solutions for users to choose from when creating a connection. These solutions are designed to facilitate secure and efficient communication between resources.
-1. **Firewall**: This solution allows connection through public network and compute resource will access target resource with public IP address. When selecting this option, Service Connector verifies the target resource's firewall settings and adds a rule to allow connections from the source resource's public IP address. If the resource's firewall has an option to allow all Azure resources accessing, Service Connector enables this setting. However, if the target resource denies all public network traffic by default, Service Connector doesn't modify this setting. In this case, you should choose another option or update the network settings manually before trying again.
+1. **Firewall**: This solution allows connections through the public network, and the compute resource accesses the target resource by using its public IP address. When selecting this option, Service Connector verifies the target resource's firewall settings and adds a rule to allow connections from the source resource's public IP address. If the target resource's firewall has an option to allow access from all Azure resources, Service Connector enables this setting. However, if the target resource denies all public network traffic by default, Service Connector doesn't modify this setting. In this case, you should choose another option or update the network settings manually before trying again.
-2. **Service Endpoint**: This solution enables compute resource to connect to target resources via a virtual network, ensuring that connection traffic doesn't pass through the public network. Its only available if certain preconditions are met:
- - The compute resource must have virtual network integration enabled. For Azure App Service, this can be configured in its networking settings; for Azure Spring Apps, users must set VNet injection during the resource creation stage.
+2. **Service Endpoint**: This solution enables the compute resource to connect to target resources via a virtual network, ensuring that connection traffic doesn't pass through the public network. It's only available if certain preconditions are met:
+ - The compute resource must have virtual network integration enabled. For Azure App Service, it can be configured in its networking settings; for Azure Spring Apps, users must set Virtual Network injection during the resource creation stage.
- The target service must support Service Endpoint. For a list of supported services, refer to [Virtual Network service endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview).
- When selecting this option, Service Connector adds the private IP address of the compute resource in the virtual network to the target resource's Virtual Network rules and enables the service endpoint in the source resource's subnet configuration. If the user lacks sufficient permissions or the resource's SKU or region doesn't support service endpoints, connection creation fails.
+ When selecting this option, Service Connector adds the private IP address of the compute resource in the virtual network to the target resource's Virtual Network rules, and enables the service endpoint in the source resource's subnet configuration. If the user lacks sufficient permissions or the resource's SKU or region doesn't support service endpoints, connection creation fails.
3. **Private Endpoint**: This solution is a recommended way to connect resources via a virtual network and is only available if certain preconditions are met:-- The compute resource must have virtual network integration enabled. For Azure App Service, this can be configured in its networking settings; for Azure Spring Apps, users must set VNet injection during the resource creation stage.
+- The compute resource must have virtual network integration enabled. For Azure App Service, it can be configured in its networking settings; for Azure Spring Apps, users must set VNet injection during the resource creation stage.
- The target service must support private endpoints. For a list of supported services, refer to [Private-link resource](/azure/private-link/private-endpoint-overview#private-link-resource).
- When selecting this option, Service Connector doesn't perform any more configurations in the compute or target resources. Instead, it verifies the existence of a valid private endpoint and fails the connection if not found. For convenience, users can select the "New Private Endpoint" checkbox in the Azure Portal when creating a connection. With it, Service Connector will automatically create all related resources for the private endpoint in the proper sequence, simplifying the connection creation process.
+ When selecting this option, Service Connector doesn't perform any more configurations in the compute or target resources. Instead, it verifies the existence of a valid private endpoint and fails the connection if not found. For convenience, users can select the "New Private Endpoint" checkbox in the Azure Portal when creating a connection. With it, Service Connector automatically creates all related resources for the private endpoint in the proper sequence, simplifying the connection creation process.
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
Previously updated : 03/09/2023 Last updated : 11/02/2023
Data Lake Storage Gen2 APIs, NFS 3.0, and Blob APIs can operate on the same data
This section describes issues and limitations with using blob APIs, NFS 3.0, and Data Lake Storage Gen2 APIs to operate on the same data. -- You can't use blob APIs, NFS 3.0, and Data Lake Storage APIs to write to the same instance of a file. If you write to a file by using Data Lake Storage Gen2 APIs or NFS 3.0, then that file's blocks won't be visible to calls to the [Get Block List](/rest/api/storageservices/get-block-list) blob API. The only exception is when you're overwriting. You can overwrite a file/blob using either API or with NFS 3.0 by using the zero-truncate option.
+- You can't use blob APIs, NFS 3.0, and Data Lake Storage APIs to write to the same instance of a file. If you write to a file by using Data Lake Storage Gen2 APIs or NFS 3.0, then that file's blocks won't be visible to calls to the [Get Block List](/rest/api/storageservices/get-block-list) blob API. The only exception is when you're overwriting. You can overwrite a file/blob using either API or with NFS 3.0 by using the zero-truncate option.
+
+ Blobs that are created by using a Data Lake Storage Gen2 operation such as the [Path - Create](/rest/api/storageservices/datalakestoragegen2/path/create) operation can't be overwritten by using [PutBlock](/rest/api/storageservices/put-block) or [PutBlockList](/rest/api/storageservices/put-block-list) operations, but they can be overwritten by using a [PutBlob](/rest/api/storageservices/put-blob) operation, subject to the maximum permitted blob size imposed by the corresponding api-version that PutBlob uses.
- When you use the [List Blobs](/rest/api/storageservices/list-blobs) operation without specifying a delimiter, the results include both directories and blobs. If you choose to use a delimiter, use only a forward slash (`/`). This is the only supported delimiter.
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
To create a replication policy in the Azure portal, follow these steps:
1. Navigate to the source storage account in the Azure portal. 1. Under **Data management**, select **Object replication**.
-1. Select **Set up replication rules**.
+1. Select **Create replication rules**.
1. Select the destination subscription and storage account.
-1. In the **Container pairs** section, select a source container from the source account, and a destination container from the destination account. You can create up to 10 container pairs per replication policy from the Azure portal. To configure more than 10 container pairs (up to 1000), see [Configure object replication using a JSON file](#configure-object-replication-using-a-json-file).
+1. In the **Container pair details** section, select a source container from the source account, and a destination container from the destination account. You can create up to 10 container pairs per replication policy from the Azure portal. To configure more than 10 container pairs (up to 1000), see [Configure object replication using a JSON file](#configure-object-replication-using-a-json-file).
The following image shows a set of replication rules.
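If you prefer the command line, here's a minimal Azure CLI sketch that creates a replication policy with a single container pair. Account, resource group, and container names are placeholders:

```azurecli
# Sketch: create an object replication policy with one container pair on the destination account
az storage account or-policy create \
    --account-name <destination-account> \
    --resource-group <destination-resource-group> \
    --source-account <source-account> \
    --destination-account <destination-account> \
    --source-container <source-container> \
    --destination-container <destination-container>
```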
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
The following table describes the expected behavior for delete and write operati
[!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)]
+Soft delete is not supported for blobs that are uploaded by using Data Lake Storage Gen2 APIs on Blob Storage accounts.
+ ## Pricing and billing All soft-deleted data is billed at the same rate as active data. You won't be charged for data that is permanently deleted after the retention period elapses.
storage Storage Files Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring-reference.md
This table shows [supported metrics for Azure Files](/azure/azure-monitor/refere
| FileShareCapacityQuota | The upper limit on the amount of storage that can be used by Azure Files Service in bytes. <br/><br/> Unit: Bytes <br/> Aggregation Type: Average <br/> Value example: 1024| | FileShareCount | The number of file shares in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 | | FileShareProvisionedIOPS | The number of provisioned IOPS on a file share. This metric is applicable to premium file storage only. <br/><br/> Unit: CountPerSecond <br/> Aggregation Type: Average |
+| FileShareProvisionedBandwidthMiBps | The baseline amount of provisioned bandwidth in MiB/s for the premium file share in the premium file storage account. This number is calculated based on the provisioned size (quota) of the share capacity. <br/><br/> Unit: BytesPerSecond <br/> Aggregation Type: Average |
| FileShareSnapshotCount | The number of snapshots present on the share in storage account's Azure Files service. <br/><br/> Unit: Count <br/> Aggregation Type: Average | | FileShareSnapshotSize | The amount of storage used by the snapshots in storage account's Azure Files service. <br/><br/> Unit: Bytes <br/> Aggregation Type: Average | | FileShareMaxUsedIOPS | The maximum number of used IOPS at the lowest time granularity of 1-minute for the premium file share in the premium files storage account. <br/><br/> Unit: CountPerSecond <br/> Aggregation Type: Max |
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Previously updated : 10/27/2023 Last updated : 11/02/2023 # Kafka output from Azure Stream Analytics (Preview)
The following table lists the property names and their description for creating
||-| | Output Alias | A friendly name used in queries to reference your output | | Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. |
-| Kafka topic | A unit of your Kafka cluster you want to write events to. |
+| Kafka topic | A named, ordered, and partitioned stream of data that allows for the publish-subscribe and event-driven processing of messages. |
| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. | | Event Serialization format | The serialization format (JSON, CSV, Avro) of the outgoing data stream. | | Partition key | Azure Stream Analytics assigns partitions using round partitioning. |
You can use four types of security protocols to connect to your Kafka clusters:
The ASA Kafka output is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server auth. Confluent uses TLS certificates from Let's Encrypt, an open certificate authority (CA). You can download the ISRG Root X1 certificate in PEM format on the site of [LetsEncrypt](https://letsencrypt.org/certificates/).
+> [!IMPORTANT]
+> You must use Azure CLI to upload the certificate as a secret to your key vault. You cannot use the Azure portal to upload a certificate with multiline content as a secret to your key vault.
+ To authenticate using the API Key confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows: | Setting | Value |
To authenticate using the API Key confluent offers, you must use the SASL_SSL pr
| Username | Key/ Username from API Key | | Password | Secret/ Password from API key | | KeyVault | Name of Azure Key vault with Uploaded certificate from Let's Encrypt |
- | Certificate | Certificate uploaded to KeyVault downloaded from LetΓÇÖs Encrypt (Download the ISRG Root X1 certificate in PEM format) |
+ | Certificate | Name of the certificate uploaded to KeyVault, downloaded from Let's Encrypt (Download the ISRG Root X1 certificate in PEM format). Note: you must upload the certificate as a secret using Azure CLI. Refer to the **Key vault integration** guide below. |
+> [!NOTE]
+> Depending on how your confluent cloud kafka cluster is configured, you may need a certificate different from the standard certificate confluent cloud uses for server authentication. Confirm with the admin of the confluent cloud kafka cluster to verify what certificate to use.
+>
## Key vault integration
To be able to upload certificates, you must have "**Key Vault Administrator**"
> [!IMPORTANT] > You must have "**Key Vault Administrator**" permissions access to your Key vault for this command to work properly > You must upload the certificate as a secret. You must use Azure CLI to upload certificates as secrets to your key vault.
-> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job
+> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job.
+Below are some steps you can follow to upload your certificate as a secret to your key vault by using Azure CLI in PowerShell:
+
+Make sure you have Azure CLI configured locally with PowerShell.
You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
-Below are some steps you can follow to upload the your certificate to Azure CLI using your powershell
+Below are some steps you can follow to upload your certificate as a secret to your key vault by using Azure CLI in PowerShell:
**Login to Azure CLI:**
-```azurecli-interactive
+```PowerShell
az login ``` **Connect to your subscription containing your key vault:**
-```azurecli-interactive
+```PowerShell
az account set --subscription <subscription name> ``` **The following command can upload the certificate as a secret to your key vault:**
-```azurecli-interactive
-az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret>
+
+`<your key vault>` is the name of the key vault you want to upload the certificate to. `<name of the secret>` is any name you want to give to your secret and how it shows up in the key vault. Note the name; you will use it to configure your Kafka output in your ASA job. `<file path to certificate>` is the path to where you have downloaded your certificate.
+
+```PowerShell
+az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to certificate>
``` ### Configure Managed identity
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Previously updated : 10/27/2023 Last updated : 11/02/2023 # Stream data from Kafka into Azure Stream Analytics (Preview)
The following table lists the property names and their description for creating
> [!IMPORTANT] > To configure your Kafka cluster as an input, the timestamp type of the input topic should be **LogAppendTime**. The only timestamp type Azure Stream Analytics supports is **LogAppendTime**.
->
+> Azure Stream Analytics supports only numerical decimal format.
| Property name | Description | ||-| | Input/Output Alias | A friendly name used in queries to reference your input or output | | Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. |
-| Kafka topic | A unit of your Kafka cluster you want to write events to. |
+| Kafka topic | A named, ordered, and partitioned stream of data that allows for the publish-subscribe and event-driven processing of messages.|
| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. | | Event Serialization format | The serialization format (JSON, CSV, Avro, Parquet, Protobuf) of the incoming data stream. |
You can use four types of security protocols to connect to your Kafka clusters:
### Connect to Confluent Cloud using API key The ASA Kafka input is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server auth.
-Confluent uses TLS certificates from LetΓÇÖs Encrypt, an open certificate authority (CA) You can download the ISRG Root X1 certificate in PEM format on the site of [LetsEncrypt](https://letsencrypt.org/certificates/).
+Confluent uses TLS certificates from Let's Encrypt, an open certificate authority (CA). You can download the ISRG Root X1 certificate in PEM format on the site of [LetsEncrypt](https://letsencrypt.org/certificates/).
+
+> [!IMPORTANT]
+> You must use Azure CLI to upload the certificate as a secret to your key vault. You cannot use the Azure portal to upload a certificate with multiline content as a secret to your key vault.
+> The default timestamp type for a topic in a confluent cloud kafka cluster is **CreateTime**; make sure you update it to **LogAppendTime**.
+> Azure Stream Analytics supports only numerical decimal format.
To authenticate using the API Key confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows:
To authenticate using the API Key confluent offers, you must use the SASL_SSL pr
| Username | Key/ Username from API Key | | Password | Secret/ Password from API key | | KeyVault | Name of Azure Key vault with Uploaded certificate from Let's Encrypt |
- | Certificate | Certificate uploaded to KeyVault downloaded from LetΓÇÖs Encrypt (Download the ISRG Root X1 certificate in PEM format) |
-
+ | Certificate | Name of the certificate uploaded to KeyVault, downloaded from Let's Encrypt (Download the ISRG Root X1 certificate in PEM format). Note: you must upload the certificate as a secret using Azure CLI. Refer to the **Key vault integration** guide below. |
+
+> [!NOTE]
+> Depending on how your confluent cloud kafka cluster is configured, you may need a certificate different from the standard certificate confluent cloud uses for server authentication. Confirm with the admin of the confluent cloud kafka cluster to verify what certificate to use.
+>
## Key vault integration
Certificates are stored as secrets in the key vault and must be in PEM format.
### Configure Key vault with permissions You can create a key vault resource by following the documentation [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md)
-To be able to upload certificates, you must have "**Key Vault Administrator**" access to your Key vault. Follow the following to grant admin access.
+To upload certificates, you must have "**Key Vault Administrator**" access to your Key vault.
+Follow these steps to grant admin access:
> [!NOTE] > You must have "**Owner**" permissions to grant other key vault permissions.
To be able to upload certificates, you must have "**Key Vault Administrator**"
> [!IMPORTANT] > You must have "**Key Vault Administrator**" permissions access to your Key vault for this command to work properly > You must upload the certificate as a secret. You must use Azure CLI to upload certificates as secrets to your key vault.
-> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job
+> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job.
+Make sure you have Azure CLI configured locally with PowerShell.
You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
-The following command can upload the certificate as a secret to your key vault. You must have "**Key Vault Administrator**" permissions access to your Key vault for this command to work properly.
**Login to Azure CLI:**
-```azurecli-interactive
+```PowerShell
az login ``` **Connect to your subscription containing your key vault:**
-```azurecli-interactive
+```PowerShell
az account set --subscription <subscription name> ``` **The following command can upload the certificate as a secret to your key vault:**
-```azurecli-interactive
-az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret>
+
+`<your key vault>` is the name of the key vault you want to upload the certificate to. `<name of the secret>` is any name you want to give to your secret and how it shows up in the key vault. Note the name; you will use it to configure your Kafka input in your ASA job. `<file path to certificate>` is the path to where you have downloaded your certificate.
+
+```PowerShell
+az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to certificate>
```
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
Previously updated : 09/29/2023 Last updated : 10/30/2023 # Configure single sign-on for Azure Virtual Desktop using Microsoft Entra authentication
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article walks you through the process of configuring single sign-on (SSO) using Microsoft Entra authentication for Azure Virtual Desktop (preview). When you enable SSO, you can use passwordless authentication and third-party Identity Providers that federate with Microsoft Entra ID to sign in to your Azure Virtual Desktop resources. When enabled, this feature provides a single sign-on experience when authenticating to the session host and configures the session to provide single sign-on to Microsoft Entra ID-based resources inside the session.
+This article walks you through the process of configuring single sign-on (SSO) using Microsoft Entra authentication for Azure Virtual Desktop (preview). When you enable SSO, users will authenticate to Windows using a Microsoft Entra ID token, obtained for the *Microsoft Remote Desktop* resource application (changing to *Windows Cloud Login* beginning in 2024). This enables them to use passwordless authentication and third-party Identity Providers that federate with Microsoft Entra ID to sign in to your Azure Virtual Desktop resources. When enabled, this feature provides a single sign-on experience when authenticating to the session host and configures the session to provide single sign-on to Microsoft Entra ID-based resources inside the session.
For information on using passwordless authentication within the session, see [In-session passwordless authentication (preview)](authentication.md#in-session-passwordless-authentication-preview).
Clients currently supported:
Before enabling single sign-on, review the following information for using SSO in your environment.
-### Allow remote desktop connection dialog
-
-When enabling single sign-on, you'll currently be prompted to authenticate to Microsoft Entra ID and allow the Remote Desktop connection when launching a connection to a new host. Microsoft Entra remembers up to 15 hosts for 30 days before prompting again. If you see this dialogue, select **Yes** to connect.
- ### Disconnection when the session is locked When SSO is enabled, you sign in to Windows using a Microsoft Entra authentication token, which provides support for passwordless authentication to Windows. The Windows lock screen in the remote session doesn't support Microsoft Entra authentication tokens or passwordless authentication methods like FIDO keys. The lack of support for these authentication methods means that users can't unlock their screens in a remote session. When you try to lock a remote session, either through user action or system policy, the session is instead disconnected and the service sends a message to the user explaining they've been disconnected.
Disconnecting the session also ensures that when the connection is relaunched af
### Using an Active Directory domain admin account with single sign-on
-In environments with an Active Directory (AD) and hybrid user accounts, the default Password Replication Policy on Read-only Domain Controllers denies password replication for members of Domain Admins and Administrators security groups. This will prevent these admin accounts from signing in to Microsoft Entra hybrid joined hosts and may keep prompting them to enter their credentials. It will also prevent admin accounts from accessing on-premises resources that leverage Kerberos authentication from Microsoft Entra joined hosts.
+In environments with an Active Directory (AD) and hybrid user accounts, the default Password Replication Policy on Read-only Domain Controllers denies password replication for members of Domain Admins and Administrators security groups. This will prevent these admin accounts from signing in to Microsoft Entra hybrid joined hosts and might keep prompting them to enter their credentials. It will also prevent admin accounts from accessing on-premises resources that leverage Kerberos authentication from Microsoft Entra joined hosts.
To allow these admin accounts to connect when single sign-on is enabled:
To allow these admin accounts to connect when single sign-on is enabled:
## Enable single sign-on
-To enable single sign-on in your environment, you must first create a Kerberos Server object, then configure your host pool to enable the feature.
+To enable single sign-on in your environment, you must:
+
+1. Enable Microsoft Entra authentication for Remote Desktop Protocol (RDP).
+1. Configure the target device groups.
+1. Create a Kerberos Server object.
+1. Review your conditional access policies.
+1. Configure your host pool to enable single sign-on.
+
+### Enable Microsoft Entra authentication for RDP
+
+> [!IMPORTANT]
+> Due to an upcoming change, the steps below should be completed for the following Microsoft Entra Apps:
+>
+> - Microsoft Remote Desktop (App ID a4a365df-50f1-4397-bc59-1a1564b8bb9c).
+> - Windows Cloud Login (App ID 270efc09-cd0d-444b-a71f-39af4910ec45)
+
+Before enabling the single sign-on feature, you must first allow Microsoft Entra authentication for Windows in your Microsoft Entra tenant. This will enable issuing RDP access tokens allowing users to sign in to Azure Virtual Desktop session hosts. This is done by enabling the isRemoteDesktopProtocolEnabled property on the service principal's remoteDesktopSecurityConfiguration object for the apps listed above.
+
+Use the [Microsoft Graph API](/graph/use-the-api) to [create remoteDesktopSecurityConfiguration](/graph/api/serviceprincipal-post-remotedesktopsecurityconfiguration) and set the property **isRemoteDesktopProtocolEnabled** to **true** to enable Microsoft Entra authentication.
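As an illustration only, the following Azure CLI sketch issues that Graph request with `az rest`. It assumes the v1.0 endpoint described in the linked reference; confirm the exact request shape there, and replace the service principal object ID placeholder with the ID for each app listed above:

```azurecli
# Sketch: enable Microsoft Entra authentication for RDP on a service principal
az rest --method post \
    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<service-principal-object-id>/remoteDesktopSecurityConfiguration" \
    --headers "Content-Type=application/json" \
    --body '{"isRemoteDesktopProtocolEnabled": true}'
```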
+
+### Configure the target device groups
+
+> [!IMPORTANT]
+> Due to an upcoming change, the steps below should be completed for the following Microsoft Entra Apps:
+>
+> - Microsoft Remote Desktop (App ID a4a365df-50f1-4397-bc59-1a1564b8bb9c).
+> - Windows Cloud Login (App ID 270efc09-cd0d-444b-a71f-39af4910ec45)
+
+By default, when enabling single sign-on, users are prompted to authenticate to Microsoft Entra ID and allow the Remote Desktop connection when launching a connection to a new session host. Microsoft Entra remembers up to 15 hosts for 30 days before prompting again. If you see this dialog, select **Yes** to connect.
+
+To provide single sign-on for all connections, you can hide this dialog by configuring a list of trusted devices. This is done by adding one or more Device Groups containing Azure Virtual Desktop session hosts to a property on the service principals for the apps listed above in your Microsoft Entra tenant.
+
+Follow these steps to hide the dialog:
+
+1. [Create a Dynamic Device Group](/entra/identity/users/groups-create-rule) in Microsoft Entra containing the devices to hide the dialog for. Remember the device group ID for the next step.
+ > [!TIP]
+ > It's recommended to use a dynamic device group and configure the dynamic membership rules to include all your Azure Virtual Desktop session hosts. This can be done using the device names, or for a more secure option, you can set and use [device extension attributes](/graph/extensibility-overview) using [Microsoft Graph API](/graph/api/resources/device).
+1. Use the [Microsoft Graph API](/graph/use-the-api) to [create a new targetDeviceGroup object](/graph/api/remotedesktopsecurityconfiguration-post-targetdevicegroups) to suppress the prompt from these devices.
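As a hedged sketch of the Graph call in the previous step, issued with `az rest` against the v1.0 endpoint described in the linked reference (the service principal object ID, device group ID, and display name are placeholders; confirm the exact request body in the reference):

```azurecli
# Sketch: register a device group so the consent prompt is suppressed for its session hosts
az rest --method post \
    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<service-principal-object-id>/remoteDesktopSecurityConfiguration/targetDeviceGroups" \
    --headers "Content-Type=application/json" \
    --body '{"@odata.type": "#microsoft.graph.targetDeviceGroup", "id": "<device-group-object-id>", "displayName": "<device-group-name>"}'
```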
### Create a Kerberos Server object
You must [Create a Kerberos Server object](../active-directory/authentication/ho
> > To resolve these issues, create the Kerberos server object before trying to connect again.
+### Review your conditional access policies
+
+When single sign-on is enabled, a new Microsoft Entra ID app is introduced to authenticate users to the session host. If you have conditional access policies that apply when accessing Azure Virtual Desktop, review the recommendations on setting up [multifactor authentication](set-up-mfa.md) to ensure users have the desired experience.
+ ### Configure your host pool To enable SSO on your host pool, you must configure the following RDP property, which you can do using the Azure portal or PowerShell. You can find the steps to do this in [Customize Remote Desktop Protocol (RDP) properties for a host pool](customize-rdp-properties.md).
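For reference, here's a hedged Azure CLI sketch that sets the SSO RDP property on a host pool. It assumes the desktopvirtualization CLI extension is installed, and note that this overwrites the host pool's custom RDP property string, so include any existing properties in the value:

```azurecli
# Sketch: enable the single sign-on RDP property on a host pool
az desktopvirtualization hostpool update \
    --resource-group <resource-group> \
    --name <host-pool-name> \
    --custom-rdp-property "enablerdsaadauth:i:1"
```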
virtual-machines Auto Shutdown Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/auto-shutdown-vm.md
Title: Auto-shutdown the VM
+ Title: Auto-shutdown a VM
description: Learn how to set up auto-shutdown for VMs in Azure.
Last updated 09/27/2023
-# Auto-shutdown the VM
+# Auto-shutdown a virtual machine
-In this tutorial, you learn how to automatically shut-down virtual machines (VMs) in Azure. The auto-shutdown feature for Azure VMs can help reduce costs by shutting down the VMs during off hours when they aren't needed and automatically restarting them when they're needed again.
+In this tutorial, you learn how to automatically shut down virtual machines (VMs) in Azure. The auto-shutdown feature for Azure VMs can help reduce costs by shutting down the VMs during off hours when they aren't needed and automatically restarting them when they're needed again.
## Configure auto-shutdown for a virtual machine
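As a quick illustration, here's a minimal Azure CLI sketch that turns on auto-shutdown for a VM at 22:30 UTC with an email notification. Resource names and the email address are placeholders:

```azurecli
# Sketch: schedule daily auto-shutdown for a VM
az vm auto-shutdown \
    --resource-group <resource-group> \
    --name <vm-name> \
    --time 2230 \
    --email "<notification-email>"
```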
virtual-machines Av1 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/av1-series-retirement.md
# Av1-series retirement On August 31, 2024, we retire Basic and Standard A-series virtual machines (VMs). Before that date, migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+The remaining VMs of these sizes in your subscription will be set to a deallocated state. These VMs will be stopped and removed from the host, and will no longer be billed while in the deallocated state.
> [!NOTE] > In some cases, you must deallocate the VM prior to resizing. This can happen if the new size is not available on the hardware cluster that is currently hosting the VM.
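For example, here's a minimal Azure CLI sketch of the deallocate-resize-start flow for moving to an Av2 size. Resource names are placeholders, and `Standard_A2_v2` stands in for whichever Av2 size fits your workload and region:

```azurecli
# Sketch: move an A-series VM to an Av2-series size
az vm deallocate --resource-group <resource-group> --name <vm-name>
az vm resize --resource-group <resource-group> --name <vm-name> --size Standard_A2_v2
az vm start --resource-group <resource-group> --name <vm-name>
```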
virtual-machines Basv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/basv2.md
Basv2-series virtual machines offer a balance of compute, memory, and network re
|--||--|--||||--|--|-||-| | Standard_B2ats_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 | | Standard_B2als_v2 | 2 | 4 | 30% | 60 | 36 | 864 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
-| Standard_B2as_v2 | 2 | 8 | 40% | 600 | 48 | 1152 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
+| Standard_B2as_v2 | 2 | 8 | 40% | 60 | 48 | 1152 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
| Standard_B4als_v2 | 4 | 8 | 30% | 120 | 72 | 1728 | 6,400/145 | 20,000/960 | 8 | 6.25 | 2 | | Standard_B4as_v2 | 4 | 16 | 40% | 120 | 96 | 2304 | 6,400/145 | 20,000/960 | 8 | 6.25 | 2 | | Standard_B8als_v2 | 8 | 16 | 30% | 240 | 144 | 3456 | 12,800/290 | 20,000/960 | 16 | 6.25 | 2 |
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
Edv5-series virtual machines support Standard SSD and Standard HDD disk types. T
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br><br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network egress bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB)<sup>*</sup> | Max NICs |Max network egress bandwidth (Mbps) |
||||||||| | Standard_E2d_v5 | 2 | 16 | 75 | 4 | 9000/125 | 2 | 12500 | | Standard_E4d_v5 | 4 | 32 | 150 | 8 | 19000/250 | 2 | 12500 |
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types.
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network egress bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB)<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network egress bandwidth (Mbps) |
||||||||||| | Standard_E2ds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_E4ds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
virtual-machines Hibernate Resume Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume-troubleshooting.md
+
+ Title: Troubleshoot VM hibernation
+description: Learn how to troubleshoot VM hibernation.
++++ Last updated : 10/31/2023+++++
+# Troubleshooting VM hibernation
+
+> [!IMPORTANT]
+> Azure Virtual Machines - Hibernation is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Hibernating a virtual machine allows you to persist the VM state to the OS disk. This article describes how to troubleshoot issues with the hibernation feature, issues creating hibernation enabled VMs, and issues with hibernating a VM.
+
+## Subscription not registered to use hibernation
+If you receive the error "Your subscription isn't registered to use Hibernate" and the box is greyed out in the Azure portal, make sure you have [registered for the Hibernation preview](hibernate-resume.md).
+
+![Screenshot of the greyed-out 'enable hibernation' box with a warning below it and a link to "Learn More" about registering your subscription.](./media/hibernate-resume/subscription-not-registered.png)
++
+## Unable to create a VM with hibernation enabled
+If you're unable to create a VM with hibernation enabled, ensure that you're using a VM size and OS version that support hibernation. Refer to the supported VM sizes and OS versions section in the user guide, and the limitations section, for more details. Here are some common error codes that you might observe:
+
+| ResultCode | Error Message | Action |
+|--|--|--|
+| OperationNotAllowed | The referenced os disk should support hibernation for a VM with hibernation capability. | Validate that the OS disk has hibernation support enabled. |
+| OperationNotAllowed | The referenced platform image should support hibernation for a VM with hibernation capability. | Use a platform image that supports hibernation. |
+| OperationNotAllowed | The referenced shared gallery image should support hibernation for a VM with hibernation capability. | Validate that the Shared Gallery Image Definition has hibernation support enabled |
+| OperationNotAllowed | Hibernation capability isn't supported for Spot VMs. | |
+| OperationNotAllowed | User VM Image isn't supported for a VM with Hibernation capability. | Use a platform image or Shared Gallery Image if you want to use the hibernation feature |
+| OperationNotAllowed | Referencing a Dedicated Host isn't supported for a VM with Hibernation capability. | |
+| OperationNotAllowed | Referencing a Capacity Reservation Group isn't supported for a VM with Hibernation capability. | |
+| OperationNotAllowed | Enabling/disabling hibernation on an existing VM requires the VM to be stopped (deallocated) first. | Stop-deallocate the VM, patch the VM to enable hibernation, and then start the VM |
+| OperationNotAllowed | Hibernation can't be enabled on Virtual Machine since the OS Disk Size ({0} bytes) should at least be greater than the VM memory ({1} bytes). | Ensure the OS disk has enough space to be able to persist the RAM contents once the VM is hibernated |
+| OperationNotAllowed | Hibernation can't be enabled on Virtual Machines created in an Availability Set. | Hibernation is only supported for standalone VMs & Virtual Machine Scale Sets Flex VMs |
++
+## Unable to hibernate a VM
+
+If you're unable to hibernate a VM, first check whether hibernation is enabled on the VM. For example, using the GET VM API, you can check if hibernation is enabled on the VM:
+
+```
+ "properties": {
+ "vmId": "XXX",
+ "hardwareProfile": {
+ "vmSize": "Standard_D4s_v5"
+ },
+ "additionalCapabilities": {
+ "hibernationEnabled": true
+ },
+```
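For instance, here's a minimal Azure CLI sketch that queries the same property (resource names are placeholders):

```azurecli
# Sketch: check whether hibernation is enabled on a VM
az vm show \
    --resource-group <resource-group> \
    --name <vm-name> \
    --query "additionalCapabilities.hibernationEnabled"
```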
+If hibernation is enabled on the VM, check if hibernation is successfully enabled in the guest OS.
+
+### [Linux](#tab/troubleshootLinuxCantHiber)
+
+On Linux, you can check the extension status if you used the extension to enable hibernation in the guest OS.
++
+### [Windows](#tab/troubleshootWindowsCantHiber)
+
+On Windows, you can check the status of the Hibernation extension to see if the extension was able to successfully configure the guest OS for hibernation.
++
+The VM instance view would have the final output of the extension:
+```
+"extensions": [
+ {
+ "name": "AzureHibernateExtension",
+ "type": "Microsoft.CPlat.Core.WindowsHibernateExtension",
+ "typeHandlerVersion": "1.0.2",
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "message": "Enabling hibernate succeeded. Response from the powercfg command: \tThe hiberfile size has been set to: 17178693632 bytes.\r\n"
+ }
+ ]
+ },
+```
+
+Additionally, confirm that hibernate is enabled as a sleep state inside the guest. The expected output for the guest should look like this.
+
+```
+C:\Users\vmadmin>powercfg /a
+ The following sleep states are available on this system:
+ Hibernate
+ Fast Startup
+
+ The following sleep states are not available on this system:
+ Standby (S1)
+ The system firmware does not support this standby state.
+
+ Standby (S2)
+ The system firmware does not support this standby state.
+
+ Standby (S3)
+ The system firmware does not support this standby state.
+
+ Standby (S0 Low Power Idle)
+ The system firmware does not support this standby state.
+
+ Hybrid Sleep
+ Standby (S3) isn't available.
++
+```
+If 'Hibernate' isn't listed as a supported sleep state, there should be a reason associated with it, which should help determine why hibernate isn't supported. This occurs if guest hibernate hasn't been configured for the VM.
+
+```
+C:\Users\vmadmin>powercfg /a
+ The following sleep states are not available on this system:
+ Standby (S1)
+ The system firmware does not support this standby state.
+
+ Standby (S2)
+ The system firmware does not support this standby state.
+
+ Standby (S3)
+ The system firmware does not support this standby state.
+
+ Hibernate
+ Hibernation hasn't been enabled.
+
+ Standby (S0 Low Power Idle)
+ The system firmware does not support this standby state.
+
+ Hybrid Sleep
+ Standby (S3) is not available.
+ Hibernation is not available.
+
+ Fast Startup
+ Hibernation is not available.
+
+```
+
+If the extension or the guest sleep state reports an error, you need to update the guest configuration as per the error descriptions to resolve the issue. After fixing all the issues, you can validate that hibernation has been enabled successfully inside the guest by running the 'powercfg /a' command, which should return Hibernate as one of the sleep states.
+Also validate that the AzureHibernateExtension returns to a Succeeded state. If the extension is still in a failed state, update the extension state by triggering the [reapply VM API](/rest/api/compute/virtual-machines/reapply?tabs=HTTP).
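A minimal Azure CLI sketch of triggering reapply (resource names are placeholders):

```azurecli
# Sketch: reapply the VM so the hibernate extension retries after you fix the guest configuration
az vm reapply --resource-group <resource-group> --name <vm-name>
```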
+
+>[!NOTE]
+>If the extension remains in a failed state, you can't hibernate the VM
+
+Commonly seen issues where the extension fails:
+
+| Issue | Action |
+|--|--|
+| Page file is in temp disk. Move it to OS disk to enable hibernation. | Move page file to the C: drive and trigger reapply on the VM to rerun the extension |
+| Windows failed to configure hibernation due to insufficient space for the hiberfile | Ensure that the C: drive has sufficient space. You can try expanding your OS disk and your C: partition size to overcome this issue. Once you have sufficient space, trigger the Reapply operation so that the extension can retry enabling hibernation in the guest and succeed. |
+| Extension error message: "A device attached to the system isn't functioning" | Ensure that the C: drive has sufficient space. You can try expanding your OS disk and your C: partition size to overcome this issue. Once you have sufficient space, trigger the Reapply operation so that the extension can retry enabling hibernation in the guest and succeed. |
+| Hibernation is no longer supported after Virtualization Based Security (VBS) was enabled inside the guest | Enable Virtualization in the guest to get VBS capabilities along with the ability to hibernate the guest. [Enable virtualization in the guest OS.](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v#enable-hyper-v-using-powershell) |
+| Enabling hibernate failed. Response from the powercfg command. Exit Code: 1. Error message: Hibernation failed with the following error: The request isn't supported. The following items are preventing hibernation on this system. The current Device Guard configuration disables hibernation. An internal system component disabled hibernation. Hypervisor | Enable Virtualization in the guest to get VBS capabilities along with the ability to hibernate the guest. To enable virtualization in the guest, refer to [this document](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v#enable-hyper-v-using-powershell) |
+++
+## Guest VMs unable to hibernate
+
+### [Windows](#tab/troubleshootWindowsGuestCantHiber)
+If a hibernate operation succeeds, the following events are seen in the guest:
+```
+Guest responds to the hibernate operation (note that the following event is logged on the guest on resume)
+
+ Log Name: System
+ Source: Kernel-Power
+ Event ID: 42
+ Level: Information
+ Description:
+ The system is entering sleep
+
+```
+
+If the guest fails to hibernate, then all or some of these events are missing.
+Commonly seen issues:
+
+| Issue | Action |
+|--|--|
+| Guest fails to hibernate because Hyper-V Guest Shutdown Service is disabled. | [Ensure that Hyper-V Guest Shutdown Service isn't disabled.](/virtualization/hyper-v-on-windows/reference/integration-services#hyper-v-guest-shutdown-service) Enabling this service should resolve the issue. |
+| Guest fails to hibernate because HVCI (Memory integrity) is enabled. | Hibernation isn't supported with HVCI. Disabling HVCI should resolve the issue. |
+
+Logs needed for troubleshooting:
+
+If you encounter an issue outside of these known scenarios, the following logs can help Azure troubleshoot the issue:
+1. Event logs on the guest: Microsoft-Windows-Kernel-Power, Microsoft-Windows-Kernel-General, Microsoft-Windows-Kernel-Boot.
+1. On bug check, a guest crash dump is helpful.
++
+### [Linux](#tab/troubleshootLinuxGuestCantHiber)
+On Linux, you can check the extension status if you used the extension to enable hibernation in the guest OS.
++
+If you used the hibernation-setup-tool to configure the guest for hibernation, you can check if the tool executed successfully through this command:
+
+```
+systemctl status hibernation-setup-tool
+```
+
+A successful status should return "Inactive (dead)", and the log messages should say "Swap file for VM hibernation set up successfully".
+
+Example:
+```
+azureuser@:~$ systemctl status hibernation-setup-tool
+● hibernation-setup-tool.service - Hibernation Setup Tool
+ Loaded: loaded (/lib/systemd/system/hibernation-setup-tool.service; enabled; vendor preset: enabled)
+ Active: inactive (dead) since Wed 2021-08-25 22:44:29 UTC; 17min ago
+ Process: 1131 ExecStart=/usr/sbin/hibernation-setup-tool (code=exited, status=0/SUCCESS)
+ Main PID: 1131 (code=exited, status=0/SUCCESS)
+
+linuxhib2 hibernation-setup-tool[1131]: INFO: update-grub2 finished successfully.
+linuxhib2 hibernation-setup-tool[1131]: INFO: udev rule to hibernate with systemd set up in /etc/udev/rules.d/99-vm-hibernation.rules. Telling udev about it.
+…
+…
+linuxhib2 hibernation-setup-tool[1131]: INFO: systemctl finished successfully.
+linuxhib2 hibernation-setup-tool[1131]: INFO: Swap file for VM hibernation set up successfully
+
+```
+If the guest OS isn't configured for hibernation, take the appropriate action to resolve the issue. For example, if the guest failed to configure hibernation due to insufficient space, resize the OS disk to resolve the issue.
+++
+## Common error codes
+| ResultCode | errorDetails | Action |
+|--|--|--|
+| InternalOperationError | The fabric operation failed. | This is usually a transient issue. Retry the Hibernate operation after 5 minutes. |
+| OperationNotAllowed | Operation 'HibernateAndDeallocate' isn't allowed on VM 'Z0000ZYH000' since VM has extension 'AzureHibernateExtension' in failed state | Customer issue. Confirm that VM creation with hibernation enabled succeeded, and that the extension is in a healthy state |
+| OperationNotAllowed | The Hibernate-Deallocate Operation can only be triggered on a VM that is successfully provisioned and is running. | Customer error. Ensure that the VM is successfully running before attempting to Hibernate-Deallocate the VM. |
+| OperationNotAllowed | The Hibernate-Deallocate Operation can only be triggered on a VM that is enabled for hibernation. Enable the property additionalCapabilities.hibernationEnabled during VM creation, or after stopping and deallocating the VM. | Customer error. |
+| VMHibernateFailed | Hibernating the VM 'hiber_vm_res_5' failed due to an internal error. Retry later. | Retry after 5 minutes. If it continues to fail after multiple retries, check if the guest is correctly configured to support hibernation or contact Azure support. |
+| VMHibernateNotSupported | The VM 'Z0000ZYJ000' doesn't support hibernation. Ensure that the VM is correctly configured to support hibernation. | Hibernating a VM immediately after boot isn't supported. Retry hibernating the VM after a few minutes. |
+
+## Unable to resume a VM
+Starting a hibernated VM is similar to starting a stopped VM. For errors and troubleshooting steps related to starting a VM, refer to this guide.
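For example, you start a hibernated VM with the same command as any stopped VM; a minimal Azure CLI sketch with placeholder names:

```azurecli
# Sketch: start (resume) a hibernated VM
az vm start --resource-group <resource-group> --name <vm-name>
```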
+
+In addition to commonly seen issues while starting VMs, certain issues are specific to starting a hibernated VM. These are described below.
+
+| ResultCode | errorDetails |
+|--|--|
+| OverconstrainedResumeFromHibernatedStateAllocationRequest | Allocation failed. VM(s) with the following constraints can't be allocated, because the condition is too restrictive. Remove some constraints and try again. Constraints applied are: Networking Constraints (such as Accelerated Networking or IPv6), Resuming from hibernated state (retry starting the VM after some time or alternatively stop-deallocate the VM and try starting the VM again). |
+| AllocationFailed | VM allocation failed from hibernated state due to insufficient capacity. Try again later or alternatively stop-deallocate the VM and try starting the VM. |
+
+## Windows guest resume status through VM instance view
+For Windows VMs, when you start a VM from a hibernated state, you can use the VM instance view to get more details on whether the guest successfully resumed from its previous hibernated state or if it failed to resume and instead did a cold boot.
+
+VM instance view output when the guest successfully resumes:
+```
+{
+ "computerName": "myVM",
+ "osName": "Windows 11 Enterprise",
+ "osVersion": "10.0.22000.1817",
+ "vmAgent": {
+ "vmAgentVersion": "2.7.41491.1083",
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Ready",
+ "message": "GuestAgent is running and processing the extensions.",
+ "time": "2023-04-25T04:41:17.296+00:00"
+ }
+ ],
+ "extensionHandlers": [
+ {
+ "type": "Microsoft.CPlat.Core.RunCommandWindows",
+ "typeHandlerVersion": "1.1.15",
+ "status": {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Ready"
+ }
+ },
+ {
+ "type": "Microsoft.CPlat.Core.WindowsHibernateExtension",
+ "typeHandlerVersion": "1.0.3",
+ "status": {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Ready"
+ }
+ }
+ ]
+ },
+ "extensions": [
+ {
+ "name": "AzureHibernateExtension",
+ "type": "Microsoft.CPlat.Core.WindowsHibernateExtension",
+ "typeHandlerVersion": "1.0.3",
+ "substatuses": [
+ {
+ "code": "ComponentStatus/VMBootState/Resume/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "message": "Last guest resume was successful."
+ }
+ ],
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "message": "Enabling hibernate succeeded. Response from the powercfg command: \tThe hiberfile size has been set to: XX bytes.\r\n"
+ }
+ ]
+ }
+ ],
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "time": "2023-04-25T04:41:17.8996086+00:00"
+ },
+ {
+ "code": "PowerState/running",
+ "level": "Info",
+ "displayStatus": "VM running"
+ }
+ ]
+}
+```
+If the Windows guest fails to resume from its previous state and cold boots, then the VM instance view response is:
+```
+ "extensions": [
+ {
+ "name": "AzureHibernateExtension",
+ "type": "Microsoft.CPlat.Core.WindowsHibernateExtension",
+ "typeHandlerVersion": "1.0.3",
+ "substatuses": [
+ {
+ "code": "ComponentStatus/VMBootState/Start/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "message": "VM booted."
+ }
+ ],
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "message": "Enabling hibernate succeeded. Response from the powercfg command: \tThe hiberfile size has been set to: XX bytes.\r\n"
+ }
+ ]
+ }
+ ],
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "time": "2023-04-19T17:18:18.7774088+00:00"
+ },
+ {
+ "code": "PowerState/running",
+ "level": "Info",
+ "displayStatus": "VM running"
+ }
+ ]
+}
+
+```
+
+## Windows guest events while resuming
+If a guest successfully resumes, the following guest events are available:
+```
+Log Name: System
+ Source: Kernel-Power
+ Event ID: 107
+ Level: Information
+ Description:
+ The system has resumed from sleep.
+
+```
+If the guest fails to resume, all or some of these events are missing. To troubleshoot why the guest failed to resume, the following logs are needed:
+- Event logs on the guest: Microsoft-Windows-Kernel-Power, Microsoft-Windows-Kernel-General, Microsoft-Windows-Kernel-Boot.
+- On bugcheck, a guest crash dump is needed.
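+
+When collecting these logs, a hedged PowerShell sketch for checking the resume event from inside the Windows guest is shown here; it queries the System log for the Kernel-Power event listed above:
+
+```powershell
+# Hedged sketch: run inside the Windows guest to look for the Kernel-Power
+# "resumed from sleep" event (ID 107) shown above.
+Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-Kernel-Power'; Id = 107 } -MaxEvents 5 |
+    Format-List TimeCreated, Id, Message
+```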
virtual-machines Hibernate Resume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume.md
+
+ Title: Learn about hibernating your VM
+description: Learn how to hibernate a VM.
++++ Last updated : 10/31/2023+++++
+# Hibernating virtual machines
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
+
+> [!IMPORTANT]
+> Azure Virtual Machines - Hibernation is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Hibernation allows you to pause VMs that aren't being used and save on compute costs. It's an effective cost management feature for scenarios such as:
+- Virtual desktops, dev/test, and other scenarios where the VMs don't need to run 24/7.
+- Systems with long boot times due to memory-intensive applications. These applications can be initialized on VMs and hibernated. These "prewarmed" VMs can then be quickly started when needed, with the applications already up and running in the desired state.
+
+## How hibernation works
+
+When you hibernate a VM, Azure signals the VM's operating system to perform a suspend-to-disk action. Azure stores the memory contents of the VM in the OS disk, then deallocates the VM. When the VM is started again, the memory contents are transferred from the OS disk back into memory. Applications and processes that were previously running in your VM resume from the state prior to hibernation.
+
+Once a VM is in a hibernated state, you aren't billed for the VM usage. Your account is only billed for the storage (OS disk, data disks) and networking resources (IPs, etc.) attached to the VM.
+
+When hibernating a VM:
+- Hibernation is triggered on a VM using the Azure portal, CLI, PowerShell, SDKs, or APIs. Azure then signals the guest operating system to perform suspend-to-disk (S4).
+- The VM's memory contents are stored on the OS disk. The VM is then deallocated, releases the lease on the underlying hardware, and is powered off. Refer to VM [states and billing](states-billing.md) for more details on the VM deallocated state.
+- Data in the temporary disk isn't persisted.
+- The OS disk, data disks, and NICs remain attached to your VM. Any static IPs remain unchanged.
+- You aren't billed for the VM usage for a hibernated VM.
+- You continue to be billed for the storage and networking resources associated with the hibernated VM.
+
+## Supported configurations
+Hibernation support is limited to certain VM sizes and OS versions. Make sure you have a supported configuration before using hibernation.
+
+### Supported VM sizes
+
+VM sizes with up to 32 GB of RAM from the following VM series support hibernation:
+- [Dasv5-series](dasv5-dadsv5-series.md)
+- [Dadsv5-series](dasv5-dadsv5-series.md)
+- [Dsv5-series](../virtual-machines/dv5-dsv5-series.md)
+- [Ddsv5-series](ddv5-ddsv5-series.md)
++
+### Operating system support and limitations
+
+#### [Linux](#tab/osLimitsLinux)
+
+##### Supported Linux versions
+The following Linux operating systems support hibernation:
+
+- Ubuntu 22.04 LTS
+- Ubuntu 20.04 LTS
+- Ubuntu 18.04 LTS
+- Debian 11
+- Debian 10 (with backports kernel)
+
+##### Linux limitations
+- Hibernation isn't supported with Trusted Launch for Linux VMs.
++
+#### [Windows](#tab/osLimitsWindows)
+
+##### Supported Windows versions
+The following Windows operating systems support hibernation:
+
+- Windows Server 2022
+- Windows Server 2019
+- Windows 11 Pro
+- Windows 11 Enterprise
+- Windows 11 Enterprise multi-session
+- Windows 10 Pro
+- Windows 10 Enterprise
+- Windows 10 Enterprise multi-session
+
+##### Windows limitations
+- The page file can't be on the temp disk.
+- Applications such as Device Guard and Credential Guard that require virtualization-based security (VBS) work with hibernation when you enable Trusted Launch on the VM and Nested Virtualization in the guest OS.
+- Hibernation is only supported with Nested Virtualization when Trusted Launch is enabled on the VM.
+++
+### General limitations
+- You can't enable hibernation on existing VMs.
+- You can't resize a VM if it has hibernation enabled.
+- When a VM is hibernated, you can't attach, detach, or modify any disks or NICs associated with the VM. The VM must instead be moved to a Stop-Deallocated state.
+- When a VM is hibernated, there's no capacity guarantee to ensure that there's sufficient capacity to start the VM later. In the rare case that you encounter capacity issues, you can try starting the VM at a later time. Capacity reservations don't guarantee capacity for hibernated VMs.
+- You can only hibernate a VM by using the Azure portal, CLI, PowerShell, SDKs, and APIs. Hibernating the VM by using guest OS operations doesn't move the VM to a hibernated state, and the VM continues to be billed.
+- You can't disable hibernation on a VM once enabled.
+
+### Azure feature limitations
+The following Azure features aren't currently supported with hibernation:
+- Ephemeral OS disks
+- Shared disks
+- Availability Sets
+- Virtual Machine Scale Sets Uniform
+- Spot VMs
+- Managed images
+- Azure Backup
+- Capacity reservations
+
+## Prerequisites to use hibernation
+- The hibernate feature is enabled for your subscription.
+- A persistent OS disk that's large enough to store the contents of the RAM, the OS, and other running applications is attached to the VM.
+- The VM size supports hibernation.
+- The VM OS supports hibernation.
+- The Azure VM Agent is installed if you're using the Windows or Linux Hibernate Extensions.
+- Hibernation is enabled on your VM when creating the VM.
+- If a VM is being created from an OS disk or a Compute Gallery image, then the OS disk or Gallery Image definition supports hibernation.
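+
+To confirm that hibernation was enabled when an existing VM was created, a hedged PowerShell sketch (placeholder names; verify the property against your Az module version) reads the VM's additional capabilities:
+
+```powershell
+# Hedged sketch with placeholder names: check whether the VM was created with
+# hibernation enabled by reading its additional capabilities.
+$vm = Get-AzVM -ResourceGroupName "myRG" -Name "myVM"
+$vm.AdditionalCapabilities.HibernationEnabled
+```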
+
+## Enabling hibernation feature for your subscription
+Use the following steps to enable this feature for your subscription:
+
+### [Portal](#tab/enablehiberPortal)
+1. In your Azure subscription, go to the Settings section and select 'Preview features'.
+1. Search for 'hibernation'.
+1. Check the 'Hibernation Preview' item.
+1. Click 'Register'.
+
+![Screenshot showing the Azure subscription preview portal with 4 numbers representing different steps in enabling the hibernation feature.](./media/hibernate-resume/hibernate-register-preview-feature.png)
+
+### [PowerShell](#tab/enablehiberPS)
+```powershell
+Register-AzProviderFeature -FeatureName "VMHibernationPreview" -ProviderNamespace "Microsoft.Compute"
+```
+### [CLI](#tab/enablehiberCLI)
+```azurecli
+az feature register --name VMHibernationPreview --namespace Microsoft.Compute
+```
++
+Confirm that the registration state is Registered (registration takes a few minutes) using the following command before trying out the feature.
+
+### [Portal](#tab/checkhiberPortal)
+In the Azure portal under 'Preview features', select 'Hibernation Preview'. The registration state should show as 'Registered'.
+
+![Screenshot showing the Azure subscription preview portal with the hibernation feature listed as registered.](./media/hibernate-resume/hibernate-is-registered-preview-feature.png)
+
+### [PowerShell](#tab/checkhiberPS)
+```powershell
+Get-AzProviderFeature -FeatureName "VMHibernationPreview" -ProviderNamespace "Microsoft.Compute"
+```
+### [CLI](#tab/checkhiberCLI)
+```azurecli
+az feature show --name VMHibernationPreview --namespace Microsoft.Compute
+```
++
+## Getting started with hibernation
+
+To hibernate a VM, you must first enable the feature while creating the VM. You can only enable hibernation for a VM on initial creation. You can't enable this feature after the VM is created.
+
+To enable hibernation during VM creation, you can use the Azure portal, CLI, PowerShell, ARM templates and API.
+
+### [Portal](#tab/enableWithPortal)
+
+To enable hibernation in the Azure portal, check the 'Enable hibernation' box during VM creation.
+
+![Screenshot of the checkbox in the Azure portal to enable hibernation when creating a new VM.](./media/hibernate-resume/hibernate-enable-during-vm-creation.png)
++
+### [CLI](#tab/enableWithCLI)
+
+To enable hibernation in the Azure CLI, create a VM by running the following `az vm create` command with `--enable-hibernation` set to `true`.
+
+```azurecli
+ az vm create --resource-group myRG \
+ --name myVM \
+ --image Win2019Datacenter \
+ --public-ip-sku Standard \
+ --size Standard_D2s_v5 \
+ --enable-hibernation true
+```
+
+### [PowerShell](#tab/enableWithPS)
+
+To enable hibernation when creating a VM with PowerShell, run the following command:
+
+```powershell
+New-AzVm `
+ -ResourceGroupName 'myRG' `
+ -Name 'myVM' `
+ -Location 'East US' `
+ -VirtualNetworkName 'myVnet' `
+ -SubnetName 'mySubnet' `
+ -SecurityGroupName 'myNetworkSecurityGroup' `
+ -PublicIpAddressName 'myPublicIpAddress' `
+ -Size Standard_D2s_v5 `
+ -Image Win2019Datacenter `
+ -HibernationEnabled `
+ -OpenPorts 80,3389
+```
+
+### [REST](#tab/enableWithREST)
+
+First, [create a VM with hibernation enabled](/rest/api/compute/virtual-machines/create-or-update#create-a-vm-with-hibernationenabled)
+
+```json
+PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/{vm-name}?api-version=2021-11-01
+```
+The request body should look something like this:
+
+```
+{
+ "location": "eastus",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_D2s_v5"
+ },
+ "additionalCapabilities": {
+ "hibernationEnabled": true
+ },
+ "storageProfile": {
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2019-Datacenter",
+ "version": "latest"
+ },
+ "osDisk": {
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "storageAccountType": "Standard_LRS"
+ },
+ "name": "vmOSdisk",
+ "createOption": "FromImage"
+ }
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+ "id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/{existing-nic-name}",
+ "properties": {
+ "primary": true
+ }
+ }
+ ]
+ },
+ "osProfile": {
+ "adminUsername": "{your-username}",
+ "computerName": "{vm-name}",
+ "adminPassword": "{your-password}"
+ },
+ "diagnosticsProfile": {
+ "bootDiagnostics": {
+ "storageUri": "http://{existing-storage-account-name}.blob.core.windows.net",
+ "enabled": true
+ }
+ }
+ }
+}
+
+```
+To learn more about REST, check out an [API example](/rest/api/compute/virtual-machines/create-or-update#create-a-vm-with-hibernationenabled)
+++
+Once you've created a VM with hibernation enabled, you need to configure the guest OS to successfully hibernate your VM.
+
+## Guest configuration for hibernation
+
+### Configuring hibernation on Linux
+You can configure the guest OS for hibernation on a Linux VM in either of the following ways.
+
+#### Option 1: LinuxHibernateExtension
+ You can install the [LinuxHibernateExtension](/cli/azure/azure-cli-extensions-overview) on your Linux VM to configure the guest OS for hibernation.
+
+##### [CLI](#tab/cliLHE)
+
+To install LinuxHibernateExtension with the Azure CLI, run the following command:
+
+```azurecli
+az vm extension set -n LinuxHibernateExtension --publisher Microsoft.CPlat.Core --version 1.0 \
+  --vm-name MyVm --resource-group MyResourceGroup --enable-auto-upgrade true
+```
+
+##### [PowerShell](#tab/powershellLHE)
+
+To install LinuxHibernateExtension with PowerShell, run the following command:
+
+```powershell
+Set-AzVMExtension -Publisher Microsoft.CPlat.Core -ExtensionType LinuxHibernateExtension -VMName <VMName> -ResourceGroupName <RGNAME> -Name "LinuxHibernateExtension" -Location <Location> -TypeHandlerVersion 1.0
+```
++
+#### Option 2: hibernation-setup-tool
+You can install the hibernation-setup-tool package on your Linux VM from Microsoft's Linux software repository at [packages.microsoft.com](https://packages.microsoft.com).
+
+To use the Linux software repository, follow the instructions at [Linux package repository for Microsoft software](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software#ubuntu).
+
+##### [Ubuntu 18.04 (Bionic)](#tab/Ubuntu18HST)
+
+To use the repository in Ubuntu 18.04, open a Bash shell and run these commands:
+
+```bash
+curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
+
+sudo apt-add-repository https://packages.microsoft.com/ubuntu/18.04/prod
+
+sudo apt-get update
+```
+
+##### [Ubuntu 20.04 (Focal)](#tab/Ubuntu20HST)
+
+To use the repository in Ubuntu 20.04, open a Bash shell and run these commands:
+
+```bash
+curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc
+
+sudo apt-add-repository https://packages.microsoft.com/ubuntu/20.04/prod
+
+sudo apt-get update
+```
+++
+To install the package, run this command:
+```bash
+sudo apt-get install hibernation-setup-tool
+```
+
+Once the package installs successfully, your Linux guest OS has been configured for hibernation. You can also create a new Azure Compute Gallery Image from this VM and use the image to create VMs. VMs created with this image have the hibernation package preinstalled, thereby simplifying your VM creation experience.
+
+### Configuring hibernation on Windows
+Enabling hibernation while creating a Windows VM automatically installs the 'Microsoft.CPlat.Core.WindowsHibernateExtension' VM extension. This extension configures the guest OS for hibernation. You don't need to manually install or update this extension; it's managed by the Azure platform.
+
+>[!NOTE]
+>When you create a VM with hibernation enabled, Azure automatically places the page file on the C: drive. If you're using a specialized image, then you'll need to follow additional steps to ensure that the pagefile is located on the C: drive.
+
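+A quick, hedged way to check the current page file location from inside the Windows guest is shown here:
+
+```powershell
+# Hedged sketch: run inside the Windows guest; the page file should be on the C: drive.
+Get-CimInstance -ClassName Win32_PageFileUsage | Select-Object -Property Name
+```
+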
+>[!NOTE]
+>Using the WindowsHibernateExtension requires the Azure VM Agent to be installed on the VM. If you choose to opt out of the Azure VM Agent, you can configure the OS for hibernation by running `powercfg /h /type full` inside the guest. You can then verify whether hibernation is enabled inside the guest by using the `powercfg /a` command.
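+
+If you opted out of the Azure VM Agent, the guest commands from the note above can be run as follows:
+
+```powershell
+# From the note above: enable hibernation with a full hiberfile, then list the
+# available sleep states to confirm that Hibernate is listed.
+powercfg /h /type full
+powercfg /a
+```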
+
+## Hibernating a VM
+
+Once a VM with hibernation enabled has been created and the guest OS is configured for hibernation, you can hibernate the VM through the Azure portal, the Azure CLI, PowerShell, or REST API.
++
+#### [Portal](#tab/PortalDoHiber)
+
+To hibernate a VM in the Azure portal, click the 'Hibernate' button on the VM Overview page.
+
+![Screenshot of the button to hibernate a VM in the Azure portal.](./media/hibernate-resume/hibernate-overview-button.png)
+
+#### [CLI](#tab/CLIDoHiber)
+
+To hibernate a VM in the Azure CLI, run this command:
+
+```azurecli
+az vm deallocate --resource-group TestRG --name TestVM --hibernate true
+```
+
+#### [PowerShell](#tab/PSDoHiber)
+
+To hibernate a VM in PowerShell, run this command:
+
+```powershell
+Stop-AzVM -ResourceGroupName "TestRG" -Name "TestVM" -Hibernate
+```
+
+After running the above command, enter 'Y' to continue:
+
+```
+Virtual machine stopping operation
+
+This cmdlet will stop the specified virtual machine. Do you want to continue?
+
+[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
+```
+
+#### [REST API](#tab/APIDoHiber)
+
+To hibernate a VM using the REST API, run this command:
+
+```json
+POST
+https://management.azure.com/subscriptions/.../providers/Microsoft.Compute/virtualMachines/{vmName}/deallocate?hibernate=true&api-version=2021-03-01
+```
++
+## View state of hibernated VM
+
+#### [Portal](#tab/PortalStatCheck)
+
+To view the state of a VM in the portal, check the 'Status' on the overview page. It should report as 'Hibernated (deallocated)'.
+
+![Screenshot of the Hibernated VM's status in the Azure portal listing as 'Hibernated (deallocated)'.](./media/hibernate-resume/is-hibernated-status.png)
+
+#### [PowerShell](#tab/PSStatCheck)
+
+To view the state of a VM using PowerShell:
+
+```powershell
+Get-AzVM -ResourceGroupName "testRG" -Name "testVM" -Status
+```
+
+Your output should look something like this:
+
+```
+ResourceGroupName : testRG
+Name : testVM
+HyperVGeneration : V1
+Disks[0] :
+ Name : testVM_OsDisk_1_d564d424ff9b40c987b5c6636d8ea655
+ Statuses[0] :
+ Code : ProvisioningState/succeeded
+ Level : Info
+ DisplayStatus : Provisioning succeeded
+ Time : 4/17/2022 2:39:51 AM
+Statuses[0] :
+ Code : ProvisioningState/succeeded
+ Level : Info
+ DisplayStatus : Provisioning succeeded
+ Time : 4/17/2022 2:39:51 AM
+Statuses[1] :
+ Code : PowerState/deallocated
+ Level : Info
+ DisplayStatus : VM deallocated
+Statuses[2] :
+ Code : HibernationState/Hibernated
+ Level : Info
+ DisplayStatus : VM hibernated
+```
+
+#### [CLI](#tab/CLIStatCheck)
+
+To view the state of a VM using Azure CLI:
+
+```azurecli
+az vm get-instance-view -g MyResourceGroup -n myVM
+```
+
+Your output should look something like this:
+```
+{
+ "additionalCapabilities": {
+ "hibernationEnabled": true,
+ "ultraSsdEnabled": null
+ },
+ "hardwareProfile": {
+ "vmSize": "Standard_D2s_v5",
+ "vmSizeProperties": null
+ },
+ "instanceView": {
+ "assignedHost": null,
+ "bootDiagnostics": null,
+ "computerName": null,
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "displayStatus": "Provisioning succeeded",
+ "level": "Info",
+ "message": null,
+ "time": "2022-04-17T02:39:51.122866+00:00"
+ },
+ {
+ "code": "PowerState/deallocated",
+ "displayStatus": "VM deallocated",
+ "level": "Info",
+ "message": null,
+ "time": null
+ },
+ {
+ "code": "HibernationState/Hibernated",
+ "displayStatus": "VM hibernated",
+ "level": "Info",
+ "message": null,
+ "time": null
+ }
+ ],
+ },
+```
+
+#### [REST API](#tab/APIStatCheck)
+
+To view the state of a VM using REST API, run this command:
+
+```json
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/instanceView?api-version=2020-12-01
+```
+
+Your output should look something like this:
+
+```
+"statuses":
+[
+    {
+      "code": "ProvisioningState/succeeded",
+      "level": "Info",
+      "displayStatus": "Provisioning succeeded",
+      "time": "2019-10-14T21:30:12.8051917+00:00"
+    },
+    {
+      "code": "PowerState/deallocated",
+      "level": "Info",
+      "displayStatus": "VM deallocated"
+    },
+   {
+      "code": "HibernationState/Hibernated",
+      "level": "Info",
+      "displayStatus": "VM hibernated"
+    }
+]
+```
++
+## Start hibernated VMs
+
+You can start a hibernated VM the same way you start a stopped VM.
+
+### [Portal](#tab/PortalStartHiber)
+To start a hibernated VM using the Azure portal, click the 'Start' button on the VM Overview page.
+
+![Screenshot of the Azure portal button to start a hibernated VM with an underlined status listed as 'Hibernated (deallocated)'.](./media/hibernate-resume/start-hibernated-vm.png)
+
+### [CLI](#tab/CLIStartHiber)
+
+To start a hibernated VM using the Azure CLI, run this command:
+```azurecli
+az vm start -g MyResourceGroup -n MyVm
+```
+
+### [PowerShell](#tab/PSStartHiber)
+
+To start a hibernated VM using PowerShell, run this command:
+
+```powershell
+Start-AzVM -ResourceGroupName "ExampleRG" -Name "ExampleName"
+```
+
+### [REST API](#tab/RESTStartHiber)
+
+To start a hibernated VM using the REST API, run this command:
+
+```json
+POST https://management.azure.com/subscriptions/../providers/Microsoft.Compute/virtualMachines/{vmName}/start?api-version=2020-12-01
+```
++
+## Deploy hibernation enabled VMs from the Azure Compute Gallery
+
+VMs created from Compute Gallery images can also be enabled for hibernation. Ensure that the OS version associated with your Gallery image supports hibernation on Azure. Refer to the list of supported OS versions earlier in this article.
+
+To create VMs with hibernation enabled using Gallery images, you'll first need to create a new image definition with the hibernation property enabled. Once this feature property is enabled on the Gallery Image definition, you can [create an image version](/azure/virtual-machines/image-version?tabs=portal#create-an-image) and use that image version to create hibernation enabled VMs.
+
+>[!NOTE]
+> For specialized Windows images, the page file location must be set to C: drive in order for Azure to successfully configure your guest OS for hibernation.
+> If you're creating an Image version from an existing VM, you should first move the page file to the OS disk and then use the VM as the source for the Image version.
+
+#### [Portal](#tab/PortalImageGallery)
+To create an image definition with the hibernation property enabled, select the checkmark for 'Enable hibernation'.
+
+![Screenshot of the option to enable hibernation in the Azure portal while creating a VM image definition.](./media/hibernate-resume/hibernate-images-support.png)
++
+#### [CLI](#tab/CLIImageGallery)
+```azurecli
+az sig image-definition create --resource-group MyResourceGroup \
+--gallery-name MyGallery --gallery-image-definition MyImage \
+--publisher GreatPublisher --offer GreatOffer --sku GreatSku \
+--os-type linux --os-state Specialized \
+--features IsHibernateSupported=true
+```
+
+#### [PowerShell](#tab/PSImageGallery)
+```powershell
+$rgName = "myResourceGroup"
+$galleryName = "myGallery"
+$galleryImageDefinitionName = "myImage"
+$location = "eastus"
+$publisherName = "GreatPublisher"
+$offerName = "GreatOffer"
+$skuName = "GreatSku"
+$description = "My gallery"
+$IsHibernateSupported = @{Name='IsHibernateSupported';Value='True'}
+$features = @($IsHibernateSupported)
+New-AzGalleryImageDefinition -ResourceGroupName $rgName -GalleryName $galleryName -Name $galleryImageDefinitionName -Location $location -Publisher $publisherName -Offer $offerName -Sku $skuName -OsState "Generalized" -OsType "Windows" -Description $description -Feature $features
+```
++
+## Deploy hibernation enabled VMs from an OS disk
+
+VMs created from OS disks can also be enabled for hibernation. Ensure that the OS version associated with your OS disk supports hibernation on Azure. Refer to the list of supported OS versions earlier in this article.
+
+To create VMs with hibernation enabled using OS disks, ensure that the OS disk has the hibernation property enabled. Refer to the following API example to enable this property on OS disks. Once the hibernation property is enabled on the OS disk, you can create hibernation-enabled VMs using that OS disk.
+
+```
+PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDisk?api-version=2021-12-01
+
+{
+ "properties": {
+ "supportsHibernation": true
+ }
+}
+```
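+
+If you prefer PowerShell over the REST API, a hedged sketch (placeholder names; verify the property and parameter set against your Az module version) for setting the same property is:
+
+```powershell
+# Hedged sketch with placeholder names: enable supportsHibernation on an existing disk.
+$disk = Get-AzDisk -ResourceGroupName "myRG" -DiskName "myOsDisk"
+$disk.SupportsHibernation = $true
+Update-AzDisk -ResourceGroupName "myRG" -DiskName "myOsDisk" -Disk $disk
+```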
+
+## Troubleshooting
+Refer to the [Hibernate troubleshooting guide](./hibernate-resume-troubleshooting.md) for more information.
+
+## FAQs
+
+- What are the charges for using this feature?
+ - Once a VM is placed in a hibernated state, you aren't charged for the VM, just as you aren't charged for VMs in a stopped (deallocated) state. You're only charged for the OS disk, data disks, and any static IPs associated with the VM.
+
+- Can I enable hibernation on existing VMs?
+ - No, you can't enable hibernation on existing VMs. You can only enable hibernation at the time of creating a VM.
+
+- Can I resize a VM with hibernation enabled?
+ - No. Once you enable hibernation on a VM, you can't resize the VM.
+
+- Can I modify a VM once it is in a hibernated state?
+ - No, once a VM is in a hibernated state, you can't perform actions such as resizing the VM or modifying the disks. Additionally, you can't detach any disks or networking resources that are currently attached to the VM, or attach new resources to the VM. You can, however, stop (deallocate) or delete the VM if you want to detach these resources.
+
+- What is the difference between stopping (deallocating) and hibernating a VM?
+ - When you stop (deallocate) a VM, the VM shuts down without persisting the memory contents. You can resize stopped (deallocated) VMs and detach/attach disks to the VM.
+
+ - When you hibernate a VM, the memory contents are first persisted in the OS disk, then the VM hibernates. You can't resize VMs in a hibernated state, nor detach/attach disks and networking resources to the VM.
+
+- Can you disable hibernation?
+ - No, you can't disable hibernation on a VM.
+
+- Can I initiate hibernation from within the VM?
+ - To hibernate a VM, use the Azure portal, CLI, PowerShell, SDKs, or APIs. Triggering hibernation from inside the VM still results in your VM being billed for the compute resources.
+
+- When a VM is hibernated, is there a capacity assurance at the time of starting the VM?
+ - No, there's no capacity assurance for starting hibernated VMs. In rare scenarios if you encounter a capacity issue, then you can try starting the VM at a later time.
+
+## Next steps
+- [Learn more about Azure billing](/azure/cost-management-billing/)
+- [Learn about Azure Virtual Desktop](../virtual-desktop/overview.md)
+- [Look into Azure VM Sizes](sizes.md)
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md
Example document:
} ```
+#### Signature validation guidance
+
+When you validate the signature, confirm that the signature was created with a certificate from Azure. You do this by validating the certificate's Subject Alternative Name (SAN).
+
+Example SAN: `DNS Name=eastus.metadata.azure.com, DNS Name=metadata.azure.com`
+
+> [!NOTE]
+> The domain for the public cloud and each sovereign cloud will be different.
+
+| Cloud | Domain in SAN |
+|-|-|
+| [All generally available global Azure regions](https://azure.microsoft.com/regions/) | *.metadata.azure.com
+| [Azure Government](https://azure.microsoft.com/overview/clouds/government/) | *.metadata.azure.us
+| [Azure operated by 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | *.metadata.azure.cn
+| [Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | *.metadata.microsoftazure.de
+
+> [!NOTE]
+> The certificates might not have an exact match for the domain. For this reason, the certification validation should accept any subdomain (for example, in public cloud general availability regions accept `*.metadata.azure.com`).
+
+We don't recommend certificate pinning for intermediate certs. For further guidance, see [Certificate pinning - Certificate pinning and Azure services](/azure/security/fundamentals/certificate-pinning).
+The Azure Instance Metadata Service doesn't offer notifications for future certificate authority changes.
+Instead, follow the centralized [Azure Certificate Authority details](/azure/security/fundamentals/azure-ca-details?tabs=root-and-subordinate-cas-list) article for all future updates.
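+
+The SAN check described above can be done programmatically. The following is a hedged PowerShell sketch (not part of the official guidance) that assumes the signing certificate has already been extracted from the attested document into `$cert` as an `X509Certificate2` object; adjust `$expectedSuffix` per the table above:
+
+```powershell
+# Hedged sketch: $cert is assumed to hold the signing certificate
+# (System.Security.Cryptography.X509Certificates.X509Certificate2).
+$expectedSuffix = '.metadata.azure.com'   # adjust for sovereign clouds (see table above)
+
+$sanExtension = $cert.Extensions | Where-Object { $_.Oid.Value -eq '2.5.29.17' }  # Subject Alternative Name OID
+$sanText = $sanExtension.Format($true)
+
+if ($sanText -notmatch [regex]::Escape($expectedSuffix)) {
+    throw "Signing certificate SAN does not contain the expected domain suffix '$expectedSuffix'."
+}
+```
+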
+ #### Sample 1: Validate that the VM is running in Azure Vendors in Azure Marketplace want to ensure that their software is licensed to run only in Azure. If someone copies the VHD to an on-premises environment, the vendor needs to be able to detect that. Through IMDS, these vendors can get signed data that guarantees response only from Azure.
openssl verify -verbose -CAfile /etc/ssl/certs/DigiCert_Global_Root.pem -untrust
The `nonce` in the signed document can be compared if you provided a `nonce` parameter in the initial request.
-> [!NOTE]
-> The certificate for the public cloud and each sovereign cloud will be different.
-
-| Cloud | Certificate |
-|-|-|
-| [All generally available global Azure regions](https://azure.microsoft.com/regions/) | *.metadata.azure.com
-| [Azure Government](https://azure.microsoft.com/overview/clouds/government/) | *.metadata.azure.us
-| [Azure operated by 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | *.metadata.azure.cn
-| [Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | *.metadata.microsoftazure.de
-
-> [!NOTE]
-> The certificates might not have an exact match of `metadata.azure.com` for the public cloud. For this reason, the certification validation should allow a common name from any `.metadata.azure.com` subdomain.
-
-In cases where the intermediate certificate can't be downloaded due to network constraints during validation, you can pin the intermediate certificate. Azure rolls over the certificates, which is standard PKI practice. You must update the pinned certificates when rollover happens. Whenever a change to update the intermediate certificate is planned, the Azure blog is updated, and Azure customers are notified.
-
-You can find the intermediate certificates on [this page](../security/fundamentals/azure-CA-details.md). The intermediate certificates for each of the regions can be different.
-
-> [!NOTE]
-> The intermediate certificate for Azure operated by 21Vianet will be from DigiCert Global Root CA, instead of Baltimore.
-If you pinned the intermediate certificates for Azure operated by 21Vianet as part of a root chain authority change, the intermediate certificates must be updated.
-
-> [!NOTE]
-> Starting February 2022, our Attested Data certificates will be impacted by a TLS change. Due to this, the root CA will change from Baltimore CyberTrust to DigiCert Global G2 only for Public and US Government clouds. If you have the Baltimore CyberTrust cert or other intermediate certificates listed in **[this post](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-instance-metadata-service-attested-data-tls-critical/ba-p/2888953)** pinned, please follow the instructions listed there **immediately** to prevent any disruptions from using the Attested Data endpoint.
- ## Managed identity A managed identity, assigned by the system, can be enabled on the VM. You can also assign one or more user-assigned managed identities to the VM.
virtual-machines States Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/states-billing.md
Previously updated : 06/08/2022 Last updated : 10/31/2023 -+
The following table provides a description of each instance state and indicates
| Starting| Virtual machine is powering up. | Billed | | Running | Virtual machine is fully up. This state is the standard working state. | Billed | | Stopping | This state is transitional between running and stopped. | Billed |
-| Stopped | The virtual machine is allocated on a host but not running. Also called *PoweredOff* state or *Stopped (Allocated)*. This state can be result of invoking the `PowerOff` API operation or invoking shutdown from within the guest OS. The *Stopped* state may also be observed briefly during VM creation or while starting a VM from *Deallocated* state. | Billed |
+| Stopped | The virtual machine is allocated on a host but not running. Also called *PoweredOff* state or *Stopped (Allocated)*. This state can be result of invoking the `PowerOff` API operation or invoking shutdown from within the guest OS. The *Stopped* state might also be observed briefly during VM creation or while starting a VM from *Stopped (Deallocated)* state. | Billed |
| Deallocating | This state is transitional between *Running* and *Deallocated*. | Not billed* |
-| Deallocated | The virtual machine has released the lease on the underlying hardware and is powered off. This state is also referred to as *Stopped (Deallocated)*. | Not billed* |
+| Deallocated | The virtual machine has released the lease on the underlying hardware. If the machine is powered off, it's shown as *Stopped (Deallocated)*. If it has entered [hibernation](./hibernate-resume.md), it's shown as *Hibernated (Deallocated)*. | Not billed* |
\* Some Azure resources, such as [Disks](https://azure.microsoft.com/pricing/details/managed-disks) and [Networking](https://azure.microsoft.com/pricing/details/bandwidth/) continue to incur charges.
OS Provisioning states only apply to virtual machines created with a [generalize
To troubleshoot specific VM state issues, see [Troubleshoot Windows VM deployments](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-windows) and [Troubleshoot Linux VM deployments](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-linux).
+To troubleshoot hibernation, see [Troubleshoot VM hibernation](./hibernate-resume-troubleshooting.md).
+ For other troubleshooting help visit [Azure Virtual Machines troubleshooting documentation](/troubleshoot/azure/virtual-machines/welcome-virtual-machines). ## Next steps
virtual-machines Virtual Machines Copy Restore Points How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-copy-restore-points-how-to.md
+
+ Title: how to copy Virtual Machine Restore Points to another region
+description: how to copy Virtual Machine Restore Points to another region
+++++ Last updated : 10/31/2023+++
+# Cross-region copy of VM Restore Points
+
+## Prerequisites
+
+To copy a restore point across regions, you need to pre-create a restore point collection in the target region.
+Learn more about [cross-region copy and its limitations](virtual-machines-restore-points-copy.md) before copying a restore point.
+
+### Create Restore Point Collection in target region
+
+The first step in copying an existing VM restore point from one region to another is to create a restore point collection in the target region by referencing the restore point collection from the source region.
+
+#### URI Request
+
+```
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{restorePointCollectionName}&api-version={api-version}
+```
+
+#### Request Body
+
+```
+{
+ "name": "name of target restorePointCollection resource",
+ "location": "location of target restorePointCollection resource",
+ "tags": {
+ "department": "finance"
+ },
+ "properties": {
+ "source": {
+ "id": "/subscriptions/{subid}/resourceGroups/{resourceGroupName}/providers/microsoft.compute/restorePointCollections/{restorePointCollectionName}"
+ }
+ }
+}
+```
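+
+If you're scripting this from PowerShell, a hedged sketch (placeholder IDs and names; any API version of 2021-03-01 or later) that issues the same PUT through `Invoke-AzRestMethod` is:
+
+```powershell
+# Hedged sketch with placeholder IDs: create the target restore point collection
+# by referencing the source collection, using the documented PUT request.
+$body = @{
+    location   = "<target-region>"
+    properties = @{
+        source = @{
+            id = "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Compute/restorePointCollections/<sourceRpcName>"
+        }
+    }
+} | ConvertTo-Json -Depth 5
+
+Invoke-AzRestMethod -Method PUT `
+    -Path "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Compute/restorePointCollections/<targetRpcName>?api-version=2021-03-01" `
+    -Payload $body
+```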
+
+#### Response
+The response includes a status code and a set of response headers.
+
+##### Status code
+The operation returns 201 (Created) when the collection is created and 200 (OK) when it's updated.
+
+##### Response body
+
+```
+{
+ "name": "name of the copied restorePointCollection resource",
+ "id": "CSM Id of copied restorePointCollection resource",
+ "type": "Microsoft.Compute/restorePointCollections",
+ "location": "location of the copied restorePointCollection resource",
+ "tags": {
+ "department": "finance"
+ },
+ "properties": {
+ "source": {
+ "id": "/subscriptions/{subid}/resourceGroups/{resourceGroupName}/providers/microsoft.compute/restorePointCollections/{restorePointCollectionName}",
+ "location": "location of source RPC"
+ }
+ }
+}
+```
+
+### Create VM Restore Point in Target Region
+The next step is to trigger the copy of a restore point in the target restore point collection by referencing the restore point in the source region that needs to be copied.
+
+#### URI request
+
+```
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{restorePointCollectionName}/restorePoints/{restorePointName}&api-version={api-version}
+```
+
+#### Request body
+
+```
+{
+ "name": "name of the restore point resource",
+ "properties": {
+ "sourceRestorePoint": {
+ "id": "/subscriptions/{subid}/resourceGroups/{resourceGroupName}/providers/microsoft.compute/restorePointCollections/{restorePointCollectionName}/restorePoints/{restorePointName}"
+ }
+ }
+}
+```
+
+**NOTE:** The location of the `sourceRestorePoint` is inferred from the location of the source restore point collection.
+
+#### Response
+The response includes a status code and a set of response headers.
+
+##### Status Code
+This is a long-running operation, so the create request returns 201 (Created). The client is expected to poll for status by using the operation. (Both the `Location` and `Azure-AsyncOperation` headers are provided for this purpose.)
+
+During restore point creation, `provisioningState` appears as `Creating` in the GET restore point API response. If creation fails, `provisioningState` is `Failed`. `provisioningState` is set to `Succeeded` when the data copy across regions is initiated.
+
+**NOTE:** You can track the copy status by calling GET instance view (`?$expand=instanceView`) on the target VM restore point. See the "Get VM Restore Points Copy/Replication Status" section later in this article for how to do this. A VM restore point is considered usable (can be used to restore a VM) only when the copy of all the disk restore points is successful.
+
+##### Response body
+
+```
+{
+ "id": "CSM Id of the restore point",
+ "name": "name of the restore point",
+ "properties": {
+ "optionalProperties": "opaque bag of properties to be passed to extension",
+ "sourceRestorePoint": {
+ "id": "/subscriptions/{subid}/resourceGroups/{resourceGroupName}/providers/microsoft.compute/restorePointCollections/{restorePointCollectionName}/restorePoints/{restorePointName}"
+ },
+ "consistencyMode": "CrashConsistent | FileSystemConsistent | ApplicationConsistent",
+ "sourceMetadata": {
+ "vmId": "Unique Guid of the VM from which the restore point was created",
+ "location": "source VM location",
+ "hardwareProfile": {
+ "vmSize": "Standard_A1"
+ },
+ "osProfile": {
+ "computername": "",
+ "adminUsername": "",
+ "secrets": [
+ {
+ "sourceVault": {
+ "id": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.KeyVault/vaults/<keyvault-name>"
+ },
+ "vaultCertificates": [
+ {
+ "certificateUrl": "https://<keyvault-name>.vault.azure.net/secrets/<secret-name>/<secret-version>",
+ "certificateStore": "certificateStoreName on Windows"
+ }
+ ]
+ }
+ ],
+ "customData": "",
+ "windowsConfiguration": {
+ "provisionVMAgent": "true|false",
+ "winRM": {
+ "listeners": [
+ {
+ "protocol": "http"
+ },
+ {
+ "protocol": "https",
+ "certificateUrl": ""
+ }
+ ]
+ },
+ "additionalUnattendContent": [
+ {
+ "pass": "oobesystem",
+ "component": "Microsoft-Windows-Shell-Setup",
+ "settingName": "FirstLogonCommands|AutoLogon",
+ "content": "<XML unattend content>"
+ }
+ ],
+ "enableAutomaticUpdates": "true|false"
+ },
+ "linuxConfiguration": {
+ "disablePasswordAuthentication": "true|false",
+ "ssh": {
+ "publicKeys": [
+ {
+ "path": "Path-Where-To-Place-Public-Key-On-VM",
+ "keyData": "PEM-Encoded-public-key-file"
+ }
+ ]
+ }
+ }
+ },
+ "storageProfile": {
+ "osDisk": {
+ "osType": "Windows|Linux",
+ "name": "OSDiskName",
+ "diskSizeGB": "10",
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "id": "CSM Id of the managed disk",
+ "storageAccountType": "Standard_LRS"
+ },
+ "diskRestorePoint": {
+ "id": "/subscriptions/<subId>/resourceGroups/<rgName>/restorePointCollections/<rpcName>/restorePoints/<rpName>/diskRestorePoints/<diskRestorePointName>"
+ }
+ },
+ "dataDisks": [
+ {
+ "lun": "0",
+ "name": "datadisk0",
+ "diskSizeGB": "10",
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "id": "CSM Id of the managed disk",
+ "storageAccountType": "Standard_LRS"
+ },
+ "diskRestorePoint": {
+ "id": "/subscriptions/<subId>/resourceGroups/<rgName>/restorePointCollections/<rpcName>/restorePoints/<rpName>/diskRestorePoints/<diskRestorePointName>"
+ }
+ }
+ ]
+ },
+ "diagnosticsProfile": {
+ "bootDiagnostics": {
+ "enabled": true,
+ "storageUri": " http://storageaccount.blob.core.windows.net/"
+ }
+ }
+ },
+ "provisioningState": "Succeeded | Failed | Creating | Deleting",
+ "provisioningDetails": {
+ "creationTime": "Creation Time of Restore point in UTC"
+ }
+ }
+}
+```
+
+### Get VM Restore Points Copy/Replication Status
+Once the copy of a VM restore point is initiated, you can track the copy status by calling GET instance view (`?$expand=instanceView`) on the target VM restore point.
+
+#### URI Request
+
+```
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{restorePointCollectionName}/restorePoints/{restorePointName}?$expand=instanceView&api-version={api-version}
+```
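+
+A hedged PowerShell sketch (placeholder IDs and names) that issues this GET through `Invoke-AzRestMethod` and reads the per-disk replication progress:
+
+```powershell
+# Hedged sketch with placeholder IDs: poll the target restore point's instance view
+# and print the completionPercent for each disk restore point.
+$response = Invoke-AzRestMethod -Method GET `
+    -Path "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Compute/restorePointCollections/<targetRpcName>/restorePoints/<rpName>?`$expand=instanceView&api-version=2021-03-01"
+
+$restorePoint = $response.Content | ConvertFrom-Json
+$restorePoint.properties.instanceView.diskRestorePoints |
+    ForEach-Object { "$($_.id): $($_.replicationStatus.completionPercent)%" }
+```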
+
+#### Response
+
+```
+{
+ "id": "CSM Id of the restore point",
+ "name": "name of the restore point",
+ "properties": {
+ "optionalProperties": "opaque bag of properties to be passed to extension",
+ "sourceRestorePoint": {
+ "id": "/subscriptions/{subid}/resourceGroups/{resourceGroupName}/providers/microsoft.compute/restorePointCollections/{restorePointCollectionName}/restorePoints/{restorePointName}"
+ },
+ "consistencyMode": "CrashConsistent | FileSystemConsistent | ApplicationConsistent",
+ "sourceMetadata": {
+ "vmId": "Unique Guid of the VM from which the restore point was created",
+ "location": "source VM location",
+ "hardwareProfile": {
+ "vmSize": "Standard_A1"
+ },
+ "osProfile": {
+ "computername": "",
+ "adminUsername": "",
+ "secrets": [
+ {
+ "sourceVault": {
+ "id": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.KeyVault/vaults/<keyvault-name>"
+ },
+ "vaultCertificates": [
+ {
+ "certificateUrl": "https://<keyvault-name>.vault.azure.net/secrets/<secret-name>/<secret-version>",
+ "certificateStore": "certificateStoreName on Windows"
+ }
+ ]
+ }
+ ],
+ "customData": "",
+ "windowsConfiguration": {
+ "provisionVMAgent": "true|false",
+ "winRM": {
+ "listeners": [
+ {
+ "protocol": "http"
+ },
+ {
+ "protocol": "https",
+ "certificateUrl": ""
+ }
+ ]
+ },
+ "additionalUnattendContent": [
+ {
+ "pass": "oobesystem",
+ "component": "Microsoft-Windows-Shell-Setup",
+ "settingName": "FirstLogonCommands|AutoLogon",
+ "content": "<XML unattend content>"
+ }
+ ],
+ "enableAutomaticUpdates": "true|false"
+ },
+ "linuxConfiguration": {
+ "disablePasswordAuthentication": "true|false",
+ "ssh": {
+ "publicKeys": [
+ {
+ "path": "Path-Where-To-Place-Public-Key-On-VM",
+ "keyData": "PEM-Encoded-public-key-file"
+ }
+ ]
+ }
+ }
+ },
+ "storageProfile": {
+ "osDisk": {
+ "osType": "Windows|Linux",
+ "name": "OSDiskName",
+ "diskSizeGB": "10",
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "id": "CSM Id of the managed disk",
+ "storageAccountType": "Standard_LRS"
+ },
+ "diskRestorePoint": {
+ "id": "/subscriptions/<subId>/resourceGroups/<rgName>/restorePointCollections/<rpcName>/restorePoints/<rpName>/diskRestorePoints/<diskRestorePointName>"
+ }
+ },
+ "dataDisks": [
+ {
+ "lun": "0",
+ "name": "datadisk0",
+ "diskSizeGB": "10",
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "id": "CSM Id of the managed disk",
+ "storageAccountType": "Standard_LRS"
+ },
+ "diskRestorePoint": {
+ "id": "/subscriptions/<subId>/resourceGroups/<rgName>/restorePointCollections/<rpcName>/restorePoints/<rpName>/diskRestorePoints/<diskRestorePointName>"
+ }
+ }
+ ]
+ },
+ "diagnosticsProfile": {
+ "bootDiagnostics": {
+ "enabled": true,
+ "storageUri": " http://storageaccount.blob.core.windows.net/"
+ }
+ }
+ },
+ "provisioningState": "Succeeded | Failed | Creating | Deleting",
+ "provisioningDetails": {
+ "creationTime": "Creation Time of Restore point in UTC"
+ },
+ "instanceView": {
+ "statuses": [
+ {
+ "code": "ReplicationState/succeeded",
+ "level": "Info",
+ "displayStatus": "Replication succeeded"
+ }
+ ],
+ "diskRestorePoints": [
+ {
+ "id": "<diskRestorePoint Arm Id>",
+ "replicationStatus": {
+ "status": {
+ "code": "ReplicationState/succeeded",
+ "level": "Info",
+ "displayStatus": "Replication succeeded"
+ },
+ "completionPercent": "<completion percentage of the replication>"
+ }
+ }
+ ]
+ }
+ }
+}
+```
+
+## Next steps
+
+- [Create a VM restore point](create-restore-points.md).
+- [Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
Previously updated : 02/14/2022 Last updated : 11/01/2023
virtual-machines Virtual Machines Restore Points Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-restore-points-copy.md
+
+ Title: Using cross-region copy of virtual machine restore points
+description: Using cross-region copy of virtual machine restore points
+++++ Last updated : 11/2/2023+++
+# Overview of Cross-region copy VM restore points (in preview)
+Azure VM restore point APIs are a lightweight option you can use to implement granular backup and retention policies. VM restore points support application consistency and crash consistency (in preview). You can copy a VM restore point from one region to another region. This capability helps partners build business continuity and disaster recovery (BCDR) solutions for Azure VMs.
+This API can be helpful in the following scenarios:
+* Extend multiple copies of restore points to different regions
+* Extend local restore point solutions to support disaster recovery from region failures
+
+> [!NOTE]
+> For copying a RestorePoint across region, you need to pre-create a RestorePointCollection in the target region.
+
+## Limitations
+
+* Private links aren't supported when copying restore points across regions or creating restore points in a region other than the source VM.
+* Azure confidential virtual machines aren't supported.
+* The API version for the cross-region copy of VM restore points feature is 2021-03-01 or later.
+* Copy of a copy isn't supported. You can't copy a restore point that is already copied from another region. For example, if you copied RP1 from East US to West US as RRP1, you can't copy RRP1 from West US to another region (or back to East US).
+* Multiple copies of the same restore point in a single target region aren't supported. A single Restore Point in the source region can only be copied once to a target region.
+* A restore point that's encrypted with a customer-managed key (CMK) in the source region is also encrypted with a CMK in the target region. This feature is currently in preview.
+* The target restore point only shows the creation time of the source restore point.
+* Currently, the replication progress is updated every 10 minutes. So for disks that have low churn, you might see only the initial (0) and the final (100) replication progress values.
+* The maximum supported copy time is two weeks. For a large amount of data to be copied to the target region, the copy can take a couple of days, depending on the bandwidth available between the regions. If the copy time exceeds two weeks, the copy operation is terminated automatically.
+* No error details are provided when a Disk Restore Point copy fails.
+* When a disk restore point copy fails, intermediate completion percentage where the copy failed isn't shown.
+* Restoring a disk from a restore point doesn't automatically check whether the disk restore point's replication is complete. You need to manually check that `completionPercent` in the replication status is 100%, and then start restoring the disk.
+* Restore points that are copied to the target region don't have a reference to the source VM; they have a reference to the source restore points. So, if the source restore point is deleted, there's no way to identify the source VM by using the target restore points.
+* Copying restore points in a nonsequential order isn't supported. For example, if you have three restore points RP1, RP2, and RP3, and you have already successfully copied RP1 and RP3, you aren't allowed to copy RP2.
+* The full snapshot on the source side should always exist and can't be deleted to save cost. For example, if RP1 (full snapshot), RP2 (incremental), and RP3 (incremental) exist in the source and are successfully copied to the target, you can delete RP2 and RP3 on the source side to save cost. Deleting RP1 on the source side results in creating a full snapshot (say RRP1) the next time, and copying it also results in a full snapshot. This is because the storage layer maintains the relationship for each pair of source and target snapshots, which needs to be preserved.
+
+## Troubleshoot VM restore points
+Most common restore point failures are attributed to communication issues with the VM agent and extension, and can be resolved by following the troubleshooting steps listed in the [troubleshooting](restore-point-troubleshooting.md) article.
+
+## Next steps
+
+- [Copy a VM restore point](virtual-machines-copy-restore-points-how-to.md).
+- [Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
virtual-machines Virtual Machines Restore Points Vm Snapshot Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-restore-points-vm-snapshot-extension.md
+
+ Title: VM Snapshot extension using VM restore points
+description: VM Snapshot extension using VM restore points
+++++ Last updated : 11/2/2023+++
+# VMSnapshot extension
+
+Application-consistent restore points use the VSS service (or pre- and post-scripts for Linux) to verify application data consistency before creating a restore point. Achieving an application-consistent restore point involves the VM's running application providing a VSS service (for Windows) or pre- and post-scripts (for Linux).
+
+For Windows images, the **VMSnapshot Windows** extension is used, and for Linux images, the **VMSnapshot Linux** extension is used to take application-consistent restore points. When a request to create an application-consistent restore point is issued for a VM, Azure installs the VMSnapshot extension if it isn't already present. The extension is updated automatically.
+
+> [!IMPORTANT]
+> Azure begins creating a restore point only after the provisioning state of all extensions (including but not limited to VMSnapshot) is complete.
+
+## Extension logs
+
+You can view logs for the VMSnapshot extension on the VM under
+```C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot``` for Windows and under ```/var/log/azure/Microsoft.Azure.RecoveryServices.VMSnapshotLinux/extension.log``` for Linux.
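+
+On a Windows VM, a hedged PowerShell sketch for surfacing the most recent extension log from that folder is:
+
+```powershell
+# Hedged sketch: find the most recently written VMSnapshot extension log on a
+# Windows VM and show its last 50 lines.
+Get-ChildItem "C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot" -Recurse -Filter *.log |
+    Sort-Object LastWriteTime |
+    Select-Object -Last 1 |
+    Get-Content -Tail 50
+```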
++
+## Troubleshooting
+
+Most common restore point failures are attributed to communication issues with the VM agent and extension, and can be resolved by following the troubleshooting steps listed in the [troubleshooting](restore-point-troubleshooting.md) article.
+
+After certain VSS writer failures, Azure takes file-system-consistent restore points for the next three attempts (irrespective of the frequency at which restore point creation is scheduled) following the failed initial creation request. From the fourth attempt onward, an application-consistent restore point is attempted again.
+
+Follow these steps to [troubleshoot VSS writer issues](../backup/backup-azure-vms-troubleshoot.md#extensionfailedvsswriterinbadstatesnapshot-operation-failed-because-vss-writers-were-in-a-bad-state).
+
+> [!NOTE]
+> Avoid manually deleting the extension, because doing so leads to failure of the subsequent creation of an application-consistent restore point.
+
+## Next steps
+
+- [Create a VM restore point](create-restore-points.md).
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
Previously updated : 03/23/2023 Last updated : 11/01/2023
This article describes how to use the free NTTTCP tool from Microsoft to test ne
## Prerequisites
-To test throughput, you need two VMs of the same size to function as *sender* and *receiver*. The two VMs should be in the same [proximity placement group](/azure/virtual-machines/co-location) or [availability set](/azure/virtual-machines/availability-set-overview), so you can use their internal IP addresses and exclude load balancers from the test.
-
-Note the number of VM cores and the receiver VM IP address to use in the commands. Both the sender and receiver commands use the receiver's IP address.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Two Windows or Linux virtual machines in Azure. [Create a Windows VM](/azure/virtual-machines/windows/quick-create-portal) or [create a Linux VM](/azure/virtual-machines/linux/quick-create-portal).
+ - To test throughput, you need two VMs of the same size to function as *sender* and *receiver*. The two VMs should be in the same [proximity placement group](/azure/virtual-machines/co-location) or [availability set](/azure/virtual-machines/availability-set-overview), so you can use their internal IP addresses and exclude load balancers from the test.
+ - Note the number of VM cores and the receiver VM IP address to use in the commands. Both the sender and receiver commands use the receiver's IP address.
>[!NOTE] >Testing by using a virtual IP (VIP) is possible, but is beyond the scope of this article.
+**Examples used in this article**
+
+| Setting | Value |
+|||
+| Receiver VM IP address | **10.0.0.5** |
+| Number of VM cores | **2** |
+ ## Test throughput with Windows VMs or Linux VMs You can test throughput from Windows VMs by using [NTTTCP](https://github.com/microsoft/ntttcp) or from Linux VMs by using [NTTTCP-for-Linux](https://github.com/Microsoft/ntttcp-for-linux). # [Windows](#tab/windows)
-### Set up NTTTCP and test configuration
+### Prepare VMs and install NTTTCP-for-Windows
-1. On both the sender and receiver VMs, [download the latest version of NTTTCP](https://github.com/microsoft/ntttcp/releases/latest) into a separate folder like *c:\\tools*.
+1. On both the sender and receiver VMs, [download the latest version of NTTTCP](https://github.com/microsoft/ntttcp/releases/latest) into a separate folder like **c:\\tools**.
-1. On the receiver VM, create a Windows Defender Firewall `allow` rule to allow the NTTTCP traffic to arrive. It's easier to allow *nttcp.exe* by name than to allow specific inbound TCP ports. Run the following command, replacing `c:\tools` with your download path for *ntttcp.exe* if different.
+1. Open the Windows command line and navigate to the folder where you downloaded **ntttcp.exe**.
- ```cmd
- netsh advfirewall firewall add rule program=c:\tools\ntttcp.exe name="ntttcp" protocol=any dir=in action=allow enable=yes profile=ANY
- ```
+1. On the receiver VM, create a Windows Firewall `allow` rule to allow the NTTTCP traffic to arrive. It's easier to allow **nttcp.exe** by name than to allow specific inbound TCP ports. Run the following command, replacing `c:\tools` with your download path for **ntttcp.exe** if different.
+
+ ```cmd
+ netsh advfirewall firewall add rule program=c:\tools\ntttcp.exe name="ntttcp" protocol=any dir=in action=allow enable=yes profile=ANY
+ ```
-1. To confirm your configuration, test a single Transfer Control Protocol (TCP) stream for 10 seconds by running the following commands:
+1. To confirm your configuration, use the following commands to test a single Transfer Control Protocol (TCP) stream for 10 seconds on the receiver and sender virtual machines:
- - On the receiver VM, run `ntttcp -r -t 10 -P 1`.
- - On the sender VM, run `ntttcp -s<receiver IP address> -t 10 -n 1 -P 1`.
+ **Receiver VM**
+
+ `ntttcp -r -m [<number of VM cores> x 2],*,<receiver IP address> -t 10 -P 1`
+
+ ```cmd
+ ntttcp -r -m 4,*,10.0.0.5 -t 10 -P 1
+ ```
+
+ **Sender VM**
+
+ `ntttcp -s -m [<number of VM cores> x 2],*,<receiver IP address> -t 10 -P 1`
+
+ ```cmd
+ ntttcp -s -m 4,*,10.0.0.5 -t 10 -P 1
+ ```
>[!NOTE]
>Use the preceding commands only to test configuration.
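
In these commands, the `-m` mapping parameter takes three comma-separated values: the number of threads to run (sized here as the number of VM cores × 2), the processor the threads are bound to (`*` means no specific core), and the receiver's IP address. With the two-core example VMs, that works out to `-m 4,*,10.0.0.5`, as shown in the preceding commands.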
You can test throughput from Windows VMs by using [NTTTCP](https://github.com/mi
### Run throughput tests
-Run *ntttcp.exe* from the Windows command line, not from PowerShell. Run the test for 300 seconds, or five minutes, on both the sender and receiver VMs. The sender and receiver must specify the same test duration for the `-t` parameter.
+Run the test for 300 seconds, or five minutes, on both the sender and receiver VMs. The sender and receiver must specify the same test duration for the `-t` parameter.
1. On the receiver VM, run the following command, replacing the `<number of VM cores>` and `<receiver IP address>` placeholders with your own values.
+
+ **`ntttcp -r -m [<number of VM cores> x 2],*,<receiver IP address> -t 300`**
+
+ ```cmd
+ ntttcp -r -m 4,*,10.0.0.5 -t 300
+ ```
+
+1. On the sender VM, run the following command. The sender and receiver commands differ only in the `-s` or `-r` parameter that designates the sender or receiver VM.
+
+ **`ntttcp -s -m [<number of VM cores> x 2],*,<receiver IP address> -t 300`**
```cmd
- ntttcp -r -m [<number of VM cores> x 2],*,<receiver IP address> -t 300
+ ntttcp -s -m 4,*,10.0.0.5 -t 300
```
- The following example shows a command for a VM with four cores and an IP address of `10.0.0.4`.
+1. Wait for the results.
- `ntttcp -r -m 8,*,10.0.0.4 -t 300`
+When the test is complete, the output should be similar to the following example:
+
+```output
+C:\tools>ntttcp -s -m 4,*,10.0.0.5 -t 300
+Copyright Version 5.39
+Network activity progressing...
-1. On the sender VM, run the following command. The sender and receiver commands differ only in the `-s` or `-r` parameter that designates the sender or receiver VM.
- ```cmd
- ntttcp -s -m [<number of VM cores> x 2],*,<receiver IP address> -t 300
- ```
+Thread Time(s) Throughput(KB/s) Avg B / Compl
+====== ======= ================ =============
+ 0 300.006 29617.328 65536.000
+ 1 300.006 29267.468 65536.000
+ 2 300.006 28978.834 65536.000
+ 3 300.006 29016.806 65536.000
- The following example shows the sender command for a receiver IP address of `10.0.0.4`.
-
- ```cmd
- ntttcp -s -m 8,*,10.0.0.4 -t 300 
- ```
-1. Wait for the results.
+##### Totals: #####
++
+ Bytes(MEG) realtime(s) Avg Frame Size Throughput(MB/s)
+================ =========== ============== ================
+ 34243.000000 300.005 1417.829 114.141
++
+Throughput(Buffers/s) Cycles/Byte Buffers
+===================== =========== =============
+ 1826.262 7.036 547888.000
++
+DPCs(count/s) Pkts(num/DPC) Intr(count/s) Pkts(num/intr)
+============= ============= =============== ==============
+ 4218.744 1.708 6055.769 1.190
++
+Packets Sent Packets Received Retransmits Errors Avg. CPU %
+============ ================ =========== ====== ==========
+ 25324915 2161992 60412 0 15.075
+
+```
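
To compare this result with the Linux output later in this article, which reports throughput in megabits per second, multiply the **Throughput(MB/s)** total by 8. In this example, 34,243 MB transferred over 300 seconds is about 114.1 MB/s, or roughly 913 Mbps (treating MB as 10^6 bytes, which is how the totals in this output line up).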
# [Linux](#tab/linux)
To measure throughput from Linux machines, use [NTTTCP-for-Linux](https://github
1. Prepare both the sender and receiver VMs for NTTTCP-for-Linux by running the following commands, depending on your distro:
- - For **CentOS**, install `gcc` and `git`.
+ - For **CentOS**, install `gcc`, `make`, and `git`.
``` bash
- yum install gcc -y
- yum install git -y
+ sudo yum install gcc -y
+ sudo yum install git -y
+ sudo yum install make -y
``` - For **Ubuntu**, install `build-essential` and `git`.
- ``` bash
- apt-get -y install build-essential
- apt-get -y install git
+ ```bash
+ sudo apt-get -y install build-essential
+ sudo apt-get -y install git
``` - For **SUSE**, install `git-core`, `gcc`, and `make`.
- ``` bash
- zypper in -y git-core gcc make
+ ```bash
+ sudo zypper in -y git-core gcc make
``` 1. Make and install NTTTCP-for-Linux.
- ``` bash
+ ```bash
git clone https://github.com/Microsoft/ntttcp-for-linux cd ntttcp-for-linux/src
- make && make install
+ sudo make && sudo make install
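   # Optionally, confirm that the ntttcp binary is now on the PATH (a quick verification step).
   which ntttcp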
``` ### Run throughput tests
Run the NTTTCP test for 300 seconds, or five minutes, on both the sender VM and
1. On the receiver VM, run the following command:
- ``` bash
- ntttcp -r -t 300
+ ```bash
+ ntttcp -r -m 4,*,10.0.0.5 -t 300
```
-1. On the sender VM, run the following command. This example shows a sender command for a receiver IP address of `10.0.0.4`.
+1. On the sender VM, run the following command. This example shows a sender command for a receiver IP address of `10.0.0.5`.
- ``` bash
- ntttcp -s10.0.0.4 -t 300
+ ```bash
+ ntttcp -s -m 4,*,10.0.0.5 -t 300
```
+When the test is complete, the output should be similar to the following example:
+
+```output
+azureuser@vm-3:~/ntttcp-for-linux/src$ ntttcp -s -m 4,*,10.0.0.5 -t 300
+NTTTCP for Linux 1.4.0
+
+23:59:01 INFO: 4 threads created
+23:59:01 INFO: 4 connections created in 1933 microseconds
+23:59:01 INFO: Network activity progressing...
+00:04:01 INFO: Test run completed.
+00:04:01 INFO: Test cycle finished.
+00:04:01 INFO: 4 connections tested
+00:04:01 INFO: ##### Totals: #####
+00:04:01 INFO: test duration:300.00 seconds
+00:04:01 INFO: total bytes:35750674432
+00:04:01 INFO: throughput:953.35Mbps
+00:04:01 INFO: retrans segs:13889
+00:04:01 INFO: cpu cores:2
+00:04:01 INFO: cpu speed:2793.437MHz
+00:04:01 INFO: user:0.16%
+00:04:01 INFO: system:1.60%
+00:04:01 INFO: idle:98.07%
+00:04:01 INFO: iowait:0.05%
+00:04:01 INFO: softirq:0.12%
+00:04:01 INFO: cycles/byte:0.91
+00:04:01 INFO: cpu busy (all):3.96%
+
+```
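
As a quick sanity check, the reported throughput follows from the totals in the output: 35,750,674,432 bytes × 8 bits per byte, divided by the 300-second test duration, is approximately 953.35 Mbps, which matches the `throughput` line.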
+ + ## Test throughput between a Windows VM and a Linux VM To run NTTTCP throughput tests between a Windows VM and a Linux VM, enable no-sync mode by using the `-ns` flag on Windows or the `-N` flag on Linux.
To test with the Windows VM as the receiver, run the following command:
```cmd ntttcp -r -m [<number of VM cores> x 2],*,<Linux VM IP address> -t 300 ```+ To test with the Windows VM as the sender, run the following command: ```cmd
To test with the Linux VM as the sender, run the following command:
```bash ntttcp -s -m [<number of VM cores> x 2],*,<Windows VM IP address> -N -t 300 ```--
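
For example, with the Linux VM as the sender, a two-core sender VM, and the Windows receiver at the example address **10.0.0.5** (assumed values that reuse the examples from earlier in this article), the Linux sender command looks like this:

```bash
# Linux sender in no-sync mode (-N): 4 threads (2 cores x 2), Windows receiver at 10.0.0.5, 300-second test.
ntttcp -s -m 4,*,10.0.0.5 -N -t 300
```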
-## Test Cloud Service instances
-
-Add the following section to *ServiceDefinition.csdef*:
-```xml
-<Endpoints>
- <InternalEndpoint name="Endpoint3" protocol="any" />
-</Endpoints>
-```
+ ## Next steps - [Optimize network throughput for Azure virtual machines](virtual-network-optimize-network-bandwidth.md).+ - [Virtual machine network bandwidth](virtual-machine-network-throughput.md).+ - [Test VM network latency](virtual-network-test-latency.md)+ - [Azure Virtual Network frequently asked questions (FAQ)](virtual-networks-faq.md)
virtual-wan About Nva Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-nva-hub.md
description: Learn about Network Virtual Appliances in a Virtual WAN hub.
Previously updated : 06/30/2023 Last updated : 11/02/2023 # Customer intent: As someone with a networking background, I want to learn about Network Virtual Appliances in a Virtual WAN hub.
Deploying NVAs into a Virtual WAN hub provides the following benefits:
* **Pre-defined and pre-tested selection of infrastructure choices ([NVA Infrastructure Units](#units))**: Microsoft and the partner work together to validate throughput and bandwidth limits prior to solution being made available to customers. * **Built-in availability and resiliency**: Virtual WAN NVA deployments are Availability Zone (AZ) aware and are automatically configured to be highly available.
-* **No-hassle provisioning and boot-strapping**: A managed application is pre-qualified for provisioning and boot-strapping for the Virtual WAN platform. This managed application is available through the Azure Marketplace link.
+* **No-hassle provisioning and boot-strapping**: A managed application is prequalified for provisioning and boot-strapping for the Virtual WAN platform. This managed application is available through the Azure Marketplace link.
* **Simplified routing**: Leverage Virtual WAN's intelligent routing systems. NVA solutions peer with the Virtual WAN hub router and participate in the Virtual WAN routing decision process similarly to Microsoft Gateways. * **Integrated support**: Partners have a special support agreement with Microsoft Azure Virtual WAN to quickly diagnose and resolve any customer problems. * **Platform-provided lifecycle management**: Upgrades and patches are a part of the Azure Virtual WAN service. This takes away the complexity of lifecycle management from a customer deploying Virtual Appliance solutions.
Deploying NVAs into a Virtual WAN hub provides the following benefits:
## Partners
-The following tables describes the Network Virtual Appliances that are eligible to be deployed in the Virtual WAN hub and the relevant use cases (connectivity and/or firewall). The Virtual WAN NVA Vendor Identifier column corresponds to the NVA Vendor that is displayed in Azure portal when you deploy a new NVA or view existing NVA's deployed in the Virtual Hub.
+The following tables describe the Network Virtual Appliances that are eligible to be deployed in the Virtual WAN hub and the relevant use cases (connectivity and/or firewall). The Virtual WAN NVA Vendor Identifier column corresponds to the NVA Vendor that is displayed in Azure portal when you deploy a new NVA or view existing NVAs deployed in the Virtual hub.
[!INCLUDE [NVA partners](../../includes/virtual-wan-nva-hub-partners.md)]
All NVA offerings that are available to be deployed into a Virtual WAN hub will
* Bill software licensing costs directly, or through Azure Marketplace. * Expose custom properties and resource meters.
-NVA Partners may create different resources depending on their appliance deployment, configuration licensing, and management needs. When a customer creates an NVA in a Virtual WAN hub, like all managed applications, there will be two resource groups created in their subscription.
+NVA Partners might create different resources depending on their appliance deployment, configuration licensing, and management needs. When a customer creates an NVA in a Virtual WAN hub, as with all managed applications, two resource groups are created in their subscription.
-* **Customer resource group** - This will contain an application placeholder for the managed application. Partners can use this to expose whatever customer properties they choose here.
-* **Managed resource group** - Customers can't configure or change resources in this resource group directly, as this is controlled by the publisher of the managed application. This resource group will contain the **NetworkVirtualAppliances** resource.
+* **Customer resource group** - This contains an application placeholder for the managed application. Partners can use this to expose whatever customer properties they choose here.
+* **Managed resource group** - Customers can't configure or change resources in this resource group directly, as this is controlled by the publisher of the managed application. This resource group contains the **NetworkVirtualAppliances** resource.
:::image type="content" source="./media/about-nva-hub/managed-app.png" alt-text="Managed Application resource groups":::
NVA Partners may create different resources depending on their appliance deploym
By default, all managed resource groups have a deny-all Microsoft Entra assignment. Deny-all assignments prevent customers from calling write operations on any resources in the managed resource group, including Network Virtual Appliance resources.
-However, partners may create exceptions for specific actions that customers are allowed to perform on resources deployed in managed resource groups.
+However, partners might create exceptions for specific actions that customers are allowed to perform on resources deployed in managed resource groups.
Permissions on resources in existing managed resource groups aren't dynamically updated when partners add new permitted actions; they require a manual refresh.
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/
When you create an NVA in a Virtual WAN hub, you must choose the number of NVA Infrastructure Units you want to deploy it with. An **NVA Infrastructure Unit** is a unit of aggregate bandwidth capacity for an NVA in a Virtual WAN hub. An **NVA Infrastructure Unit** is similar to a VPN [Scale Unit](pricing-concepts.md#scale-unit) in terms of the way you think about capacity and sizing.
-* NVA Infrastructure Units are a guideline for how much aggregate networking throughput the **virtual machine infrastructure** on which NVA's are deployed can support. 1 NVA Infrastructure Unit corresponds to 500 Mbps of aggregate throughput. This 500 Mbps number does not take into consideration differences between the software that runs on Network Virtual Appliances. Depending on the features turned on in the NVA or partner-specific software implementation, networking functions such as encryption/decryption, encapsulation/decapsulation or deep packet inspection may be more intensive, meaning you may see less throughput than the NVA infrastructure unit. For a mapping of Virtual WAN NVA infrastructure units to expected throughputs, please contact the vendor.
-* Azure supports deployments ranging from 2-80 NVA Infrastructure Units for a given NVA virtual hub deployment, but partners may choose which scale units they support. As such, you may not be able to deploy all possible scale unit configurations.
+* NVA Infrastructure Units are a guideline for how much aggregate networking throughput the **virtual machine infrastructure** on which NVAs are deployed can support. 1 NVA Infrastructure Unit corresponds to 500 Mbps of aggregate throughput. This 500 Mbps number doesn't take into consideration differences between the software that runs on Network Virtual Appliances. Depending on the features turned on in the NVA or partner-specific software implementation, networking functions such as encryption/decryption, encapsulation/decapsulation or deep packet inspection might be more intensive. This means you might see less throughput than the NVA infrastructure unit. For a mapping of Virtual WAN NVA infrastructure units to expected throughputs, please contact the vendor.
+* Azure supports deployments ranging from 2-80 NVA Infrastructure Units for a given NVA virtual hub deployment, but partners might choose which scale units they support. As such, you might not be able to deploy all possible scale unit configurations.
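
As a rough illustration, a deployment sized at 4 NVA Infrastructure Units corresponds to about 4 × 500 Mbps = 2 Gbps of aggregate infrastructure throughput. The effective throughput for your workload might be lower, depending on which NVA features are enabled, so treat this arithmetic as a sizing guideline rather than a guarantee.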
-NVAs in Virtual WAN are deployed to ensure you always are able to achieve at minimum the vendor-specific throughput numbers for a particular chosen scale unit. To achieve this, NVAs in Virtual WAN are overprovisioned with additional capacity in the form of multiple instances in a 'n+1' manner. This means that at any given time you may see aggregate throughput across the instances to be greater than the vendor-specific throughput numbers. This ensures if an instance is unhealthy, the remaining 'n' instance(s) can service customer traffic and provide the vendor-specific throughput for that scale unit.
+NVAs in Virtual WAN are deployed to ensure that you can always achieve at least the vendor-specific throughput numbers for the chosen scale unit. To achieve this, NVAs in Virtual WAN are overprovisioned with additional capacity in the form of multiple instances in an 'n+1' manner. This means that at any given time you might see aggregate throughput across the instances that's greater than the vendor-specific throughput numbers. This overprovisioning ensures that if an instance is unhealthy, the remaining 'n' instance(s) can service customer traffic and provide the vendor-specific throughput for that scale unit.
-If the total amount of traffic that passes through a NVA at a given time goes above the vendor-specific throughput numbers for the chosen scale unit, events that may cause a NVA instance to be unavailable including but not limited to routine Azure platform maintenance activities or software upgrades can result in service or connectivity disruption. To minimize service disruptions, you should choose the scale unit based on your peak traffic profile and vendor-specific throughput numbers for a particular scale unit as opposed to relying on best-case throughput numbers observed during testing.
+If the total amount of traffic that passes through an NVA at a given time exceeds the vendor-specific throughput numbers for the chosen scale unit, events that make an NVA instance unavailable (including, but not limited to, routine Azure platform maintenance activities or software upgrades) can result in service or connectivity disruption. To minimize service disruptions, choose the scale unit based on your peak traffic profile and the vendor-specific throughput numbers for that scale unit, rather than relying on best-case throughput numbers observed during testing.
## <a name="configuration"></a>NVA configuration process
-Partners have worked to provide an experience that configures the NVA automatically as part of the deployment process. Once the NVA has been provisioned into the virtual hub, any additional configuration that may be required for the NVA must be done via the NVA partners portal or management application. Direct access to the NVA isn't available.
+Partners have worked to provide an experience that configures the NVA automatically as part of the deployment process. Once the NVA is provisioned into the virtual hub, any additional configuration that might be required for the NVA must be done via the NVA partner's portal or management application. Direct access to the NVA isn't available.
## <a name="resources"></a>Site and connection resources with NVAs
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
In the following tables:
| Authentication Method |Pre-Shared Key |Pre-Shared Key | | Encryption & Hashing Algorithms |1. AES256, SHA256<br>2. AES256, SHA1<br>3. AES128, SHA1<br>4. 3DES, SHA1 |1. AES256, SHA1<br>2. AES256, SHA256<br>3. AES128, SHA1<br>4. AES128, SHA256<br>5. 3DES, SHA1<br>6. 3DES, SHA256 | | SA Lifetime |28,800 seconds |28,800 seconds |
+| Number of Quick Mode SA |100 |100 |
### IKE Phase 2 (Quick Mode) parameters
vpn-gateway Vpn Gateway Classic Resource Manager Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md
Previously updated : 08/21/2023 Last updated : 11/02/2023 # VPN Gateway classic to Resource Manager migration
-VPN gateways can now be migrated from the classic deployment model to [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). For more information, see [Resource Manager deployment model](../azure-resource-manager/management/overview.md). In this article, we discuss how to migrate from classic deployments to the Resource Manager model.
+VPN gateways can now be migrated from the classic deployment model to [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). For more information, see [Resource Manager deployment model](../azure-resource-manager/management/overview.md). In this article, we discuss how to migrate from classic deployments to the Resource Manager model.
+
+> [!IMPORTANT]
+> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
VPN gateways are migrated as part of VNet migration from classic to Resource Manager. This migration is done one VNet at a time. There aren't additional requirements in terms of tools or prerequisites to migrate. Migration steps are identical to the existing VNet migration and are documented at [IaaS resources migration page](../virtual-machines/migration-classic-resource-manager-ps.md).
vpn-gateway Vpn Gateway Delete Vnet Gateway Classic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-classic-powershell.md
Previously updated : 08/21/2023 Last updated : 10/31/2023 # Delete a virtual network gateway using PowerShell (classic)
-This article helps you delete a VPN gateway in the classic (legacy) deployment model by using PowerShell. After the virtual network gateway has been deleted, modify the network configuration file to remove elements that you're no longer using.
+This article helps you delete a VPN gateway in the classic (legacy) deployment model by using PowerShell. After the virtual network gateway is deleted, modify the network configuration file to remove elements that you're no longer using.
The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md)**.
+> [!IMPORTANT]
+> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
+ ## <a name="connect"></a>Step 1: Connect to Azure ### 1. Install the latest PowerShell cmdlets.
In this example, the network configuration file is exported to C:\AzureNet.
Get-AzureVNetConfig -ExportToFile C:\AzureNet\NetworkConfig.xml ```
-Open the file with a text editor and view the name for your classic VNet. When you create a VNet in the Azure portal, the full name that Azure uses isn't visible in the portal. For example, a VNet that appears to be named 'ClassicVNet1' in the Azure portal, may have a longer name in the network configuration file. The name might look something like: 'Group ClassicRG1 ClassicVNet1'. Virtual network names are listed as **'VirtualNetworkSite name ='**. Use the names in the network configuration file when running your PowerShell cmdlets.
+Open the file with a text editor and view the name for your classic VNet. When you create a VNet in the Azure portal, the full name that Azure uses isn't visible in the portal. For example, a VNet that appears to be named 'ClassicVNet1' in the Azure portal might have a longer name in the network configuration file. The name might look something like: 'Group ClassicRG1 ClassicVNet1'. Virtual network names are listed as **'VirtualNetworkSite name ='**. Use the names in the network configuration file when running your PowerShell cmdlets.
## <a name="delete"></a>Step 3: Delete the virtual network gateway
When you delete a virtual network gateway, the cmdlet doesn't modify the network
### <a name="lnsref"></a>Local Network Site References
-To remove site reference information, make configuration changes to **ConnectionsToLocalNetwork/LocalNetworkSiteRef**. Removing a local site reference triggers Azure to delete a tunnel. Depending on the configuration that you created, you may not have a **LocalNetworkSiteRef** listed.
+To remove site reference information, make configuration changes to **ConnectionsToLocalNetwork/LocalNetworkSiteRef**. Removing a local site reference triggers Azure to delete a tunnel. Depending on the configuration that you created, you might not have a **LocalNetworkSiteRef** listed.
``` <Gateway>
vpn-gateway Vpn Gateway Howto Point To Site Classic Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md
description: Learn how to create a classic Point-to-Site VPN Gateway connection
Previously updated : 08/21/2023 Last updated : 10/31/2023 # Configure a Point-to-Site connection by using certificate authentication (classic)
-This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md)**.
+This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. These instructions are for the classic deployment model. You can no longer create a gateway using the classic deployment model. See the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md) instead.
+
+> [!IMPORTANT]
+> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
You use a Point-to-Site (P2S) VPN gateway to create a secure connection to your virtual network from an individual client computer. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location. When you have only a few clients that need to connect to a VNet, a P2S VPN is a useful solution to use instead of a Site-to-Site VPN. A P2S VPN connection is established by starting it from the client computer.
Before you begin, verify that you have an Azure subscription. If you don't alrea
## <a name="vnet"></a>Create a virtual network
-If you already have a VNet, verify that the settings are compatible with your VPN gateway design. Pay particular attention to any subnets that may overlap with other networks.
+If you already have a VNet, verify that the settings are compatible with your VPN gateway design. Pay particular attention to any subnets that might overlap with other networks.
[!INCLUDE [basic classic vnet](../../includes/vpn-gateway-vnet-classic.md)]
If you already have a VNet, verify that the settings are compatible with your VP
* **Size:** The size is the gateway SKU for your virtual network gateway. In the Azure portal, the default SKU is **Default**. For more information about gateway SKUs, see [About VPN gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku). * **Routing Type:** You must select **Dynamic** for a point-to-site configuration. Static routing won't work. * **Gateway subnet:** This field is already autofilled. You can't change the name. If you try to change the name using PowerShell or any other means, the gateway won't work properly.
- * **Address range (CIDR block):** While it's possible to create a gateway subnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting at least /28 or /27. Doing so will allow for enough addresses to accommodate possible additional configurations that you may want in the future. When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating a network security group to this subnet may cause your VPN gateway to not function as expected.
+ * **Address range (CIDR block):** While it's possible to create a gateway subnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting at least /28 or /27. Doing so will allow for enough addresses to accommodate possible additional configurations that you might want in the future. When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating a network security group to this subnet might cause your VPN gateway to not function as expected.
1. Select **Review + create** to validate your settings. 1. Once validation passes, select **Create**. A VPN gateway can take up to 45 minutes to complete, depending on the gateway SKU that you select.
After the gateway has been created, upload the .cer file (which contains the pub
1. Select **Upload**. 1. On the **Upload a certificate** pane, select the folder icon and navigate to the certificate you want to upload. 1. Select **Upload**.
-1. After the certificate has uploaded successfully, you can view it on the Manage certificate page. You may need to select **Refresh** to view the certificate you just uploaded.
+1. After the certificate has uploaded successfully, you can view it on the Manage certificate page. You might need to select **Refresh** to view the certificate you just uploaded.
## Configure the client
You can revoke a client certificate by adding the thumbprint to the revocation l
1. In **Thumbprint**, paste the certificate thumbprint as one continuous line of text, with no spaces. 1. Select **+ Add to list** to add the thumbprint to the certificate revocation list (CRL).
-After updating has completed, the certificate can no longer be used to connect. Clients that try to connect by using this certificate receive a message saying that the certificate is no longer valid.
+After updating completes, the certificate can no longer be used to connect. Clients that try to connect by using this certificate receive a message saying that the certificate is no longer valid.
## <a name="faq"></a>FAQ
vpn-gateway Vpn Gateway Howto Site To Site Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md
Previously updated : 10/06/2023 Last updated : 10/31/2023 # Create a Site-to-Site connection using the Azure portal (classic)
-This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the **classic (legacy) deployment model** and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](./tutorial-site-to-site-portal.md)**.
+This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the **classic (legacy) deployment model** and don't apply to the current deployment model, Resource Manager. See the [Resource Manager version of this article](./tutorial-site-to-site-portal.md) instead.
+
+> [!IMPORTANT]
+> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
The examples in this article use the following values. You can use these values
When you create a virtual network to use for a S2S connection, you need to make sure that the address spaces that you specify don't overlap with any of the client address spaces for the local sites that you want to connect to. If you have overlapping subnets, your connection won't work properly.
-* If you already have a VNet, verify that the settings are compatible with your VPN gateway design. Pay particular attention to any subnets that may overlap with other networks.
+* If you already have a VNet, verify that the settings are compatible with your VPN gateway design. Pay particular attention to any subnets that might overlap with other networks.
* If you don't already have a virtual network, create one. Screenshots are provided as examples. Be sure to replace the values with your own.
vpn-gateway Vpn Gateway Howto Vnet Vnet Portal Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md
Previously updated : 10/06/2023 Last updated : 10/31/2023 # Configure a VNet-to-VNet connection (classic) This article helps you create a VPN gateway connection between virtual networks. The virtual networks can be in the same or different regions, and from the same or different subscriptions.
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).**
+The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. You can no longer create a gateway using the classic deployment model. See the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) instead.
+
+> [!IMPORTANT]
+> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
:::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-portal-classic/classic-diagram.png" alt-text="Diagram showing classic VNet-to-VNet architecture.":::
The VNets you connect can be in different subscriptions and different regions. Y
### <a name="why"></a>Why connect virtual networks?
-You may want to connect virtual networks for the following reasons:
+You might want to connect virtual networks for the following reasons:
* **Cross region geo-redundancy and geo-presence**
The local site typically refers to your on-premises location. It contains the IP
* **Size:** This is the gateway SKU that you use to create your virtual network gateway. Classic VPN gateways use the old (legacy) gateway SKUs. For more information about the legacy gateway SKUs, see [Working with virtual network gateway SKUs (old SKUs)](vpn-gateway-about-skus-legacy.md). You can select **Standard** for this exercise.
- * **Gateway subnet:** The size of the gateway subnet that you specify depends on the VPN gateway configuration that you want to create. While it is possible to create a gateway subnet as small as /29, we recommend that you use /27 or /28. This creates a larger subnet that includes more addresses. Using a larger gateway subnet allows for enough IP addresses to accommodate possible future configurations.
+ * **Gateway subnet:** The size of the gateway subnet that you specify depends on the VPN gateway configuration that you want to create. While it's possible to create a gateway subnet as small as /29, we recommend that you use /27 or /28. This creates a larger subnet that includes more addresses. Using a larger gateway subnet allows for enough IP addresses to accommodate possible future configurations.
1. Select **Review + create** at the bottom of the page to validate your settings. Select **Create** to deploy. It can take up to 45 minutes to create a virtual network gateway, depending on the gateway SKU that you selected. 1. You can proceed to the next step while the gateway is being created.
web-application-firewall Rate Limiting Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-configure.md
Title: Create rate limiting custom rules for Application Gateway WAF v2 (preview)
+ Title: Create rate limiting custom rules for Application Gateway WAF v2
description: Learn how to configure rate limit custom rules for Application Gateway WAF v2. Previously updated : 08/16/2023 Last updated : 11/01/2023
-# Create rate limiting custom rules for Application Gateway WAF v2 (preview)
-
-> [!IMPORTANT]
-> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Create rate limiting custom rules for Application Gateway WAF v2
Rate limiting enables you to detect and block abnormally high levels of traffic destined for your application. Rate limiting works by counting all traffic that matches the configured rate limit rule and performing the configured action on traffic that matches the rule and exceeds the configured threshold. For more information, see [Rate limiting overview](rate-limiting-overview.md).
web-application-firewall Rate Limiting Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-overview.md
Title: Azure Web Application Firewall (WAF) rate limiting (preview)
+ Title: Azure Web Application Firewall (WAF) rate limiting
description: This article is an overview of Azure Web Application Firewall (WAF) on Application Gateway rate limiting. Previously updated : 08/16/2023 Last updated : 11/01/2023
-# What is rate limiting for Web Application Firewall on Application Gateway (preview)?
+# What is rate limiting for Web Application Firewall on Application Gateway?
-> [!IMPORTANT]
-> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-Rate limiting for Web Application Firewall on Application Gateway (preview) allows you to detect and block abnormally high levels of traffic destined for your application. By using rate limiting on Application Gateway WAF_v2, you can mitigate many types of denial-of-service attacks, protect against clients that have accidentally been misconfigured to send large volumes of requests in a short time period, or control traffic rates to your site from specific geographies.
+Rate limiting for Web Application Firewall on Application Gateway allows you to detect and block abnormally high levels of traffic destined for your application. By using rate limiting on Application Gateway WAF_v2, you can mitigate many types of denial-of-service attacks, protect against clients that have accidentally been misconfigured to send large volumes of requests in a short time period, or control traffic rates to your site from specific geographies.
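
For example, a rate limit rule configured with a one-minute window and a threshold of 100 requests, grouped by client address, allows up to 100 matching requests from a given client address within the sliding one-minute window and applies the rule's configured action, such as block, to further matching requests from that address until the rate drops back below the threshold. The specific numbers and grouping key here are illustrative.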
## Rate limiting policies
The sliding window algorithm blocks all matching traffic for the first window i
## Next step -- [Create rate limiting custom rules for Application Gateway WAF v2 (preview)](rate-limiting-configure.md)
+- [Create rate limiting custom rules for Application Gateway WAF v2](rate-limiting-configure.md)