Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
advisor | Advisor How To Calculate Total Cost Savings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-calculate-total-cost-savings.md | Title: Calculate cost savings in Azure Advisor Previously updated : 02/06/2024 Last updated : 09/05/2024 description: Export cost savings in Azure Advisor and calculate the aggregated potential yearly savings by using the cost savings amount for each recommendation. This article provides guidance on how to calculate total cost savings in Azure Advisor. +## Understand cost savings ++Azure Advisor provides recommendations for resizing/shutting down underutilized resources, purchasing compute reserved instances, and savings plans for compute. ++These recommendations contain one or more calls-to-action and forecasted savings from following the recommendations. Recommendations should be followed in a specific order: rightsizing/shutdown, followed by reservation purchases, and finally, the savings plan purchase. This sequence allows each step to impact the subsequent ones positively. ++For example, rightsizing or shutting down resources reduces on-demand costs immediately. This change in your usage pattern essentially invalidates your existing reservation and savings plan recommendations, as they were based on your pre-rightsizing usage and costs. Updated reservation and savings plan recommendations (and their forecasted savings) should appear within three days. ++The forecasted savings from reservations and savings plans are based on actual rates and usage, while the forecasted savings from rightsizing/shutdown are based on retail rates. The actual savings may vary depending on the usage patterns and rates. Assuming there are no material changes to your usage patterns, your actual savings from reservations and savings plan should be in line with the forecasts. Savings from rightsizing/shutdown vary based on your actual rates. This is important if you intend to track cost savings forecasts from Azure Advisor. + ## Export cost savings for recommendations To calculate aggregated potential yearly savings, follow these steps: The Advisor **Overview** page opens. > [!NOTE] > Different types of cost savings recommendations are generated using overlapping datasets (for example, VM rightsizing/shutdown, VM reservations and savings plan recommendations all consider on-demand VM usage). As a result, resource changes (e.g., VM shutdowns) or reservation/savings plan purchases will impact on-demand usage, and the resulting recommendations and associated savings forecast. --## Understand cost savings --Azure Advisor provides recommendations for resizing/shutting down underutilized resources, purchasing compute reserved instances, and savings plans for compute. --These recommendations contain one or more calls-to-action and forecasted savings from following the recommendations. Recommendations should be followed in a specific order: rightsizing/shutdown, followed by reservation purchases, and finally, the savings plan purchase. This sequence allows each step to impact the subsequent ones positively. --For example, rightsizing or shutting down resources reduces on-demand costs immediately. This change in your usage pattern essentially invalidates your existing reservation and savings plan recommendations, as they were based on your pre-rightsizing usage and costs. Updated reservation and savings plan recommendations (and their forecasted savings) should appear within three days. --The forecasted savings from reservations and savings plans are based on actual rates and usage, while the forecasted savings from rightsizing/shutdown are based on retail rates. The actual savings may vary depending on the usage patterns and rates. Assuming there are no material changes to your usage patterns, your actual savings from reservations and savings plan should be in line with the forecasts. Savings from rightsizing/shutdown vary based on your actual rates. This is important if you intend to track cost savings forecasts from Azure Advisor. |
api-management | Api Management Howto Configure Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md | If you don't have an API Management service instance, complete the following quickstart: - **Close account message** - The specified email recipients and users will receive email notifications when an account is closed. - **Approaching subscription quota limit** - The specified email recipients and users will receive email notifications when subscription usage gets close to usage quota. - > [!NOTE] - > Notifications are triggered by the [quota by subscription](quota-policy.md) policy only. The [quota by key](quota-by-key-policy.md) policy doesn't generate notifications. - 1. Select a notification, and specify one or more email addresses to be notified: * To add the administrator email address, select **+ Add admin**. * To add another email address, select **+ Add email**, enter an email address, and select **Add**. |
api-management | Direct Management Api Retirement March 2025 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/direct-management-api-retirement-march-2025.md | -Effective 15 March 2025, Azure API Management will retire its direct management REST API. If you enable and use the direct management API to configure and manage your API Management instance programmatically, migrate to use the standard Azure Resource Manager-based API instead. +The direct management API in Azure API Management is deprecated and will be retired effective 15 March 2025. You should discontinue use of the direct management API to configure and manage your API Management instance programmatically, and migrate to the standard Azure Resource Manager-based API instead. ## Is my service affected by this? -A built-in [direct management API](/rest/api/apimanagement/apimanagementrest/api-management-rest) to programmatically manage your API Management is disabled by default but can be enabled in the Premium, Standard, Basic, and Developer tiers of API Management. While your API Management instance isn't affected by this change, any tool, script, or program that uses the direct management API to interact with the API Management service is affected by this change. You'll be unable to run those tools successfully after the retirement date unless you update the tools to use the standard [Azure Resource Manager-based REST API](/rest/api/apimanagement) for API Management. +A built-in [direct management API](/rest/api/apimanagement/apimanagementrest/api-management-rest) to programmatically manage your API Management instance is disabled by default but can be enabled in the Premium, Standard, Basic, and Developer tiers of API Management. This API is deprecated. While your API Management instance isn't affected by this change, any tool, script, or program that uses the direct management API to interact with the API Management service is affected by this change. You'll be unable to run those tools successfully after the retirement date unless you update the tools to use the standard [Azure Resource Manager-based REST API](/rest/api/apimanagement) for API Management. ## What is the deadline for the change? -Support for the direct management API will no longer be available after 15 March 2025. +The direct management API is deprecated. Support for the direct management API will no longer be available after 15 March 2025. ## What do I need to do? -From now through 15 March 2025, if you have enabled the direct management API, you can continue to use it normally. At any time before the retirement date, update your tools, scripts, and programs to use equivalent operations in the Azure Resource Manager-based REST API instead. +You should no longer use the direct management API. Before the retirement date, update your tools, scripts, and programs to use equivalent operations in the Azure Resource Manager-based REST API instead. ## Help and support |
api-management | Validate Content Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md | The policy validates the following content in the request or response against the | Attribute | Description | Required | Default | | -- | | -- | - | | unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. Policy expressions are allowed. | Yes | N/A |-| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) Policy expressions are allowed. | Yes | N/A | +| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 4 MB. Policy expressions are allowed. | Yes | N/A | | size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. Policy expressions are allowed.| Yes | N/A | | errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A | |
api-management | Virtual Network Workspaces Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-workspaces-resources.md | For information about networking options in API Management, see [Use a virtual n * The virtual network must be in the same region and Azure subscription as the API Management instance. -## Subnet size +## Subnet requirements -* The subnet size must be `/24` (256 IP addresses). * The subnet can't be shared with another Azure resource, including another workspace gateway. ## Subnet delegation |
application-gateway | How To Url Redirect Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-gateway-api.md | The following figure illustrates an example of a request destined for _contoso.c Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities. ```bash- kubectl apply -f kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml + kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml ``` This command creates the following on your cluster: |
application-gateway | How To Url Rewrite Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md | The following figure illustrates an example of a request destined for _contoso.c Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities. ```bash- kubectl apply -f kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml + kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml ``` This command creates the following on your cluster: |
automation | Automation Child Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-child-runbooks.md | Currently, PowerShell 5.1 is supported and only certain runbook types can call e > [!IMPORTANT] > Executing child scripts using `.\child-runbook.ps1` is not supported in PowerShell 7.1 and PowerShell 7.2 - **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook. + **Workaround**: Use `Start-AutomationRunbook` ([internal cmdlet](/azure/automation/shared-resources/modules#internal-cmdlets)) or `Start-AzAutomationRunbook` (from [Az.Automation module](/powershell/module/Az.Automation/Start-AzAutomationRunbook)) to start another runbook from parent runbook. The publish order of runbooks matters only for PowerShell Workflow and graphical PowerShell Workflow runbooks. |
automation | Automation Hrw Run Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md | Title: Run Azure Automation runbooks on a Hybrid Runbook Worker description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 06/28/2024 Last updated : 09/04/2024 -> [!IMPORTANT] -> - Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md). -> - Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](migrate-run-as-accounts-managed-identity.md#sample-scripts) to start migrating the runbooks from Run As account to managed identities before September 30, 2023. ++> [!Important] +> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) retired on **31 August 2024** and is no longer supported. Follow the guidelines on how to [migrate from existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md). ++> [!NOTE] +> Azure Automation Run As Account retired on September 30, 2023 and was replaced with Managed Identities. Follow the guidelines on [how to start migrating your runbooks to use managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from existing Run As accounts to managed identity](migrate-run-as-accounts-managed-identity.md#sample-scripts). Runbooks that run on a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) typically manage resources on the local computer or against resources in the local environment where the worker is deployed. Runbooks in Azure Automation typically manage resources in the Azure cloud. Even though they are used differently, runbooks that run in Azure Automation and runbooks that run on a Hybrid Runbook Worker are identical in structure. |
automation | Automation Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md | Title: Azure Automation Hybrid Runbook Worker overview description: Know about Hybrid Runbook Worker. How to install and run the runbooks on machines in your local datacenter or cloud provider. Previously updated : 09/17/2023 Last updated : 09/04/2024 # Automation Hybrid Runbook Worker overview -> [!IMPORTANT] -> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md). Runbooks in Azure Automation might not have access to resources in other clouds or in your on-premises environment because they run on the Azure cloud platform. You can use the Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the machine hosting the role and against resources in the environment to manage those local resources. Runbooks are stored and managed in Azure Automation and then delivered to one or more assigned machines. |
automation | Automation Linux Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md | description: This article tells how to install an agent-based Hybrid Runbook Wo Previously updated : 06/29/2024 Last updated : 09/04/2024 -> [!IMPORTANT] -> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md). You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. |
automation | Automation Managing Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managing-data.md | Title: Azure Automation data security description: This article helps you learn how Azure Automation protects your privacy and secures your data. Previously updated : 11/20/2023 Last updated : 05/09/2024 To ensure the security of data in transit to Azure Automation, we strongly encou * Webhook calls -* Hybrid Runbook Workers, which include machines managed by Update Management and Change Tracking and Inventory. +* User Hybrid Runbook Workers (extension-based and agent-based) -* DSC nodes +* Machines managed by Azure Automation Update management and Azure Automation Change tracking & inventory ++* Azure Automation DSC nodes Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**. We do not recommend explicitly setting your agent to only use TLS 1.2 unless its necessary, as it can break platform level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available, such as TLS 1.3. For information about TLS support with the Log Analytics agent for Windows and L ### Upgrade TLS protocol for Hybrid Workers and Webhook calls -From **31 October 2024**, all agent-based and extension-based User Hybrid Runbook Workers, Webhooks, and DSC nodes using Transport Layer Security (TLS) 1.0 and 1.1 protocols would no longer be able to connect to Azure Automation. All jobs running or scheduled on Hybrid Workers using TLS 1.0 and 1.1 protocols will fail. +From **31 October 2024**, all agent-based and extension-based User Hybrid Runbook Workers, Webhooks, DSC nodes and Azure Automation Update management and Change Tracking managed machines, using Transport Layer Security (TLS) 1.0 and 1.1 protocols would no longer be able to connect to Azure Automation. All jobs running or scheduled on Hybrid Workers using TLS 1.0 and 1.1 protocols will fail. -Ensure that the Webhook calls that trigger runbooks navigate on TLS 1.2 or higher. Ensure to make registry changes so that Agent and Extension based workers negotiate only on TLS 1.2 and higher protocols. Learn how to [disable TLS 1.0/1.1 protocols on Windows Hybrid Worker and enable TLS 1.2 or above](/system-center/scom/plan-security-tls12-config#configure-windows-operating-system-to-only-use-tls-12-protocol) on Windows machine. +Ensure that the Webhook calls that trigger runbooks navigate on TLS 1.2 or higher. Learn how to [disable TLS 1.0/1.1 protocols on Windows Hybrid Worker and enable TLS 1.2 or above](/system-center/scom/plan-security-tls12-config#configure-windows-operating-system-to-only-use-tls-12-protocol) on Windows machine. For Linux Hybrid Workers, run the following Python script to upgrade to the latest TLS protocol. |
automation | Automation Windows Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md | Title: Deploy an agent-based Windows Hybrid Runbook Worker in Automation description: This article tells how to deploy an agent-based Hybrid Runbook Worker that you can use to run runbooks on Windows-based machines in your local datacenter or cloud environment. Previously updated : 04/21/2024 Last updated : 09/04/2024 # Deploy an agent-based Windows Hybrid Runbook Worker in Automation -> [!IMPORTANT] -> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md). You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. |
automation | Enforce Job Execution Hybrid Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/enforce-job-execution-hybrid-worker.md | Title: Enforce job execution on Azure Automation Hybrid Runbook Worker description: This article tells how to use a custom Azure Policy definition to enforce job execution on an Azure Automation Hybrid Runbook Worker. Previously updated : 09/17/2023 Last updated : 09/04/2024 # Use Azure Policy to enforce job execution on Hybrid Runbook Worker -> [!IMPORTANT] -> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md). Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group when initiating from the Azure portal, with the Azure PowerShell, or REST API. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook in the Azure sandbox. |
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | Title: Migrate an existing agent-based hybrid workers to extension-based-workers description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Previously updated : 06/29/2024 Last updated : 09/04/2024 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.-> [!IMPORTANT] -> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md). +> [!Important] +> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) retired on **31 August 2024** and is no longer supported. Follow the guidelines in this article on how to migrate from existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers. This article describes the benefits of Extension-based User Hybrid Runbook Worker and how to migrate existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers. |
automation | Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md | -> [!IMPORTANT] -> Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](../migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md) This article provides information on troubleshooting and resolving issues with Azure Automation agent-based Hybrid Runbook Workers. For troubleshooting extension-based workers, see [Troubleshoot extension-based Hybrid Runbook Worker issues in Automation](./extension-based-hybrid-runbook-worker.md). For general information, see [Hybrid Runbook Worker overview](../automation-hybrid-runbook-worker.md). |
automation | Configure Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-groups.md | Title: Use dynamic groups with Azure Automation Update Management description: This article tells how to use dynamic groups with Azure Automation Update Management. Previously updated : 07/15/2024 Last updated : 09/05/2024 Update Management allows you to target a dynamic group of Azure or non-Azure VMs You can define dynamic groups for Azure or non-Azure machines from **Update management** in the Azure portal. See [Manage updates for VMs](manage-updates-for-vm.md). -A dynamic group is defined by a query that Azure Automation evaluates at deployment time. Even if a dynamic group query retrieves a large number of machines, Azure Automation can process only a maximum of 1000 machines at a time. See [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#update-management). +A dynamic group is defined by a query that Azure Automation evaluates at deployment time. Even if a dynamic group query retrieves a large number of machines, Azure Automation can process only a maximum of 1000 machines at a time. > [!NOTE] > If you expect to update more than 1000 machines, we recommend that you split up the updates among multiple update schedules. |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md | At the date and time specified in the update deployment, the target machines exe ## Limits -For limits that apply to Update Management, see [Azure Automation service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#update-management). +Following are limits that apply to Update Management: ++| **Resource** | **Limit**| **Notes** | +|||| +|Number of machines per update deployment|1000|| +|Number of dynamic groups per update deployment |500 || ## Permissions |
azure-arc | Troubleshoot Resource Bridge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md | This occurs when the management machine is trying to reach the ARB VM IP by SSH If you receive an error that contains `Not able to connect to https://example.url.com`, check with your network administrator to ensure your network allows all of the required firewall and proxy URLs to deploy Arc resource bridge. For more information, see [Azure Arc resource bridge network requirements](network-requirements.md). +### Not able to connect - network and internet connectivity validation failed ++When deploying Arc resource bridge, you may receive an error with `errorCode` as `PostOperationsError`, `errorResponse` as code `GuestInternetConnectivityError` with a URL specifying port 53 (DNS). This may be due to the appliance VM IPs being unable to reach DNS servers, so they can't resolve the endpoint specified in the error. ++Error example: ++`{ _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n \\\_code\\\_:\\\_GuestInternetConnectivityError\\\_,\\n\\\_message\\\_:\\\_Not able to connect to http://aszhcitest01.company.org:55000. Error returned: action failed after 5 attempts: Get \\\\\\\_http://aszhcitest01.company.org:55000\\\\\\\_: dial tcp: lookup aszhcitest01.company.org on 127.0.0.53:53: read udp 127.0.0.1:32975-\\u003e127.0.0.53:53: i/o timeout. Arc Resource Bridge network and internet connectivity validation failed: cloud-agent-connectivity-test. 1. check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM. 2. Check firewall/proxy settings\\\_\\n }\_\n}_ }` ++Error example: ++`{ _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n \\\_code\\\_: \\\_GuestInternetConnectivityError\\\_,\\n \\\_message\\\_: \\\_Not able to connect to https://linuxgeneva-microsoft.azurecr.io. Error returned: action failed after 5 attempts: Get \\\\\\\_https://linuxgeneva-microsoft.azurecr.io\\\\\\\_: dial tcp: lookup linuxgeneva-microsoft.azurecr.io on 127.0.0.53:53: server misbehaving. Arc Resource Bridge network and internet connectivity validation failed: http-connectivity-test-arc. 1. Please check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM. 2. Check firewall/proxy settings\\\_\\n }\_\n}_ }` ++To resolve the error, work with your network administrator to allow the appliance VM IPs to reach the DNS servers. For more information, see [Azure Arc resource bridge network requirements](network-requirements.md). + ### Http2 server sent GOAWAY When trying to deploy Arc resource bridge, you might receive an error message similar to: To check if the DNS server is able to resolve an address, from a machine where w ```Resolve-DnsName -Name "http://aszhcitest01.company.org:55000" -Server "<dns-server.com>"``` -### Not able to connect - i/o timeout --When deploying Arc resource bridge, you may receive an error with `errorCode` as `PostOperationsError`, `errorResponse` as code `GuestInternetConnectivityError` with keywords `i/o timeout` and `read udp`. This may be due to the appliance VM IPs being unable to reach DNS servers, so they can't resolve the MOC cloud agent address endpoint specified in the error. --Error example: --```{ _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n \\\_code\\\_:\\\_GuestInternetConnectivityError\\\_,\\n\\\_message\\\_:\\\_Not able to connect to http://aszhcitest01.company.org:55000. Error returned: action failed after 5 attempts: Get \\\\\\\_http://aszhcitest01.company.org:55000\\\\\\\_: dial tcp: lookup aszhcitest01.company.org on 127.0.0.53:53: read udp 127.0.0.1:32975-\\u003e127.0.0.53:53: i/o timeout. Arc Resource Bridge network and internet connectivity validation failed: cloud-agent-connectivity-test. 1. check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM. 2. Check firewall/proxy settings\\\_\\n }\_\n}_ }``` --To resolve the error, work with your network administrator to allow the appliance VM IPs to reach the DNS servers. - ### Authentication handshake failure When running an `az arcappliance` command, you might see a connection error: `authentication handshake failed: x509: certificate signed by unknown authority` When Arc resource bridge is deployed, you specify where the appliance VM will be These are the options to address either error: - Move the appliance VM back to its original location and ensure RBAC credentials are updated for the location change.-- Create a resource with the same name, move Arc resource bridge to that new resource, and then proceed with upgrade.+- Create a resource with the same name, move Arc resource bridge to that new resource. - If you're using Arc-enabled VMware, [run the Arc-enabled VMware disaster recovery script](../vmware-vsphere/disaster-recovery.md). The script will delete the appliance, deploy a new appliance and reconnect the appliance with the previously deployed custom location, cluster extension and Arc-enabled VMs. - Delete and [redeploy the Arc resource bridge](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md). These are the options to address either error: When deploying or upgrading the resource bridge on VMware vCenter, you might get an error similar to: -`{ ""code"": ""PreflightcheckError"", ""message"": ""{\n \""code\"": \""InsufficientPrivilegesError\"",\n \""message\"": \""The provided vCenter account is missing required vSphere privileges on the resource 'root folder (MoRefId: Folder:group-d1)'. Missing privileges: [Sessions.ValidateSession]. add the privileges to the vCenter account and try again. To review the full list of required privileges, go to https://aka.ms/ARB-vsphere-privilege.\""\n }' +`{ ""code"": ""PreflightcheckError"", ""message"": ""{\n \""code\"": \""InsufficientPrivilegesError\"",\n \""message\"": \""The provided vCenter account is missing required vSphere privileges on the resource 'root folder (MoRefId: Folder:group-d1)'. Missing privileges: [Sessions.ValidateSession]. add the privileges to the vCenter account and try again. To review the full list of required privileges, go to https://aka.ms/ARB-vsphere-privilege.\""\n }` When deploying Arc resource bridge, you are asked to provide vCenter credentials. The Arc resource bridge locally stores the vCenter credentials to interact with vCenter. To resolve the missing privileges issue, the vCenter account used by the resource bridge needs the following privileges in VMware vCenter: |
azure-functions | Create First Function Cli Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md | Before you begin, you must have the following: + The [.NET 6.0 SDK](https://dotnet.microsoft.com/download) -+ [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) ++ [PowerShell 7.4](/powershell/scripting/install/installing-powershell-core-on-windows) [!INCLUDE [functions-install-core-tools](../../includes/functions-install-core-tools.md)] |
azure-functions | Functions Reference Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md | The following table shows the PowerShell versions available to each major versio | Functions version | PowerShell version | .NET version | |-|--||-| 4.x | PowerShell 7.2 | .NET 6 | +| 4.x | PowerShell 7.4 | .NET 8 | +| 4.x | PowerShell 7.2 (support ending) | .NET 6 | You can see the current version by printing `$PSVersionTable` from any function. To learn more about Azure Functions runtime support policy, please refer to this [article](./language-support-policy.md) +> [!NOTE] +> Support for PowerShell 7.2 in Azure Functions ends on November 8, 2024. You might have to resolve some breaking changes when upgrading your PowerShell 7.2 functions to run on PowerShell 7.4. Follow this [migration guide](https://github.com/Azure/azure-functions-powershell-worker/wiki/Upgrading-your-Azure-Function-Apps-to-run-on-PowerShell-7.4) to upgrade to PowerShell 7.4. + ### Running local on a specific version -Support for PowerShell 7.0 in Azure Functions has ended on 3 December 2022. To use PowerShell 7.2 when running locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2"` to the `Values` array in the local.setting.json file in the project root. When running locally on PowerShell 7.2, your local.settings.json file looks like the following example: +When running your PowerShell functions locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.4"` to the `Values` array in the local.settings.json file in the project root. When running locally on PowerShell 7.4, your local.settings.json file looks like the following example: ```json { Support for PowerShell 7.0 in Azure Functions has ended on 3 December 2022. To u "Values": { "AzureWebJobsStorage": "", "FUNCTIONS_WORKER_RUNTIME": "powershell",- "FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2" + "FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.4" } } ``` > [!NOTE]-> In PowerShell Functions, the value "~7" for FUNCTIONS_WORKER_RUNTIME_VERSION refers to "7.0.x". We do not automatically upgrade PowerShell Function apps that have "~7" to "7.2". Going forward, for PowerShell Function Apps, we will require that apps specify both the major and minor version they want to target. Hence, it is necessary to mention "7.2" if you want to target "7.2.x" +> In PowerShell Functions, the value "~7" for FUNCTIONS_WORKER_RUNTIME_VERSION refers to "7.0.x". We do not automatically upgrade PowerShell Function apps that have "~7" to "7.4". Going forward, for PowerShell Function Apps, we will require that apps specify both the major and minor version they want to target. Hence, it is necessary to mention "7.4" if you want to target "7.4.x" ### Changing the PowerShell version -Support for PowerShell 7.0 in Azure Functions has ended on 3 December 2022. To upgrade your Function App to PowerShell 7.2, ensure the value of FUNCTIONS_EXTENSION_VERSION is set to ~4. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-the-current-runtime-version). +Take these considerations into account before you migrate your PowerShell function app to PowerShell 7.4: ++ Because the migration might introduce breaking changes in your app, review this [migration guide](https://github.com/Azure/azure-functions-powershell-worker/wiki/Upgrading-your-Azure-Function-Apps-to-run-on-PowerShell-7.4) before upgrading your app to PowerShell 7.4. ++ Make sure that your function app is running on the latest version of the Functions runtime in Azure, which is version 4.x. For more information, see [View and update the current runtime version](set-runtime-version.md#view-the-current-runtime-version). Use the following steps to change the PowerShell version used by your function app. You can do this either in the Azure portal or by using PowerShell. Use the following steps to change the PowerShell version used by your function a 1. Choose your desired **PowerShell Core version** and select **Save**. When warned about the pending restart choose **Continue**. The function app restarts on the chosen PowerShell version. +> [!NOTE] +> Azure Functions support for PowerShell 7.4 is generally available (GA). You may see PowerShell 7.4 still indicated as preview in the Azure portal, but this will be updated soon to reflect the GA status. + # [PowerShell](#tab/powershell) Run the following script to change the PowerShell version: Set-AzResource -ResourceId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RES ``` -Replace `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, and `<FUNCTION_APP>` with the ID of your Azure subscription, the name of your resource group and function app, respectively. Also, replace `<VERSION>` with `7.2`. You can verify the updated value of the `powerShellVersion` setting in `Properties` of the returned hash table. +Replace `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, and `<FUNCTION_APP>` with the ID of your Azure subscription, the name of your resource group and function app, respectively. Also, replace `<VERSION>` with `7.4`. You can verify the updated value of the `powerShellVersion` setting in `Properties` of the returned hash table. |
azure-functions | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md | For each scenario, you can target the action against one or more subscriptions, :::image type="content" source="media/deploy/schedule-recurrence-property.png" alt-text="Configure the recurrence frequency for logic app"::: +> [!NOTE] +> If you do not provide a start date and time for the first recurrence, a recurrence will immediately run when you save the logic app, which might cause the VMs to start or stop before the scheduled run. + 1. In the designer pane, select **Function-Try** to configure the target settings. In the request body, if you want to manage VMs across all resource groups in the subscription, modify the request body as shown in the following example. ```json In an environment that includes two or more components on multiple Azure Resourc :::image type="content" source="media/deploy/schedule-recurrence-property.png" alt-text="Configure the recurrence frequency for logic app"::: +> [!NOTE] +> If you do not provide a start date and time for the first recurrence, a recurrence will immediately run when you save the logic app, which might cause the VMs to start or stop before the scheduled run. + 1. In the designer pane, select **Function-Try** to configure the target settings and then select the **</> Code view** button in the top menu to edit the code for the **Function-Try** element. In the request body, if you want to manage VMs across all resource groups in the subscription, modify the request body as shown in the following example. ```json |
azure-functions | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md | This new version of Start/Stop VMs v2 provides a decentralized low-cost automati ## Important Start/Stop VMs v2 Updates -> + We've updated our Start/Stop VMs v2 function app resource to use [Azure Functions version 4.x](../functions-versions.md), and you'll get this version by default when you install Start/Stop VMs v2 from the marketplace. Existing customers should migrate from Functions version 3.x to version 4.x using our auto-update functionality. This functionality gets the latest version either by running the TriggerAutoUpdate timer function once manually or waiting for the schedule to run, if you've enabled it. +> + No further development, enhancements, or updates will be available for Start/Stop V2 except when required to remain on supported versions of components and Azure services. >-> + We've added a plan (**AZ - Availability Zone**) to our Start/Stop VMs v2 solution to enable a more reliable offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan. -> -> + Automatic updating functionality was introduced on April 28th, 2022. This new auto update feature helps you stay on the latest version of the solution. This feature is enabled by default when you perform a new installation. -> If you deployed your solution before this date, you can reinstall to the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments) +> + The TriggerAutoUpdate and UpdateStartStopV2 functions are now deprecated and will be removed in future updates to Start/Stop V2. To update Start/Stop V2, we recommend that you stop the site, install to the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments), and then start the site. No built-in notification system is available for updates. After an update to Start/Stop V2 becomes available, we will update the [readme.md](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md) in the GitHub repository. Third-party GitHub file watchers might be available to enable you to be notified of changes. To disable the automatic update functionality, set the Function App's **AzureClientOptions:EnableAutoUpdate** [application setting](../functions-how-to-use-azure-function-app-settings.md?tabs=azure-portal%2Cto-premium#get-started-in-the-azure-portal) to **false**. +> +> + As of August 19, 2024, Start/Stop v2 has been updated to the [.NET 8 isolated worker model](../functions-versions.md?tabs=isolated-process%2Cv4&pivots=programming-language-csharp#languages). + ## Overview An HTTP trigger function endpoint is created to support the schedule and sequenc |CostAnalyticsFunction |Timer |This function is used by Microsoft to estimate aggregate cost of Start/Stop V2 across customers. This function does not impact the functionality of Start/Stop V2.| |SavingsAnalyticsFunction |Timer |This function is used by Microsoft to estimate aggregate savings of Start/Stop V2 across customers. This function does not impact the functionality of Start/Stop V2.| |VirtualMachineSavingsFunction |Queue |This function performs the actual savings calculation on a VM achieved by the Start/Stop V2 solution.|-|TriggerAutoUpdate |Timer |This function starts the auto update process based on the application setting "**EnableAutoUpdate=true**".| -|UpdateStartStopV2 |Queue |This function performs the actual auto update execution, which validates your current version with the available version and decides the final action.| +|TriggerAutoUpdate |Timer |Deprecated. This function starts the auto update process based on the application setting "**AzureClientOptions:EnableAutoUpdate=true**".| +|UpdateStartStopV2 |Queue |Deprecated. This function performs the actual auto update execution, which validates your current version with the available version and decides the final action.| For example, **Scheduled** HTTP trigger function is used to handle schedule and sequence scenarios. Similarly, **AutoStop** HTTP trigger function handles the auto stop scenario. |
azure-maps | Data Driven Style Expressions Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md | Title: Data-driven style expressions in the Azure Maps Web SDK | Microsoft Azure description: Learn about data-driven style expressions. See how to use these expressions in the Azure Maps Web SDK to adjust styles in maps. Previously updated : 4/4/2019 Last updated : 08/29/2024 |
azure-maps | Drawing Tools Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md | Title: Drawing tools events | Microsoft Azure Maps description: This article demonstrates how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK Previously updated : 05/23/2023 Last updated : 09/03/2024 When using drawing tools on a map, it's useful to react to certain events as the | Event | Description | |-|-|-| `drawingchanged` | Fired when any coordinate in a shape has been added or changed. | +| `drawingchanged` | Fired when any coordinate in a shape is added or changed. | | `drawingchanging` | Fired when any preview coordinate for a shape is being displayed. For example, this event fires multiple times as a coordinate is dragged. |-| `drawingcomplete` | Fired when a shape has finished being drawn or taken out of edit mode. | +| `drawingcomplete` | Fired when a shape completes drawing or is taken out of edit mode. | | `drawingerased` | Fired when a shape is erased from the drawing manager when in `erase-geometry` mode. |-| `drawingmodechanged` | Fired when the drawing mode has changed. The new drawing mode is passed into the event handler. | +| `drawingmodechanged` | Fired when the drawing mode changes. The new drawing mode is passed into the event handler. | | `drawingstarted` | Fired when the user starts drawing a shape or puts a shape into edit mode. | -For a complete working sample of how to display data from a vector tile source on the map, see [Drawing tools events] in the [Azure Maps Samples]. In this sample you can draw shapes on the map and watch as the events fire. For the source code for this sample, see [Drawing tools events sample code]. +For a complete working sample of how to display data from a vector tile source on the map, see [Drawing tools events] in the [Azure Maps Samples]. This sample enables you to draw shapes on the map and watch as the events fire. For the source code for this sample, see [Drawing tools events sample code]. The following image shows a screenshot of the complete working sample that demonstrates how the events in the Drawing Tools module work. ## Examples This code demonstrates how to monitor an event of a user drawing shapes. For thi For a complete working sample of how to use the drawing tools to draw polygon areas on the map with points within them that can be selected, see [Select data in drawn polygon area] in the [Azure Maps Samples]. For the source code for this sample, see [Select data in drawn polygon area sample code]. ### Draw and search in polygon area This code searches for points of interests inside the area of a shape after the For a complete working sample of how to use the drawing tools to search for points of interests within drawn areas, see [Draw and search polygon area] in the [Azure Maps Samples]. For the source code for this sample, see [Draw and search polygon area sample code]. ### Create a measuring tool -The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` is used to monitor the shape, as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information. +The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` is used to monitor the shape, as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after drawing completes. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information. For a complete working sample of how to use the drawing tools to measure distances and areas, see [Create a measuring tool] in the [Azure Maps Samples]. For the source code for this sample, see [Create a measuring tool sample code]. ## Next steps |
azure-maps | Map Add Snap Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md | Title: Add snap grid to the map | Microsoft Azure Maps description: How to add a snap grid to a map using Azure Maps Web SDK Previously updated : 06/08/2023 Last updated : 09/03/2024 The resolution of the snapping grid is in pixels. The grid is square and relativ Create a snap grid using the `atlas.drawing.SnapGridManager` class and pass in a reference to the map you want to connect the manager to. Set the `showGrid` option to `true` if you want to make the grid visible. To snap a shape to the grid, pass it into the snap grid managers `snapShape` function. If you want to snap an array of positions, pass it into the `snapPositions` function. -The [Use a snapping grid] sample snaps an HTML marker to a grid when it's dragged. Drawing tools are used to snap drawn shapes to the grid when the `drawingcomplete` event fires. For the source code for this sample, see [Use a snapping grid source code]. +The [Use a snapping grid] sample snaps an HTML marker to a grid when dragged. Drawing tools are used to snap drawn shapes to the grid when the `drawingcomplete` event fires. For the source code for this sample, see [Use a snapping grid source code]. <!-- > [!VIDEO https://codepen.io/azuremaps/embed/rNmzvXO?default-tab=js%2Cresult] The [Use a snapping grid] sample snaps an HTML marker to a grid when it's dragge The [Snap grid options] sample shows the different customization options available for the snap grid manager. The grid line styles can be customized by retrieving the underlying line layer using the snap grid managers `getGridLayer` function. For the source code for this sample, see [Snap grid options source code]. <!-- > [!VIDEO https://codepen.io/azuremaps/embed/RwVZJry?default-tab=result] |
azure-maps | Map Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md | This article shows you how to use [map events class]. The property highlight eve The [Map Events] sample highlights the name of the events that are firing as you interact with the map. For the source code for this sample, see [Map Events source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/bLZEWd/?height=600&theme-id=0&default-tab=js,result&embed-version=2&editable=true] The [Map Events] sample highlights the name of the events that are firing as you The [Layer Events] sample highlights the name of the events that are firing as you interact with the Symbol Layer. The symbol, bubble, line, and polygon layer all support the same set of events. The heat map and tile layers don't support any of these events. For the source code for this sample, see [Layer Events source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/bQRRPE/?height=600&theme-id=0&default-tab=js,result&embed-version=2&editable=true] The [Layer Events] sample highlights the name of the events that are firing as y The [HTML marker layer events] sample highlights the name of the events that are firing as you interact with the HTML marker layer. For the source code for this sample, see [HTML marker layer Events source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/VVzKJY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] |
azure-maps | Map Get Shape Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md | |
azure-maps | Tutorial Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md | The Map Control API is a convenient client library. This API allows you to easil Some things to know regarding the above HTML: - * The HTML header includes CSS and JavaScript resource files that are hosted by the Azure Map Control library. + * The HTML header includes CSS and JavaScript resource files hosted by the Azure Map Control library. * The `onload` event in the body of the page calls the `GetMap` function when the body of the page has loaded. * The `GetMap` function contains the inline JavaScript code used to access the Azure Maps APIs. It's added in the next step. The Map Control API is a convenient client library. This API allows you to easil }); ``` - Some things to know regarding the above JavaScript: + Some things to know regarding this JavaScript: * The core of the `GetMap` function, which initializes the Map Control API for your Azure Maps account key. * `atlas` is the namespace that contains the API and related visual components. At this point, the MapSearch page can display the locations of points of interes ## Add interactive data -The map that we've made so far only looks at the longitude/latitude data for the search results. However, the raw JSON that the Maps Search service returns contains additional information about each gas station. Including the name and street address. You can incorporate that data into the map with interactive popup boxes. +The map so far only looks at the longitude/latitude data for the search results. However, the raw JSON that the Maps Search service returns contains additional information about each gas station. Including the name and street address. You can incorporate that data into the map with interactive popup boxes. 1. Add the following lines of code in the map `ready` event handler after the code to query the fuzzy search service. This code creates an instance of a Popup and adds a mouseover event to the symbol layer. |
azure-monitor | Data Collection Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md | The following facilities are supported with the Syslog collector: | 6 | lpr | | 7 | news | | 8 | uucp |-| 9 | corn | +| 9 | cron | | 10 | authpriv | | 11 | ftp | | 12 | ntp | |
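If you want to verify that a facility from this table is flowing into the workspace, you can emit a test message from the Linux host. A minimal sketch; the facility, severity, and message text are illustrative:

```bash
# Send a test message to the cron facility (9 in the table above) at warning
# severity; once ingested, it should appear in the Syslog table.
logger -p cron.warning "Azure Monitor Agent syslog collection test"
```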
azure-monitor | Data Sources Firewall Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-firewall-logs.md | -# Collect firewall logs with Azure Monitor Agent (Preview) +# Collect firewall logs with Azure Monitor Agent Windows Firewall is a Microsoft Windows application that filters information coming to your system from the Internet and blocks potentially harmful programs. Windows Firewall logs are generated on both client and server operating systems. These logs provide valuable information about network traffic, including dropped packets and successful connections. Parsing Windows Firewall log files can be done using methods like Windows Event Forwarding (WEF) or forwarding logs to a SIEM product like Azure Sentinel. You can turn it on or off by following these steps on any Windows system: 1. Select Start, then open Settings. 1. Under Update & Security, select Windows Security, Firewall & network protection. |
azure-monitor | Log Analytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-tutorial.md | Last updated 10/31/2023 Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor logs and interactively analyze their results. You can use Log Analytics queries to retrieve records that match particular criteria, identify trends, analyze patterns, and provide various insights into your data. -This tutorial walks you through the Log Analytics interface, gets you started with some basic queries, and shows you how you can work with the results. You'll learn how to: +This tutorial walks you through the Log Analytics interface, gets you started with some basic queries, and shows you how you can work with the results. You learn how to: > [!div class="checklist"] > * Understand the log data schema. This tutorial walks you through the Log Analytics interface, gets you started wi > * Load, export, and copy queries and results. > [!IMPORTANT]-> In this tutorial, you'll use Log Analytics features to build one query and use another example query. When you're ready to learn the syntax of queries and start directly editing the query itself, read the [Kusto Query Language tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor). That tutorial walks you through example queries that you can edit and run in Log Analytics. It uses several of the features that you'll learn in this tutorial. +> In this tutorial, you use Log Analytics features to build one query and use another example query. When you're ready to learn the syntax of queries and start directly editing the query itself, read the [Kusto Query Language tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor). That tutorial walks you through example queries that you can edit and run in Log Analytics. It uses several of the features that you learn in this tutorial. ## Prerequisites This tutorial uses the [Log Analytics demo environment](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), which includes plenty of sample data that supports the sample queries. You can also use your own Azure subscription, but you might not have data in the same tables. +> [!NOTE] +> Log Analytics has two modes - Simple and KQL. *This tutorial walks you through KQL mode.* For information on Simple mode, see [Analyze data using Log Analytics Simple mode (Preview)](log-analytics-simple-mode.md). + ## Open Log Analytics Open the [Log Analytics demo environment](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), or select **Logs** from the Azure Monitor menu in your subscription. This step sets the initial scope to a Log Analytics workspace so that your query selects from all data in that workspace. If you select **Logs** from an Azure resource's menu, the scope is set to only records from that resource. For details about the scope, see [Log query scope](./scope.md). -You can view the scope in the upper-left corner of the screen. If you're using your own environment, you'll see an option to select a different scope. This option isn't available in the demo environment. +You can view the scope in the upper-left corner of the Logs experience, below the name of your active query tab. If you're using your own environment, you see an option to select a different scope. This option isn't available in the demo environment. 
:::image type="content" source="media/log-analytics-tutorial/log-analytics-query-scope.png" alt-text="Screenshot that shows the Log Analytics scope for the demo." lightbox="media/log-analytics-tutorial/log-analytics-query-scope.png"::: Expand the **Log Management** solution and locate the **AppRequests** table. You :::image type="content" source="media/log-analytics-tutorial/table-details.png" alt-text="Screenshot that shows the Tables view." lightbox="media/log-analytics-tutorial/table-details.png"::: -Select the link below **Useful links** to go to the table reference that documents each table and its columns. Select **Preview data** to have a quick look at a few recent records in the table. This preview can be useful to ensure that this is the data that you're expecting before you run a query with it. +* Select the link below **Useful links** (in this example [AppRequests](/azure/azure-monitor/reference/tables/AppRequests)) to go to the table reference that documents each table and its columns. ++* Select **Preview data** to have a quick look at a few recent records in the table. This preview can be useful to ensure it's the data you're expecting before you run a query with it. :::image type="content" source="media/log-analytics-tutorial/preview-data.png" alt-text="Screenshot that shows preview data for the AppRequests table." lightbox="media/log-analytics-tutorial/preview-data.png"::: ## Write a query -Let's write a query by using the **AppRequests** table. Double-click its name to add it to the query window. You can also type directly in the window. You can even get IntelliSense that will help complete the names of tables in the current scope and Kusto Query Language (KQL) commands. +Let's write a query by using the **AppRequests** table. Double-click its name or hover over it and click on **Use in editor** to add it to the query window. You can also type directly in the window. You can even get IntelliSense, which helps complete the names of tables in the current scope and Kusto Query Language (KQL) commands. This is the simplest query that we can write. It just returns all the records in a table. Run it by selecting the **Run** button or by selecting **Shift+Enter** with the cursor positioned anywhere in the query text. :::image type="content" source="media/log-analytics-tutorial/query-results.png" alt-text="Screenshot that shows query results." lightbox="media/log-analytics-tutorial/query-results.png"::: -You can see that we do have results. The number of records that the query has returned appears in the lower-right corner. +You can see that we do have results. The number of records that the query returns appears in the lower-right corner. The maximum number of results that you can retrieve in the Log Analytics portal experience is 30,000. ### Time range Let's change the time range of the query by selecting **Last 12 hours** from the :::image type="content" source="media/log-analytics-tutorial/query-time-range.png" alt-text="Screenshot that shows the time range." lightbox="media/log-analytics-tutorial/query-time-range.png"::: -### Multiple query conditions ++### Multiple filters ++Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. On the left side of the screen where the **Tables** tab is active, select the **Filter** tab instead. If you can't find it, click on the ellipsis to view more tabs. 
++On the **Filter** tab, select **Load old filters** to view the top 10 values for each filter. -Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Get Home/Index** under **Name**, and then select **Apply & Run**. ++Select **Get Home/Index** under **Name**, then click on **Apply & Run**. :::image type="content" source="media/log-analytics-tutorial/query-multiple-filters.png" alt-text="Screenshot that shows query results with multiple filters." lightbox="media/log-analytics-tutorial/query-multiple-filters.png"::: ## Analyze results -In addition to helping you write and run queries, Log Analytics provides features for working with the results. Start by expanding a record to view the values for all of its columns. +In addition to helping you write and run queries, Log Analytics provides features for working with the results. Start by expanding a record to view the values for all of its columns by clicking the chevron on the left side of the row. :::image type="content" source="media/log-analytics-tutorial/expand-query-search-result.png" alt-text="Screenshot that shows a record expanded in the search results." lightbox="media/log-analytics-tutorial/expand-query-search-result.png"::: Select the name of any column to sort the results by that column. Select the filter icon next to it to provide a filter condition. This action is similar to adding a filter condition to the query itself, except that this filter is cleared if the query is run again. Use this method if you want to quickly analyze a set of records as part of interactive analysis. -For example, set a filter on the **DurationMs** column to limit the records to those that took more than **150** milliseconds. +Set a filter on the **DurationMs** column to limit the records to those that took more than **150** milliseconds. ++1. The results table allows you to filter just like in Excel. Select the ellipsis in the **Name** column header. +1. Uncheck **Select All**, then search for **Get Home/Index** and check it. Filters are automatically applied to your results. :::image type="content" source="media/log-analytics-tutorial/query-results-filter.png" alt-text="Screenshot that shows a query results filter." lightbox="media/log-analytics-tutorial/query-results-filter.png"::: To better visualize your data, you can reorganize and summarize the data in the Select **Columns** to the right of the results pane to open the **Columns** sidebar. -In the sidebar, you'll see a list of all available columns. Drag the **Url** column into the **Row Groups** section. Results are now organized by that column, and you can collapse each group to help you with your analysis. This action is similar to adding a filter condition to the query, but instead of refetching data from the server, you're processing the data your original query returned. When you run the query again, Log Analytics retrieves data based on your original query. Use this method if you want to quickly analyze a set of records as part of interactive analysis. +In the sidebar, you see a list of all available columns. Drag the **Url** column into the **Row Groups** section. Results are now organized by that column, and you can collapse each group to help you with your analysis. This action is similar to adding a filter condition to the query, but instead of refetching data from the server, you're processing the data your original query returned. 
When you run the query again, Log Analytics retrieves data based on your original query. Use this method if you want to quickly analyze a set of records as part of interactive analysis. :::image type="content" source="media/log-analytics-tutorial/query-results-grouped.png" alt-text="Screenshot that shows query results grouped by URL." lightbox="media/log-analytics-tutorial/query-results-grouped.png"::: Now let's sort the results by longest maximum call duration by selecting the **m ## Work with charts -Let's look at a query that uses numerical data that we can view in a chart. Instead of building a query, we'll select an example query. +Let's look at a query that uses numerical data that we can view in a chart. Instead of building a query, we select an example query. ++Select **Queries** on the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have various queries in multiple categories.<!-- If you're using the demo environment, you might see only a single **Log Analytics workspaces** category. Expand that to view the queries in the category. --> ++Load the **Function Error rate** query in the **Applications** category to the editor. To do so, double-click the query or hover over the query name to show more information, then select **Load to editor**. -Select **Queries** on the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have various queries in multiple categories. If you're using the demo environment, you might see only a single **Log Analytics workspaces** category. Expand that to view the queries in the category. -Select the query called **Function Error rate** in the **Applications** category. This step adds the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are considered separate queries. +Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, making them separate queries. :::image type="content" source="media/log-analytics-tutorial/example-query.png" alt-text="Screenshot that shows a new query." lightbox="media/log-analytics-tutorial/example-query.png"::: -The current query is the one that the cursor is positioned on. You can see that the first query is highlighted, indicating that it's the current query. Click anywhere in the new query to select it, and then select the **Run** button to run it. +Click anywhere in a query to select it, then click on the **Run** button to run it. :::image type="content" source="media/log-analytics-tutorial/example-query-output-table.png" alt-text="Screenshot that shows the query results table." lightbox="media/log-analytics-tutorial/example-query-output-table.png"::: |
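The filters built interactively in this tutorial can also be run outside the portal. A minimal sketch using the Azure CLI `log-analytics` extension; the workspace GUID is a placeholder, and the query mirrors the tutorial's **AppRequests** filters:

```azurecli-interactive
# Requires the extension: az extension add --name log-analytics
# WORKSPACE_GUID is a placeholder for your Log Analytics workspace ID.
az monitor log-analytics query \
  --workspace $WORKSPACE_GUID \
  --analytics-query "AppRequests | where Name == 'Get Home/Index' and DurationMs > 150" \
  --timespan P1D
```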
azure-netapp-files | Configure Customer Managed Keys Hardware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys-hardware.md | Azure NetApp Files volume encryption with customer-managed keys with the managed * South Africa North * South Central US * Southeast Asia+* Spain Central * Sweden Central * Switzerland North * UAE Central |
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | Azure NetApp Files customer-managed keys is supported for the following regions: * South Central US * South India * Southeast Asia+* Spain Central * Sweden Central * Switzerland North * Switzerland West |
azure-netapp-files | Cool Access Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md | Azure NetApp Files storage with cool access is supported for the following regio * South Central US * South India * Southeast Asia+* Spain Central * Switzerland North * Switzerland West * Sweden Central |
azure-netapp-files | Cross Region Replication Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md | Azure NetApp Files volume replication is supported between various [Azure region | Germany/UK | Germany West Central | UK South | | Germany/Europe | Germany West Central | West Europe | | Germany/France | Germany West Central | France Central |+| Spain/Sweden | Spain Central | Sweden Central | | Qatar/Europe | Qatar Central | West Europe | | North America | East US | East US 2 | | North America | East US 2| West US 2 | |
azure-netapp-files | Double Encryption At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md | Azure NetApp Files double encryption at rest is supported for the following regi * South Africa North * South Central US * Southeast Asia +* Spain Central * Sweden Central * Switzerland North * Switzerland West |
azure-netapp-files | Monitor Volume Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/monitor-volume-capacity.md | The *available space* is accurate using File Explorer or the `dir` command. Howe ### Linux (NFS) clients -Linux clients can check the used and available capacity of a volume using the [df command](https://linux.die.net/man/1/df). --The `-h` option shows the size, including used and available space in human readable format (using M, G and T unit sizes). +Linux clients can check the used and available capacity of a volume using the [`df -h`](https://linux.die.net/man/1/df) command. The `-h` option displays the size, including used and available space, in human-readable format (using M, G, and T unit sizes). The following snapshot shows volume capacity reporting in Linux: |
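For reference, here's what such a check can look like against an NFS-mounted volume. A minimal sketch; the mount path and the sizes in the sample output are illustrative:

```bash
# Report capacity of an Azure NetApp Files NFS mount in human-readable units.
df -h /mnt/anfvolume
# Example output (values are illustrative):
# Filesystem           Size  Used Avail Use% Mounted on
# 10.0.2.4:/myvolume   100G   24G   77G  24% /mnt/anfvolume
```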
azure-netapp-files | Troubleshoot Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md | -If a volume CRUD operation is performed on a volume that is not in a terminal state, then the operation will fail. Automation workflows and portal users should check for the terminal state of the volume before executing another asynchronous operation on the volume. +If a volume create-read-update-delete (CRUD) operation is performed on a volume not in a terminal state, the operation will fail. Automation workflows and portal users should check for the terminal state of the volume before executing subsequent asynchronous operations on the volume. ## Errors for SMB and dual-protocol volumes | Error conditions | Resolutions | |--|-|-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if AD DS and the volume are being deployed in same region.</li> <li>Check if AD DS and the volume are using the same VNet. If they're using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Microsoft Entra Domain Services. Microsoft Entra Domain Services should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. | +| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>If you're using Basic network features, check if Active Directory Domain Services (AD DS) and the volume are being deployed in same region.</li> <li>Check if AD DS and the volume are using the same virtual network (VNet). If they're using different VNets, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. 
For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Microsoft Entra Domain Services. Microsoft Entra Domain Services should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine (computer) accounts. </li> <li> If you use Microsoft Entra Domain Services, make sure that the user is part of the Microsoft Entra group `Azure AD DC Administrators`. </li></ul> | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.x.x.x, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Microsoft Entra Domain Services, make sure that the organizational unit path is `OU=AADDC Computers`. | | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. 
Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. |-| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](./create-active-directory-connections.md#create-an-active-directory-connection) is enabled both in the Active Directory connection and for the service account. | +| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](./create-active-directory-connections.md#create-an-active-directory-connection) is enabled for both the Active Directory connection and the service account. | | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-NTAP-VOL\". Reason: LDAP Error: Strong authentication is required Details: Error: Machine account creation procedure failed\n [ 338] Loaded the preliminary configuration.\n [ nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ nnn ] Successfully connected to ip 10.x.x.x, port 389 using TCP\n [ 765] Unable to connect to LDAP (Active Directory) service on\n dc51.area51.com (Error: Strong(er) authentication\n required)\n*[ nnn] FAILURE: Unable to make a connection (LDAP (Active\n* Directory):contoso.com), result: 7609\n. "` | The LDAP Signing option is not selected, but the AD client has LDAP signing. [Enable LDAP Signing](create-active-directory-connections.md#create-an-active-directory-connection) and retry. |-| SMB volume creation fails with the following error: <br> `Failed to create the Active Directory machine account. 
Reason: LDAP Error: Intialization of LDAP library failed Details: Error: Machine account creation procedure failed` | This error occurs because the service or user account used in the Azure NetApp Files Active Directory connections does not have sufficient privilege to create computer objects or make modifications to the newly created computer object. <br> To solve the issue, you should grant the account being used greater privilege. You can apply a default role with sufficient privilege. You can also delegate additional privilege to the user or service account or to a group it's part of. | +| SMB volume creation fails with the following error: <br> `Failed to create the Active Directory machine account. Reason: LDAP Error: Intialization of LDAP library failed Details: Error: Machine account creation procedure failed` | This error occurs because the service or user account used in the Azure NetApp Files Active Directory connections does not have sufficient privilege to create computer objects or make modifications to the newly created computer object. <br> To resolve the issue, grant the account being used greater privilege. You can apply a default role with sufficient privileges or delegate more privilege to the user, service account, or group it's part of. | ## Errors for dual-protocol volumes | Error conditions | Resolutions | |-|-| | LDAP over TLS is enabled, and dual-protocol volume creation fails with the error `This Active Directory has no Server root CA Certificate`. | If this error occurs when you are creating a dual-protocol volume, make sure that the root CA certificate is uploaded in your NetApp account. |-| Dual-protocol volume creation fails with the error `Failed to validate LDAP configuration, try again after correcting LDAP configuration`. | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.x.x.x`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.x.x.x` -> `contoso.com`. | +| Dual-protocol volume creation fails with the error `Failed to validate LDAP configuration, try again after correcting LDAP configuration`. | The pointer (PTR) record of the Active Directory (AD) host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.x.x.x`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.x.x.x` -> `contoso.com`. | | Dual-protocol volume creation fails with the error `Failed to create the Active Directory machine account \\\"TESTAD-C8DD\\\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\\n [ 434] Loaded the preliminary configuration.\\n [ 537] Successfully connected to ip 10.x.x.x, port 88 using TCP\\n**[ 950] FAILURE`. | This error indicates that the AD password is incorrect when Active Directory is joined to the NetApp account. Update the AD connection with the correct password and try again. 
|-| Dual-protocol volume creation fails with the error `Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available`. | This error indicates that DNS is not reachable. The reason might be because DNS IP is incorrect, or there's a networking issue. Check the DNS IP entered in AD connection and make sure that the IP is correct. <br> Also, make sure that the AD and the volume are in same region and in same VNet. If they are in different VNETs, ensure that VNet peering is established between the two VNets. <br> See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#azure-native-environments) for details. | +| Dual-protocol volume creation fails with the error `Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available`. | This error indicates that DNS is not reachable. The reason might be because DNS IP is incorrect, or there's a networking issue. Check the DNS IP entered in AD connection and make sure that the IP is correct. <br> If you're using Basic network features, make sure that the AD configuration and the volume are in same region and in same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. <br> See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#azure-native-environments) for details. | | Permission is denied error when mounting a dual-protocol volume. | A dual-protocol volume supports both the NFS and SMB protocols. When you try to access the mounted volume on the UNIX system, the system attempts to map the UNIX user you use to a Windows user. <br> Ensure that the `POSIX` attributes are properly set on the AD DS User object. | ## Errors for NFSv4.1 Kerberos volumes If a volume CRUD operation is performed on a volume that is not in a terminal st |`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to fewer than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpc-gssd` service as follows. The exact service names may vary on some Linux distributions.<br>Most current distributions use the same service names. Perform the following as root or with `sudo`<br> `systemctl enable nfs-client.target && systemctl start nfs-client.target`<br>(Restart the `rpc-gssd` service.) 
<br> `systemctl restart rpc-gssd.service` </ul>| |`mount.nfs: an incorrect mount option was specified` | The issue might be related to the NFS client issue. Reboot the NFS client. | |`Hostname lookup failed` | You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.1.1.4`, the hostname of the AD machine (as found by using the hostname command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.1.1.4 -> AD1.contoso.com`. |-|`Volume creation fails due to unreachable DNS server` | Two possible solutions are available: <br> <ul><li> This error indicates that DNS is not reachable. The reason might be an incorrect DNS IP or a networking issue. Check the DNS IP entered in AD connection and make sure that the IP is correct. </li> <li> Make sure that the AD and the volume are in same region and in same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> | +|`Volume creation fails due to unreachable DNS server` | Two possible solutions are available: <br> <ul><li> This error indicates that DNS is not reachable. The reason might be an incorrect DNS IP or a networking issue. Check the DNS IP entered in the AD connection and make sure that the IP is correct. </li> <li> If you're using Basic network features, ensure that the AD and the volume are in same region and in same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> | |NFSv4.1 Kerberos volume creation fails with an error similar to the following example: <br> `Failed to enable NFS Kerberos on LIF "svm_e719cde8d6d0413fbd6adac0636cdecb_7ad0b82e_73349613". Failed to bind service principal name on LIF "svm_e719cde8d6d0413fbd6adac0636cdecb_7ad0b82e_73349613". SecD Error: server create fail join user auth.` |The KDC IP is wrong and the Kerberos volume has been created. Update the KDC IP with a correct address. <br> After you update the KDC IP, the error will not go away. You need to re-create the volume. | ## Errors for LDAP volumes | Error conditions | Resolutions | |-||-| Error when creating an SMB volume with ldapEnabled as true: <br> `Error Message: ldapEnabled option is only supported with NFS protocol volume. ` | You cannot create an SMB volume with LDAP enabled. <br> Create SMB volumes with LDAP disabled. | +| Error when creating an SMB volume with LDAP enabled as true: <br> `Error Message: ldapEnabled option is only supported with NFS protocol volume. ` | You cannot create an SMB volume with LDAP enabled. <br> Create SMB volumes with LDAP disabled. | | Error when updating the ldapEnabled parameter value for an existing volume: <br> `Error Message: ldapEnabled parameter is not allowed to update` | You cannot modify the LDAP option setting after creating a volume. <br> Do not update the LDAP option setting on a created volume. See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details. 
|-| Error when creating an LDAP-enabled NFS volume: <br> `Could not query DNS server` <br> `Sample error message:` <br> `"log": time="2020-10-21 05:04:04.300" level=info msg=Res method=GET url=/v2/Volumes/070d0d72-d82c-c893-8ce3-17894e56cea3 x-correlation-id=9bb9e9fe-abb6-4eb5-a1e4-9e5fbb838813 x-request-id=c8032cb4-2453-05a9-6d61-31ca4a922d85 xresp="200: {\"created\":\"2020-10-21T05:02:55.000Z\",\"lifeCycleState\":\"error\",\"lifeCycleStateDetails\":\"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available.\",\"name\":\"smb1\",\"ownerId\ \":\"8c925a51-b913-11e9-b0de-9af5941b8ed0\",\"region\":\"westus2stage\",\"volumeId\":\"070d0d72-d82c-c893-8ce3-` | This error occurs because DNS is unreachable. <br> <ul><li> Check if you've configured the correct site (site scoping) for Azure NetApp Files. </li><li> The reason that DNS is unreachable might be an incorrect DNS IP address or networking issues. Check the DNS IP address entered in the AD connection to make sure that it is correct. </li><li> Make sure that the AD and the volume are in the same region and the same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets.</li></ul> | +| Error when creating an LDAP-enabled NFS volume: <br> `Could not query DNS server` <br> `Sample error message:` <br> `"log": time="2020-10-21 05:04:04.300" level=info msg=Res method=GET url=/v2/Volumes/070d0d72-d82c-c893-8ce3-17894e56cea3 x-correlation-id=9bb9e9fe-abb6-4eb5-a1e4-9e5fbb838813 x-request-id=c8032cb4-2453-05a9-6d61-31ca4a922d85 xresp="200: {\"created\":\"2020-10-21T05:02:55.000Z\",\"lifeCycleState\":\"error\",\"lifeCycleStateDetails\":\"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available.\",\"name\":\"smb1\",\"ownerId\ \":\"8c925a51-b913-11e9-b0de-9af5941b8ed0\",\"region\":\"westus2stage\",\"volumeId\":\"070d0d72-d82c-c893-8ce3-` | This error occurs because DNS is unreachable. <br> <ul><li> Check if you've configured the correct site (site scoping) for Azure NetApp Files. </li><li> The reason that DNS is unreachable might be an incorrect DNS IP address or networking issues. Check the DNS IP address entered in the AD connection to make sure that it is correct. </li><li> If you're using Basic network features, ensure that the AD and the volume are in the same region and the same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets.</li></ul> | | Error when creating volume from a snapshot: <br> `Aggregate does not exist` | Azure NetApp Files doesn't support provisioning a new, LDAP-enabled volume from a snapshot that belongs to an LDAP-disabled volume. <br> Try creating a new LDAP-disabled volume from the given snapshot. | | When only primary group IDs are seen and user belongs to auxiliary groups too. | This is caused by a query timeout: <br> -Use [LDAP search scope option](configure-ldap-extended-groups.md). <br> -Use [preferred Active Directory servers for LDAP client](create-active-directory-connections.md#preferred-server-ldap). | | `Error describing volume - Entry doesn't exist for username: <username>, please try with a valid username` | -Check if the user is present on LDAP server. <br> -Check if the LDAP server is healthy. | |
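Several resolutions in these tables call for a PTR record in a reverse lookup zone. You can verify the record from any client that uses the same DNS server. A minimal sketch reusing the sample values from the tables (`10.1.1.4` and `AD1.contoso.com`):

```bash
# Reverse-resolve the AD host machine's IP; a correctly configured PTR record
# returns the host name registered in the reverse lookup zone.
nslookup 10.1.1.4
# Expected answer (illustrative): 4.1.1.10.in-addr.arpa  name = AD1.contoso.com.
```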
azure-netapp-files | Understand Guidelines Active Directory Domain Service Site | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md | Azure NetApp Files supports identity-based authentication over SMB through the f ### <a name="network-requirements"></a>Network requirements -For predictable Active Directory Domain Services operations with Azure NetApp Files volumes, reliable and low-latency network connectivity (equal to or less than 10 ms RTT) to AD DS domain controllers is highly recommended. Poor network connectivity or high network latency between Azure NetApp Files and AD DS domain controllers can cause client access interruptions or client timeouts. +For predictable Active Directory Domain Services operations with Azure NetApp Files volumes, reliable and low-latency network connectivity (equal to or less than 10 milliseconds [ms] roundtrip time [RTT]) to AD DS domain controllers is highly recommended. Poor network connectivity or high network latency between Azure NetApp Files and AD DS domain controllers can cause client access interruptions or client timeouts. >[!NOTE] >The 10ms recommendation adheres to guidance in [Creating a Site Design: Deciding which locations will become sites](/windows-server/identity/ad-ds/plan/creating-a-site-design#deciding-which-locations-will-become-sites). Ensure that you meet the following requirements about network topology and confi * Network Security Groups (NSGs) and AD DS domain controller firewalls must have appropriately configured rules to support Azure NetApp Files connectivity to AD DS and DNS. * For optimal experience, ensure the network latency is equal to or less than 10ms RTT between Azure NetApp Files and AD DS domain controllers. Any RTT higher than 10ms can lead to degraded application or user experience in latency-sensitive applications/environments. In case RTT is too high for desirable user experience, consider deploying replica domain controllers in your Azure NetApp Files environment. Also see [Active Directory Domain Services considerations](#active-directory-domain-services-considerations). -For more information on Microsoft Active Directory requirements for network latency over a WAN, see +For more information on Microsoft Active Directory requirements for network latency over a wide-area network, see [Creating a Site Design](/windows-server/identity/ad-ds/plan/creating-a-site-design). The required network ports are as follows: The required network ports are as follows: | NetBIOS Datagram Service | 138 | UDP | | NetBIOS | 139 | UDP | | LDAP** | 389 | TCP, UDP | -| SAM/LSA/SMB | 445 | TCP, UDP | +| Security Account Manager (SAM)/Local Security Authority (LSA)/SMB | 445 | TCP, UDP | | Kerberos (kpasswd) | 464 | TCP, UDP | | Active Directory Global Catalog | 3268 | TCP | | Active Directory Secure Global Catalog | 3269 | TCP | Azure NetApp Files uses the **AD Site Name** configured in the [Active Directory #### AD DS domain controller discovery -Azure NetApp Files initiates domain controller discovery every four hours. Azure NetApp Files queries the site-specific DNS service (SRV) resource record to determine which domain controllers are in the AD DS site specified in the **AD Site Name** field of the Azure NetApp Files AD connection. 
Azure NetApp Files domain controller server discovery checks the status of the services hosted on the domain controllers (such as Kerberos, LDAP, Net Logon, and LSA) and selects the optimal domain controller for authentication requests. +Azure NetApp Files initiates domain controller discovery every four hours. Azure NetApp Files queries the site-specific DNS service resource (SRV) record to determine which domain controllers are in the AD DS site specified in the **AD Site Name** field of the Azure NetApp Files AD connection. Azure NetApp Files domain controller server discovery checks the status of the services hosted on the domain controllers (such as Kerberos, LDAP, Net Logon, and LSA) and selects the optimal domain controller for authentication requests. -The DNS service (SRV) resource records for the AD DS site specified in the AD Site name field of the Azure NetApp Files AD connection must contain the list of IP addresses for the AD DS domain controllers that will be used by Azure NetApp Files. You can check the validity of the DNS (SRV) resource record by using the `nslookup` utility. +The DNS SRV records for the AD DS site specified in the AD Site name field of the Azure NetApp Files AD connection must contain the list of IP addresses for the AD DS domain controllers that will be used by Azure NetApp Files. You can check the validity of the DNS SRV record by using the `nslookup` utility. > [!NOTE] > If you make changes to the domain controllers in the AD DS site that is used by Azure NetApp Files, wait at least four hours between deploying new AD DS domain controllers and retiring existing AD DS domain controllers. This wait time enables Azure NetApp Files to discover the new AD DS domain controllers. Ensure that stale DNS records associated with the retired AD DS domain controlle #### <a name="ad-ds-ldap-discover"></a> AD DS LDAP server discovery -A separate discovery process for AD DS LDAP servers occurs when LDAP is enabled for an Azure NetApp Files NFS volume. When the LDAP client is created on Azure NetApp Files, Azure NetApp Files queries the AD DS domain service (SRV) resource record for a list of all AD DS LDAP servers in the domain and not the AD DS LDAP servers assigned to the AD DS site specified in the AD connection. +A separate discovery process for AD DS LDAP servers occurs when LDAP is enabled for an Azure NetApp Files NFS volume. When the LDAP client is created on Azure NetApp Files, Azure NetApp Files queries the AD DS SRV record for a list of all AD DS LDAP servers in the domain and not the AD DS LDAP servers assigned to the AD DS site specified in the AD connection. In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/deploy/dns-policies-overview) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned. Incorrect or incomplete AD DS site topology or configuration can result in volum >[!IMPORTANT] >The AD Site Name field is required to create an Azure NetApp Files AD connection. The AD DS site defined must exist and be properly configured. -Azure NetApp Files uses the AD DS Site to discover the domain controllers and subnets assigned to the AD DS Site defined in the AD Site Name. 
All domain controllers assigned to the AD DS Site must have good network connectivity from the Azure virtual network interfaces used by ANF and be reachable. AD DS domain controller VMs assigned to the AD DS Site that are used by Azure NetApp Files must be excluded from cost management policies that shut down VMs. +Azure NetApp Files uses the AD DS Site to discover the domain controllers and subnets assigned to the AD DS Site defined in the AD Site Name. All domain controllers assigned to the AD DS Site must have good network connectivity from the Azure virtual network interfaces used by ANF and be reachable. AD DS domain controller VMs assigned to the AD DS Site used by Azure NetApp Files must be excluded from cost management policies that shut down VMs. If Azure NetApp Files is not able to reach any domain controllers assigned to the AD DS site, the domain controller discovery process will query the AD DS domain for a list of all domain controllers. The list of domain controllers returned from this query is an unordered list. As a result, Azure NetApp Files may try to use domain controllers that are not reachable or well-connected, which can cause volume creation failures, problems with client queries, authentication failures, and failures to modify Azure NetApp Files AD connections. -You must update the AD DS Site configuration whenever new domain controllers are deployed into a subnet assigned to the AD DS site that is used by the Azure NetApp Files AD Connection. Ensure that the DNS SRV records for the site reflect any changes to the domain controllers assigned to the AD DS Site used by Azure NetApp Files. You can check the validity of the DNS (SRV) resource record by using the `nslookup` utility. +You must update the AD DS Site configuration whenever new domain controllers are deployed into a subnet assigned to the AD DS site that is used by the Azure NetApp Files AD Connection. Ensure that the DNS SRV records for the site reflect any changes to the domain controllers assigned to the AD DS Site used by Azure NetApp Files. You can check the validity of the DNS SRV resource record by using the `nslookup` utility. > [!NOTE] > Azure NetApp Files doesn't support the use of AD DS Read-only Domain Controllers (RODC). To prevent Azure NetApp Files from using an RODC, do not configure the **AD Site Name** field of the AD connections with an RODC. Writeable domain controllers are supported and are required for authentication with Azure NetApp Files volumes. For more information, see [Active Directory Replication Concepts](/windows-server/identity/ad-ds/get-started/replication/active-directory-replication-concepts). You must update the AD DS Site configuration whenever new domain controllers are An AD DS site topology is a logical representation of the network where Azure NetApp Files is deployed. In this section, the sample configuration scenario for AD DS site topology intends to show a _basic_ AD DS site design for Azure NetApp Files. It is not the only way to design network or AD site topology for Azure NetApp Files. > [!IMPORTANT]-> For scenarios that involve complex AD DS or complex network topologies, you should have a Microsoft Azure CSA review the Azure NetApp Files networking and AD Site design. +> For scenarios that involve complex AD DS or complex network topologies, you should have a Microsoft Azure cloud solutions architect (CSA) review the Azure NetApp Files networking and AD Site design. 
The following diagram shows a sample network topology:-sample-network-topology.png + :::image type="content" source="./media/understand-guidelines-active-directory-domain-service-site/sample-network-topology.png" alt-text="Diagram illustrating network topology." lightbox="./media/understand-guidelines-active-directory-domain-service-site/sample-network-topology.png"::: In the sample network topology, an on-premises AD DS domain (`anf.local`) is extended into an Azure virtual network. The on-premises network is connected to the Azure virtual network using an Azure ExpressRoute circuit. The Azure virtual network has four subnets: Gateway Subnet, Azure Bastion Subnet Azure NetApp Files can only use one AD DS site to determine which domain controllers will be used for authentication, LDAP queries, and Kerberos. In the sample scenario, two subnet objects are created and assigned to a site called `ANF` using the Active Directory Sites and Services utility. One subnet object is mapped to the AD DS subnet, 10.0.0.0/24, and the other subnet object is mapped to the ANF delegated subnet, 10.0.2.0/24. -In the Active Directory Sites and Services tool, verify that the AD DS domain controllers deployed into the AD DS subnet are assigned to the `ANF` site: +In the Active Directory Sites and Services tool, verify that the AD DS domain controllers deployed into the AD DS subnet are assigned to the `ANF` site. -To create the subnet object that maps to the AD DS subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**. -ΓÇâ -In the **New Object - Subnet** dialog, the 10.0.0.0/24 IP address range for the AD DS Subnet is entered in the **Prefix** field. Select `ANF` as the site object for the subnet. Select **OK** to create the subnet object and assign it to the `ANF` site. +If they aren't assigned, create the subnet object that maps to the AD DS subnet in the Azure virtual network. Right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**. In the **New Object - Subnet** dialog, the 10.0.0.0/24 IP address range for the AD DS Subnet is entered in the **Prefix** field. Select `ANF` as the site object for the subnet. Select **OK** to create the subnet object and assign it to the `ANF` site. To verify that the new subnet object is assigned to the correct site, right-click the 10.0.0.0/24 subnet object and select **Properties**. The **Site** field should show the `ANF` site object: To create the subnet object that maps to the Azure NetApp Files delegated subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**. Azure NetApp Files SMB, dual-protocol, and NFSv4.1 Kerberos volumes support cros * [Create a dual-protocol volume](create-volumes-dual-protocol.md) * [Errors for SMB and dual-protocol volumes](troubleshoot-volumes.md#errors-for-smb-and-dual-protocol-volumes) * [Access SMB volumes from Microsoft Entra joined Windows virtual machines](access-smb-volume-from-windows-client.md)-* [Understand DNS in Azure NetApp Files](domain-name-system-concept.md). +* [Understand DNS in Azure NetApp Files](domain-name-system-concept.md). |
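The SRV record validation that this article describes can be run with `nslookup` from a domain-joined client. A minimal sketch using the sample site (`ANF`) and domain (`anf.local`) from this scenario; the record name follows the standard `_ldap._tcp.<site>._sites.<domain>` convention:

```bash
# List the site-specific LDAP SRV records used for domain controller discovery.
# Site name and domain come from the sample topology in this article.
nslookup -type=SRV _ldap._tcp.ANF._sites.anf.local
```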
chaos-studio | Chaos Studio Fault Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md | Currently, a maximum of 4 process names can be listed in the processNames parame | Property | Value | | | |-| Capability name | DisaleAutoscale | +| Capability name | DisableAutoscale | | Target type | Microsoft-AutoscaleSettings | | Description | Disables the [autoscale service](/azure/azure-monitor/autoscale/autoscale-overview). When autoscale is disabled, resources such as virtual machine scale sets, web apps, service bus, and [more](/azure/azure-monitor/autoscale/autoscale-overview#supported-services-for-autoscale) aren't automatically added or removed based on the load of the application. | | Prerequisites | The autoScalesetting resource that's enabled on the resource must be onboarded to Chaos Studio. | |
chaos-studio | Chaos Studio Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-overview.md | description: Measure, understand, and build resilience to incidents by using cha Previously updated : 05/27/2022 Last updated : 09/05/2024 |
chaos-studio | Chaos Studio Region Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-region-availability.md | |
communication-services | Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md | Another type of authentication uses *user access tokens* to authenticate against The following table shows the Azure Communication Services SDKs and their authentication options: -| SDK | Authentication option | -| -- | -| -| Identity | Access Key or Microsoft Entra authentication | -| SMS | Access Key or Microsoft Entra authentication | -| Phone Numbers | Access Key or Microsoft Entra authentication | -| Email | Access Key or Microsoft Entra authentication | -| Calling | User Access Token | -| Chat | User Access Token | +| SDK | Authentication option | +|--|-| +| Identity | Access Key or Microsoft Entra authentication | +| SMS | Access Key or Microsoft Entra authentication | +| Phone Numbers | Access Key or Microsoft Entra authentication | +| Email | Access Key or Microsoft Entra authentication | +| Advanced Messaging | Access Key or Microsoft Entra authentication | +| Calling | User Access Token | +| Chat | User Access Token | Each authorization option is briefly described below: |
communication-services | Download Media | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/advanced-messaging/whatsapp/download-media.md | Azure Communication Services enables you to send and receive WhatsApp messages. Use case: A business receives a WhatsApp message from their customer that contains an image. The business needs to download the image from WhatsApp in order to view the image. +Incoming messages to the business are published as [Microsoft.Communication.AdvancedMessageReceived](/azure/event-grid/communication-services-advanced-messaging-events#microsoftcommunicationadvancedmessagereceived-event) Event Grid events. This quickstart uses the media ID and media MIME type in the AdvancedMessageReceived event to download the media payload. ++Here's an example of an AdvancedMessageReceived event with media content: +```json +[{ + "id": "00000000-0000-0000-0000-000000000000", + "topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}", + "subject": "advancedMessage/sender/{sender@id}/recipient/11111111-1111-1111-1111-111111111111", + "data": { + "channelType": "whatsapp", + "media": { + "mimeType": "image/jpeg", + "id": "22222222-2222-2222-2222-222222222222" + }, + "from": "{sender@id}", + "to": "11111111-1111-1111-1111-111111111111", + "receivedTimestamp": "2023-07-06T18:30:19+00:00" + }, + "eventType": "Microsoft.Communication.AdvancedMessageReceived", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2023-07-06T18:30:22.1921716Z" +}] +``` + [!INCLUDE [Download WhatsApp media messages with .NET](./includes/download-medi)] ## Next steps |
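The download quickstart keys off two fields of that payload: `data.media.id` and `data.media.mimeType`. As a minimal sketch of the extraction step (PowerShell, assuming the event payload above was saved to a hypothetical `event.json` file):

```powershell
# Parse the Event Grid payload (saved locally as event.json for this sketch).
$events = Get-Content -Raw -Path "event.json" | ConvertFrom-Json

foreach ($event in $events) {
    if ($event.eventType -eq "Microsoft.Communication.AdvancedMessageReceived") {
        $mediaId  = $event.data.media.id        # ID passed to the media download call
        $mimeType = $event.data.media.mimeType  # MIME type used to choose a file extension
        Write-Output "Media $mediaId has MIME type $mimeType"
    }
}
```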
container-apps | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md | Content-Type: application/json "expires_on": "1586984735", "resource": "https://vault.azure.net", "token_type": "Bearer",- "client_id": "5E29463D-71DA-4FE0-8E69-999B57DB23B0" + "client_id": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" }- ``` This response is the same as the [response for the Microsoft Entra service-to-service access token request](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#successful-response). To access Key Vault, add the value of `access_token` to a client connection with the vault. |
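For context, a token response like the one above is obtained by calling the app's local identity endpoint. Here's a sketch in PowerShell, assuming the platform-injected `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables and the `2019-08-01` API version used by the App Service-style managed identity protocol:

```powershell
# Request a Key Vault-scoped access token from the local managed identity endpoint.
$resource = "https://vault.azure.net"
$uri = "$($env:IDENTITY_ENDPOINT)?resource=$resource&api-version=2019-08-01"

$token = Invoke-RestMethod -Uri $uri -Headers @{ "X-IDENTITY-HEADER" = $env:IDENTITY_HEADER }

# Attach $token.access_token as a bearer token on subsequent Key Vault requests.
Write-Output $token.access_token
```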
container-apps | Vnet Custom Internal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md | $vnet = New-AzVirtualNetwork @VnetArgs +When using the Workload profiles environment, you need to update the VNET to delegate the subnet to `Microsoft.App/environments`. This delegation is not applicable to the Consumption-only environment. ++# [Bash](#tab/bash) ++```azurecli-interactive +az network vnet subnet update \ + --resource-group $RESOURCE_GROUP \ + --vnet-name $VNET_NAME \ + --name infrastructure-subnet \ + --delegations Microsoft.App/environments +``` ++# [Azure PowerShell](#tab/azure-powershell) ++```azurepowershell-interactive +$delegation = New-AzDelegation -Name 'containerApp' -ServiceName 'Microsoft.App/environments' +$vnet = Set-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet -AddressPrefix $SubnetArgs.AddressPrefix -Delegation $delegation +$vnet | Set-AzVirtualNetwork +``` +++ With the VNET established, you can now query for the infrastructure subnet ID. # [Bash](#tab/bash) |
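To confirm the delegation was applied, you can read the subnet configuration back. A small check in PowerShell, reusing the `$vnet` and `$SubnetArgs` variables from the snippet above (an assumption that `$VnetArgs` carries `Name` and `ResourceGroupName` keys, as in the splatted `New-AzVirtualNetwork` call):

```powershell
# Refresh the virtual network object, then expect a delegation with
# ServiceName Microsoft.App/environments on the infrastructure subnet.
$vnet = Get-AzVirtualNetwork -Name $VnetArgs.Name -ResourceGroupName $VnetArgs.ResourceGroupName
$subnet = Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet
$subnet.Delegations | Format-Table Name, ServiceName
```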
container-apps | Vnet Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md | $vnet = New-AzVirtualNetwork @VnetArgs +When using the Workload profiles environment, you need to update the VNET to delegate the subnet to `Microsoft.App/environments`. This delegation is not applicable to the Consumption-only environment. ++# [Bash](#tab/bash) ++```azurecli-interactive +az network vnet subnet update \
 --resource-group $RESOURCE_GROUP \
 --vnet-name $VNET_NAME \
 --name infrastructure-subnet \
 --delegations Microsoft.App/environments
+```
++# [Azure PowerShell](#tab/azure-powershell)
++```azurepowershell-interactive
+$delegation = New-AzDelegation -Name 'containerApp' -ServiceName 'Microsoft.App/environments'
+$vnet = Set-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet -AddressPrefix $SubnetArgs.AddressPrefix -Delegation $delegation
+$vnet | Set-AzVirtualNetwork
+```
+++ With the virtual network created, you can retrieve the ID for the infrastructure subnet. # [Bash](#tab/bash) |
container-registry | Allow Access Trusted Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/allow-access-trusted-services.md | Title: Access network-restricted registry using trusted Azure service description: Enable a trusted Azure service instance to securely access a network-restricted container registry to pull or push images -+ |
container-registry | Authenticate Aks Cross Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-aks-cross-tenant.md | Title: Authenticate from AKS cluster to Azure container registry in different AD tenant description: Configure an AKS cluster's service principal with permissions to access your Azure container registry in a different AD tenant-+ |
container-registry | Authenticate Kubernetes Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-kubernetes-options.md | Title: Scenarios to authenticate with Azure Container Registry from Kubernetes description: Overview of options and scenarios to authenticate to an Azure container registry from a Kubernetes cluster to pull container images-+ |
container-registry | Buffer Gate Public Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/buffer-gate-public-content.md | Title: Manage public content in private container registry description: Practices and workflows in Azure Container Registry to manage dependencies on public images from Docker Hub and other public content-+ |
container-registry | Container Registry Access Selected Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-access-selected-networks.md | Title: Configure public registry access description: Configure IP rules to enable access to an Azure container registry from selected public IP addresses or address ranges.-+ |
container-registry | Container Registry Api Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-api-deprecation.md | Title: Removed and deprecated features for Azure Container Registry description: This article lists and notifies the features that are deprecated or removed from support for Azure Container Registry.-+ Last updated 10/31/2023 |
container-registry | Container Registry Auth Aci | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-aci.md | Title: Access from Container Instances description: Learn how to provide access to images in your private container registry from Azure Container Instances by using a Microsoft Entra service principal.-+ |
container-registry | Container Registry Auth Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-kubernetes.md | Title: Authenticate with an Azure container registry using a Kubernetes pull secret description: Learn how to provide a Kubernetes cluster with access to images in your Azure container registry by creating a pull secret using a service principal-+ |
container-registry | Container Registry Auth Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md | Title: Authenticate with service principal description: Provide access to images in your private container registry by using a Microsoft Entra service principal.-+ |
container-registry | Container Registry Authentication Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md | Title: Authenticate with managed identity description: Provide access to images in your private container registry by using a user-assigned or system-assigned managed Azure identity.-+ |
container-registry | Container Registry Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md | Title: Registry authentication options description: Authentication options for a private Azure container registry, including signing in with a Microsoft Entra identity, using service principals, and using optional admin credentials.-+ |
container-registry | Container Registry Auto Purge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md | Title: Purge tags and manifests description: Use a purge command to delete multiple tags and manifests from an Azure container registry based on age and a tag filter, and optionally schedule purge operations.-+ |
container-registry | Container Registry Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-azure-policy.md | Title: Compliance using Azure Policy description: Assign built-in policy definitions in Azure Policy to audit compliance of your Azure container registries-+ |
container-registry | Container Registry Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-best-practices.md | Title: Registry best practices description: Learn how to use your Azure container registry effectively by following these best practices.-+ For background on registry concepts, see [About registries, repositories, and im Create your container registry in the same Azure region in which you deploy containers. Placing your registry in a region that is network-close to your container hosts can help lower both latency and cost. -Network-close deployment is one of the primary reasons for using a private container registry. Docker images have an efficient [layering construct](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/) that allows for incremental deployments. However, new nodes need to pull all layers required for a given image. This initial `docker pull` can quickly add up to multiple gigabytes. Having a private registry close to your deployment minimizes the network latency. +Network-close deployment is one of the primary reasons for using a private container registry. Docker images have an efficient [layering construct](https://docs.docker.com/get-started/docker-concepts/building-images/understanding-image-layers/) that allows for incremental deployments. However, new nodes need to pull all layers required for a given image. This initial `docker pull` can quickly add up to multiple gigabytes. Having a private registry close to your deployment minimizes the network latency. Additionally, all public clouds, Azure included, implement network egress fees. Pulling images from one datacenter to another adds network egress fees, in addition to the latency. ## Geo-replicate multi-region deployments |
container-registry | Container Registry Check Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-check-health.md | Title: Check registry health description: Learn how to run a quick diagnostic command to identify common problems when using an Azure container registry, including local Docker configuration and connectivity to the registry-+ |
container-registry | Container Registry Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-concepts.md | Title: About registries, repositories, images, and artifacts description: Introduction to key concepts of Azure container registries, repositories, container images, and other artifacts.-+ |
container-registry | Container Registry Dedicated Data Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-dedicated-data-endpoints.md | Title: Mitigate data exfiltration with dedicated data endpoints description: Azure Container Registry is introducing dedicated data endpoints available to mitigate data-exfiltration concerns.-+ |
container-registry | Container Registry Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md | Title: Delete image resources description: Details on how to effectively manage registry size by deleting container image data using Azure CLI commands.-+ |
container-registry | Container Registry Event Grid Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-event-grid-quickstart.md | Title: Quickstart - Send events to Event Grid description: In this quickstart, you enable Event Grid events for your container registry, then send container image push and delete events to a sample application.-+ |
container-registry | Container Registry Firewall Access Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-firewall-access-rules.md | Title: Firewall access rules description: Configure rules to access an Azure container registry from behind a firewall, by allowing access to REST API and data endpoint domain names or service-specific IP address ranges.-+ |
container-registry | Container Registry Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-geo-replication.md | Title: Geo-replicate a registry description: Get started creating and managing a geo-replicated Azure container registry, which enables the registry to serve multiple regions with multi-primary regional replicas. Geo-replication is a feature of the Premium service tier. -+ Last updated 10/31/2023 |
container-registry | Container Registry Get Started Docker Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-docker-cli.md | Title: Push & pull container image description: Push and pull Docker images to your private container registry in Azure using the Docker CLI-+ Last updated 10/31/2023 |
container-registry | Container Registry Health Error Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-health-error-reference.md | Title: Error reference for registry health checks description: Error codes and possible solutions to problems found by running the az acr check-health diagnostic command in Azure Container Registry-+ |
container-registry | Container Registry Helm Repos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md | Title: Store Helm charts description: Learn how to store Helm charts for your Kubernetes applications using repositories in Azure Container Registry-+ |
container-registry | Container Registry Image Formats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-formats.md | Title: Supported content formats description: Learn about content formats supported by Azure Container Registry, including Docker-compatible container images, Helm charts, OCI images, and OCI artifacts.-+ |
container-registry | Container Registry Image Lock | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md | Title: Lock images description: Set attributes for a container image or repository so it can't be deleted or overwritten in an Azure container registry.-+ |
container-registry | Container Registry Image Tag Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-tag-version.md | Title: Image tag best practices description: Best practices for tagging and versioning Docker container images when pushing images to and pulling images from an Azure container registry -+ Last updated 10/31/2023 |
container-registry | Container Registry Import Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md | Title: Import container images description: Import container images to an Azure container registry by using Azure APIs, without needing to run Docker commands.-+ Last updated 10/31/2023 |
container-registry | Container Registry Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md | Title: Set up private endpoint with private link description: Set up a private endpoint on a container registry and enable access over a private link in a local virtual network. Private link access is a feature of the Premium service tier.-+ |
container-registry | Container Registry Repositories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repositories.md | Title: View repositories in portal description: Use the Azure portal to view Azure Container Registry repositories, which host Docker container images and other supported artifacts.-+ Last updated 10/31/2023 |
container-registry | Container Registry Repository Scoped Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md | Title: Permissions to repositories in Azure Container Registry description: Create a token to grant and manage repository scoped permissions within a container registry. The token helps to perform actions, such as pull images, push images, delete images, read metadata, and write metadata.-+ Last updated 10/31/2023 |
container-registry | Container Registry Retention Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-retention-policy.md | Title: Policy to retain untagged manifests description: Learn how to enable a retention policy in your Premium Azure container registry, for automatic deletion of untagged manifests after a defined period.-+ |
container-registry | Container Registry Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-roles.md | Title: Registry roles and permissions description: Use Azure role-based access control (Azure RBAC) and identity and access management (IAM) to provide fine-grained permissions to resources in an Azure container registry.-+ Last updated 10/31/2023 |
container-registry | Container Registry Skus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-skus.md | Title: Registry service tiers and features description: Learn about the features and limits (quotas) in the Basic, Standard, and Premium service tiers (SKUs) of Azure Container Registry.-+ Last updated 10/31/2023 |
container-registry | Container Registry Support Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-support-policies.md | Title: Azure Container Registry technical support policies description: Learn about Azure Container Registry (ACR) technical support policies-+ Last updated 10/31/2023 |
container-registry | Container Registry Task Run Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-task-run-template.md | Title: Quick task run with template description: Queue an ACR task run to build an image using an Azure Resource Manager template-+ |
container-registry | Container Registry Tasks Authentication Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-key-vault.md | Title: External authentication from ACR task description: Configure an Azure Container Registry Task (ACR Task) to read Docker Hub credentials stored in an Azure key vault, by using a managed identity for Azure resources.-+ |
container-registry | Container Registry Tasks Authentication Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-managed-identity.md | description: Enable a managed identity for Azure Resources in an Azure Container -+ Last updated 10/31/2023 |
container-registry | Container Registry Tasks Base Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-base-images.md | Title: Base image updates - Tasks description: Learn about base images for application container images, and about how a base image update can trigger an Azure Container Registry task.-+ Last updated 10/31/2023 |
container-registry | Container Registry Tasks Cross Registry Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-cross-registry-authentication.md | Title: Cross-registry authentication from ACR task description: Configure an Azure Container Registry Task (ACR Task) to access another private Azure container registry by using a managed identity for Azure resources-+ |
container-registry | Container Registry Tasks Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-logs.md | Title: View task run logs - Tasks description: How to view and manage run logs generated by ACR Tasks.-+ Last updated 10/31/2023 |
container-registry | Container Registry Tasks Multi Step | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-multi-step.md | Title: Multi-step task to build, test & patch image description: Introduction to multi-step tasks, a feature of ACR Tasks in Azure Container Registry that provides task-based workflows for building, testing, and patching container images in the cloud.-+ Last updated 10/31/2023 |
container-registry | Container Registry Tasks Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-overview.md | Title: Overview of Azure Container Registry tasks description: Learn about Azure Container Registry tasks, a suite of features that provide automated building, management, and patching of container images in the cloud.-+ Last updated 01/24/2024 |
container-registry | Container Registry Tasks Pack Build | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-pack-build.md | Title: Build image with Cloud Native Buildpack description: Use the az acr pack build command to build a container image from an app and push to Azure Container Registry, without using a Dockerfile.-+ Last updated 10/31/2023 |
container-registry | Container Registry Tasks Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-samples.md | Title: ACR task samples description: Sample Azure Container Registry Tasks (ACR Tasks) to build, run, and patch container images-+ Last updated 10/31/2023 |
container-registry | Container Registry Tasks Scheduled | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md | Title: Tutorial - Schedule an ACR task description: In this tutorial, learn how to run an Azure Container Registry Task on a defined schedule by setting one or more timer triggers-+ |
container-registry | Container Registry Transfer Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-cli.md | Title: ACR Transfer with Az CLI description: Use ACR Transfer with Az CLI-+ Last updated 10/31/2023 |
container-registry | Container Registry Transfer Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-images.md | Title: ACR Transfer with Arm Templates description: ACR Transfer with Az CLI with ARM templates-+ Last updated 10/31/2023 |
container-registry | Container Registry Transfer Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-prerequisites.md | Title: Transfer artifacts description: Overview of ACR Transfer and prerequisites-+ Last updated 10/31/2023 |
container-registry | Container Registry Transfer Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-troubleshooting.md | description: Troubleshoot ACR Transfer Last updated 10/31/2023-+ |
container-registry | Container Registry Troubleshoot Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-access.md | Title: Troubleshoot network issues with registry description: Symptoms, causes, and resolution of common problems when accessing an Azure container registry in a virtual network or behind a firewall-+ Last updated 10/31/2023 |
container-registry | Container Registry Troubleshoot Login | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-login.md | Title: Troubleshoot login to registry description: Symptoms, causes, and resolution of common problems when logging into an Azure container registry-+ Last updated 10/31/2023 |
container-registry | Container Registry Troubleshoot Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-performance.md | Title: Troubleshoot registry performance description: Symptoms, causes, and resolution of common problems with the performance of a registry-+ Last updated 10/31/2023 |
container-registry | Container Registry Tutorial Sign Build Push | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md | In this tutorial: ## Install Notation CLI and AKV plugin -1. Install Notation v1.1.0 on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments. +1. Install Notation v1.2.0 on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments. ```bash # Download, extract and install- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.1.0/notation_1.1.0_linux_amd64.tar.gz + curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.2.0/notation_1.2.0_linux_amd64.tar.gz tar xvzf notation.tar.gz # Copy the Notation binary to the desired bin directory in your $PATH, for example |
container-registry | Container Registry Tutorial Sign Trusted Ca | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-trusted-ca.md | In this article: ## Install the notation CLI and AKV plugin -1. Install Notation v1.1.0 on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments. +1. Install Notation v1.2.0 on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments. ```bash # Download, extract and install- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.1.0/notation_1.1.0_linux_amd64.tar.gz + curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.2.0/notation_1.2.0_linux_amd64.tar.gz tar xvzf notation.tar.gz # Copy the notation cli to the desired bin directory in your PATH, for example |
container-registry | Container Registry Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-vnet.md | Title: Restrict access using a service endpoint description: Restrict access to an Azure container registry using a service endpoint in an Azure virtual network. Service endpoint access is a feature of the Premium service tier.-+ |
container-registry | Container Registry Webhook Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-webhook-reference.md | Title: Registry webhook schema reference description: Reference for JSON payload for webhook requests in an Azure container registry, which are generated when webhooks are enabled for artifact push or delete events-+ Last updated 10/31/2023 |
container-registry | Container Registry Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-webhook.md | Title: Webhooks to respond to registry actions description: Learn how to use webhooks to trigger events when push or pull actions occur in your registry repositories.-+ |
container-registry | Push Multi Architecture Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/push-multi-architecture-images.md | Title: Multi-architecture images in your registry description: Use your Azure container registry to build, import, store, and deploy multi-architecture (multi-arch) images-+ Last updated 10/31/2023 |
container-registry | Scan Images Defender | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/scan-images-defender.md | Title: Scan registry images with Microsoft Defender for Cloud description: Learn about using Microsoft Defender for container registries to scan images in your Azure container registries-+ Last updated 10/31/2023 |
container-registry | Tasks Agent Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md | Title: Use dedicated pool to run task - Tasks description: Set up a dedicated compute pool (agent pool) in your registry to run an Azure Container Registry task.-+ Last updated 10/31/2023 |
container-registry | Tasks Consume Public Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-consume-public-content.md | Title: Task workflow to manage public registry content description: Create an automated Azure Container Registry Tasks workflow to track, manage, and consume public image content in a private Azure container registry. -+ Last updated 10/31/2023 |
container-registry | Troubleshoot Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-cache.md | Title: Troubleshoot Artifact cache description: Learn how to troubleshoot the most common problems for a registry enabled with the Artifact cache feature.-+ Last updated 10/31/2023 |
container-registry | Zone Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/zone-redundancy.md | Title: Zone-redundant registry for high availability description: Learn about enabling zone redundancy in Azure Container Registry. Create a container registry or replication in an Azure availability zone. Zone redundancy is a feature of the Premium service tier.-+ Last updated 10/31/2023 |
cost-management-billing | Azure Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/azure-openai.md | Title: Save costs with Microsoft Azure OpenAI Service Provisioned Reservations -description: Learn about how to save costs with Microsoft Azure OpenAI Service Provisioned Reservations. +description: Save costs with Microsoft Azure OpenAI Service Provisioned Reservations by committing to a reservation for your provisioned throughput units. - Previously updated : 08/30/2024+ Last updated : 09/04/2024 # customer intent: As a billing administrator, I want to learn about saving costs with Microsoft Azure OpenAI Service Provisioned Reservations and buy one. You can save money on Azure OpenAI provisioned throughput by committing to a res To purchase an Azure OpenAI reservation, you choose an Azure region, quantity, and then add the Azure OpenAI SKU to your cart. Then you choose the quantity of provisioned throughput units that you want to purchase. -When you purchase a reservation, the Azure OpenAI provisioned throughput usage that matches the reservation attributes is no longer charged at the hourly rates. +When you purchase a reservation, the Azure OpenAI provisioned throughput usage that matches the reservation attributes is no longer charged at the hourly rates. For pricing information, see the [Azure OpenAI Service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) page. ++## Reservation application A reservation applies to provisioned deployments only and doesn't include other offerings such as standard deployments or fine tuning. Azure OpenAI Service Provisioned Reservations also don't guarantee capacity availability. To ensure capacity availability, the recommended best practice is to create your deployments before you buy your reservation. When the reservation expires, Azure OpenAI deployments continue to run but are billed at the hourly rate. +## Renewal options + You can choose to enable automatic renewal of reservations by selecting the option in the renewal settings or at time of purchase. With Azure OpenAI reservation auto renewal, the reservation renews using the same reservation order ID, and a new reservation doesn't get purchased. You can also choose to replace this reservation with a new reservation purchase in renewal settings and a replacement reservation is purchased when the reservation expires. By default, the replacement reservation has the same attributes as the expiring reservation. You can optionally change the name, billing frequency, term, or quantity in the renewal settings. Any user with owner access on the reservation and the subscription used for billing can set up renewal. -For pricing information, see the [Azure OpenAI Service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) page. +## Prerequisites You can buy an Azure OpenAI reservation in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md). To buy a reservation: For example, assume that your total consumption of provisioned throughput units ## Buy a Microsoft Azure OpenAI reservation +To buy an Azure OpenAI reservation, follow these steps: + 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. 
Select **All services** > **Reservations** and then select **Azure OpenAI** :::image type="content" source="./media/azure-openai/purchase-openai.png" border="true" alt-text="Screenshot showing the Purchase reservations page." lightbox="./media/azure-openai/purchase-openai.png" ::: For example, assume that your total consumption of provisioned throughput units ## Cancel, exchange, or refund reservations -You can cancel or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md). However, Exchanges aren't allowed for Azure OpenAI Service Provisioned Reservations. +*Exchange isn't supported for Azure OpenAI Service Provisioned reservations*. ++You can cancel or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md). If you want to request a refund for your Azure OpenAI reservation, you can do so by following these steps: The following examples show how the Azure OpenAI reservation discount applies, d - **Example 3** - A reservation that's smaller than the deployed units. For example, you purchase 200 PTUs on a reservation and you deploy 600 PTUs. In this example, the reservation discount is applied to the 200 PTUs that were used. The remaining 400 PTUs are charged at the pay-as-you-go rate. - **Example 4** - A reservation that's the same size as the total of two deployments. For example, you purchase 200 PTUs on a reservation and you have two deployments of 100 PTUs each. In this example, the discount is applied to the sum of deployed units. -## Increase the size of an Azure OpenAI reservation +## Increase Azure OpenAI reservation capacity -If you want to increase the size of your Azure OpenAI reservation, you can buy more Azure OpenAI Service Provisioned Reservations using the preceding steps. +You can't change the size of a purchased reservation. If you want to increase your Azure OpenAI reservation capacity to cover more hourly PTUs, you can buy more Azure OpenAI Service Provisioned reservations. ## Related content |
cost-management-billing | Exchange And Refund Azure Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md | +# customer intent: As a reservation purchaser, I want to learn how to exchange or refund Azure reservations. # Self-service exchanges and refunds for Azure Reservations -Azure Reservations provide flexibility to help meet your evolving needs. Reservation products are interchangeable with each other if they're the same type of reservation. For example, you can exchange multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. You can also exchange multiple SQL database reservation types including SQL Managed Instances and Elastic Pool with each other. --However, you can't exchange dissimilar reservations. For example, you can't exchange an Azure Cosmos DB reservation for SQL Database. +Azure Reservations provide flexibility to help meet your evolving needs. Reservation products are interchangeable with each other if they're the same type of reservation. For example, you can exchange multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. You can also exchange multiple SQL database reservation types including SQL Managed Instances and Elastic Pool with each other. However, you can't exchange dissimilar reservations. For example, you can't exchange an Azure Cosmos DB reservation for SQL Database. You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 region for one that's in West Europe region. +## Reservation exchange policy changes + > [!NOTE] > Initially planned to end on January 1, 2024, the availability of Azure compute reservation exchanges for Azure Virtual Machine, Azure Dedicated Host and Azure App Service has been extended **until further notice**. > You can also exchange a reservation to purchase another reservation of a similar When you exchange a reservation, you can change your term from one-year to three-year. Or, you can change the term from three-year to one-year. +Not all reservations are eligible for exchange. For example, you can't exchange the following reservations: ++- Azure Databricks reserved capacity +- Azure OpenAI provisioned throughput +- Synapse Analytics Pre-purchase plan +- Red Hat plans +- SUSE Linux plans + You can also refund reservations, but the sum total of all canceled reservation commitments in your billing scope (such as EA, Microsoft Customer Agreement, and Microsoft Partner Agreement) can't exceed USD 50,000 in a 12 month rolling window. +*Microsoft is not currently charging early termination fees for reservation refunds. We might charge the fees for refunds made in the future. We currently don't have a date for enabling the fee.* + The following reservations aren't eligible for refunds: - Azure Databricks reserved capacity - Synapse Analytics Pre-purchase plan - Azure VMware solution by CloudSimple-- Azure Red Hat Open Shift - Red Hat plans - SUSE Linux plans -> [!NOTE] -> - **You must have owner or Reservation administrator access on the Reservation Order to exchange or refund an existing reservation**. You can [Add or change users who can manage a reservation](./manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default). 
-> - Microsoft is not currently charging early termination fees for reservation refunds. We might charge the fees for refunds made in the future. We currently don't have a date for enabling the fee. +## Prerequisites ++**You must have owner or Reservation administrator access on the Reservation Order to exchange or refund an existing reservation**. You can [Add or change users who can manage a reservation](./manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default). + ## How to exchange or refund an existing reservation Let's look at an example with the previous points in mind. If you bought a 300,0 If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). -## Next steps +## Related content - To learn how to manage a reservation, see [Manage Azure Reservations](manage-reserved-vm-instance.md). - Learn about [Azure savings plan for compute](../savings-plan/index.yml) |
cost-management-billing | Reservation Renew | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md | Title: Automatically renew Azure reservations -description: Learn how you can automatically renew Azure reservations to continue getting reservation discounts. +description: Learn how to automatically renew Azure reservations to maintain reservation discounts, avoid manual renewals, and ensure continuous savings benefits. Previously updated : 06/05/2024 Last updated : 09/04/2024 +# customer intent: As a reservation purchaser, I want to learn about renewing reservations so that I can decide to renew manually, automatically, or not at all. # Automatically renew reservations -You can renew reservations to automatically purchase a replacement when an existing reservation expires. Automatic renewal provides an easy way to continue getting reservation discounts. It also saves you from having to closely monitor a reservation's expiration. With automatic renewal, you prevent savings benefits loss by not having to manually renew. *The renewal setting is turned on by default*. Enable or disable the renewal setting anytime, up to the expiration of the existing reservation. You can also opt in to automatically renew at time of purchase. +You can renew reservations to automatically purchase a replacement when an existing reservation expires. Automatic renewal provides an easy way to continue getting reservation discounts. It also saves you from having to closely monitor a reservation's expiration. With automatic renewal, you prevent savings benefits loss by not having to manually renew. *The renewal setting is turned on by default* when you make a purchase. You can manually turn off the renewal setting at the time of purchase. After purchase, you can enable or disable the renewal setting anytime, up to the expiration of the existing reservation. *When auto-renew is enabled, you have to manually turn it off to stop automatic renewal*. -Renewing a reservation creates a new reservation when the existing reservation expires. It doesn't extend the term of the existing reservation. --Opt in to automatically renew at any time. The renewal price is available 30 days before the expiry of the existing reservation. When you enable renewal more than 30 days before the reservation expiration, you're sent an email detailing renewal costs 30 days before expiration. The reservation price might change between the time that you lock the renewal price and the renewal time. If so, your renewal will not be processed and you can purchase a new reservation in order to continue getting the benefit. +The renewal price is available 30 days before the expiry of the existing reservation. When you enable renewal more than 30 days before the reservation expiration, you're sent an email detailing renewal costs 30 days before expiration. The reservation price might change between the time that you lock the renewal price and the renewal time. If so, your renewal will not be processed and you can purchase a new reservation in order to continue getting the benefit. -There's no obligation to renew and you can opt out of the renewal at any time before the existing reservation expires. +Renewing a reservation creates a new reservation when the existing reservation expires. It doesn't extend the term of the existing reservation. 
## Set up renewal Emails are sent to different people depending on your purchase method: - Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator. -## Next steps +## Related content - To learn more about Azure Reservations, see [What are Azure Reservations?](save-compute-costs-reservations.md) |
data-factory | Connector Azure Database For Mariadb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mariadb.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Azure Database for MariaDB. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. - > [!IMPORTANT] > This connector will be deprecated on **December 31, 2024**. Please migrate to [Azure Database for MySQL connector](connector-azure-database-for-mysql.md) by that date. You can also refer to this [article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/migrating-from-azure-database-for-mariadb-to-azure-database-for/ba-p/3838455) for the Azure Database for MariaDB migration guidance. +This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Azure Database for MariaDB. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. + ## Supported capabilities This Azure Database for MariaDB connector is supported for the following capabilities: |
data-factory | Connector Concur | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Concur. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. --> [!IMPORTANT] -> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/). - > [!IMPORTANT] > This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. +This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Concur. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. + ## Supported capabilities This Concur connector is supported for the following capabilities: |
data-factory | Connector Deprecation Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md | The following connectors are scheduled for deprecation on December 31, 2024. You - [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) - [Concur (Preview)](connector-concur.md)+- [Drill](connector-drill.md) - [Hbase](connector-hbase.md) - [Magento (Preview)](connector-magento.md) - [Marketo (Preview)](connector-marketo.md)+- [Oracle Responsys (Preview)](connector-oracle-responsys.md) - [Paypal (Preview)](connector-paypal.md) - [Phoenix](connector-phoenix.md) -## Connectors that is deprecated +## Connectors that are deprecated The following connector was deprecated. |
data-factory | Connector Drill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-drill.md | +> [!IMPORTANT] +> This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. + This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Drill. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. ## Supported capabilities |
data-factory | Connector Hbase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hbase.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from HBase. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. - > [!IMPORTANT] > This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. +This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from HBase. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. + ## Supported capabilities This HBase connector is supported for the following capabilities: |
data-factory | Connector Magento | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-magento.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Magento. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. --> [!IMPORTANT] -> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/). - > [!IMPORTANT] > This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. +This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Magento. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. + ## Supported capabilities This Magento connector is supported for the following capabilities: |
data-factory | Connector Marketo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-marketo.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Marketo. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. --> [!IMPORTANT] -> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/). - > [!IMPORTANT] > This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. +This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Marketo. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. + ## Supported capabilities This Marketo connector is supported for the following capabilities: The following properties are supported for Marketo linked service: |: |: |: | | type | The type property must be set to: **Marketo** | Yes | | endpoint | The endpoint of the Marketo server. (i.e. 123-ABC-321.mktorest.com) | Yes |-| clientId | The client Id of your Marketo service. | Yes | +| clientId | The client ID of your Marketo service. | Yes | | clientSecret | The client secret of your Marketo service. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes | | useEncryptedEndpoints | Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true. | No | | useHostVerification | Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over TLS. The default value is true. | No | |
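Put together, those properties yield a linked service definition along the following lines. This is a sketch with placeholder values, not the article's exact sample; per the table above, only `type`, `endpoint`, `clientId`, and `clientSecret` are required:

```json
{
    "name": "MarketoLinkedService",
    "properties": {
        "type": "Marketo",
        "typeProperties": {
            "endpoint": "123-ABC-321.mktorest.com",
            "clientId": "<your Marketo client ID>",
            "clientSecret": {
                "type": "SecureString",
                "value": "<your Marketo client secret>"
            },
            "useEncryptedEndpoints": true
        }
    }
}
```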
data-factory | Connector Oracle Responsys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-responsys.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Oracle Responsys. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. - > [!IMPORTANT]-> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/). +> This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. ++This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Oracle Responsys. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. ## Supported capabilities |
data-factory | Connector Paypal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-paypal.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from PayPal. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. --> [!IMPORTANT] -> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/). - > [!IMPORTANT] > This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. +This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from PayPal. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. + ## Supported capabilities This PayPal connector is supported for the following capabilities: |
data-factory | Connector Phoenix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-phoenix.md | -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Phoenix. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. - > [!IMPORTANT] > This connector will be deprecated on **December 31, 2024**. You are recommended to migrate to [ODBC connector](connector-odbc.md) by installing a driver before that date. +This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Phoenix. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. + ## Supported capabilities This Phoenix connector is supported for the following capabilities: |
data-factory | Scenario Ssis Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-overview.md | Azure-SSIS Integration Runtime (IR) in Azure Data Factory (ADF) or Synapse Pipel This article highlights the migration process of your ETL workloads from on-premises SSIS to SSIS in ADF. The migration process consists of two phases: **Assessment** and **Migration**. +> [!IMPORTANT] +> Data Migration Assistant (DMA) is deprecated. For more information, see the [DMA product documentation](/sql/dma/dma-overview). + ## Assessment To establish a complete migration plan, a thorough assessment will help identify issues with the source SSIS packages that would prevent a successful migration. |
ddos-protection | Ddos Protection Sku Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md | The following table shows features and corresponding tiers. DDoS Network Protection and DDoS IP Protection have the following limitations: -- PaaS services (multi-tenant), which includes Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than APIM with virtual network integration (For more information, see https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-ddos-standard-protection-now-supports-apim-in-vnet/ba-p/3641671), and Azure Virtual WAN aren't currently supported. +- PaaS services (multi-tenant), which include Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than APIM with virtual network integration, and Azure Virtual WAN, aren't currently supported. For more information, see [Azure DDoS Protection APIM in VNET Integration](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-ddos-standard-protection-now-supports-apim-in-vnet/ba-p/3641671). - Protecting a public IP resource attached to a NAT Gateway isn't supported. - Virtual machines in Classic/RDFE deployments aren't supported. - A VPN gateway or virtual network gateway is protected by a DDoS policy. Adaptive tuning isn't supported at this stage. -- Partially supported: the Azure DDoS Protection service can protect a public load balancer with a public IP address prefix linked to its frontend. It effectively detects and mitigates DDoS attacks. However, telemetry and logging for the protected public IP addresses within the prefix range are currently unavailable. +- The Azure DDoS Protection service can protect a public load balancer with a public IP address prefix linked to its frontend. This is supported for the DDoS Network Protection SKU. +- DDoS telemetry for individual virtual machine instances in Virtual Machine Scale Sets is available with Flexible orchestration mode. DDoS IP Protection is similar to Network Protection, but has the following additional limitation: |
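When reviewing these limitations (for example, public IPs attached to a NAT gateway), it can help to audit which public IP addresses currently carry DDoS protection. A quick sketch, assuming a recent Az.Network module that exposes the `DdosSettings` property:

```powershell
# Minimal sketch: list public IPs and their DDoS protection mode.
# DdosSettings may be empty on resources created with older API versions.
Get-AzPublicIpAddress |
    Select-Object Name, ResourceGroupName,
        @{ Name = 'ProtectionMode'; Expression = { $_.DdosSettings.ProtectionMode } } |
    Format-Table -AutoSize
```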
dev-box | How To Configure Intune Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-intune-conditional-access-policies.md | Title: Configuring Microsoft Intune conditional access policies for dev boxes + Title: Microsoft Intune conditional access policies for dev boxes description: Learn how to configure Microsoft Intune dynamic device groups and conditional access policies to restrict access to dev boxes. Previously updated : 12/20/2023 Last updated : 09/04/2024 # Customer intent: As a platform engineer, I want to configure conditional access policies in Microsoft Intune so that I can control access to dev boxes.-In this article, you learn how to configure conditional access policies in Microsoft Intune to control access to dev boxes. For Dev Box, itΓÇÖs common to configure conditional access policies to restrict who can access dev box, what they can do, and where they can access from. To configure conditional access policies, you can use Microsoft Intune to create dynamic device groups and conditional access policies. +In this article, you learn how to configure conditional access policies in Microsoft Intune to control access to dev boxes. For Dev Box, it's common to configure conditional access policies to restrict who can access a dev box, what they can do, and where they can access from. To configure conditional access policies, you can use Microsoft Intune to create dynamic device groups and conditional access policies. Some usage scenarios for conditional access in Microsoft Dev Box include: Some usage scenarios for conditional access in Microsoft Dev Box include: - Restricting the ability to copy/paste from the dev box - Restricting access to dev box from only certain geographies -Conditional access is the protection of regulated content in a system by requiring certain criteria to be met before granting access to the content. Conditional access policies at their simplest are if-then statements. If a user wants to access a resource, then they must complete an action. Conditional access policies are a powerful tool for being able to keep your organizationΓÇÖs devices secure and environments compliant. +Conditional access is the protection of regulated content in a system by requiring certain criteria to be met before granting access to the content. Conditional access policies at their simplest are if-then statements. If a user wants to access a resource, then they must complete an action. Conditional access policies are a powerful tool for keeping your organization's devices secure and environments compliant. ## Prerequisites After creating your device group and validating that your dev box devices are members, 1. Enter a **Name** for your conditional access policy. -1. Under **Users**, select the device group you created in the previous section. +1. Under **Users**, select **All users**. + +1. Under **Devices**, select the device group you created in the previous section. 1. Under **Cloud apps or actions**, select **No cloud apps, actions, or authentication contexts selected**. |
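The dynamic device group referenced in these steps can also be created with the Microsoft Graph PowerShell SDK. A rough sketch, assuming your dev boxes share a "DevBox" display-name prefix (the prefix and group names here are hypothetical; match the membership rule to your own device naming):

```powershell
# Rough sketch: create a dynamic device group that collects dev boxes by
# display name. The "DevBox" prefix and the group names are hypothetical.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "Dev Box Devices" `
    -MailEnabled:$false -MailNickname "devboxdevices" -SecurityEnabled `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule '(device.displayName -startsWith "DevBox")' `
    -MembershipRuleProcessingState "On"
```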
dns | Tutorial Alias Rr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-rr.md | After adding the alias record, you can verify that it's working by using a tool ``` Server: UnKnown- Address: 40.90.4.1 + Address: 203.0.113.10 Name: test.contoso.com Address: 10.10.10.10 After adding the alias record, you can verify that it's working by using a tool ``` Server: UnKnown- Address: 40.90.4.1 + Address: 203.0.113.10 Name: test.contoso.com Address: 10.11.11.11 |
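The output above comes from nslookup. If you prefer PowerShell, an equivalent lookup for the record shown in the tutorial is:

```powershell
# Query the alias record's A records and show the resolved addresses.
Resolve-DnsName -Name "test.contoso.com" -Type A | Format-Table Name, IPAddress
```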
expressroute | About Fastpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md | -ExpressRoute virtual network gateway is designed to exchange network routes and route network traffic. FastPath is designed to improve the data path performance between your on-premises network and your virtual network. When enabled, FastPath sends network traffic directly to virtual machines in the virtual network, bypassing the gateway. +ExpressRoute virtual network gateway is designed to exchange network routes and route network traffic. FastPath is designed to improve the data path performance between your on-premises network and your virtual network. When enabled, FastPath sends network traffic directly to virtual machines in the virtual network, bypassing the ExpressRoute virtual network gateway. :::image type="content" source=".\media\about-fastpath\fastpath-vnet-peering.png" alt-text="Diagram of an ExpressRoute connection with Fastpath and virtual network peering."::: ExpressRoute virtual network gateway is designed to exchange network routes and ### Circuits -FastPath is available on all ExpressRoute circuits. Support for virtual network peering and UDR over FastPath is now generally available in all regions and only for connections associated to ExpressRoute Direct circuits. Limited general availability (GA) support for Private Endpoint/Private Link connectivity is only available for connections associated to ExpressRoute Direct circuits. +FastPath is available on all ExpressRoute circuits. Support for virtual network peering and UDR over FastPath is now generally available in all regions within the public cloud, and only for connections associated to ExpressRoute Direct circuits. Limited general availability (GA) support for Private Endpoint/Private Link connectivity is only available for connections associated to ExpressRoute Direct circuits, within limited regions, and for limited services behind a private endpoint. ### Gateways -FastPath still requires a virtual network gateway to be created to exchange routes between a virtual network and an on-premises network. For more information about virtual network gateways and ExpressRoute, including performance information, and gateway SKUs, see [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md). +FastPath still requires an ExpressRoute virtual network gateway to be created to exchange routes between a virtual network and an on-premises network. For more information about virtual network gateways and ExpressRoute, including performance information, and gateway SKUs, see [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md). -To configure FastPath, the virtual network gateway must be either: +To configure FastPath, the ExpressRoute virtual network gateway must be one of these two SKUs: * Ultra Performance * ErGw3AZ |
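FastPath is enabled on the connection between the circuit and the ExpressRoute virtual network gateway. As a minimal sketch (all resource names below are hypothetical placeholders), the `-ExpressRouteGatewayBypass` switch enables it when the connection is created:

```powershell
# Minimal sketch: create an ExpressRoute connection with FastPath enabled.
# All resource names below are hypothetical placeholders.
$circuit = Get-AzExpressRouteCircuit -Name "myCircuit" -ResourceGroupName "myRG"
$gateway = Get-AzVirtualNetworkGateway -Name "myErGateway" -ResourceGroupName "myRG"

New-AzVirtualNetworkGatewayConnection -Name "myConnection" -ResourceGroupName "myRG" `
    -Location $gateway.Location -VirtualNetworkGateway1 $gateway `
    -PeerId $circuit.Id -ConnectionType ExpressRoute -ExpressRouteGatewayBypass
```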
expressroute | Traffic Collector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/traffic-collector.md | ExpressRoute Traffic Collector supports both Provider-managed circuits and Expre ExpressRoute Traffic Collector is supported in the following regions: +> [!NOTE] +> If your desired region isn't yet supported, you can deploy ExpressRoute Traffic Collector to another region in the same geopolitical region as your ExpressRoute circuit. + | Region | Region Name | | | -- | | North America | <ul><li>Canada East</li><li>Canada Central</li><li>Central US</li><li>Central US EUAP</li><li>North Central US</li><li>South Central US</li><li>West Central US</li><li>East US</li><li>East US 2</li><li>West US</li><li>West US 2</li><li>West US 3</li></ul> | |
firewall | Firewall Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-azure-policy.md | + + Title: Use Azure Policy to help secure your Azure Firewall deployments +description: You can use Azure Policy to help secure your Azure Firewall deployments. ++++ Last updated : 09/05/2024+++# Use Azure Policy to help secure your Azure Firewall deployments ++Azure Policy is a service in Azure that allows you to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements. Azure Policy does this by evaluating your resources for noncompliance with assigned policies. For example, you can have a policy to allow only a certain size of virtual machine in your environment, or to enforce a specific tag on resources. ++Azure Policy can be used to govern Azure Firewall configurations by applying policies that define what configurations are allowed or disallowed. This helps ensure that the firewall settings are consistent with organizational compliance requirements and security best practices. ++## Prerequisites ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++## Policies available for Azure Firewall ++The following policies are available for Azure Firewall: ++- **Enable Threat Intelligence in Azure Firewall Policy** ++ This policy makes sure that any Azure Firewall configuration without threat intelligence enabled is marked as noncompliant. +- **Deploy Azure Firewall across Multiple Availability Zones** ++ This policy allows Azure Firewall deployments only with a multiple Availability Zone configuration. +- **Upgrade Azure Firewall Standard to Premium** ++ This policy recommends upgrading Azure Firewall Standard to Premium so that all the Premium version's advanced firewall features can be used. This further enhances the security of the network. +- **Azure Firewall Policy Analytics should be enabled** ++ This policy ensures that Policy Analytics is enabled on the firewall to effectively tune and optimize firewall rules. +- **Azure Firewall should only allow Encrypted Traffic** + + This policy analyzes existing rules and ports in the Azure Firewall policy and audits the firewall policy to make sure that only encrypted traffic is allowed into the environment. +- **Azure Firewall should have DNS Proxy Enabled** + + This policy ensures that the DNS proxy feature is enabled on Azure Firewall deployments. +- **Enable IDPS in Azure Firewall Premium Policy** + + This policy ensures that the IDPS feature is enabled on Azure Firewall deployments to effectively protect the environment from various threats and vulnerabilities. +- **Enable TLS inspection on Azure Firewall Policy** + + This policy mandates that TLS inspection is enabled to detect, alert, and mitigate malicious activity in HTTPS traffic. +- **Migrate from Azure Firewall Classic Rules to Firewall Policy** + + This policy recommends migrating from Firewall Classic Rules to Firewall Policy. +- **VNET with specific tag must have Azure Firewall Deployed** + + This policy finds all virtual networks with a specified tag, checks whether an Azure Firewall is deployed, and flags the virtual network as noncompliant if no Azure Firewall exists. 
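Once assigned, these policies report compliance through Azure Policy. As a quick sketch for reviewing results at scale (assuming the Az.PolicyInsights module; the display-name filter is illustrative):

```powershell
# Minimal sketch: list noncompliant policy states, narrowed to firewall-
# related assignments by name. Requires the Az.PolicyInsights module.
Get-AzPolicyState -Filter "ComplianceState eq 'NonCompliant'" |
    Where-Object { $_.PolicyAssignmentName -like "*firewall*" } |
    Select-Object ResourceId, PolicyAssignmentName, ComplianceState
```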
++The following steps show how you can create an Azure Policy that requires all Firewall Policies to have the Threat Intelligence feature enabled (either **Alert Only**, or **Alert and deny**). The Azure Policy scope is set to the resource group that you create. ++## Create a resource group ++This resource group is set as the scope for the Azure Policy, and is where you create the Firewall Policy. ++1. From the Azure portal, select **Create a resource**. +1. In the search box, type **resource group** and press Enter. +1. Select **Resource group** from the search results. +1. Select **Create**. +1. Select your subscription. +1. Type a name for your resource group. +1. Select a region. +1. Select **Next : Tags**. +1. Select **Next : Review + create**. +1. Select **Create**. ++## Create an Azure Policy ++Now create an Azure Policy in your new resource group. This policy ensures that any firewall policy has Threat Intelligence enabled. ++1. From the Azure portal, select **All services**. +1. In the filter box, type **policy** and press Enter. +1. Select **Policy** in the search results. +1. On the Policy page, select **Getting started**. +1. Under **Assign policies**, select **View definitions**. +1. On the Definitions page, type **firewall** in the search box. +1. Select **Azure Firewall Policy should enable Threat Intelligence**. +1. Select **Assign policy**. +1. For **Scope**, select your subscription and your new resource group. +1. Select **Select**. +1. Select **Next**. +1. On the **Parameters** page, clear the **Only show parameters that need input or review** check box. +1. For **Effect**, select **Deny**. +1. Select **Review + create**. +1. Select **Create**. ++## Create a Firewall Policy ++Now you attempt to create a Firewall Policy with Threat Intelligence disabled. ++1. From the Azure portal, select **Create a resource**. +1. In the search box, type **firewall policy** and press Enter. +1. Select **Firewall Policy** in the search results. +1. Select **Create**. +1. Select your subscription. +1. For **Resource group**, select the resource group that you created previously. +1. In the **Name** text box, type a name for your policy. +1. Select **Next : DNS Settings**. +1. Continue selecting through to the **Threat intelligence** page. +1. For **Threat intelligence mode**, select **Disabled**. +1. Select **Review + create**. ++You should see an error that says your resource was disallowed by policy, confirming that your Azure Policy doesn't allow firewall policies that have Threat Intelligence disabled. +++## Related content ++- [What is Azure Policy?](../governance/policy/overview.md) +- [Govern your Azure Firewall configuration with Azure Policies](https://techcommunity.microsoft.com/t5/azure-network-security-blog/govern-your-azure-firewall-configuration-with-azure-policies/ba-p/4189902) ++ |
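The portal assignment described above can also be scripted. A minimal sketch, assuming the Az.Resources policy cmdlets, the definition display name used in the portal steps, and a hypothetical resource group; the `effect` parameter name is an assumption that matches most built-in definitions:

```powershell
# Minimal sketch: assign the threat-intelligence policy with a Deny effect.
# The resource group name is a hypothetical placeholder. Newer Az.Resources
# versions flatten the display name to $_.DisplayName instead.
$rg = Get-AzResourceGroup -Name "myFirewallPolicyRG"
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq "Azure Firewall Policy should enable Threat Intelligence" }

New-AzPolicyAssignment -Name "require-threat-intel" `
    -PolicyDefinition $definition -Scope $rg.ResourceId `
    -PolicyParameterObject @{ effect = "Deny" }
```

With the assignment in place, the Firewall Policy creation attempt in the walkthrough fails with the "disallowed by policy" error, which is the expected result.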
hdinsight-aks | Cluster Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/cluster-storage.md | Last updated 08/3/2023 # Introduction to cluster storage [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Azure HDInsight on AKS can seamlessly integrate with Azure Storage, which is a general-purpose storage solution that works well with many other Azure services. Azure Data Lake Storage Gen2 (ADLS Gen 2) is the default file system for the clusters. |
hdinsight-aks | Concept Azure Monitor Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/concept-azure-monitor-integration.md | Last updated 08/29/2023 # Azure Monitor integration [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS offers an integration with Azure Monitor that can be used to monitor cluster pools and their clusters. Azure Monitor collects metrics and logs from multiple resources into an Azure Monitor Log Analytics workspace, which presents the data as structured, queryable tables that can be used to configure custom alerts. Azure Monitor logs provide an excellent overall experience for monitoring workloads and interacting with logs, especially if you have multiple clusters. |
hdinsight-aks | Concept Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/concept-security.md | Last updated 05/11/2024 # Overview of enterprise security in Azure HDInsight on AKS [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Azure HDInsight on AKS offers security by default, and there are several methods to address your enterprise security needs. This article covers overall security architecture, and security solutions by dividing them into four traditional security pillars: perimeter security, authentication, authorization, and encryption. |
hdinsight-aks | Control Egress Traffic From Hdinsight On Aks Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/control-egress-traffic-from-hdinsight-on-aks-clusters.md | Last updated 05/21/2024 # Control network traffic from HDInsight on AKS Cluster pools and clusters [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS is a managed Platform as a Service (PaaS) that runs on Azure Kubernetes Service (AKS). HDInsight on AKS allows you to deploy popular Open-Source Analytics workloads like Apache Spark™, Apache Flink®️, and Trino without the overhead of managing and monitoring containers. By default, HDInsight on AKS clusters allow outbound network connections from clusters to any destination, if the destination is reachable from the node's network interface. This means that cluster resources can access any public or private IP address, domain name, or URL on the internet or on your virtual network. |
hdinsight-aks | Create Cluster Error Dictionary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-error-dictionary.md | Last updated 08/31/2023 # Cluster creation errors on Azure HDInsight on AKS [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article describes how to troubleshoot and resolve errors that could occur when you create Azure HDInsight on AKS clusters. |Sr. No|Error message|Cause|Resolution| |
hdinsight-aks | Create Cluster Using Arm Template Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template-script.md | Last updated 02/12/2024 # Export cluster ARM template - Azure portal [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article describes how to generate an ARM template for your cluster automatically. You can use the ARM template to modify, clone, or recreate a cluster starting from the existing cluster's configurations. ## Prerequisites |
hdinsight-aks | Create Cluster Using Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template.md | Last updated 02/12/2024 # Export cluster ARM template - Azure CLI [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article describes how to generate an ARM template using Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] |
hdinsight-aks | Customize Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/customize-clusters.md | Last updated 08/29/2023 # Customize Azure HDInsight on AKS clusters using script actions [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]++ Azure HDInsight on AKS provides a configuration method called Script Actions that invokes custom scripts to customize the cluster. These scripts can be used to install more packages/JARs and change configuration settings. Script actions can be used only during cluster creation; script actions after cluster creation are on the roadmap. Currently, script actions are available only with Spark clusters. |
hdinsight-aks | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/faq.md | Last updated 08/29/2023 This article addresses some common questions about Azure HDInsight on AKS. [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ ## General * What is HDInsight on AKS? |
hdinsight-aks | Application Mode Cluster On Hdinsight On Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/application-mode-cluster-on-hdinsight-on-aks.md | Last updated 03/21/2024 # Apache Flink Application Mode cluster on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + HDInsight on AKS now offers a Flink Application mode cluster. This cluster lets you manage the Flink Application mode cluster lifecycle using the Azure portal, with an easy-to-use interface, and Azure Resource Management REST APIs. Application mode clusters are designed to support large and long-running jobs with dedicated resources, and handle resource-intensive or extensive data processing tasks. This deployment mode enables you to assign dedicated resources for specific Flink applications, ensuring that they have enough computing power and memory to handle large workloads efficiently. |
hdinsight-aks | Assign Kafka Topic Event Message To Azure Data Lake Storage Gen2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md | Last updated 03/29/2024 # Write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Apache Flink uses file systems to consume and persistently store data, both for the results of applications and for fault tolerance and recovery. In this article, learn how to write event messages into Azure Data Lake Storage Gen2 with DataStream API. ## Prerequisites |
hdinsight-aks | Azure Service Bus Demo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-service-bus-demo.md | Last updated 04/02/2024 # Use Apache Flink on HDInsight on AKS with Azure Service Bus [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article provides an overview and demonstration of the Apache Flink DataStream API on HDInsight on AKS for Azure Service Bus. The demonstration Flink job is designed to read messages from an [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) and write them to [Azure Data Lake Storage Gen2](./assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md) (ADLS Gen2). ## Prerequisites |
hdinsight-aks | Change Data Capture Connectors For Apache Flink | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md | Last updated 04/02/2024 # Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Change Data Capture (CDC) is a technique you can use to track row-level changes in database tables in response to create, update, and delete operations. In this article, we use [CDC Connectors for Apache Flink®](https://github.com/ververica/flink-cdc-connectors), which offer a set of source connectors for Apache Flink. The connectors integrate [Debezium®](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/debezium/#debezium-format) as the engine to capture the data changes. In this article, learn how to perform Change Data Capture of SQL Server using Datastream API. The SQLServer CDC connector can also be a DataStream source. |
hdinsight-aks | Cosmos Db For Apache Cassandra | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/cosmos-db-for-apache-cassandra.md | Last updated 04/02/2024 # Sink Apache Kafka® messages into Azure Cosmos DB for Apache Cassandra, with Apache Flink® on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This example uses [Apache Flink](../flink/flink-overview.md) to sink [HDInsight for Apache Kafka](/azure/hdinsight/kafka/apache-kafka-introduction) messages into [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra/introduction). This example is useful when engineers prefer real-time aggregated data for analysis. With access to historical aggregated data, you can build machine learning (ML) models to derive insights or actions. You can also ingest IoT data into Apache Flink to aggregate data in real time and store it in Apache Cassandra. |
hdinsight-aks | Create Kafka Table Flink Kafka Sql Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/create-kafka-table-flink-kafka-sql-connector.md | Last updated 03/14/2024 # Create Apache Kafka® table on Apache Flink® on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Using this example, learn how to Create Kafka table on Apache FlinkSQL. ## Prerequisites |
hdinsight-aks | Datastream Api Mongodb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/datastream-api-mongodb.md | Last updated 03/22/2024 # Use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Apache Flink provides a MongoDB connector for reading and writing data from and to MongoDB collections with at-least-once guarantees. This example demonstrates on how to use Apache Flink 1.17.0 on HDInsight on AKS along with your existing MongoDB as Sink and Source with Flink DataStream API MongoDB connector. |
hdinsight-aks | Fabric Lakehouse Flink Datastream Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/fabric-lakehouse-flink-datastream-api.md | Last updated 03/23/2024 # Connect to OneLake in Microsoft Fabric with HDInsight on AKS cluster for Apache Flink® [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This example demonstrates on how to use HDInsight on AKS cluster for Apache Flink® with [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview). [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. |
hdinsight-aks | Flink Catalog Delta Hive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-catalog-delta-hive.md | Last updated 03/29/2024 # Create Delta Catalog with Apache Flink® on Azure HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + [Delta Lake](https://docs.delta.io/latest/delta-intro.html) is an open source project that enables building a Lakehouse architecture on top of data lakes. Delta Lake provides ACID transactions, scalable metadata handling, and unifies streaming and batch data processing on top of existing data lakes. In this article, we learn how Apache Flink SQL/TableAPI is used to implement a Delta catalog for Apache Flink, with Hive Catalog. Delta Catalog delegates all metastore communication to Hive Catalog. It uses the existing logic for Hive or In-Memory metastore communication that is already implemented in Flink. |
hdinsight-aks | Flink Catalog Iceberg Hive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-catalog-iceberg-hive.md | Last updated 04/19/2024 # Create Iceberg Catalog in Apache Flink® on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + [Apache Iceberg](https://iceberg.apache.org/) is an open table format for huge analytic datasets. Iceberg adds tables to compute engines like Apache Flink, using a high-performance table format that works just like a SQL table. Apache Iceberg [supports](https://iceberg.apache.org/multi-engine-support/#apache-flink) both Apache Flink’s DataStream API and Table API. In this article, we learn how to use Iceberg Table managed in Hive catalog, with Apache Flink on HDInsight on AKS cluster. |
hdinsight-aks | Flink Cluster Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-cluster-configuration.md | Last updated 09/26/2023 # Troubleshoot Apache Flink® cluster configurations on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Incorrect cluster configuration may lead to deployment errors. Typically those errors occur when incorrect configuration provided in ARM template or input in Azure portal, for example, on [Configuration management](flink-configuration-management.md) page. Example configuration error: |
hdinsight-aks | Flink Configuration Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-configuration-management.md | Last updated 04/25/2024 # Apache Flink® Configuration management in HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + HDInsight on AKS provides a set of default configurations of Apache Flink for most properties, and a few based on common application profiles. However, if you need to tweak Flink configuration properties to improve performance for certain applications with state usage, parallelism, or memory settings, you can change the Flink job configuration by using the Flink Jobs section in the HDInsight on AKS cluster. 1. Go to Settings > Flink Jobs > Update. In HDInsight on AKS, Flink uses Kubernetes as the backend. Even if the Job Manager f ### FAQ -**Why does the Job failure in between. +**Why does the job fail in between? Even if jobs fail abruptly, if checkpoints happen continuously, the job is restarted by default from the latest checkpoint.** Change the job strategy in between? |
hdinsight-aks | Flink Create Cluster Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-create-cluster-portal.md | Last updated 12/28/2023 # Create an Apache Flink® cluster in HDInsight on AKS with Azure portal [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Complete the following steps to create an Apache Flink cluster on Azure portal. ## Prerequisites Flink clusters can be created once cluster pool deployment has been completed, l |Subscription | This field is autopopulated with the Azure subscription that was registered for the Cluster Pool.| |Resource Group|This field is autopopulated and shows the resource group on the cluster pool.| |Region|This field is autopopulated and shows the region selected on the cluster pool.|- |Cluster Pool|This field is autopopulated and shows the cluster pool name on which the cluster is now getting created.To create a cluster in a different pool, find that cluster pool in the portal and click **+ New cluster**.| + |Cluster Pool|This field is autopopulated and shows the cluster pool name on which the cluster is now getting created. To create a cluster in a different pool, find that cluster pool in the portal and click **+ New cluster**.| |HDInsight on AKS Pool Version|This field is autopopulated and shows the cluster pool version on which the cluster is now getting created.| |HDInsight on AKS Version | Select the minor or patch version of the HDInsight on AKS of the new cluster.| |Cluster type | From the drop-down list, select Flink.| |
hdinsight-aks | Flink How To Setup Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-how-to-setup-event-hub.md | Last updated 04/02/2024 # Connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka® [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + A well-known use case for Apache Flink is stream analytics. A popular choice for many users is to consume data streams that are ingested using Apache Kafka. Typical installations of Flink and Kafka start with event streams being pushed to Kafka, which can be consumed by Flink jobs. Azure Event Hubs provides an Apache Kafka endpoint on an event hub, which enables users to connect to the event hub using the Kafka protocol. In this article, we explore how to connect [Azure Event Hubs](/azure/event-hubs/event-hubs-about) with [Apache Flink on HDInsight on AKS](./flink-overview.md) and cover the following |
hdinsight-aks | Flink Job Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-job-management.md | Last updated 04/01/2024 # Apache Flink® job management in HDInsight on AKS clusters [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + HDInsight on AKS provides a feature to manage and submit Apache Flink® jobs directly through the Azure portal (user-friendly interface) and ARM REST APIs. This feature empowers users to efficiently control and monitor their Apache Flink jobs without requiring deep cluster-level knowledge. To authenticate to the Flink ARM REST API, users need to get the bearer token or acces `Invoke-RestMethod -Uri $restUri -Method POST -Headers @{ Authorization = "Bearer $tok" } -Body $jsonString -ContentType "application/json"` -- **Savepoint:** Rest API to trigger savepoint for job.+- **Savepoint:** REST APIs to trigger a savepoint for a job. | Option | Value | | -- | - | |
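For context on the `Invoke-RestMethod` call shown above, the bearer token can come from `Get-AzAccessToken`. The sketch below only illustrates the plumbing; the URI and job payload are hypothetical placeholders, not the documented endpoint shape:

```powershell
# Rough sketch: acquire an ARM bearer token and call a cluster job endpoint.
# $restUri and the body fields are hypothetical placeholders; use the job
# payload documented for your cluster's ARM resource.
$tok = (Get-AzAccessToken).Token
$restUri = "https://management.azure.com/<cluster-resource-id>/runJob?api-version=<api-version>"
$jsonString = @{ jobName = "my-flink-job"; action = "SAVEPOINT" } | ConvertTo-Json

Invoke-RestMethod -Uri $restUri -Method POST `
    -Headers @{ Authorization = "Bearer $tok" } `
    -Body $jsonString -ContentType "application/json"
```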
hdinsight-aks | Flink Job Orchestration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-job-orchestration.md | Last updated 10/28/2023 # Apache Flink® job orchestration using Azure Data Factory Workflow Orchestration Manager (powered by Apache Airflow) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article covers managing a Flink job using the [Azure REST API](flink-job-management.md#arm-rest-api) and orchestrating a data pipeline with Azure Data Factory Workflow Orchestration Manager. The [Azure Data Factory Workflow Orchestration Manager](/azure/data-factory/concepts-workflow-orchestration-manager) service is a simple and efficient way to create and manage [Apache Airflow](https://airflow.apache.org/) environments, enabling you to run data pipelines at scale easily. Apache Airflow is an open-source platform that programmatically creates, schedules, and monitors complex data workflows. It allows you to define a set of tasks, called operators, which can be combined into directed acyclic graphs (DAGs) to represent data pipelines. |
hdinsight-aks | Flink Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-overview.md | Last updated 10/28/2023 # What is Apache Flink® in Azure HDInsight on AKS? (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + [Apache Flink](https://flink.apache.org/) is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, perform computations and stateful streaming applications at in-memory speed and at any scale. Applications are parallelized into possibly thousands of tasks that are distributed and concurrently executed in a cluster. Therefore, an application can use unlimited amounts of vCPUs, main memory, disk and network IO. Moreover, Flink easily maintains large application state. Its asynchronous and incremental checkpointing algorithm ensures minimal influence on processing latencies while guaranteeing exactly once state consistency. Apache Flink is a massively scalable analytics engine for stream processing. |
hdinsight-aks | Flink Table Api And Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-table-api-and-sql.md | Last updated 10/27/2023 # Table API and SQL in Apache Flink® clusters on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing. The Table API is a language-integrated query API that allows the composition of queries from relational operators such as selection, filter, and join intuitively. Flink’s SQL support is based on Apache Calcite, which implements the SQL standard. The Table API and SQL interfaces integrate seamlessly with each other and Flink’s DataStream API. You can easily switch between all APIs and libraries, which build upon them. |
hdinsight-aks | Flink Web Ssh On Portal To Flink Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-web-ssh-on-portal-to-flink-sql.md | Last updated 02/04/2024 # Access Apache Flink® CLI client using Secure Shell (SSH) on HDInsight on AKS clusters with Azure portal [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This example shows how to enter the Apache Flink CLI client on HDInsight on AKS clusters using SSH in the Azure portal; we cover both SQL and Flink DataStream. ## Prerequisites |
hdinsight-aks | Fraud Detection Flink Datastream Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/fraud-detection-flink-datastream-api.md | Last updated 04/09/2024 # Fraud detection with the Apache Flink® DataStream API [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + In this article, learn how to build a fraud detection system for alerting on suspicious credit card transactions. Using a simple set of rules, you see how Flink allows us to implement advanced business logic and act in real-time. This sample is from the use case on Apache Flink [Fraud Detection with the DataStream API](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/try-flink/datastream/). |
hdinsight-aks | Hive Dialect Flink | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/hive-dialect-flink.md | Last updated 04/17/2024 # Hive dialect in Apache Flink® clusters on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + In this article, learn how to use Hive dialect in Apache Flink clusters on HDInsight on AKS. ## Introduction |
hdinsight-aks | Join Stream Kafka Table Filesystem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/join-stream-kafka-table-filesystem.md | Last updated 03/14/2024 # Enrich the events from Apache Kafka® with attributes from ADLS Gen2 with Apache Flink® [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + In this article, you learn how to enrich real-time events by joining a stream from Kafka with a table on ADLS Gen2 using Flink Streaming. We use the Flink Streaming API to join events from HDInsight Kafka with attributes from ADLS Gen2. Then we sink the attribute-joined events into another Kafka topic. ## Prerequisites |
hdinsight-aks | Monitor Changes Postgres Table Flink | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/monitor-changes-postgres-table-flink.md | Last updated 03/29/2024 # Change Data Capture (CDC) of PostgreSQL table using Apache Flink® [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Change Data Capture (CDC) is a technique you can use to track row-level changes in database tables in response to create, update, and delete operations. In this article, we use [CDC Connectors for Apache Flink®](https://github.com/ververica/flink-cdc-connectors), which offer a set of source connectors for Apache Flink. The connectors integrate [Debezium®](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/debezium/#debezium-format) as the engine to capture the data changes. Flink supports interpreting Debezium JSON and Avro messages as INSERT/UPDATE/DELETE messages in the Apache Flink SQL system. |
hdinsight-aks | Process And Consume Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/process-and-consume-data.md | Last updated 04/03/2024 # Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + A well-known use case for Apache Flink is stream analytics. A popular choice for many users is to consume data streams that are ingested using Apache Kafka. Typical installations of Flink and Kafka start with event streams being pushed to Kafka, which can be consumed by Flink jobs. This example uses HDInsight on AKS clusters running Flink 1.17.0 to process streaming data by consuming and producing Kafka topics. |
hdinsight-aks | Sink Kafka To Kibana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-kafka-to-kibana.md | Last updated 04/09/2024 # Use Elasticsearch with Apache Flink on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Apache Flink for real-time analytics can be used to build a dashboard application that visualizes the streaming data by using Elasticsearch and Kibana. As an example, you can use Flink to analyze a stream of taxi ride events and compute metrics. Metrics can include number of rides per hour, the average fare per ride, or the most popular pickup locations. You can write these metrics to an Elasticsearch index by using a Flink sink. Then you can use Kibana to connect and create charts or dashboards to display metrics in real time. |
hdinsight-aks | Sink Sql Server Table Using Flink Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-sql-server-table-using-flink-sql.md | Last updated 10/27/2023 # Change Data Capture (CDC) of SQL Server using Apache Flink® [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Change Data Capture (CDC) is a technique you can use to track row-level changes in database tables in response to create, update, and delete operations. In this article, we use [CDC Connectors for Apache Flink®](https://github.com/ververica/flink-cdc-connectors), which offer a set of source connectors for Apache Flink. The connectors integrate [Debezium®](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/debezium/#debezium-format) as the engine to capture the data changes. Apache Flink supports interpreting Debezium JSON and Avro messages as INSERT/UPDATE/DELETE messages in the Flink SQL system. |
hdinsight-aks | Start Sql Client Cli Gateway Mode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/start-sql-client-cli-gateway-mode.md | Last updated 04/17/2024 # Start SQL Client CLI in gateway mode [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This tutorial shows you how to start the SQL Client CLI in gateway mode on an Apache Flink 1.17.0 cluster on HDInsight on AKS. In gateway mode, the CLI submits the SQL to the specified remote gateway to execute statements. ``` |
hdinsight-aks | Use Apache Nifi With Datastream Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-apache-nifi-with-datastream-api.md | Last updated 03/25/2024 # Use Apache NiFi to consume processed Apache Kafka® topics from Apache Flink® and publish into ADLS Gen2 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Apache NiFi is a software project from the Apache Software Foundation designed to automate the flow of data between software systems. It supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. For more information, see [Apache NiFi](https://nifi.apache.org) |
hdinsight-aks | Use Azure Pipelines To Run Flink Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-azure-pipelines-to-run-flink-jobs.md | Last updated 10/27/2023 # How to use Azure Pipelines with Apache Flink® on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + In this article, you'll learn how to use Azure Pipelines with HDInsight on AKS to submit Flink jobs with the cluster's REST API. We guide you through the process using a sample YAML pipeline and a PowerShell script, both of which streamline the automation of the REST API interactions. |
hdinsight-aks | Use Flink Cli To Submit Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-cli-to-submit-jobs.md | Last updated 10/27/2023 # Apache Flink® Command-Line Interface (CLI) on HDInsight on AKS clusters [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Apache Flink provides a CLI (Command Line Interface) **bin/flink** to run jobs (programs) that are packaged as JAR files and to control their execution. The CLI is part of the Flink setup and can be set up on a single-node VM. It connects to the running JobManager specified in **conf/flink-conf.yaml**. ## Installation Steps |
hdinsight-aks | Use Flink Delta Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-delta-connector.md | Last updated 04/25/2024 # How to use Flink/Delta Connector [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + By using Apache Flink and Delta Lake together, you can create a reliable and scalable data lakehouse architecture. The Flink/Delta Connector allows you to write data to Delta tables with ACID transactions and exactly-once processing. This means that your data streams are consistent and error-free, even if you restart your Flink pipeline from a checkpoint. The Flink/Delta Connector ensures that your data isn't lost or duplicated, and that it matches the Flink semantics. In this article, you learn how to use the Flink/Delta connector. |
hdinsight-aks | Use Flink To Sink Kafka Message Into Hbase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-to-sink-kafka-message-into-hbase.md | Last updated 05/01/2024 # Write messages to Apache HBase® with Apache Flink® DataStream API [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + In this article, learn how to write messages to HBase with Apache Flink DataStream API. ## Overview |
hdinsight-aks | Use Hive Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-catalog.md | Last updated 03/29/2024 # How to use Hive Catalog with Apache Flink® on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This example uses Hive’s Metastore as a persistent catalog with Apache Flink’s Hive Catalog. We use this functionality to store Kafka table and MySQL table metadata in Flink across sessions. Flink uses the Kafka table registered in Hive Catalog as a source, performs a lookup, and sinks the result to a MySQL database. |
hdinsight-aks | Use Hive Metastore Datastream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-metastore-datastream.md | Last updated 03/29/2024 # Use Hive Metastore with Apache Flink® DataStream API [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Over the years, Hive Metastore has evolved into a de facto metadata center in the Hadoop ecosystem. Many companies have a separate Hive Metastore service instance in their production environments to manage all their metadata (Hive or non-Hive metadata). For users who have both Hive and Flink deployments, HiveCatalog enables them to use Hive Metastore to manage Flink’s metadata. |
hdinsight-aks | Hdinsight On Aks Autoscale Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-autoscale-clusters.md | Last updated 02/06/2024 # Auto Scale HDInsight on AKS Clusters [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Sizing a cluster ahead of time to meet job performance targets and manage costs is always tricky and hard to get right. One of the key benefits of building a data lakehouse in the cloud is its elasticity, which lets you use the autoscale feature to maximize the utilization of resources at hand. Autoscale with Kubernetes is one key to establishing a cost-optimized ecosystem. With varied usage patterns in any enterprise, cluster loads can vary over time, which can leave clusters under-provisioned (poor performance) or overprovisioned (unnecessary costs due to idle resources). The autoscale feature offered in HDInsight on AKS can automatically increase or decrease the number of worker nodes in your cluster, based on the cluster metrics and the scaling policy that customers use. |
hdinsight-aks | Hdinsight On Aks Manage Authorization Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-manage-authorization-profile.md | Last updated 08/4/2023 # Manage cluster access [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article provides an overview of the mechanisms available to manage access for HDInsight on AKS cluster pools and clusters. It also covers how to assign permissions to users, groups, user-assigned managed identities, and service principals to enable access to the cluster data plane. |
hdinsight-aks | How To Azure Monitor Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/how-to-azure-monitor-integration.md | Last updated 08/29/2023 # How to integrate with Log Analytics [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article describes how to enable Log Analytics to monitor and collect logs for cluster pool and cluster operations on HDInsight on AKS. You can enable the integration during cluster pool creation or after creation. Once the integration is enabled at the cluster pool level, it isn't possible to disable it. However, you can disable Log Analytics for individual clusters that are part of the same pool. |
hdinsight-aks | In Place Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/in-place-upgrade.md | Last updated 03/22/2024 # Upgrade your HDInsight on AKS clusters and cluster pools [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Learn how to update your HDInsight on AKS clusters and cluster pools to the latest AKS patches, security updates, cluster patches, and cluster hotfixes with an in-place upgrade. ## Why upgrade |
hdinsight-aks | Manage Cluster Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-cluster-pool.md | Last updated 08/29/2023 # Manage cluster pools [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Cluster pools are a logical grouping of clusters that maintain a set of clusters in the same pool. They help build robust interoperability across multiple cluster types and allow enterprises to keep the clusters in the same virtual network. One cluster pool corresponds to one cluster in the AKS infrastructure. This article describes how to manage a cluster pool. |
hdinsight-aks | Manage Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-cluster.md | Last updated 08/29/2023 # Manage clusters [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Clusters are individual compute workloads such as Apache Spark, Apache Flink, and Trino, which can be created rapidly in a few minutes with preset configurations and a few clicks. This article describes how to manage a cluster using the Azure portal. |
hdinsight-aks | Manage Script Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-script-actions.md | Last updated 08/29/2023 # Script actions during cluster creation [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Azure HDInsight on AKS provides a mechanism called **Script Actions** that invokes custom scripts to customize the cluster. These scripts are used to install additional components and change configuration settings. As of now, script actions can be provisioned only during cluster creation; script actions after cluster creation are part of the roadmap. This article explains how you can provision script actions when you create an HDInsight on AKS cluster. |
hdinsight-aks | Manual Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manual-scale.md | Last updated 02/06/2024 # Manual scale [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS provides elasticity with options to scale up and scale down the number of cluster nodes. This elasticity works to help increase resource utilization and improve cost efficiency. ## Utility to scale clusters |
hdinsight-aks | Monitor With Prometheus Grafana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/monitor-with-prometheus-grafana.md | Last updated 11/07/2023 # Monitoring with Azure Managed Prometheus and Grafana [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Cluster and service monitoring is an integral part of any organization. Azure HDInsight on AKS comes with an integrated monitoring experience with Azure services. In this article, we use the managed Prometheus service with Azure Grafana dashboards for monitoring. [Azure Managed Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) is a service that monitors your cloud environments to maintain their availability and performance, and collects workload metrics. It collects data generated by resources in your Azure instances and from other monitoring tools. The data is used to provide analysis across multiple sources. |
hdinsight-aks | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/overview.md | Last updated 05/28/2024 # What is HDInsight on AKS? (Preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]++ HDInsight on AKS is a modern, reliable, secure, and fully managed Platform as a Service (PaaS) that runs on Azure Kubernetes Service (AKS). HDInsight on AKS allows you to deploy popular Open-Source Analytics workloads like Apache Spark™, Apache Flink®️, and Trino without the overhead of managing and monitoring containers. |
hdinsight-aks | Powershell Cluster Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/powershell-cluster-create.md | Last updated 12/11/2023 # Manage HDInsight on AKS clusters using PowerShell [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Azure PowerShell is a powerful scripting environment that you can use to control and automate the deployment and management of your workloads in Microsoft Azure. This document provides information about how to create an HDInsight on AKS cluster by using Azure PowerShell. It also includes an example script. |
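As a rough sketch of what such a script can look like, assuming the preview Az.HdInsightOnAks module and its `New-AzHdInsightOnAksClusterPool` cmdlet (the module surface is preview-era and may change; all names and sizes below are hypothetical placeholders):

```powershell
# Rough sketch only: create a cluster pool with the preview module.
# Module name, cmdlet, and parameters are preview-era assumptions;
# resource names and the VM size are hypothetical placeholders.
Install-Module -Name Az.HdInsightOnAks -AllowPrerelease

New-AzHdInsightOnAksClusterPool -Name "myClusterPool" `
    -ResourceGroupName "myResourceGroup" `
    -Location "westus3" `
    -VmSize "Standard_E4s_v3"
```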
hdinsight-aks | Quickstart Create Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-cli.md | Last updated 06/18/2024 # Quickstart: Create an HDInsight on AKS cluster pool using Azure CLI [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS introduces the concept of cluster pools and clusters, which allow you to realize the complete value of a data lakehouse. - **Cluster pools** are a logical grouping of clusters and maintain a set of clusters in the same pool, which helps in building robust interoperability across multiple cluster types. It can be created within an existing virtual network or outside a virtual network. |
hdinsight-aks | Quickstart Create Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-cluster.md | Last updated 06/18/2024 # Quickstart: Create an HDInsight on AKS cluster pool using Azure portal [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS introduces the concept of cluster pools and clusters, which allow you to realize the complete value of a data lakehouse. - **Cluster pools** are a logical grouping of clusters that maintain a set of clusters in the same pool, which helps in building robust interoperability across multiple cluster types. A cluster pool can be created within an existing virtual network or outside one. |
hdinsight-aks | Quickstart Create Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-powershell.md | Last updated 06/19/2024 # Quickstart: Create an HDInsight on AKS cluster pool using Azure PowerShell [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS introduces the concept of cluster pools and clusters, which allow you to realize the complete value of a data lakehouse. - **Cluster pools** are a logical grouping of clusters that maintain a set of clusters in the same pool, which helps in building robust interoperability across multiple cluster types. A cluster pool can be created within an existing virtual network or outside one. |
hdinsight-aks | Quickstart Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-get-started.md | Last updated 08/29/2023 # Get started with one-click deployment [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ One-click deployments are designed to give users a zero-touch experience for creating HDInsight on AKS. They eliminate the need to manually perform certain steps. This article describes how to use readily available ARM templates to create a cluster pool and cluster in a few clicks. |
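The one-click experience boils down to deploying a published ARM template. A hedged PowerShell equivalent is shown below; the template URI is a placeholder for the ready-made templates the article links to.

```powershell
# Deploy a published ARM template by URI; the URI below is a placeholder
# for one of the ready-made HDInsight on AKS templates.
New-AzResourceGroupDeployment `
    -ResourceGroupName "hdi-aks-rg" `
    -TemplateUri "https://example.com/templates/hdinsight-aks-pool-and-cluster.json"
```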
hdinsight-aks | Quickstart Prerequisites Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-prerequisites-resources.md | Last updated 04/08/2024 # Resource prerequisites [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article details the resources required for getting started with HDInsight on AKS. It covers the necessary and the optional resources and how to create them. ## Necessary resources |
hdinsight-aks | Quickstart Prerequisites Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-prerequisites-subscription.md | Last updated 05/06/2024 # Subscription prerequisites [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ If you're using an Azure subscription with HDInsight on AKS for the first time, the following features might need to be enabled. ## Tenant registration |
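For a subscription that is new to the service, feature enablement usually starts with resource provider registration. A sketch follows; the `Microsoft.HDInsight` namespace is an assumption, so confirm it against the namespaces and features the article actually lists.

```powershell
# Check and register the HDInsight resource provider on the subscription.
# The provider namespace is an assumption; verify against the article.
Get-AzResourceProvider -ProviderNamespace "Microsoft.HDInsight" |
    Select-Object ProviderNamespace, RegistrationState

Register-AzResourceProvider -ProviderNamespace "Microsoft.HDInsight"
```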
hdinsight-aks | Hdinsight Aks Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes-archive.md | Last updated 08/05/2024 # Azure HDInsight on AKS archived release notes ++ Azure HDInsight on AKS is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe to release notes, watch releases on this [GitHub repository](https://github.com/Azure/HDInsight-on-aks/releases). ### Release date: March 20, 2024 |
hdinsight-aks | Hdinsight Aks Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes.md | Last updated 08/05/2024 # Azure HDInsight on AKS release notes +++ This article provides information about the **most recent** Azure HDInsight on AKS release updates. For information on earlier releases, see [Azure HDInsight on AKS archived release notes](./hdinsight-aks-release-notes-archive.md). If you would like to subscribe to release notes, watch releases on this [GitHub repository](https://github.com/Azure/HDInsight-on-aks/releases). ## Summary |
hdinsight-aks | Required Outbound Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/required-outbound-traffic.md | Last updated 03/26/2024 # Required outbound traffic for HDInsight on AKS [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ > [!NOTE] > HDInsight on AKS uses the Azure CNI Overlay network model by default. For more information, see [Azure CNI Overlay networking](/azure/aks/concepts-network-azure-cni-overlay). |
hdinsight-aks | Rest Api Cluster Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/rest-api-cluster-creation.md | Last updated 11/26/2023 # Manage HDInsight on AKS clusters using Azure REST API [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Learn how to create an HDInsight cluster using an Azure Resource Manager template and the Azure REST API. The Azure REST API allows you to perform management operations on services hosted in the Azure platform, including the creation of new resources such as HDInsight clusters. |
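A hedged sketch of the REST call pattern: `Invoke-AzRestMethod` signs the request with your Azure context, so you only supply the resource path and body. The path, api-version, and minimal property set below are assumptions; take the full request body from the article's ARM template.

```powershell
# PUT a cluster resource through the Azure REST API. Path, api-version,
# and the minimal property set are assumptions for illustration.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.HDInsight/clusterpools/<pool>/clusters/<cluster>" +
        "?api-version=2023-06-01-preview"

$body = @{
    location   = "eastus"
    properties = @{ clusterType = "Trino" }   # assumed minimal properties
} | ConvertTo-Json -Depth 10

Invoke-AzRestMethod -Method PUT -Path $path -Payload $body
```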
hdinsight-aks | Sdk Cluster Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/sdk-cluster-creation.md | Last updated 11/23/2023 # Manage HDInsight on AKS clusters using .NET SDK [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article describes how you can create and manage clusters in Azure HDInsight on AKS using the .NET SDK. The HDInsight .NET SDK provides .NET client libraries, so that it's easier to work with HDInsight clusters from .NET. |
hdinsight-aks | Secure Traffic By Firewall Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall-azure-portal.md | Last updated 08/03/2023 # Use firewall to restrict outbound traffic using Azure portal [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ When an enterprise wants to use their own virtual network for the cluster deployments, securing the traffic of the virtual network becomes important. This article provides the steps to secure outbound traffic from your HDInsight on AKS cluster via Azure Firewall by using the Azure portal. |
hdinsight-aks | Secure Traffic By Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall.md | Last updated 02/19/2024 # Use firewall to restrict outbound traffic using Azure CLI [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ When an enterprise wants to use their own virtual network for the cluster deployments, securing the traffic of the virtual network becomes important. This article provides the steps to secure outbound traffic from your HDInsight on AKS cluster via Azure Firewall using [Azure CLI](/azure/cloud-shell/quickstart?tabs=azurecli). |
hdinsight-aks | Secure Traffic By Nsg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-nsg.md | Last updated 08/03/2023 # Use NSG to restrict traffic to HDInsight on AKS [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS relies on AKS outbound dependencies, and they're entirely defined with FQDNs, which don't have static addresses behind them. The lack of static IP addresses means you can't use Network Security Groups (NSGs) to lock down the outbound traffic from the cluster by IP. If you still prefer to use NSGs to secure your traffic, you need to configure the following rules in the NSG to apply coarse-grained control. |
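To make the coarse-grained idea concrete, here's a hedged sketch of one outbound rule of the kind the article describes, using a service tag instead of fixed IPs. The rule name, priority, and scope are placeholders, not the article's full rule set.

```powershell
# One illustrative coarse-grained outbound rule: allow HTTPS from the VNet
# to the AzureCloud service tag. Names, priority, and scope are placeholders.
$rule = New-AzNetworkSecurityRuleConfig -Name "allow-azure-https" `
    -Direction Outbound -Access Allow -Protocol Tcp -Priority 200 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix AzureCloud -DestinationPortRange 443

New-AzNetworkSecurityGroup -Name "hdi-aks-nsg" -ResourceGroupName "hdi-aks-rg" `
    -Location "eastus" -SecurityRules $rule
```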
hdinsight-aks | Service Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/service-configuration.md | Last updated 08/29/2023 # Manage cluster configuration [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ HDInsight on AKS allows you to tweak configuration properties, for example, usage or memory settings, to improve the performance of your cluster. You can do the following actions: * Update the existing configurations or add new configurations. |
hdinsight-aks | Service Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/service-health.md | Last updated 08/29/2023 # Manage service health [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article describes how to check the health of the services running in HDInsight on AKS cluster. It includes the collection of the services and the status of each service running in the cluster. You can drill down on each service to check instance level details. |
hdinsight-aks | Azure Hdinsight Spark On Aks Delta Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/azure-hdinsight-spark-on-aks-delta-lake.md | Last updated 10/27/2023 # Use Delta Lake in Azure HDInsight on AKS with Apache Spark™ cluster (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + [Azure HDInsight on AKS](../overview.md) is a managed cloud-based service for big data analytics that helps organizations process large amounts of data. This tutorial shows how to use Delta Lake in Azure HDInsight on AKS with an Apache Spark™ cluster. ## Prerequisite |
hdinsight-aks | Configuration Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/configuration-management.md | Last updated 10/19/2023 # Configuration management in HDInsight on AKS with Apache Spark™ cluster [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Azure HDInsight on AKS is a managed cloud-based service for big data analytics that helps organizations process large amounts of data. This tutorial shows how to use configuration management in Azure HDInsight on AKS with an Apache Spark™ cluster. Configuration management is used to add specific configurations into the Apache Spark cluster. |
hdinsight-aks | Connect To One Lake Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/connect-to-one-lake-storage.md | Last updated 10/27/2023 # Connect to OneLake Storage [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This tutorial shows how to connect to OneLake with a Jupyter notebook from an Azure HDInsight on AKS cluster. 1. Create an HDInsight on AKS cluster with Apache Spark™. Follow these instructions: Set up clusters in HDInsight on AKS. |
hdinsight-aks | Create Spark Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/create-spark-cluster.md | Last updated 12/28/2023 # Create Spark cluster in HDInsight on AKS (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Once the [subscription prerequisites](../prerequisites-subscription.md) and [resource prerequisites](../prerequisites-resources.md) steps are complete, and you have a cluster pool deployed, continue to use the Azure portal to create a Spark cluster. You can use the Azure portal to create an Apache Spark cluster in a cluster pool. You can then create a Jupyter Notebook and use it to run Spark SQL queries against Apache Hive tables. |
hdinsight-aks | Hdinsight On Aks Spark Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/hdinsight-on-aks-spark-overview.md | Last updated 10/27/2023 # What is Apache Spark™ in HDInsight on AKS? (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Apache Spark™ is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Apache Spark™ provides primitives for in-memory cluster computing. A Spark job can load and cache data into memory and query it repeatedly. In-memory computing is faster than disk-based applications, such as Hadoop, which shares data through Hadoop distributed file system (HDFS). Apache Spark allows integration with the Scala and Python programming languages to let you manipulate distributed data sets like local collections. There's no need to structure everything as map and reduce operations. |
hdinsight-aks | Library Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/library-management.md | Last updated 08/29/2023 # Library management in Spark [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + The purpose of Library Management is to make open-source or custom code available to notebooks and jobs running on your clusters. You can upload Python libraries from PyPI repositories. This article focuses on managing libraries in the cluster UI. Azure HDInsight on AKS already includes many common libraries in the cluster. To see which libraries are included in an HDInsight on AKS cluster, review the library management page. |
hdinsight-aks | Spark Job Orchestration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/spark-job-orchestration.md | Last updated 11/28/2023 # Apache Spark® job orchestration using Azure Data Factory Workflow Orchestration Manager (powered by Apache Airflow) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article covers managing a Spark job using the [Apache Spark Livy API](https://livy.incubator.apache.org/docs/latest/rest-api.html) and orchestrating a data pipeline with Azure Data Factory Workflow Orchestration Manager. The [Azure Data Factory Workflow Orchestration Manager](/azure/data-factory/concepts-workflow-orchestration-manager) service is a simple and efficient way to create and manage [Apache Airflow](https://airflow.apache.org/) environments, enabling you to run data pipelines at scale easily. Apache Airflow is an open-source platform that programmatically creates, schedules, and monitors complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines. |
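Under the hood, the Airflow flow described here calls the Livy REST API. The hedged sketch below makes the same `/batches` call directly; the cluster endpoint, JAR path, and class name are placeholders, and the payload shape follows the public Livy REST specification.

```powershell
# Submit a Spark batch job via the Livy REST API. Endpoint, JAR path, and
# class name are placeholders; assumes the endpoint accepts an Entra ID token.
$token = (Get-AzAccessToken).Token

$body = @{
    file      = "abfs://container@account.dfs.core.windows.net/jobs/app.jar"
    className = "com.example.SparkApp"
    args      = @("--input", "sample")
} | ConvertTo-Json

$headers = @{ Authorization = "Bearer $token" }
Invoke-RestMethod -Method Post -Uri "https://<cluster-endpoint>/livy/batches" `
    -Headers $headers -ContentType "application/json" -Body $body
```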
hdinsight-aks | Submit Manage Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/submit-manage-jobs.md | Last updated 10/27/2023 # Submit and manage jobs on an Apache Spark™ cluster in HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Once the cluster is created, users can use various interfaces to submit and manage jobs by * using Jupyter |
hdinsight-aks | Use Hive Metastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/use-hive-metastore.md | Last updated 10/27/2023 # How to use Hive metastore with Apache Spark™ cluster [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + It's essential to share data and the metastore across multiple services. One of the commonly used metastores is the Hive metastore. HDInsight on AKS allows users to connect to an external metastore, which enables HDInsight users to seamlessly connect to other services in the ecosystem. Azure HDInsight on AKS supports custom metastores, which are recommended for production clusters. The key steps involved are |
hdinsight-aks | Use Machine Learning Notebook On Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/use-machine-learning-notebook-on-spark.md | Last updated 08/29/2023 # How to use Azure Machine Learning Notebook on Spark [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Machine learning is a growing technology, which enables computers to learn automatically from past data. Machine learning uses various algorithms for building mathematical models and making predictions using historical data or information. We have a model defined up to some parameters, and learning is the execution of a computer program to optimize the parameters of the model using the training data or experience. The model may be predictive to make predictions in the future, or descriptive to gain knowledge from data. The following tutorial notebook shows an example of training machine learning models on tabular data. You can import this notebook and run it yourself. |
hdinsight-aks | Subscribe To Release Notes Repo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/subscribe-to-release-notes-repo.md | Last updated 11/20/2023 # Subscribe to HDInsight on AKS release notes GitHub repo [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ Learn how to subscribe to the HDInsight on AKS release notes GitHub repo to get email notifications. ## Prerequisites |
hdinsight-aks | Trademarks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trademarks.md | Last updated 10/26/2023 # Trademarks ++ Product names, logos, and other material used on these Azure HDInsight on AKS Learn pages are registered trademarks of various entities including, but not limited to, the following trademark owners and names: - The [Trino Software Foundation](https://trino.io/foundation.html) owns and manages the Trino brand and trademarks. The use of these marks does not imply endorsement by The Trino Software Foundation. |
hdinsight-aks | Configure Azure Active Directory Login For Superset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/configure-azure-active-directory-login-for-superset.md | Last updated 08/29/2023 # Configure Microsoft Entra ID OAuth2 login [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article describes how to allow users to use their Microsoft Entra account ("Microsoft work or school account") to log in to Apache Superset. The following configuration allows users to have Superset accounts automatically created when they use their Microsoft Entra login. Azure groups can be automatically mapped to Superset roles, which allow control over who can access Superset and what permissions are given. |
hdinsight-aks | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/role-based-access-control.md | Last updated 08/29/2023 # Configure Role Based Access Control [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article describes how to provide Role Based Access Control and auto assign users to Apache Superset roles. This Role Based Access Control enables you to manage user groups in Microsoft Entra ID but configure access permissions in Superset. For example, if you have a security group called `datateam`, you can propagate membership of this group to Superset, which means Superset can automatically deny access if a user is removed from this security group. |
hdinsight-aks | Trino Add Catalogs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-catalogs.md | Last updated 10/19/2023 # Configure catalogs [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Every Trino cluster comes by default with a few catalogs: `system`, `tpcds`, and `tpch`. You can add your own catalogs the same way you would with OSS Trino. In addition, Trino with HDInsight on AKS allows storing secrets in Key Vault so you don't have to specify them explicitly in the ARM template. |
hdinsight-aks | Trino Add Delta Lake Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-delta-lake-catalog.md | Last updated 06/19/2024 # Configure Delta Lake catalog [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article provides an overview of how to configure a Delta Lake catalog in your Trino cluster with HDInsight on AKS. You can add a new catalog by updating your cluster ARM template; the exception is the Hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal. ## Prerequisites |
hdinsight-aks | Trino Add Iceberg Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-iceberg-catalog.md | Last updated 06/19/2024 # Configure Iceberg catalog [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article provides an overview of how to configure an Iceberg catalog in your Trino cluster with HDInsight on AKS. You can add a new catalog by updating your cluster ARM template; the exception is the Hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal. ## Prerequisites |
hdinsight-aks | Trino Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-airflow.md | Last updated 10/19/2023 # Use Apache Airflow™ with Trino cluster [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article demonstrates how to configure the available open-source [Apache Airflow Trino provider](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/index.html) to connect to your Trino cluster with HDInsight on AKS. The objective is to show how you can connect Airflow to Trino with HDInsight on AKS, covering the main steps of obtaining an access token and running a query. |
hdinsight-aks | Trino Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-authentication.md | Last updated 10/19/2023 # Authentication mechanism [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Trino with HDInsight on AKS provides tools such as the CLI client and JDBC driver to access the cluster, and is integrated with Microsoft Entra ID to simplify authentication for users. Supported tools and clients need to authenticate using Microsoft Entra ID OAuth2 standards; that is, a JWT access token issued by Microsoft Entra ID must be provided to the cluster endpoint. |
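For debugging, the token flow can be exercised by hand. In the hedged sketch below, the audience URL and cluster endpoint are assumptions (use the values your cluster documentation specifies); `/v1/statement` is the standard open-source Trino statement endpoint.

```powershell
# Obtain an Entra ID access token and present it to the Trino statement
# endpoint. Audience URL and cluster endpoint are assumptions.
$token = (Get-AzAccessToken -ResourceUrl "https://clusteraccess.azurehdinsight.net").Token

$headers = @{ Authorization = "Bearer $token" }
Invoke-RestMethod -Method Post -Uri "https://<cluster-endpoint>/v1/statement" `
    -Headers $headers -ContentType "text/plain" -Body "SELECT 1"
```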
hdinsight-aks | Trino Caching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-caching.md | Last updated 11/03/2023 # Configure caching [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Querying object storage using the Hive connector is a common use case for Trino. This process often involves sending large amounts of data. Objects are retrieved from HDFS or another supported object store by multiple workers and processed by those workers. Repeated queries with different parameters, or even different queries from different users, often access and transfer the same objects. HDInsight on AKS added **final result caching** capability for Trino, which provides the following benefits: |
hdinsight-aks | Trino Catalog Glue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-catalog-glue.md | Last updated 10/19/2023 # Query data from AWS S3 using AWS Glue [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article provides examples of how you can add catalogs to a Trino cluster with HDInsight on AKS where the catalogs use AWS Glue as the metastore and AWS S3 as storage. ## Prerequisites |
hdinsight-aks | Trino Configuration Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-configuration-troubleshoot.md | Last updated 08/29/2023 # Troubleshoot cluster configuration [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Incorrect cluster configuration may lead to deployment errors. Typically, those errors occur when an incorrect configuration is provided in the ARM template or entered in the Azure portal, for example, on the Configuration management page. > [!NOTE] |
hdinsight-aks | Trino Connect To Metastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connect-to-metastore.md | Last updated 02/21/2024 # Use external Hive metastore database [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Hive metastore is used as a central repository for storing metadata about the data. This article describes how you can add a Hive metastore database to your Trino cluster with HDInsight on AKS. There are two ways: * You can add a Hive catalog and link it to an external Hive metastore database during [Trino cluster creation](./trino-create-cluster.md). |
hdinsight-aks | Trino Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connectors.md | Last updated 08/29/2023 # Trino connectors [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Trino in HDInsight on AKS enables seamless integration with data sources. You can refer to the following documentation for open-source connectors. * [BigQuery](https://trino.io/docs/410/connector/bigquery.html) |
hdinsight-aks | Trino Create Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-cluster.md | Last updated 12/28/2023 # Create a Trino cluster in the Azure portal (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article describes the steps to create a Trino cluster with HDInsight on AKS by using the Azure portal. ## Prerequisites |
hdinsight-aks | Trino Create Delta Lake Tables Synapse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-delta-lake-tables-synapse.md | Last updated 10/19/2023 # Read Delta Lake tables (Synapse or external location) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article provides an overview of how to read a Delta Lake table without having any access to the metastore (Synapse or other metastores without public access). You can perform the following operations on the tables using Trino with HDInsight on AKS. |
hdinsight-aks | Trino Custom Plugins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-custom-plugins.md | Last updated 10/19/2023 # Custom plugins [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article provides details on how to deploy custom plugins to your Trino cluster with HDInsight on AKS. Trino provides a rich interface allowing users to write their own plugins, such as event listeners and custom SQL functions. You can add the configuration described in this article to make custom plugins available in your Trino cluster using an ARM template. |
hdinsight-aks | Trino Fault Tolerance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-fault-tolerance.md | Last updated 10/19/2023 # Fault-tolerant execution [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Trino supports [fault-tolerant execution](https://trino.io/docs/current/admin/fault-tolerant-execution.html) to mitigate query failures and increase resilience. This article describes how you can enable fault tolerance for your Trino cluster with HDInsight on AKS. |
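In open-source Trino, fault tolerance is switched on with the `retry-policy` property. The hashtable below shows the property as you might feed it into the cluster's configuration mechanism (see the Trino configuration management row later in this table); the property name and values come from the public Trino docs, and a `TASK` policy additionally needs an exchange manager configured for spooling.

```powershell
# OSS Trino fault-tolerance property, expressed as a config hashtable.
$configProperties = @{
    "retry-policy" = "QUERY"   # retries whole queries; "TASK" retries individual tasks
}
```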
hdinsight-aks | Trino Jvm Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-jvm-configuration.md | Last updated 10/19/2023 # Configure JVM heap size [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article describes how to modify the initial and maximum heap size for Trino pods with HDInsight on AKS. The `-Xms` and `-Xmx` settings can be changed to control the initial and maximum heap size of Trino pods. You can modify the JVM heap settings using an ARM template. |
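As a hedged illustration only, the fragment below shows the kind of payload the article means when it says the heap can be tuned through an ARM template. The surrounding JSON property names are assumptions, not the verified HDInsight on AKS schema; the `-Xms`/`-Xmx` flags themselves are standard HotSpot options.

```powershell
# Assumed shape of a jvm.config override carried in an ARM template.
$jvmHeapFragment = @"
{
  "serviceName": "trino",
  "files": [
    { "fileName": "jvm.config", "content": "-Xms4G\n-Xmx12G" }
  ]
}
"@
```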
hdinsight-aks | Trino Miscellaneous Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-miscellaneous-files.md | Last updated 10/19/2023 # Using miscellaneous files [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article provides details on how to specify and use the miscellaneous files configuration. You can add the configurations for using miscellaneous files in your cluster using an ARM template. For broader examples, refer to [Service configuration](./trino-service-configuration.md). |
hdinsight-aks | Trino Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-overview.md | Last updated 08/29/2023 # What is Trino? (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + [Trino](https://trino.io/docs/current/overview.html) (formerly PrestoSQL) is an open-source distributed SQL query engine for federated and interactive analytics against heterogeneous data sources. It can query data at scale (gigabytes to petabytes) from multiple sources to enable enterprise-wide analytics. Trino is used for a wide range of analytical use cases and is an excellent choice for interactive and ad-hoc querying. |
hdinsight-aks | Trino Query Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-query-logging.md | Last updated 10/19/2023 # Query logging [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Trino supports custom [event listeners](https://trino.io/docs/current/develop/event-listener.html) that can be used to listen for Query lifecycle events. You can author your own event listeners or use a built-in plugin provided by HDInsight on AKS that logs events to Azure Blob Storage. You can enable built-in query logging in two ways: |
hdinsight-aks | Trino Scan Stats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-scan-stats.md | Last updated 10/19/2023 # Enable scan statistics for queries [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Data teams are often required to investigate performance or optimize queries to improve resource utilization or meet business requirements. A new capability has been added in Trino for HDInsight on AKS that allows users to capture scan statistics for any connector. This capability provides deeper insight into the query performance profile beyond what is available in the statistics produced by Trino. |
hdinsight-aks | Trino Service Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-service-configuration.md | Last updated 10/19/2023 # Trino configuration management [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + A Trino cluster with HDInsight on AKS comes with most of the default configurations of open-source Trino. This article describes how to update config files and add your own supplemental config files to the cluster. You can add/update the configurations in two ways: |
hdinsight-aks | Trino Sharded Sql Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-sharded-sql-connector.md | Last updated 02/06/2024 # Sharded SQL connector [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + The sharded SQL connector allows queries to be executed over data distributed across any number of SQL servers. ## Prerequisites |
hdinsight-aks | Trino Superset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-superset.md | Last updated 10/19/2023 # Deploy Apache Superset™ [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Visualization is essential to effectively explore, present, and share data. [Apache Superset](https://superset.apache.org/) allows you to run queries, visualize, and build dashboards over your data in a flexible Web UI. This article describes how to deploy an Apache Superset UI instance in Azure and connect it to a Trino cluster with HDInsight on AKS to query data and create dashboards. |
hdinsight-aks | Trino Ui Command Line Interface | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-command-line-interface.md | Last updated 10/19/2023 # Trino CLI [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + The Trino CLI for HDInsight on AKS provides a terminal-based, interactive shell for running queries. ## Install on Windows |
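A hedged sketch of driving the CLI from a PowerShell prompt on Windows. The executable name is an assumption, and the HDInsight build of the CLI wraps Microsoft Entra ID sign-in, so check its `--help` output for the exact authentication options; `--server` and `--execute` are standard open-source Trino CLI flags.

```powershell
# Run a single query through the Trino CLI; executable name is an assumption.
.\trino-cli.exe --server "https://<cluster-endpoint>" --execute "SELECT 1"
```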
hdinsight-aks | Trino Ui Dbeaver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-dbeaver.md | Last updated 10/19/2023 # Connect and query with DBeaver [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + It's possible to use the JDBC driver with many available database tools. This article demonstrates how to configure one of the most popular tools, **DBeaver**, to connect to a Trino cluster with HDInsight on AKS in a few simple steps. ## Prerequisites |
hdinsight-aks | Trino Ui Jdbc Driver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-jdbc-driver.md | Last updated 10/19/2023 # Trino JDBC driver [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + Trino with HDInsight on AKS provides a JDBC driver, which supports Microsoft Entra authentication and adds a few parameters for it. ## Install |
hdinsight-aks | Trino Ui Web Ssh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-web-ssh.md | Last updated 08/29/2023 # Web SSH [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article describes how you can run queries on your Trino cluster using Web SSH. ## Run Web SSH |
hdinsight-aks | Trino Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui.md | Last updated 10/19/2023 # Trino UI [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] + This article covers the details around the Trino UI provided for monitoring the cluster nodes and queries submitted to Trino. |
hdinsight-aks | Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/versions.md | Last updated 03/27/2024 # Azure HDInsight on AKS versions [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ The HDInsight on AKS service has three components: a resource provider, open-source software (OSS), and controllers that are deployed on a cluster. Microsoft periodically upgrades the images and the aforementioned components to include new improvements and features. A new HDInsight on AKS version may be created when one or more of the following are true: |
hdinsight-aks | Virtual Machine Recommendation Capacity Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/virtual-machine-recommendation-capacity-planning.md | Last updated 10/05/2023 # Default and minimum virtual machine size recommendations and capacity planning for HDInsight on AKS [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] ++ This article discusses default and recommended node configurations for Azure HDInsight on AKS clusters. ## Cluster pools |
hdinsight-aks | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/whats-new.md | Last updated 03/24/2024 # What's new in HDInsight on AKS? (Preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] In HDInsight on AKS, all cluster management and operations have native support for [service management](./service-configuration.md) on Azure portal for individual clusters. |
healthcare-apis | Deploy Dicom Services In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md | -# Deploy the DICOM service by using the Azure portal +# Deploy the DICOM service with Blob storage by using the Azure portal In this quickstart, you learn how to deploy the DICOM® service by using the Azure portal. To deploy the DICOM service, you need a workspace created in the Azure portal. F :::image type="content" source="media/select-workspace-resource-group.png" alt-text="Screenshot showing selecting a workspace resource group." lightbox="media/select-workspace-resource-group.png"::: -1. Select **Deploy DICOM service**. +2. Select **Deploy DICOM service**. :::image type="content" source="media/workspace-deploy-dicom-services.png" alt-text="Screenshot showing deployment of the DICOM service." lightbox="media/workspace-deploy-dicom-services.png"::: -1. Select **Add DICOM service**. +3. Select **Add DICOM service**. :::image type="content" source="media/add-dicom-service.png" alt-text="Screenshot showing how to add the DICOM service." lightbox="media/add-dicom-service.png"::: -1. Enter a name for the DICOM service, and then select **Review + create**. +4. Enter a name for the DICOM service. + - Select Blob Storage (legacy) for the storage location. + - (Optional) Select **Enable data partitions** when you deploy a new DICOM service. After data partitioning is turned on, it can't be turned off. In addition, data partitions can't be turned on for any DICOM service that is already deployed. For more information, see [Enable data partitioning](data-partitions.md). + - After the data partitions setting is turned on, the capability modifies the API surface of the DICOM server and makes any previous data accessible under the `Microsoft.Default` partition. Select **Review + create**. +![Screenshot showing the DICOM service name and storage location option.](media/deploy-dicom-services-in-azure/enter-dicom-service-name.png) -1. (Optional) Select **Next: Tags**. ++5. (Optional) Select **Next: Tags**. Tags are name/value pairs used for categorizing resources. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md). -1. When you notice the green validation check mark, select **Create** to deploy the DICOM service. +6. When you see the green validation check mark, select **Create** to deploy the DICOM service. -1. After the deployment process is finished, select **Go to resource**. +7. After the deployment process is finished, select **Go to resource**. :::image type="content" source="media/go-to-resource.png" alt-text="Screenshot showing Go to resource." lightbox="media/go-to-resource.png"::: To deploy the DICOM service, you need a workspace created in the Azure portal. F * [Assign roles for the DICOM service](../configure-azure-rbac.md#assign-roles-for-the-dicom-service) * [Use DICOMweb Standard APIs with DICOM services](dicomweb-standard-apis-with-dicom-services.md) |
healthcare-apis | Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/autoscale.md | -Autoscaling is a capability to dynamically scale FHIR service based on the load reported. The FHIR service in Azure Health Data Services provides the built-in autoscaling capability and the process is automated. This capability provides elasticity and enables provisioning of more instances for FHIR service customers on demand. +Autoscaling is a capability to dynamically scale FHIR service based on the load reported. The FHIR service in Azure Health Data Services provides the built-in autoscaling capability, which is automated. This capability provides elasticity and enables on demand provisioning of more instances for FHIR service customers. The autoscaling feature for FHIR service is available in all regions where the FHIR service is supported. > [!NOTE]-> Autoscaling feature is subject to the resources availability in Azure regions. +> The autoscaling feature is subject to resource availability in Azure regions. The autoscaling feature adjusts computing resources automatically to optimize service scalability. There's no action required from customers. The autoscaling feature adjusts computing resources automatically to optimize se ### Scaling trigger -Scaling triggers describes when scaling of the service is performed. Conditions defined in the trigger are checked periodically to determine if a service should be scaled or not. All triggers that are currently supported are Average CPU, Max Worker Thread, Average LogWrite, Average data IO. +Scaling triggers describe when scaling of the service is performed. Conditions defined in the trigger are checked periodically to determine if a service should be scaled or not. Only the following triggers are currently supported: Average CPU, Max Worker Thread, Average LogWrite, Average data IO. ### Scaling mechanism The autoscaling feature incurs no extra costs. ### What should customers do if there's a high volume of HTTP 429 errors? -We recommend that you gradually increase the request rate to see if it reduces HTTP 429 errors. For consistent 429 errors, create a support ticket through the Azure portal. The support team engages with you to understand your scaling trigger needs. +We recommend that you gradually increase the request rate to see if it reduces HTTP 429 errors. For consistent 429 errors, create a support ticket through the Azure portal. The support team will engage with you to understand your scaling trigger needs. ## Related content |
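To complement the guidance on 429s, here's a generic client-side backoff sketch. Nothing in it is FHIR-specific; the URL and token are placeholders, and a fuller version would also honor the `Retry-After` response header when present.

```powershell
# Retry a FHIR request with exponential backoff when throttled (HTTP 429).
$uri     = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com/Patient"
$headers = @{ Authorization = "Bearer <access-token>" }

for ($attempt = 1; $attempt -le 5; $attempt++) {
    try {
        $response = Invoke-WebRequest -Uri $uri -Headers $headers
        break   # success, stop retrying
    }
    catch {
        $status = $_.Exception.Response.StatusCode.value__
        if ($status -ne 429) { throw }                    # rethrow non-throttling errors
        Start-Sleep -Seconds ([math]::Pow(2, $attempt))   # exponential backoff
    }
}
```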
healthcare-apis | Azure Active Directory Identity Configuration Old | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/azure-active-directory-identity-configuration-old.md | -When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. FHIR service in the Azure Health Data Services is secured using [Microsoft Entra ID](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps will apply to any FHIR server and any identity provider, we'll walk through the FHIR service and Microsoft Entra ID as our identity provider in this article. +When you're working with healthcare data, it's important to ensure that the data is secure, and can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. FHIR service in the Azure Health Data Services is secured using [Microsoft Entra ID](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, we walk through the FHIR service and Microsoft Entra ID as our identity provider in this article. ## Access control overview -In order for a client application to access the FHIR service, it must present an access token. The access token is a signed, [Base64](https://en.wikipedia.org/wiki/Base64) encoded collection of properties (claims) that convey information about the client's identity and roles and privileges granted to the client. +In order for a client application to access the FHIR service, it must present an access token. The access token is a signed, [Base64](https://en.wikipedia.org/wiki/Base64) encoded collection of properties (claims) that convey information about the client's identity, roles, and privileges granted. -There are many ways to obtain a token, but the FHIR service doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims. +The FHIR service doesn't care how the token is obtained, as long as it's an appropriately signed token with the correct claims. -Using [authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md) as an example, accessing a FHIR server goes through the four steps below: +Using [authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md) as an example, accessing a FHIR server goes through the following four steps. ![FHIR Authorization](media/azure-active-directory-fhir-service/fhir-authorization.png) -1. The client sends a request to the `/authorize` endpoint of Microsoft Entra ID. Microsoft Entra ID will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. 
Microsoft Entra ID will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see below). -1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Microsoft Entra ID. When requesting a token, the client application may have to provide a client secret (the applications password). See details on [obtaining an access token](../../active-directory/develop/v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). +1. The client sends a request to the `/authorize` endpoint of Microsoft Entra ID. Microsoft Entra ID will redirect the client to a sign-in page where the user authenticates using appropriate credentials (for example username and password, or two-factor authentication). Select the link for details on [obtaining an authorization code](../../active-directory/develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Microsoft Entra ID will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see the following section). +1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Microsoft Entra ID. When requesting a token, the client application may have to provide a client secret (the application's password). Select the link for details on [obtaining an access token](../../active-directory/develop/v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). 1. The client makes a request to the FHIR service, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.-1. The FHIR service validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client. +1. The FHIR service validates that the token contains appropriate claims (properties in the token). If everything checks out, it completes the request and returns a FHIR bundle with results to the client.
FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token) (JWT, sometimes pronounced "jot"). It consists of three parts: FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/J } ``` -**Part 3**: A signature, which is calculated by concatenating the Base64 encoded contents of the header and the payload and calculating a cryptographic hash of them based on the algorithm (`alg`) specified in the header. A server will be able to obtain public keys from the identity provider and validate that this token was issued by a specific identity provider and it hasn't been tampered with. +**Part 3**: A signature, which is calculated by concatenating the Base64 encoded contents of the header and the payload and calculating a cryptographic hash of them based on the algorithm (`alg`) specified in the header. A server is able to obtain public keys from the identity provider and validate that the token was issued by a specific identity provider and hasn't been tampered with. The full token consists of the Base64 encoded (actually Base64 url encoded) versions of those three segments. The three segments are concatenated and separated with a `.` (dot). -An example token is seen below: +Here's an example token: ``` eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJvaWQiOiIxMjMiLCAiaXNzIjoiaHR0cHM6Ly9pc3N1ZXJ1cmwiLCJpYXQiOjE0MjI3Nzk2MzgsInJvbGVzIjpbImFkbWluIl19.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI The token can be decoded and inspected with tools such as [https://jwt.ms](https ## Obtaining an access token -As mentioned above, there are several ways to obtain a token from Microsoft Entra ID. They're described in detail in the [Microsoft Entra developer documentation](../../active-directory/develop/index.yml). +As previously mentioned, there are several ways to obtain a token from Microsoft Entra ID. They're described in detail in the [Microsoft Entra developer documentation](../../active-directory/develop/index.yml). Use either of the following authentication protocols: * [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md). * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). -There are other variations (for example, on behalf of flow) for obtaining a token. Check the Microsoft Entra documentation for details. When using the FHIR service, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md). +There are other variations for obtaining a token (for example, the on-behalf-of flow). Check the Microsoft Entra documentation for details. When using the FHIR service, there are also shortcuts to obtaining an access token for debugging purposes [using the Azure CLI](get-healthcare-apis-access-token-cli.md). ## Next steps |
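The decoding described above is easy to script. The snippet below splits the article's example token and Base64-decodes the payload segment to show its claims, the same inspection [https://jwt.ms](https://jwt.ms) performs; no signature validation happens here.

```powershell
# Decode the payload (second segment) of a JWT to inspect its claims.
$jwt = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJvaWQiOiIxMjMiLCAiaXNzIjoiaHR0cHM6Ly9pc3N1ZXJ1cmwiLCJpYXQiOjE0MjI3Nzk2MzgsInJvbGVzIjpbImFkbWluIl19.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI"

$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) {   # restore Base64 padding
    2 { $payload += '==' }
    3 { $payload += '=' }
}
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) |
    ConvertFrom-Json
```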
healthcare-apis | Azure Ad B2c Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/azure-ad-b2c-setup.md | Run the code in Azure Cloud Shell or in PowerShell locally in Visual Studio Code 1. Use `Connect-AzAccount` to sign in to Azure. After you sign in, use `Get-AzContext` to verify the subscription and tenant you want to use. Change the subscription and tenant if needed. -1. Create a new resource group, or use an existing one by skipping the step or commenting out the line starting with `New-AzResourceGroup`. +1. Create a new resource group (or use an existing one) by skipping the "create resource group" step, or commenting out the line starting with `New-AzResourceGroup`. ```PowerShell ### variables New-AzResourceGroupDeployment -ResourceGroupName $resourcegroupname -TemplateUri 1. Use `Connect-AzAccount` to sign in to Azure. After you sign in, use `az account show --output table` to verify the subscription and tenant you want to use. Change the subscription and tenant if needed. -1. Create a new resource group, or use an existing one by skipping the step or commenting out the line starting with `az group create`. +1. Create a new resource group (or use an existing one) by skipping the "create resource group" step or commenting out the line starting with `az group create`. ```bash ### variables You need a test B2C user to associate with a specific patient resource in the FH #### Link a B2C user with the `fhirUser` custom user attribute -The `fhirUser` custom user attribute is used to link a B2C user with a user resource in the FHIR service. In this example, a user named **Test Patient1** is created in the B2C tenant, and in a later step a [patient](https://www.hl7.org/fhir/patient.html) resource is created in the FHIR service. The **Test Patient1** user is linked to the patient resource by setting the `fhirUser` attribute value to the patient resource identifier. For more information about custom user attributes, see [User flow custom attributes in Azure Active Directory B2C](/azure/active-directory-b2c/user-flow-custom-attributes?pivots=b2c-user-flow). +The `fhirUser` custom user attribute is used to link a B2C user with a user resource in the FHIR service. In this example, a user named **Test Patient1** is created in the B2C tenant. In a later step a [patient](https://www.hl7.org/fhir/patient.html) resource is created in the FHIR service. The **Test Patient1** user is linked to the patient resource by setting the `fhirUser` attribute value to the patient resource identifier. For more information about custom user attributes, see [User flow custom attributes in Azure Active Directory B2C](/azure/active-directory-b2c/user-flow-custom-attributes?pivots=b2c-user-flow). 1. On the **Azure AD B2C** page in the left pane, choose **User attributes**. The `fhirUser` custom user attribute is used to link a B2C user with a user reso #### Create a new B2C user flow -User flows define the sequence of steps users must follow to sign in. In this example, a user flow is defined so that when a user signs in, the access token provided includes the `fhirUser` claim. For more information, see [Create user flows and custom policies in Azure Active Directory B2C](../../active-directory-b2c/tutorial-create-user-flows.md). +User flows define the sequence of steps users must follow to sign in. In this example, a user flow is defined so that when a user signs in, the access token provided includes the `fhirUser` claim.
For more information, see [Create user flows and custom policies in Azure Active Directory B2C](../../active-directory-b2c/tutorial-create-user-flows.md). 1. On the **Azure AD B2C** page in the left pane, choose **User flows**. :::image type="content" source="media/azure-ad-b2c-setup/b2c-user-flow-sml.png" alt-text="Screenshot showing B2C user flow." lightbox="media/azure-ad-b2c-setup/b2c-user-flow-lrg.png"::: -1. Give the user flow a name unique to the B2C tenant. (The name doesn't have to be globally unique.) In this example, the name of the user flow is **USER_FLOW_1**. Make note of the name. +1. Give the user flow a name unique to the B2C tenant. The name doesn't have to be globally unique. In this example, the name of the user flow is **USER_FLOW_1**. Make note of the name. 1. Make sure **Email signin** is enabled for local accounts so that the test user can sign in and obtain an access token for the FHIR service. The B2C resource application handles authentication requests from your healthcar 1. In the **Supported account types** list, choose **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**. -1. In the **Redirect URI (recommended)** drop-down list, select ***Public client/native (mobile & desktop)**. Populate the value with the [Postman](https://www.postman.com) callback URI [https://oauth.pstmn.io/v1/callback](#create-a-new-b2c-resource-application). The callback URI is for testing purposes. +1. In the **Redirect URI (recommended)** drop-down list, select **Public client/native (mobile & desktop)**. Populate the value with the [Postman](https://www.postman.com) callback URI [https://oauth.pstmn.io/v1/callback](#create-a-new-b2c-resource-application). This callback URI is for testing purposes. 1. In the **Permissions** section, select **Grant admin consent to openid and offline_access permissions**. The B2C resource application handles authentication requests from your healthcar 1. Scroll until you find the `oauth2Permissions` array. Replace the array with one or more values in the [oauth2Permissions.json](https://raw.githubusercontent.com/Azure-Samples/azure-health-data-and-ai-samples/main/samples/fhir-aad-b2c/oauth2Permissions.json) file. Copy the entire array or individual permissions. - If you add a permission to the list, any user in the B2C tenant can obtain an access token with the API permission. If a level of access isn't appropriate for a user within the B2C tenant, don't add to the array because there isn't a way to limit permissions to a subset of users. + If you add a permission to the list, any user in the B2C tenant can obtain an access token with the API permission. If a level of access isn't appropriate for a user within the B2C tenant, don't add it to the array because there isn't a way to limit permissions to a subset of users. 1. After the **oauth2Permissions** array is populated, choose **Save**. Run the code in Azure Cloud Shell or in PowerShell locally in Visual Studio Code 1. Use `Connect-AzAccount` to sign in to Azure. Use `Get-AzContext` to verify the subscription and tenant you want to use. Change the subscription and tenant if needed. -1. Create a new resource group, or use an existing one by skipping the step or commenting out the line starting with `New-AzResourceGroup`. +1. 
Create a new resource group (or use an existing one) by skipping the "create resource group" step, or commenting out the line starting with `New-AzResourceGroup`. ```PowerShell ### variables New-AzResourceGroupDeployment -ResourceGroupName $resourcegroupname -TemplateUri 1. Use `az login` to sign in to Azure. Use `az account show --output table` to verify the subscription and tenant you want to use. Change the subscription and tenant if needed. -1. Create a new resource group, or use an existing one by skipping the step or commenting out the line starting with `az group create`. +1. Create a new resource group (or use an existing one) by skipping the "create resource group" step, or commenting out the line starting with `az group create`. ```bash ### variables The validation process involves creating a patient resource in the FHIR service, Run the [Postman](https://www.postman.com) application locally or in a web browser. For steps to obtain the proper access to the FHIR service, see [Access the FHIR service using Postman](use-postman.md). -When you follow the steps to [GET FHIR resource](use-postman.md#get-the-fhir-resource) section, the request returns an empty response because the FHIR service is new and doesn't have any patient resources. +When you follow the steps in the [Get the FHIR resource](use-postman.md#get-the-fhir-resource) section, the request returns an empty response because the FHIR service is new and doesn't have any patient resources. #### Create a patient resource in the FHIR service -It's important to note that users in the B2C tenant aren't able to read any resources until the user is linked to a FHIR resource, for example as patient or practitioner. A user with the `FhirDataWriter` or `FhirDataContributor` role in the Microsoft Entra ID where the FHIR service is tenanted must perform this step. +It's important to note that users in the B2C tenant aren't able to read any resources until the user (such as a patient or practitioner) is linked to a FHIR resource. A user with the `FhirDataWriter` or `FhirDataContributor` role in the Microsoft Entra ID where the FHIR service is tenanted must perform this step. 1. Create a patient with a specific identifier by changing the method to `PUT` and executing a request to `{{fhirurl}}/Patient/1` with this body: It's important to note that users in the B2C tenant aren't able to read any reso #### Link the patient resource to the Azure AD B2C user -You need to create an explicit link between the test user in the B2C tenant and the resource in the FHIR service. Create the link by using Extension Attributes in Microsoft Graph. For more information, see [Define custom attributes in Azure Active Directory B2C](../../active-directory-b2c/user-flow-custom-attributes.md). +Create an explicit link between the test user in the B2C tenant and the resource in the FHIR service. Create the link by using Extension Attributes in Microsoft Graph. For more information, see [Define custom attributes in Azure Active Directory B2C](../../active-directory-b2c/user-flow-custom-attributes.md). 1. Go to the B2C tenant. On the left pane, choose **App registrations**. Obtain an access token to test the authentication flow. :::image type="content" source="media/azure-ad-b2c-setup/postman-auth.png" alt-text="Screenshot showing Postman auth." lightbox="media/azure-ad-b2c-setup/postman-auth.png"::: -1. Scroll to the **Configure New Token** section and enter these values: +1. Scroll to the **Configure New Token** section and enter the following values. 
- **Callback URL**. This value is configured when the B2C resource application is created. |
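The resource group steps in this change lend themselves to a quick illustration. The following is a hedged Azure CLI sketch only; the resource group name, location, and template URI are placeholder assumptions rather than values from the article:

```bash
# Hypothetical values for illustration only.
resourcegroupname="rg-fhir-b2c-demo"
location="eastus2"
templateuri="<ARM-template-URI>"   # template URI from the sample repo

az login
az account show --output table    # verify subscription and tenant

# Skip or comment out this line to reuse an existing resource group.
az group create --name "$resourcegroupname" --location "$location"

az deployment group create --resource-group "$resourcegroupname" --template-uri "$templateuri"
```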
healthcare-apis | Using Rest Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-rest-client.md | grant_type=client_credentials &resource={{fhirurl}} &client_id={{clientid}} &client_secret={{clientsecret}}+&scope={{fhirurl}}/.default ### Extract access token from getAADToken request @token = {{getAADToken.response.body.access_token}} |
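For illustration, the same token request can be issued with curl. This is a hedged sketch in which the tenant ID, client ID, and client secret are placeholders:

```bash
# Placeholders throughout; mirrors the REST client request, including the added scope parameter.
fhirurl="https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"

curl -sX POST "https://login.microsoftonline.com/<tenant-id>/oauth2/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "resource=${fhirurl}" \
  --data-urlencode "client_id=<client-id>" \
  --data-urlencode "client_secret=<client-secret>" \
  --data-urlencode "scope=${fhirurl}/.default"
```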
healthcare-apis | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md | Refer to the table for details about resolution dates or possible workarounds. |Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |-|Customers accessing the FHIR Service via a private endpoint are experiencing difficulties, specifically receiving a 403 error when making API calls from within the vNet. This problem affects those with FHIR instances created post-August 19th that utilize private link.|August 22,2024 11:00 am PST|Suggested workaround to unblock is 1 Create a Private DNS Zone for azurehealthcareapis.com under the same VNET. 2 Create a new recordset to the targeted FHIR service. | --| +|Changes in private link configuration at the workspace level don't propagate to the child services.|September 4, 2024 9:00 am PST|To fix this issue, a service reprovisioning is required. To reprovision the service, reach out to the FHIR service team.|--| +|Customers accessing the FHIR Service via a private endpoint are experiencing difficulties, specifically receiving a 403 error when making API calls from within the vNet. This problem affects FHIR instances provisioned after August 19th that utilize private link.|August 22, 2024 11:00 am PST|-- | September 3, 2024 9:00 am PST| |FHIR Applications were down in EUS2 region|January 8, 2024 2 pm PST|--|January 8, 2024 4:15 pm PST| |API queries to FHIR service returned Internal Server error in UK south region |August 10, 2023 9:53 am PST|--|August 10, 2023 10:43 am PST| |
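The private DNS zone workaround in the first row can be sketched with Azure CLI. This is an illustrative, hedged example: the resource group, virtual network, record name, and private IP are placeholders, and the zone name assumes the usual privatelink subdomain convention:

```bash
# Placeholders throughout; run against the same VNET that hosts the private endpoint.
rg="<resource-group>"
zone="privatelink.azurehealthcareapis.com"

# Create the private DNS zone and link it to the virtual network.
az network private-dns zone create --resource-group "$rg" --name "$zone"
az network private-dns link vnet create --resource-group "$rg" --zone-name "$zone" \
  --name "fhir-vnet-link" --virtual-network "<vnet-name>" --registration-enabled false

# Add a recordset pointing at the targeted FHIR service's private endpoint IP.
az network private-dns record-set a add-record --resource-group "$rg" --zone-name "$zone" \
  --record-set-name "<fhir-service-name>" --ipv4-address "<private-endpoint-ip>"
```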
migrate | Common Questions Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md | To fix this issue, follow these steps to ensure that your appliance can validate :::image type="content" source="./media/common-questions-appliance/settings-inline.png" alt-text="Screenshot of Windows settings." lightbox="./media/common-questions-appliance/settings-expanded.png"::: - 1. In the certificate manager, you must see the entry for **Microsoft Root Certificate Authority 2011** and **Microsoft Code Signing PCA 2011** as shown in the following screenshots: + 1. In the certificate manager, you must see entries for **Microsoft Root Certificate Authority 2011** and **Microsoft Code Signing PCA 2011**. - :::image type="content" source="./media/common-questions-appliance/certificate-1-inline.png" alt-text="Screenshot of certificate 1." lightbox="./media/common-questions-appliance/certificate-1-expanded.png"::: + :::image type="content" source="./media/common-questions-appliance/certificate-1.png" alt-text="Screenshot of certificate 1." lightbox="./media/common-questions-appliance/certificate-1.png"::: :::image type="content" source="./media/common-questions-appliance/certificate-2-inline.png" alt-text="Screenshot of certificate 2." lightbox="./media/common-questions-appliance/certificate-2-expanded.png"::: |
migrate | Troubleshoot Network Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md | description: Provides troubleshooting tips for common errors in using Azure Migr Previously updated : 11/17/2023- Last updated : 09/09/2024+ # Troubleshoot network connectivity+ This article helps you troubleshoot network connectivity issues using Azure Migrate with private endpoints. ## Validate private endpoints configuration Make sure the private endpoint is in an approved state. 2. The properties page contains the list of private endpoints and private link FQDNs that were automatically created by Azure Migrate. 3. Select the private endpoint you want to diagnose. - a. Validate that the connection state is Approved. - b. If the connection is in a Pending state, you need to get it approved. - c. You might also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network. + a. Validate that the connection state is Approved. + b. If the connection is in a Pending state, you need to get it approved. + c. You might also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network. :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png" alt-text="Screenshot of View Private Endpoint connection."::: - ## Validate the data flow through the private endpoints+ Review the data flow metrics to verify the traffic flow through private endpoints. Select the private endpoint in the Azure Migrate: Server Assessment and Migration and modernization Properties page. This will redirect to the private endpoint overview section in Azure Private Link Center. In the left menu, select **Metrics** to view the _Data Bytes In_ and _Data Bytes Out_ information and verify the traffic flow. ## Verify DNS resolution The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You might require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues. -To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address. +To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address. **To obtain the private endpoint details to verify DNS resolution:** -1. The private endpoint details and private link resource FQDNs' information is available in the Discovery and Assessment and Migration and modernization properties pages. Select **Download DNS settings** to view the list. Note, only the private endpoints that were automatically created by Azure Migrate are listed below. -- ![Azure Migrate: Discovery and Assessment Properties](./media/how-to-use-azure-migrate-with-private-endpoints/server-assessment-properties.png) +1.
The private endpoint details and private link resource FQDNs' information is available in the Discovery and Assessment and Migration and modernization properties pages. Select **Download DNS settings** to view the list. Note, only the private endpoints that were automatically created by Azure Migrate are listed below. - [![Migration and modernization tool Properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-expanded.png#lightbox) + [![Migration and modernization tool Properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties.png)](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties.png#lightbox) -2. If you have created a private endpoint for the storage account(s) for replicating over a private network, you can obtain the private link FQDN and IP address as illustrated below. +2. If you have created a private endpoint for the storage account(s) for replicating over a private network, you can obtain the private link FQDN and IP address as illustrated below. - - Go to the **Storage account** > **Networking** > **Private endpoint connections** and select the private endpoint created. + - Go to the **Storage account** > **Networking** > **Private endpoint connections** and select the private endpoint created. :::image type="content" source="./media/troubleshoot-network-connectivity/private-endpoint.png" alt-text="Screenshot of the Private Endpoint connections."::: - - Go to **Settings** > **DNS configuration** to obtain the storage account FQDN and private IP address. + - Go to **Settings** > **DNS configuration** to obtain the storage account FQDN and private IP address. :::image type="content" source="./media/troubleshoot-network-connectivity/private-link-info.png" alt-text="Screenshot showing the Private Link F Q D N information."::: An illustrative example for DNS resolution of the storage account private link FQDN. -- Enter ```nslookup <storage-account-name>_.blob.core.windows.net.``` Replace ```<storage-account-name>``` with the name of the storage account used for Azure Migrate. +- Enter ```nslookup <storage-account-name>.blob.core.windows.net``` Replace ```<storage-account-name>``` with the name of the storage account used for Azure Migrate. You'll receive a message like this: :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/dns-resolution-example.png" alt-text="Screenshot showing a D N S resolution example."::: -- A private IP address of 10.1.0.5 is returned for the storage account. This address belongs to the private endpoint virtual network subnet. +- A private IP address of 10.1.0.5 is returned for the storage account. This address belongs to the private endpoint virtual network subnet. -You can verify the DNS resolution for other Azure Migrate artifacts using a similar approach. +You can verify the DNS resolution for other Azure Migrate artifacts using a similar approach. If the DNS resolution is incorrect, follow these steps: **Recommended**: Manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses.+ - If you use a custom DNS, review your custom DNS settings, and validate that the DNS configuration is correct.
For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration). - If you use Azure-provided DNS servers, refer to the below section for further troubleshooting. > [!Tip]-> For testing, you can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses. <br/> +> For testing, you can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses. +## Validate the Private DNS Zone -## Validate the Private DNS Zone If the DNS resolution isn't working as described in the previous section, there might be an issue with your Private DNS Zone. ### Confirm that the required Private DNS Zone resource exists -By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type. The private DNS zone is created in the same Azure resource group as the private endpoint resource group. The Azure resource group should contain private DNS zone resources with the following format: -- privatelink.vaultcore.azure.net for the key vault-- privatelink.blob.core.windows.net for the storage account-- privatelink.siterecovery.windowsazure.com for the recovery services vault (for Hyper-V and agent-based replications)-- privatelink.prod.migration.windowsazure.com - migrate project, assessment project, and discovery site. ++By default, Azure Migrate also creates a private DNS zone corresponding to the _privatelink_ subdomain for each resource type. The private DNS zone is created in the same Azure resource group as the private endpoint resource group. The Azure resource group should contain private DNS zone resources with the following format: ++- privatelink.vaultcore.azure.net for the key vault +- privatelink.blob.core.windows.net for the storage account +- privatelink.siterecovery.windowsazure.com for the recovery services vault (for Hyper-V and agent-based replications) +- privatelink.prod.migration.windowsazure.com - migrate project, assessment project, and discovery site. Azure Migrate automatically creates the private DNS zone (except for the cache/replication storage account selected by the user). You can locate the linked private DNS zone by navigating to the private endpoint page and selecting DNS configurations. Here, you should see the private DNS zone under the private DNS integration section. If the DNS zone isn't present (as shown below), [create a new Private DNS Zone r [![Create a Private DNS Zone](./media/how-to-use-azure-migrate-with-private-endpoints/create-dns-zone-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/create-dns-zone-expanded.png#lightbox) ### Confirm that the Private DNS Zone is linked to the virtual network -The private DNS zone should be linked to the virtual network that contains the private endpoint for the DNS query to resolve the private IP address of the resource endpoint. If the private DNS zone isn't linked to the correct Virtual Network, any DNS resolution from that virtual network will ignore the private DNS zone. ++The private DNS zone should be linked to the virtual network that contains the private endpoint for the DNS query to resolve the private IP address of the resource endpoint. 
If the private DNS zone isn't linked to the correct Virtual Network, any DNS resolution from that virtual network will ignore the private DNS zone. Navigate to the private DNS zone resource in the Azure portal and select the virtual network links from the left menu. You should see the virtual networks linked. [![View virtual network links](./media/how-to-use-azure-migrate-with-private-endpoints/virtual-network-links-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/virtual-network-links-expanded.png#lightbox) -This shows a list of links, each with the name of a virtual network in your subscription. The virtual network that contains the Private Endpoint resource must be listed here. Else, [follow this article](../dns/private-dns-getstarted-portal.md#link-the-virtual-network) to link the private DNS zone to a virtual network. +This shows a list of links, each with the name of a virtual network in your subscription. The virtual network that contains the Private Endpoint resource must be listed here. Else, [follow this article](../dns/private-dns-getstarted-portal.md#link-the-virtual-network) to link the private DNS zone to a virtual network. -Once the private DNS zone is linked to the virtual network, DNS requests originating from the virtual network looks for DNS records in the private DNS zone. This is required for correct address resolution to the virtual network where the private endpoint was created. +Once the private DNS zone is linked to the virtual network, DNS requests originating from the virtual network look for DNS records in the private DNS zone. This is required for correct address resolution to the virtual network where the private endpoint was created. ### Confirm that the private DNS zone contains the right A records -Go to the private DNS zone you want to troubleshoot. The Overview page shows all DNS records for that private DNS zone. Verify that a DNS A record exists for the resource. The value of the A record (the IP address) must be the resources' private IP address. If you find the A record with the wrong IP address, you must remove the wrong IP address and add a new one. It's recommended that you remove the entire A record and add a new one, and do a DNS flush on the on-premises source appliance. +Go to the private DNS zone you want to troubleshoot. The Overview page shows all DNS records for that private DNS zone. Verify that a DNS A record exists for the resource. The value of the A record (the IP address) must be the resource's private IP address. If you find the A record with the wrong IP address, you must remove the wrong IP address and add a new one. It's recommended that you remove the entire A record and add a new one, and do a DNS flush on the on-premises source appliance. An illustrative example for the storage account DNS A record in the private DNS zone: - ![DNS records](./media/how-to-use-azure-migrate-with-private-endpoints/dns-a-records.png) + ![DNS records](./media/how-to-use-azure-migrate-with-private-endpoints/dns-a-records.png) An illustrative example for the Recovery Services vault microservices DNS A records in the private DNS zone: An illustrative example for the Recovery Services vault microservices DNS A reco This is a non-exhaustive list of items that can be found in advanced or complex scenarios: -- Firewall settings, either the Azure Firewall connected to the Virtual network or a custom firewall solution deploying in the appliance machine.
-- Network peering, which might impact which DNS servers are used and how traffic is routed. -- Custom gateway (NAT) solutions might impact how traffic is routed, including traffic from DNS queries.+- Firewall settings, either the Azure Firewall connected to the Virtual network or a custom firewall solution deployed on the appliance machine. +- Network peering, which might impact which DNS servers are used and how traffic is routed. +- Custom gateway (NAT) solutions might impact how traffic is routed, including traffic from DNS queries. For more information, review the [troubleshooting guide for Private Endpoint connectivity problems.](../private-link/troubleshoot-private-endpoint-connectivity.md) ## Common issues while using Azure Migrate with private endpoints+ In this section, we'll list some of the commonly occurring issues and suggest do-it-yourself troubleshooting steps to remediate the problem. ### Appliance registration fails with the error ForbiddenToAccessKeyVault+ Azure Key Vault create or update operation failed for <_KeyVaultName_> due to the error <_ErrorMessage_> -#### Possible causes: +#### Possible causes + This issue can occur if the Azure account being used to register the appliance doesn't have the required permissions or the Azure Migrate appliance cannot access the Key Vault. -#### Remediation: +#### Remediation **Steps to troubleshoot Key Vault access issues:**+ 1. Make sure the Azure user account used to register the appliance has at least Contributor permissions on the subscription. 1. Ensure that the user trying to register the appliance has access to the Key Vault and has an access policy assigned in the Key Vault>Access Policy section. [Learn more](/azure/key-vault/general/assign-access-policy-portal)-- [Learn more](./migrate-appliance.md#appliancevmware) about the required Azure roles and permissions.++- [Learn more](./migrate-appliance.md#appliancevmware) about the required Azure roles and permissions. **Steps to troubleshoot connectivity issues to the Key Vault:** If you have enabled the appliance for private endpoint connectivity, use the following steps to troubleshoot network connectivity issues:-- Ensure that the appliance is either hosted in the same virtual network or is connected to the target Azure virtual network (where the Key Vault private endpoint has been created) over a private link. The Key Vault private endpoint is created in the virtual network selected during the project creation experience. You can verify the virtual network details in the **Azure Migrate > Properties** page. ![Azure Migrate properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-properties-page.png) -- Ensure that the appliance has network connectivity to the Key Vault over a private link.
To validate the private link connectivity, perform a DNS resolution of the Key Vault resource endpoint from the on-premises server hosting the appliance and ensure that it resolves to a private IP address.-- Go to **Azure Migrate: Discovery and assessment> Properties** to find the details of private endpoints for resources like the Key Vault created during the key generation step. +- Ensure that the appliance has network connectivity to the Key Vault over a private link. To validate the private link connectivity, perform a DNS resolution of the Key Vault resource endpoint from the on-premises server hosting the appliance and ensure that it resolves to a private IP address. +- Go to **Azure Migrate: Discovery and assessment> Properties** to find the details of private endpoints for resources like the Key Vault created during the key generation step. - ![Azure Migrate server assessment properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-assessment-properties.png) -- Select **Download DNS settings** to download the DNS mappings.+- Select **Download DNS settings** to download the DNS mappings. - ![Download DNS settings](./media/how-to-use-azure-migrate-with-private-endpoints/download-dns-settings.png) + ![Download DNS settings](./media/how-to-use-azure-migrate-with-private-endpoints/download-dns-settings.png) -- Open the command line and run the following nslookup command to verify network connectivity to the Key Vault URL mentioned in the DNS settings file. +- Open the command line and run the following nslookup command to verify network connectivity to the Key Vault URL mentioned in the DNS settings file. ```console nslookup <your-key-vault-name>.vault.azure.net If the DNS resolution is incorrect, follow these steps: ![DNS hosts file](./media/how-to-use-azure-migrate-with-private-endpoints/dns-hosts-file-1.png) -1. If you use a custom DNS server, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see +1. If you use a custom DNS server, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration). 1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you might need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected. If the DNS resolution is incorrect, follow these steps: After you've verified the connectivity, retry the registration process. -### Validate private endpoint network connectivity -You can use the Test-NetConnection command in PowerShell to check if the port is reachable from the appliance to the private endpoint. Ensure that you can resolve the Storage Account and the Key Vault for the Azure migrate project using the private IP address. +### Validate private endpoint network connectivity ++You can use the Test-NetConnection command in PowerShell to check if the port is reachable from the appliance to the private endpoint. Ensure that you can resolve the Storage Account and the Key Vault for the Azure Migrate project using the private IP address.
![Screenshot of Vault private endpoint connectivity.](./media/troubleshoot-network-connectivity/vault-network-connectivity-test.png) ![Screenshot of storage private endpoint connectivity.](./media/troubleshoot-network-connectivity/storage-network-connectivity-test.png) ### Start Discovery fails with the error AgentNotConnected+ The appliance could not initiate discovery as the on-premises agent is unable to communicate to the Azure Migrate service endpoint: <_URLname_> in Azure. ![Agent not connected error](./media/how-to-use-azure-migrate-with-private-endpoints/agent-not-connected-error.png) -#### Possible causes: +#### Possible causes + This issue can occur if the appliance is unable to reach the service endpoint(s) mentioned in the error message. -#### Remediation: +#### Remediation + Ensure that the appliance has connectivity either directly or via proxy and can resolve the service endpoint provided in the error message. If you have enabled the appliance for private endpoint connectivity, ensure that the appliance is connected to the Azure virtual network over a private link and can resolve the service endpoint(s) provided in the error message. If you have enabled the appliance for private endpoint connectivity, ensure that If you have enabled the appliance for private endpoint connectivity, use the following steps to troubleshoot network connectivity issues: -- Ensure that the appliance is either hosted in the same virtual network or is connected to the target Azure virtual network (where the private endpoints have been created) over a private link. Private endpoints for the Azure Migrate services are created in the virtual network selected during the project creation experience. You can verify the virtual network details in the **Azure Migrate > Properties** page.+- Ensure that the appliance is either hosted in the same virtual network or is connected to the target Azure virtual network (where the private endpoints have been created) over a private link. Private endpoints for the Azure Migrate services are created in the virtual network selected during the project creation experience. You can verify the virtual network details in the **Azure Migrate > Properties** page. - ![Azure Migrate properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-properties-page.png) +![Azure Migrate properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-properties-page.png) -- Ensure that the appliance has network connectivity to the service endpoint URLs and other URLs, mentioned in the error message, over a private link connection. To validate private link connectivity, perform a DNS resolution of the URLs from the on-premises server hosting the appliance and ensure that it resolves to private IP addresses.-- Go to **Azure Migrate: Discovery and assessment> Properties** to find the details of private endpoints for the service endpoints created during the key generation step. -- ![Azure Migrate server assessment properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-assessment-properties.png) +- Ensure that the appliance has network connectivity to the service endpoint URLs and other URLs, mentioned in the error message, over a private link connection. To validate private link connectivity, perform a DNS resolution of the URLs from the on-premises server hosting the appliance and ensure that it resolves to private IP addresses. 
+- Go to **Azure Migrate: Discovery and assessment> Properties** to find the details of private endpoints for the service endpoints created during the key generation step. - Select **Download DNS settings** to download the DNS mappings. - ![Download DNS settings](./media/how-to-use-azure-migrate-with-private-endpoints/download-dns-settings.png) + ![Download DNS settings](./media/how-to-use-azure-migrate-with-private-endpoints/download-dns-settings.png) |**DNS mappings containing Private endpoint URLs** | **Details** | | | | In addition to the URLs above, the appliance needs access to the following URLs | **Other public cloud URLs <br> (Public endpoint URLs)** | **Details** | | | | |*.portal.azure.com | Navigate to the Azure portal-|*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> | Used for access control and identity management by Microsoft Entra ID +|*.windows.net <br/>*.msftauth.net <br/> *.msauth.net <br/>*.microsoft.com <br/> *.live.com <br/>*.office.com <br/> *.microsoftonline.com <br/>*.microsoftonline-p.com <br/> | Used for access control and identity management by Microsoft Entra ID |management.azure.com | For triggering Azure Resource Manager deployments |*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring.-|aka.ms/* (optional) | Allow access to *also know as* links; used to download and install the latest updates for appliance services -|download.microsoft.com/download | Allow downloads from Microsoft download center +|aka.ms/* (optional) | Allow access to _also known as_ links; used to download and install the latest updates for appliance services +|download.microsoft.com/download | Allow downloads from Microsoft download center -- Open the command line and run the following nslookup command to verify privatelink connectivity to the URLs listed in the DNS settings file. Repeat this step for all URLs in the DNS settings file.+- Open the command line and run the following nslookup command to verify privatelink connectivity to the URLs listed in the DNS settings file. Repeat this step for all URLs in the DNS settings file. _**Illustration**: verifying private link connectivity to the discovery service endpoint_ ```console nslookup 04b8c9c73f3d477e966c8d00f352889c-agent.cus.disc.privatelink.prod.migration.windowsazure.com ```+ If the request can reach the discovery service endpoint over a private endpoint, you will see a result that looks like this: ```console If the DNS resolution is incorrect, follow these steps: After you've verified the connectivity, retry the discovery process. -### Import/export request fails with the error "403: This request is not authorized to perform this operation" +### Import/export request fails with the error "403: This request is not authorized to perform this operation" -The export/import/download report request fails with the error *"403: This request is not authorized to perform this operation"* for projects with private endpoint connectivity. +The export/import/download report request fails with the error _"403: This request is not authorized to perform this operation"_ for projects with private endpoint connectivity. -#### Possible causes: -This error might occur if the export/import/download request was not initiated from an authorized network.
This can happen if the import/export/download request was initiated from a client that is not connected to the Azure Migrate service (Azure virtual network) over a private network. +#### Possible causes ++This error might occur if the export/import/download request was not initiated from an authorized network. This can happen if the import/export/download request was initiated from a client that is not connected to the Azure Migrate service (Azure virtual network) over a private network. #### Remediation-**Option 1** *(recommended)*: ++**Option 1** _(recommended)_: -To resolve this error, retry the import/export/download operation from a client residing in a virtual network that is connected to Azure over a private link. You can open the Azure portal in your on-premises network or your appliance VM and retry the operation. +To resolve this error, retry the import/export/download operation from a client residing in a virtual network that is connected to Azure over a private link. You can open the Azure portal in your on-premises network or your appliance VM and retry the operation. **Option 2**: The import/export/download request makes a connection to a storage account for u To set up the storage account for public endpoint connectivity, -1. **Locate the storage account**: The storage account name is available on the Azure Migrate: Discovery and Assessment properties page. The storage account name will have the suffix *usa*. +1. **Locate the storage account**: The storage account name is available on the Azure Migrate: Discovery and Assessment properties page. The storage account name will have the suffix _usa_. - :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/server-assessment-properties.png" alt-text="Snapshot of download D N S settings."::: --2. Navigate to the storage account and edit the storage account networking properties to allow access from all/other networks. +2. Navigate to the storage account and edit the storage account networking properties to allow access from all/other networks. :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/networking-firewall-virtual-networks.png" alt-text="Snapshot of storage account networking properties."::: To set up the storage account for public endpoint connectivity, ### Using private endpoints for replication requires the Azure Migrate appliance services to be running on the following versions -#### Possible causes: +#### Possible causes + This issue can occur if the services running on the appliance are not running on their latest version. The DRA agent orchestrates server replication, and coordinates communication between replicated servers and Azure. The gateway agent sends replicated data to Azure. >[!Note]-> This error is only applicable for agentless VMware VM migrations. +> This error is only applicable for agentless VMware VM migrations. -#### Remediation: +#### Remediation 1. Validate that the services running on the appliance are updated to the latest versions. - To do so, launch the appliance configuration manager from your appliance server and select **View appliance services** from the **Setup prerequisites** panel. The appliance and its components are automatically updated. If not, follow the instructions to update the appliance services manually. + To do so, launch the appliance configuration manager from your appliance server and select **View appliance services** from the **Setup prerequisites** panel. 
The appliance and its components are automatically updated. If not, follow the instructions to update the appliance services manually. :::image type="content" source="./media/troubleshoot-network-connectivity/view-appliance-services.png" alt-text="Snapshot of View appliance services."::: ### Failed to save configuration: 504 gateway timeout -#### Possible causes: +#### Possible causes + This issue can occur if the Azure Migrate appliance cannot reach the service endpoint provided in the error message. -#### Remediation: +#### Remediation To validate the private link connection, perform a DNS resolution of the Azure Migrate service endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that they resolve to private IP addresses. **To obtain the private endpoint details to verify DNS resolution:** -The private endpoint details and private link resource FQDN information are available in the Discovery and Assessment and Migration and modernization properties pages. Select **Download DNS settings** on both the properties pages to view the full list. +The private endpoint details and private link resource FQDN information are available in the Discovery and Assessment and Migration and modernization properties pages. Select **Download DNS settings** on both the properties pages to view the full list. Next, refer to [this guidance](#verify-dns-resolution) to verify the DNS resolution. |
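The DNS checks this article describes can be spot-verified from the appliance host. The following is an illustrative sketch only; the FQDNs are placeholders to replace with entries from the downloaded DNS settings file:

```bash
# Placeholders; substitute the FQDNs from the downloaded DNS settings file.
nslookup "<your-key-vault-name>.vault.azure.net"
nslookup "<storage-account-name>.blob.core.windows.net"

# Each lookup should return a private IP address from the
# private endpoint subnet (for example, 10.1.0.x), not a public IP.
```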
migrate | Tutorial Discover Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md | To clean up, run the following script in delete mode: In the script generated by the portal, after all the user arguments (after line 19 in the following image), add `export DELETE="true"` and run the same script again. This cleans up all existing components created during appliance creation. ## Overview of Discovery results |
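As a hedged sketch of that delete-mode edit (the script file name below is hypothetical; use the script downloaded from the portal):

```bash
# Inside the portal-generated script, after the user-argument section
# that ends around line 19, add the delete flag:
export DELETE="true"

# Then run the same script again to remove the components created
# during appliance creation:
# bash ./deploy-appliance.sh
```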
modeling-simulation-workbench | Concept Chamber | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-chamber.md | +# Chambers in the Azure Modeling and Simulation Workbench -# Chamber: Azure Modeling and Simulation Workbench +In the Azure Modeling and Simulation Workbench, chambers are a security boundary around a group of virtual machines (VMs), or nodes, that share common users. A chamber provides a full-featured and secure environment for users to run engineering applications and workloads together in isolation. Chamber VMs are all on the same subnet and have no internet access. -In Azure Modeling and Simulation Workbench, a chamber is defined as a group of connected computers (nodes) that work together as a single system. A chamber provides a full-featured and secure environment for users to run engineering applications and workloads together. +## Key features -- Chambers offer optimized infrastructure, allowing users to choose from varied VM sizes, storage options, and compute resources to constitute workloads.-- Chambers enable a preconfig environment for license server access and full-featured workload tools.-- On-demand chambers are nested to Modeling and Simulation [Workbench](./concept-workbench.md) resource.+* Chambers offer optimized infrastructure, allowing users to choose from varied VM sizes, storage options, and compute resources to constitute workloads. +* Chambers enable a preconfigured, isolated environment for license server access and full-featured workload tools. +* Chambers are encapsulated in the [Workbench](./concept-workbench.md) resource. ## Chamber environment Chambers create a secure and isolated environment by adding private IP access and removing internet access. Public domain access is restricted to authorized networks over encrypted sessions enabled by the connector component. A [connector](./concept-connector.md) exists per chamber that supports the protocols established through VPN, Azure ExpressRoute, or allowlisted Public IP addresses. Only provisioned users can access the chamber environment. User provisioning is done at the chamber level using Azure's [Identity and Access Management](/azure/role-based-access-control/role-assignments-portal). This enables cross-team and/or cross-organization collaboration on the same projects through chambers. Multifactor authentication (MFA) enabled through Microsoft Entra ID is recommended to enhance your organization's security. ## Chamber storage Users can resize and tailor the chambers to support storage requirement needs throughout the design process. Chamber users can also allocate chamber VMs on demand, select the right-sized VM/CPU for the task/job at hand, and decommission the workload when the job is done to save costs.
-### Right-sizing +### Cost optimization -The right-sizing feature reduces the Azure spend by identifying idle and underutilized resources. For example: +Administrators can optimize their resource consumption without necessarily destroying resources or moving data by: -- By managing the size and number of virtual machines.-- By stopping unused workloads, connectors and chambers.-- By managing the size and performance tier of chamber storages.+* [Managing](./how-to-guide-chamber-vm.md) the size and number of virtual machines. +* [Idling](./how-to-guide-chamber-idle.md) unused chambers to reduce cost without deleting VMs or storage. +* [Managing](./how-to-guide-manage-chamber-storage.md) the size and performance tier of chamber storages. Learn more about reducing service costs using [Azure Advisor](/azure/advisor/advisor-cost-recommendations#optimize-spend-for-mariadb-mysql-and-postgresql-servers-by-right-sizing) and [right-size VMs best practices](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs#best-practice-right-size-vms). -## Related content +## Next steps -- [Connector](./concept-connector.md)+> [!div class="nextstepaction"] +> [Create a chamber VM](./how-to-guide-chamber.md) |
modeling-simulation-workbench | Concept Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-connector.md | Title: "Connectors: Azure Modeling and Simulation Workbench" -description: Overview of how the Azure Modeling and Simulation Workbench implements connectors. + Title: "Connectors: Azure Modeling and Simulation Workbench" +description: Connector implementation in Azure Modeling and Simulation Workbench. Last updated 01/01/2023-#Customer intent: As a Modeling and Simulation Workbench user, I want to understand the connector component. ++#Customer intent: As a Modeling and Simulation Workbench user, I want to understand the connector component. +# Connectors in Azure Modeling and Simulation Workbench ++Connectors define the network access method between users and the Azure Modeling and Simulation Workbench chamber. Connectors support connectivity through allowlisted public IPs, VPN, or Azure ExpressRoute. A chamber can have only one connector configured at a time. Connectors also configure copy-paste functionality into chamber VMs. Connector types are immutable and once created can't be changed to another access model. Connectors are part of the Idle mode setting to reduce cost. ++## Public IP access via allowlist ++The Workbench can be built to allow users to connect directly from the internet, allowing flexible, open access. When a Public IP Connection is built, connections are permitted using an allowlist. The allowlist uses CIDR (Classless Interdomain Routing) notation to conveniently manage access from large network ranges, such as conference centers or corporate exit nodes. Only IPs listed in the allowlist are able to make connections to the connector's associated chamber. -# Connector: Azure Modeling and Simulation Workbench ## Private Azure networking -Connectors are used to define and configure the network access between an organization's on-premises or cloud environment into the Azure Modeling and Simulation Workbench chamber. The connector supports protocols established through VPN, Azure Express Route, or network Access Control Lists. +A connector can be created for private network access from Azure virtual networks. This method is best suited where a private or controlled connection is required. Azure ExpressRoute provides a dedicated connection from an on-premises infrastructure to an Azure data center and can be peered to the Workbench. With a VPN gateway, the Workbench can use a private network with extra encryption layers. -## VPN or Azure Express Route ### VPN -For organizations who have an Azure network setup to manage access for their employees, they can have strict controls of the virtual network subnet addresses used for connecting into the chamber. At creation time of the connector, the Chamber Admin or Workbench Owner can connect a virtual network subnet with VPN gateway or ExpressRoute gateway to establish a secure connection from your on-premises network to the chamber. The subnet selection should be a non gateway subnet within the same virtual network with the gateway subnet for VPN gateway or ExpressRoute gateway. +A VPN connector can be created, which deploys infrastructure specifically for VPN access. The VPN connector is required if the chamber is accessed through a point-to-site or site-to-site VPN.
-## Allowlisted Public IP addresses ### Azure ExpressRoute -For those organizations who don't have an Azure network setup, or prefer to use the public network, they can configure their connector to allow access to the chamber via allowlisted Public IP addresses. The connector object allows the allowed IP list to be configured at creation time or added or removed dynamically after the connector object is created. +[Azure ExpressRoute](/azure/expressroute/expressroute-introduction) provides secure, dedicated, encrypted connectivity from on-premises to an Azure landing zone. A Workbench Owner must create a connector expressly for ExpressRoute, providing the necessary virtual network and supporting network infrastructure, and peering the appropriate virtual networks. -## Related content ## Next step -- [Data pipeline](./concept-data-pipeline.md)+> [!div class="nextstepaction"] +> [Create a connector](./how-to-guide-set-up-networking.md) |
modeling-simulation-workbench | Concept Data Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-data-pipeline.md | The data pipeline enables users to bring data into the [chamber](./concept-chamb ## Importing data overview -Users with access to the chamber can bring data into the chamber via AzCopy and an expiring SAS URI token they get from the chamber component. They then use AzCopy to move data into the data pipeline endpoint. The chamber recognizes the data pipeline request and moves the file into the chamber. For traceability purposes, when a file is moved into the chamber, the data pipeline automatically creates a file object in the chamber that represents the file data. +Users with access to the chamber can bring data into the chamber via AzCopy and an expiring SAS URI token they get from the chamber component. They then use AzCopy to move data into the data pipeline endpoint. The chamber recognizes the data pipeline request and moves the file into the chamber. For traceability purposes, when a file is moved into the chamber, the data pipeline automatically creates a file object in the chamber that represents the file data. ## Exporting data overview Users with access to the chamber can export data from the chamber via the data pipeline. -1. **Identify file to export.** The export process is triggered when a user places a file to export into a designated area within the chamber. A Chamber Admin or Chamber User copies the file to the data out folder within the pipeline. The data pipeline detects the copied file and creates a file object. The file creation activity is traceable in the logs and enables the next step of the data pipeline. +1. **Identify file to export.** The export process is triggered when a user places a file to export into a designated area within the chamber. A Chamber Admin or Chamber User copies the file to the data out folder within the pipeline. The data pipeline detects the copied file and creates a file object. The file creation activity is traceable in the logs and enables the next step of the data pipeline. -1. **Request file to export.** A Chamber Admin reviews files in the data pipeline and requests to export files in the data out folder in the chamber. The pipeline creates a file request object. The export request activity is traceable in the logs and enables the next step of the data pipeline. +1. **Request file to export.** A Chamber Admin reviews the files staged in the data pipeline and requests to export. The pipeline manager creates a file request object. The export request activity is traceable in the logs and enables the next step of the data pipeline. -1. **Approve/reject export request.** The Workbench Owner approves or rejects the file request object for export. The export approval step must be completed by the Workbench Owner and can't be the same person who requested to export the data. +1. **Approve/reject export request.** The Workbench Owner either approves or rejects the export file request. Only a Workbench Owner can approve or reject requests. The individual who approves or denies can't be the same person who initially requested the export. -1. **Download file to export.** If a file is approved for export, the user gets a download URI from the file request object and copies it out of the chamber using AzCopy. The URI has an expiration timestamp and must be downloaded before it expires. If the URI expires, you need to request a new download URI. +1.
**Download file.** If a file is approved for export, the user gets a download URI from the file request object and copies it out of the chamber using AzCopy. The URI has an expiration timestamp, and the file must be downloaded before the URI expires. If the URI expires, you need to request a new download URI. - > [!NOTE] - > Larger files take longer to be available to download after being approved and to download using AzCopy. Check the expiration on the download URI and request a new one if the window has expired. + > [!NOTE] + > Larger files take longer to be available to download after being approved and to download using AzCopy. Check the expiration on the download URI and request a new one if the window has expired. -## Related content +## Next steps -- [License service](./concept-license-service.md)+- [License service](./concept-license-service.md) |
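Both transfers described above can be illustrated with AzCopy. This is a hedged sketch; the file names and the SAS/download URIs are placeholders obtained from the chamber's data pipeline:

```bash
# Import: upload a file to the chamber's data pipeline endpoint.
azcopy copy "./design-data.tar.gz" "<data-pipeline-upload-SAS-URI>"

# Export: after a Workbench Owner approves the file request, download the file
# with the time-limited URI from the file request object before it expires.
azcopy copy "<approved-download-URI>" "./exported/design-data.tar.gz"
```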
modeling-simulation-workbench | Concept License Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-license-service.md | Last updated 01/01/2023 # License service: Azure Modeling and Simulation Workbench -A license service automates the installation of a license manager to help customers accelerate their engineering design. A license service is integrated into Azure Modeling and Simulation Workbench. +A license service automates the installation of a license manager to help customers accelerate their engineering design. A license service is integrated into Azure Modeling and Simulation Workbench. ## Overview Engineering design tools are widely used across industries to enable design team Here's how the license service works: -- For each deployed chamber within the workbench, we set up a license server and expose the FLEXlm HostID's to procure licenses.-- Users request tool licenses for the specific HostID.-- Once the license file is received from the tool vendor, users import it to enable the license service.+For each deployed chamber within the workbench, we set up a license server and expose the FLEXlm HostIDs to procure licenses. Users then request tool licenses referencing the specific HostID. Once the license file is received from the tool vendor, users import it to the chamber license server to enable the license service. ## Additional information -For silicon EDA, our service automation deploys license servers for each of the four common software vendors (Synopsys, Cadence, Siemens, and Ansys) as part of resource creation to enable multi-vendor flows. The workbench also supports license service beyond these common EDA tool vendors with some manual configuration. +For semiconductor Electronic Design Automation (EDA), our service automation deploys license servers for each of the four common software vendors (Synopsys, Cadence, Siemens, and Ansys) as part of resource creation to enable multi-vendor flows. The workbench also supports license service beyond these common EDA tool vendors with some manual configuration. -This flow is extendible and can also include other software vendors across industry verticals." +This flow is extensible and can also include other software vendors across industry verticals. ## Related content -- Learn more about the benefits and key features of using [Shared storage](./shared-storage.md).+- Learn more about the benefits and key features of using [shared storage](./shared-storage.md). |
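As a hedged illustration of working with the chamber license server using standard FLEXlm utilities (the port and host name are placeholder assumptions):

```bash
# Print the FLEXlm hostid referenced when requesting license files from a vendor.
lmutil lmhostid

# Query the status of the chamber license server; 27000 is the conventional
# FLEXlm default port, and the host name is a placeholder.
lmutil lmstat -a -c "27000@<license-server-host>"
```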
modeling-simulation-workbench | Concept Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-storage.md | + + Title: "Storage: Azure Modeling and Simulation Workbench" +description: Types of storage offered in Modeling and Simulation Workbench. ++++ Last updated : 08/22/2024++#CustomerIntent: As a Workbench user, I want to understand the types of storage available in the Azure Modeling and Simulation Workbench. ++# Storage and access in Azure Modeling and Simulation Workbench ++The Modeling and Simulation Workbench offers several tiers of storage classes. There are important differences in capacity and performance that make some volumes more suitable for certain situations. ++## Local storage on VMs ++Depending on the [Virtual Machine (VM) selected](./concept-vm-offerings.md), local temporary storage might not be available. Modeling and Simulation Workbench doesn't have controls for specifying data and OS disks as in conventional Azure VMs. Since VMs are frequently created and deleted, Microsoft recommends that users install applications and workspaces to the chamber or shared storage volume to improve reliability. Chamber and shared storages are high-performance and high-reliability volumes based on Azure NetApp Files. ++## Chamber-tier storage ++Chamber-accessible storage is accessible across the entire chamber, its VMs, and users. Chamber-tier storage has three classes: user home directories, data pipeline mount points, and chamber storage. ++### User home directories ++The conventional Linux `/home` directory is mounted at `/mount/sharedhome`. `/mount/sharedhome` is a single volume accessible across all chamber VMs and isn't accessible outside the chamber. This volume isn't high-performance and users are discouraged from attempting to install large files or operate intense workloads there. This directory is intended for user resource (rc), configuration, and small private directories. ++### Data pipeline mount points ++The data pipeline file structure has two directories: `/mount/datapipeline/datain` where imported data is staged and `/mount/datapipeline/dataout` where file exports are staged for file requests. This volume is large to accommodate large file imports and exports but files shouldn't be stored here long term. This mount is only for data import and export operations and isn't high-performance. ++### Chamber Storage ++Chamber Storage is the high-performance, high-capacity storage solution for use within chambers. Based on Azure NetApp Files, it's available in two high-performance tiers, and dynamically scalable after creation. Chamber Storage can be accessed at `/mount/chamberstorages` where a different directory exists for each created volume. Volumes can be sized in 4 TB increments up to the user's subscription quota. ++> [!TIP] +> Users are encouraged to place all working directories and point all application runs at a chamber storage volume for increased performance and data reliability. ++## Workbench tier shared storage ++Shared storage is accessible across select chambers in a Workbench. With each shared storage volume, you specify which chambers have access to the volume. Shared storage volumes appear under the `/mount/sharedstorage` mount point in every VM in the chamber to which access was granted. ++To enable secure cross-team and/or cross-enterprise collaboration, a shared storage resource allows for selective data sharing between chambers.
Shared storage is built on Azure NetApp Files storage volumes and is available to deploy in multiples of 4 TB. Workbench owners can create multiple shared storage instances on demand and dynamically link them to existing chambers to facilitate collaboration. ++Users who are provisioned to a specific chamber can access all shared storage volumes linked to that chamber. Once users get deprovisioned from a chamber or that chamber gets deleted, they lose access to any linked shared storage volumes. ++## Key features of shared storage ++**Performance**: Shared storage is based on Azure NetApp Files and is ideal for complex engineering or scientific workloads such as simulations. ++**Scalability**: Users can adjust the storage capacity and performance tier according to their needs, just like chamber private storage. ++**Management**: Workbench Owners can manage storage capacity, resize storage, and change performance tiers through the Azure portal. ++> [!IMPORTANT] +> All members of a chamber have access to a shared storage resource once that chamber has been granted access to the storage volume. Do not place any data in shared storage that you do not wish to share with all members of that chamber. Create a separate chamber for select users if access needs to be restricted. ++## Resources ++* [Create chamber storage](./how-to-guide-manage-chamber-storage.md) +* [Create shared storage](./how-to-guide-manage-shared-storage.md) +* [About chamber VM offerings and local storage](./concept-vm-offerings.md) |
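From inside a chamber VM, the storage tiers described in this article surface as ordinary Linux mount points, so standard tools can confirm what's available. A minimal sketch, assuming a shell session on a chamber VM (the exact volume directories depend on what you've created):

```bash
# Show capacity and usage for the workbench-managed mounts.
df -h /mount/sharedhome /mount/datapipeline/datain /mount/datapipeline/dataout

# Chamber storage volumes appear as one directory per created volume.
ls /mount/chamberstorages
```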
modeling-simulation-workbench | Concept Vm Offerings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-vm-offerings.md | + + Title: "VM offerings: Azure Modeling and Simulation Workbench" +description: VM offerings available in the Azure Modeling and Simulation Workbench. ++++ Last updated : 08/12/2024++#CustomerIntent: As a Workbench User, I want to understand what VMs are offered on the Azure Modeling and Simulation Workbench so that I can pick the right VM for my needs. ++# VM Offerings in Azure Modeling and Simulation Workbench ++Azure Modeling and Simulation Workbench offers a select set of virtual machines (VM) optimized for large, complex modeling and simulation workloads, semiconductor design, and other scientific or industrial workloads. ++This article provides an overview of the Azure VM families that are available in Modeling and Simulation Workbench. A summary of the series, common and optimal workloads, and additional information can help you choose the best VM for your scenario. ++## Quotas ++VM quotas in Modeling and Simulation Workbench are handled differently than in traditional Azure VM offerings. Modeling and Simulation Workbench operates in a Microsoft-managed environment, so VM quotas aren't directly associated with the owner's Azure subscription. Quota requests should be sent through your Microsoft account manager. ++## General purpose ++General purpose VM sizes provide a balanced CPU-to-memory ratio, ideal for testing and development, small to medium databases, and low to medium traffic web servers. They make ideal management VMs for managing chambers, facilitating file imports or exports, compiling applications, or installing applications to shared storage. ++### Dv4-series ++The 'D' family is one of Azure's general purpose VM size series. They're designed for a range of demanding workloads, such as enterprise applications, web and application servers, development and test environments, and batch processing tasks. They're favored for running enterprise-grade applications, supporting moderate to high-traffic web servers, and performing data-intensive batch processing. ++The Dv4-series runs on Intel® Xeon® Platinum 8473C (Sapphire Rapids), Intel® Xeon® Platinum 8370C (Ice Lake), or Intel® Xeon® Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. Dv4-series VMs don't have local storage. ++[View the Dv4 family page](/azure/virtual-machines/sizes/general-purpose/dv4-series) ++| Size Name | vCPUs (Qty.) | Memory (GB) | Max Bandwidth (Mbps) | +|--|--|-|-| +| Standard_D2_v4 | 2 | 8 | 5000 | +| Standard_D4_v4 | 4 | 16 | 10000 | +| Standard_D8_v4 | 8 | 32 | 12500 | +| Standard_D16_v4 | 16 | 64 | 12500 | +| Standard_D32_v4 | 32 | 128 | 16000 | +| Standard_D48_v4 | 48 | 192 | 24000 | +| Standard_D64_v4 | 64 | 256 | 30000 | ++## Compute optimized ++Compute optimized VM sizes have a high CPU-to-memory ratio. These sizes are good for medium traffic web servers, network appliances, batch processes, and application servers. ++### Fsv2-series ++The Fsv2-series runs on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors, or the Intel® Xeon® Platinum 8168 (Skylake) processors. It features a sustained all-core turbo clock speed of 3.4 GHz and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions are new on Intel Scalable Processors.
These instructions provide up to a 2X performance boost to vector processing workloads on both single and double precision floating point operations. Fsv2-series VMs feature Intel® Hyper-Threading Technology. ++Fsv2-series VMs have fixed-size local storage. ++[View the Fsv2 family page](/azure/virtual-machines/sizes/compute-optimized/fsv2-series) ++| Size | vCPUs | Memory: GiB | Temp storage (SSD) GiB | Expected network bandwidth (Mbps) | +|||||| +| Standard_F16s_v2 | 16 | 32 | 128 | 12500 | +| Standard_F32s_v2 | 32 | 64 | 256 | 16000 | +| Standard_F48s_v2 | 48 | 96 | 384 | 21000 | +| Standard_F64s_v2 | 64 | 128 | 512 | 28000 | +| Standard_F72s_v2 | 72 | 144 | 576 | 30000 | ++## Memory optimized ++Memory optimized VM sizes offer a high memory-to-CPU ratio that is great for relational database servers, medium to large caches, and in-memory analytics. ++### Esv5-series ++Esv5-series virtual machines run on Intel® Xeon® Platinum 8473C (Sapphire Rapids) or Intel® Xeon® Platinum 8370C (Ice Lake) processors, reaching an all-core turbo clock speed of up to 3.5 GHz. Esv5-series virtual machines don't have temporary storage. ++[View the Esv5 family page](/azure/virtual-machines/ev5-esv5-series) ++| Size | vCPU (Qty.) | Memory: GiB | Max bandwidth (Mbps) | +||||| +| Standard_E2s_v5 | 2 | 16 | 12500 | +| Standard_E4s_v5 | 4 | 32 | 12500 | +| Standard_E8s_v5 | 8 | 64 | 12500 | +| Standard_E16s_v5 | 16 | 128 | 12500 | +| Standard_E20s_v5 | 20 | 160 | 12500 | +| Standard_E32s_v5 | 32 | 256 | 16000 | +| Standard_E48s_v5 | 48 | 384 | 24000 | +| Standard_E64s_v5 | 64 | 512 | 30000 | +| Standard_E96s_v5 | 96 | 672 | 35000 | ++### M family ++The 'M' family is one of Azure's ultra memory-optimized VM size series. They're designed for memory-intensive workloads, such as large in-memory databases, data warehousing, and high-performance computing (HPC). Equipped with substantial RAM capacities and high vCPU counts, M-family VMs support applications and services that require massive amounts of memory and significant computational power. This makes them well-suited for handling tasks like real-time data processing, complex scientific simulations, and large-scale enterprise resource planning (ERP) systems, ensuring peak performance for the most demanding data-centric applications. ++M-series VMs have fixed-size temporary storage. ++[View the M series page](/azure/virtual-machines/m-series) ++| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Expected network bandwidth (Mbps) | +|-||-||--| +| Standard_M64s | 64 | 1024 | 2048 | 16000 | +| Standard_M128s | 128 | 2048 | 4096 | 30000 | +| Standard_M64m | 64 | 1792 | 7168 | 16000 | +| Standard_M128m | 128 | 3892 | 14336 | 32000 | ++## Next step ++> [!div class="nextstepaction"] +> [Create a chamber VM](./how-to-guide-chamber-vm.md) |
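For a side-by-side comparison of these sizes outside the portal, the standard Azure CLI can list the underlying VM specs. A hedged sketch (the region and the grep filter are illustrative; quota for the workbench itself is still handled through your Microsoft account manager):

```bash
# List vCPU and memory specs for the Dv4 sizes offered by the workbench.
az vm list-sizes --location eastus --output table \
  | grep -E 'Standard_D(2|4|8|16|32|48|64)_v4\b'
```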
modeling-simulation-workbench | Concept Workbench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-workbench.md | -# Customer intent: As a Modeling and Simulation Workbench user, I want to understand the workbench component. +# Customer intent: As a Modeling and Simulation Workbench user, I want to understand workbench components. - # Workbench: Azure Modeling and Simulation Workbench -An Azure Modeling and Simulation Workbench is a placeholder for housing several workbench components for users. A workbench refers to a series of supporting services that optimize workload performance in Azure Modeling and Simulation Workbench, such as: computing, storage, and networking. --## Workbench components --A workbench hosts Azure resources in a closed environment of virtual machines, storage devices, and databases. A workbench is the parent container for [chamber](./concept-chamber.md) objects that run engineering applications and workloads in isolated environments. +The Azure Modeling and Simulation Workbench is a Platform-as-a-Service (PaaS) that provides a secure environment for managed, cloud-based collaboration and access to large-scale compute infrastructure. The Modeling and Simulation Workbench provides conventional cloud resources, such as computing, storage, and networking, in an isolated, managed environment. The components are arranged as a hierarchy of containers, presented in the user's subscription but deployed in Microsoft's managed environment. Multiple enterprises can collaboratively work on projects within a workbench using Modeling and Simulation Workbench's secure design environment. -Multiple teams can work on shared projects within a workbench using Modeling and Simulation Workbench's collaborative and secure design environment. +This article presents an overview of the individual components that make up the Azure Modeling and Simulation Workbench. -The chamber and [connector](./concept-connector.md) have its own admin that manages the space, the components, and its users. Authorized users can access and modify systems and transform the components and services as per their project requirements. Users can also delete high-performance VMs after use to save on costs. +## Workbench -## Workbench infrastructure +A Workbench is the top-level container for the Azure Modeling and Simulation Workbench. It hosts conventional Azure resources in a closed environment. Workbenches house user and data isolation chambers, virtual machines, and networking infrastructure. A Workbench has no managing controls, and only a Workbench Owner can deploy one. -The infrastructure of the Azure Modeling and Simulation Workbench is optimized for compute and memory intensive applications. The workbenches ensure maximum throughput and performance for engineering workloads, supported by high performance file systems and efficient job scheduling. +## Chambers -The workbench includes the following types of components: +[Chambers](./concept-chamber.md) are contained within a Workbench object and contain user data and workloads in an isolated environment. Users assigned to a chamber only have visibility into users and resources in that same chamber. Compute resources are deployed into a chamber as Workload VMs and several classes of storage are available. ### Compute -Azure offers varied classes of virtual machines (VMs) that span diverse memory-to-core ratios and suit different workload requirements.
Some of the VMs include General purpose VMs, Compute optimized VMs, and Memory optimized VMs. +Chamber Workload VMs are the Workbench's compute resource and the encapsulating container for traditional VMs. Unlike traditional Infrastructure-as-a-Service offerings, Workload VMs are created with a sensible set of defaults, eliminating the expertise required to securely deploy a VM into a cloud environment. Workload VMs are isolated from the internet and VMs in other chambers, but have access to all VMs in the same chamber. User provisioning is automated at the chamber level. Chamber VMs offer a select set of the Azure virtual machine (VM) offerings that span diverse memory-to-core ratios and suit different workload requirements. VM offerings include general purpose, compute optimized, and memory optimized VMs. -### Storage +## Storage -Key storage components work together to provide high performance for engineering workflows. The storage service enables you to migrate and run enterprise file applications. +Storage components work together to provide high performance for engineering workflows. The storage service enables you to migrate and run enterprise file applications. Modeling and Simulation Workbench offers a range of storage configurations providing high-performance, shared, or isolated access. Storage is preconfigured to be accessible to chambers or between a select set of chambers. -### Networking +## Networking -The Azure virtual network enables over-provisioned network resources with high bandwidth and low latency. Network quality and throughput impacts job runtime drastically. Azure offers built-in, custom options for fast, scalable, and secure connectivity aided by its wide and private optical-fiber capacity, enabling low-latency access globally. Azure also offers accelerated networking to reduce the number of hops and deliver improved performance. +Networking is presented as a [Connector](./concept-connector.md) object that attaches to a chamber. Connectors can be provisioned to allow connection directly from the internet or an Azure virtual network. Azure virtual network connections enable over-provisioned network resources with high bandwidth and low latency. Network quality and throughput impact job runtime drastically. Azure offers built-in, custom options for fast, scalable, and secure connectivity aided by its wide and private optical-fiber capacity, enabling low-latency access globally. Azure also offers accelerated networking to reduce the number of hops and deliver improved performance. +<!-- - [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) - The network service creates private connections between the infrastructure on-premises without traversing the public internet. The service offers immense reliability, quicker speeds, and lower latencies than regular internet connections. - [Azure VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways) - A VPN gateway is a specific type of virtual network gateway, sending encrypted traffic between an Azure virtual network and an on-premises network over the public network. -- Remote desktop service - As robust security is mandatory to protect IP within and outside chambers, remote desktop access needs to be secured, with custom restrictions on data transfer through the sessions.
Customer IT admins can enable multifactor authentication through [Microsoft Entra ID](/azure/active-directory/) and provision role assignments to Modeling and Simulation Workbench users.+- Remote desktop service - As robust security is mandatory to protect IP within and outside chambers, remote desktop access needs to be secured, with custom restrictions on data transfer through the sessions. Customer IT admins can enable multifactor authentication through [Microsoft Entra ID](/azure/active-directory/) and provision role assignments to Modeling and Simulation Workbench users. --> ## Related content -- [User personas](./concept-user-personas.md)+* [Storage](./concept-storage.md) +* [User personas](./concept-user-personas.md) +* [Chambers](./concept-chamber.md) |
modeling-simulation-workbench | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/disaster-recovery.md | Title: Disaster recovery for Modeling and Simulation Workbench -description: This article provides an overview of disaster recovery for Azure Modeling and Simulation Workbench workbench component. + Title: "Disaster recovery: Azure Modeling and Simulation Workbench" +description: This article provides an overview of disaster recovery for Azure Modeling and Simulation Workbench. Last updated 08/21/2024 # Disaster recovery for Modeling and Simulation Workbench -This article explains how to configure a disaster recovery environment for Azure Modeling and Simulation Workbench. Azure data center outages are rare but can last anywhere from a few minutes to hours. Data Center outages can cause disruption to the environments hosted in that data center. By following the steps in this article, Azure Modeling and Simulation workbench customers will continue to operate in the cloud in the event of a data center outage for the primary region hosting your workbench instance. +This article explains how to configure a disaster recovery environment for Azure Modeling and Simulation Workbench. Azure data center outages are rare but can last anywhere from a few minutes to hours. Data center outages can cause disruption to the environments hosted in that data center. This article gives Azure Modeling and Simulation workbench customers a resource to continue operations in the cloud in the event of a data center outage in the primary region hosting your workbench instance. Planning for disaster recovery involves identifying expected recovery point objectives (RPO) and recovery time objectives (RTO) for your instance. Based upon your risk tolerance and expected RPO, follow these instructions at an interval appropriate for your business needs. -A typical disaster recovery workflow starts with a failure of a critical service in your primary region. As the issue gets investigated, Azure publishes an expected time for the primary region to recover. If this timeframe is not acceptable for business continuity, and the problem does not impact your secondary region, you would start the process to fail over to the secondary region. +A typical disaster recovery workflow starts with a failure of a critical service in your primary region. As the issue gets investigated, Azure publishes an expected time for the primary region to recover. If this timeframe isn't acceptable for business continuity, and the problem doesn't impact your secondary region, you would start the process to fail over to the secondary region. ## Achieving business continuity for Azure Modeling and Simulation Workbench-To be prepared for a data center outage, you can have a Modeling and Simulation workbench instance provisioned in a secondary region.
++These Workbench resources can be configured to match the resources that exist in the primary Azure Modeling and Simulation workbench instance. Users in the workbench instance environment can be provisioned ahead of time, or when you switch to the secondary region. Chamber and connector resources can be put in a stopped state post deployment to invoke idle billing meters when not being used actively. Alternatively, if you don't want to have an Azure Modeling and Simulation Workbench instance provisioned in the secondary region until an outage impacts your primary region, follow the provided steps in the Quickstart, but stop before creating the Azure Modeling and Simulation Workbench instance in the secondary region. That step can be executed when you choose to create the workbench resources in the secondary region as a failover. Alternatively, if you don't want to have an Azure Modeling and Simulation Workbe - Ensure that the services and features that your account uses are supported in the target secondary region. -## Verify Entra ID tenant +## Verify Microsoft Entra ID tenant information -The workspace source and destination can be in the same subscription. If source and destination for workbench are different subscriptions, the subscriptions must exist within the same Entra ID tenant. Use Azure PowerShell to verify that both subscriptions have the same tenant ID. +The workspace source and destination can be in the same subscription. If the source and destination for the workbench are in different subscriptions, the subscriptions must exist within the same Microsoft Entra ID tenant. Use Azure PowerShell to verify that both subscriptions have the same tenant ID.

```powershell
(Get-AzSubscription -SubscriptionId <your-source-subscription>).TenantId
(Get-AzSubscription -SubscriptionId <your-target-subscription>).TenantId
```

A list of supported regions can be found on the Azure product availability roadmap. Then, create a backup of your Azure Key Vault and keys used by Azure Modeling and Simulation in Key Vault including: -1. Application Client Id key -2. Application Secret key +1. Application Client key +2. Application Secret key ## Configure the new instance -In the event of your primary region failure, and decision to work in a backup region, you would create a Modeling and Simulation Workbench instance in your backup region. +In the event of a primary region failure and a decision to work in a backup region, you would create a Modeling and Simulation Workbench instance in your backup region. -1. Register to the Azure Modeling and Simulation Workbench Resource Provider as described in [Create an Azure Modeling and Simulation Workbench](/azure/modeling-simulation-workbench/quickstart-create-portal#register-azure-modeling-and-simulation-workbench-resource-provider). +1. Register to the Azure Modeling and Simulation Workbench Resource Provider as described in [Create an Azure Modeling and Simulation Workbench](/azure/modeling-simulation-workbench/quickstart-create-portal#register-azure-modeling-and-simulation-workbench-resource-provider). 1. Create an Azure Modeling and Simulation Workbench using this section of the Quickstart. -1. If desired, upload data into the new backup instance following Upload Files section of instructions. +1. Upload data into the new backup instance following the Upload Files section of the instructions, if necessary. You can now do your work in the new workbench instance created in the backup region. - ## Cleanup Once your primary region is up and operating, and you no longer need your backup instance, you can delete it. |
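For the Key Vault backup step, the standard Azure CLI provides a portable way to snapshot the keys listed above. A minimal sketch, assuming placeholder vault and secret names (substitute whatever your workbench's application registration actually uses):

```bash
# Back up the application registration secrets used by the workbench.
# "kv-workbench", "app-client-id", and "app-secret" are placeholder names.
az keyvault secret backup --vault-name kv-workbench --name app-client-id --file app-client-id.bak
az keyvault secret backup --vault-name kv-workbench --name app-secret --file app-secret.bak
```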
modeling-simulation-workbench | How To Guide Add Redirect Uris | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-add-redirect-uris.md | + + Title: "Add redirect URIs: Azure Modeling and Simulation Workbench" +description: Add redirect URIs for Azure Modeling and Simulation Workbench. ++++ Last updated : 08/20/2024++#CustomerIntent: As an administrator, I want to add authentication URIs from the Azure Modeling and Simulation Workbench to the Microsoft Entra ID application registration. ++# Add redirect URIs for Modeling and Simulation Workbench ++A redirect Uniform Resource Identifier (URI) is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication. Each connector has two redirect URIs that must be registered in Microsoft Entra ID. A single Application Registration handles all the redirects and security tokens for a workbench. ++## Prerequisites ++* An application registration in Microsoft Entra ID for the Azure Modeling and Simulation Workbench +* A Workbench instance with a chamber and connector created. ++## Add redirect URIs for the application in Microsoft Entra ID + |
modeling-simulation-workbench | How To Guide Chamber Idle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-chamber-idle.md | + + Title: "Manage chamber idle mode: Azure Modeling and Simulation Workbench" +description: Place a chamber into idle mode to optimize cost in Azure Modeling and Simulation Workbench. ++++ Last updated : 08/17/2024++#CustomerIntent: As a Chamber Admin, I want to reduce cost and place a chamber into idle mode. ++# Manage chamber idle mode ++To optimize cost management, chambers can be put into an idle state to reduce running costs while still maintaining the core infrastructure. Before enabling chamber idle, ensure that there are no running workloads and no active remote desktop user connections. ++> [!IMPORTANT] +> To place a chamber into or take a chamber out of idle mode, you must perform the operations on both the chamber and its connector in the correct order, waiting for each operation to successfully complete before proceeding. ++## Prerequisites ++* An instance of Azure Modeling and Simulation Design Workbench with at least one chamber and connector. +* A user account with a Chamber Admin role assignment for the target chambers. ++## Place a chamber into idle state ++To place a chamber into idle, the connector must be stopped before the chamber is stopped. Perform the following steps in the Azure portal. ++1. Navigate to the chamber to be placed into idle. +1. From the **Settings** menu at the left, select **Connector**. +1. Select the connector to be stopped. +1. From the top action bar, select **Stop**. Connectors typically take about 8 minutes to shut down and dispose of resources. ++ :::image type="content" source="media/howtoguide-idle/connector-stop.png" alt-text="Screenshot of connector action bar with Stop button highlighted in red."::: ++ Wait until the connector completely stops and the Power state shows **Stopped**. ++ :::image type="content" source="media/howtoguide-idle/connector-verify-stop.png" alt-text="Screenshot of connector overview with Power state of status highlighted in red."::: ++1. Navigate back to the parent chamber. +1. From the top action bar, select **Stop**. Chambers typically take about 8 minutes to shut down and dispose of resources. ++ :::image type="content" source="media/howtoguide-idle/chamber-stop.png" alt-text="Screenshot of chamber action bar with Stop button highlighted in red."::: ++ Wait until the chamber completely stops and the Power state shows **Stopped**. ++ :::image type="content" source="media/howtoguide-idle/chamber-verify-stop.png" alt-text="Screenshot of chamber overview with Power state as Stopped."::: ++ > [!TIP] + > The Activity log will show successful stop of both chamber and connector. ++ :::image type="content" source="media/howtoguide-idle/connector-log-stop.png" alt-text="Screenshot of activity log showing chamber successfully stopped."::: ++ :::image type="content" source="media/howtoguide-idle/connector-log-stop.png" alt-text="Screenshot of activity log showing connector successfully stopped."::: ++## Take a chamber out of idle state ++To take a chamber out of idle state, both the chamber and connector must be started in the correct order. The chamber must be fully running before the connector can be started. ++1. Navigate to the chamber to be taken out of idle state. +1. From the top action bar, select **Start**. Chambers typically take about 8 minutes to start and create resources.
Before proceeding, ensure that the chamber is successfully running by verifying that the Power state of the chamber shows **Running**. +1. Navigate to the chamber's connector by selecting **Connector** from the **Settings** menu at the left. +1. Select the connector to be started. +1. From the top action bar, select **Start**. Connectors typically take about 8 minutes to start and create resources. The Power state of the connector must show as **Running** before a connector can be used for connecting to a desktop or file upload and download. |
modeling-simulation-workbench | How To Guide Chamber Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-chamber-vm.md | + + Title: "Create and manage chamber VMs: Azure Modeling and Simulation Workbench" +description: How to create and manage chamber VMs in the Azure Modeling and Simulation Workbench. ++++ Last updated : 08/16/2024+#CustomerIntent: As a Workbench Owner, I want to create and manage a chamber to isolate users, workloads and data. ++# Chamber VMs ++Chamber workload virtual machines (VM) are Azure VMs managed by the Workbench. Chamber VMs don't require expert users to select, deploy, configure, or manage. VMs are deployed quickly, preconfigured with drivers for the most common EDA (Electronic Design Automation) workloads, and with access to thousands of managed applications. ++Chamber VMs deploy with little configuration in as little as 10 minutes. Chamber VMs are automatically: ++* Deployed directly into the chamber with no additional configuration +* Networked to the chamber's private virtual network +* Mounted to any existing chamber storage, user home directories, and the [Data Pipeline](./concept-data-pipeline.md) mount points +* Preconfigured with drivers to work with major semiconductor design tools ++User administration is managed from the parent chamber and all users of a chamber have access to all VMs in the chamber. ++This article shows how to create, manage, and delete a chamber VM. ++## Prerequisites ++* A Modeling and Simulation Workbench with at least one chamber. +* A user account with Workbench Owner privileges (the Subscription Owner or Subscription Contributor role). ++## Create a chamber VM ++A chamber VM can only be deployed into an existing chamber. Once deployed, the chamber VM can't be moved to other chambers or renamed. The location of a chamber VM can't be specified as VMs are deployed to the same location as the parent Workbench. ++The Azure Modeling and Simulation Workbench offers a select set of high-performance VMs. To see the VM offerings and features, refer to [Modeling and Simulation Workbench VM Offerings](./concept-vm-offerings.md). ++All VMs are created with Red Hat Enterprise Linux version 8.8. ++1. From the chamber overview page, select **Chamber VM** from the **Settings** menu in the left pane. ++ :::image type="content" source="media/howtoguide-create-chamber-vm/chamber-vm-menu.png" alt-text="Screenshot of chamber settings menu with chamber VM in red box."::: ++1. On the chamber VM page, select **Create** from the action bar. ++ :::image type="content" source="media/howtoguide-create-chamber-vm/chamber-vm-create.png" alt-text="Screenshot of chamber VM action bar with 'Create' button annotated in red box."::: ++1. In the Create chamber VM dialog, enter the name of the chamber VM, the VM type, and the number of VMs to be created (default is 1). The VM image type will be expanded in the future to support software for other scientific and engineering applications. ++ :::image type="content" source="media/howtoguide-create-chamber-vm/chamber-vm-create-dialog.png" alt-text="Screenshot of Create chamber VM dialog with textboxes and Review + create button marked in red."::: ++ Read about the [Chamber VM offerings](./concept-vm-offerings.md) to help you select the correct VM for your workload. ++1. Select **Review + create**. +1. If prevalidation checks are successful, the **Create** button is enabled. Select **Create**. A chamber VM typically takes up to 10 minutes to deploy.
Once deployed, the **Power state** status shows as "Running". ++## Manage a chamber VM ++Once a chamber VM is created, a Workbench Owner or Chamber Admin can administer it. Chamber VMs can only be stopped, started, or restarted. Chamber VMs can't be migrated or resized. Chamber VMs don't accept user role assignments. User administration happens at the chamber level. Chambers have access to shared storage (shared between chambers) and chamber storage, which is accessible only within the chamber by the members. IP addresses are managed by the deployment engine. Data and OS disks aren't configurable in chamber VMs. Microsoft recommends installing all your applications and data on the chamber storage volumes to allow you to create and destroy VMs that are instantly ready for use. All VMs have access to the chamber license servers. ++* [Manage users](./how-to-guide-manage-users.md) +* [How to start, stop, or restart a chamber](./how-to-guide-start-stop-restart.md) +* [Manage Storage](./how-to-guide-manage-chamber-storage.md) +* [About license servers](./concept-license-service.md) ++## Delete a chamber VM ++If a chamber VM is no longer needed, it can be deleted. VMs don't need to be stopped before being deleted. Once a chamber VM is deleted, it can't be recovered. ++1. Navigate to the chamber VM. +1. Select **Delete** from the action bar. Deleting a chamber VM can take up to 10 minutes. ++## Related content ++* [Manage users](./how-to-guide-manage-users.md) +* [Start, stop, or restart a chamber](./how-to-guide-start-stop-restart.md) +* [Manage chamber storage](./how-to-guide-manage-chamber-storage.md) +* [License servers](./concept-license-service.md) |
modeling-simulation-workbench | How To Guide Chamber | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-chamber.md | + + Title: "Create and manage chambers: Azure Modeling and Simulation Workbench" +description: How to create and manage a chamber in the Azure Modeling and Simulation Workbench. ++++ Last updated : 08/16/2024++#CustomerIntent: As a Workbench Owner, I want to create and manage a chamber to isolate users, workloads and data. ++# Create a chamber in the Azure Modeling and Simulation Workbench ++The Azure Modeling and Simulation Workbench provides a secure, cloud-based environment to collaborate with other organizations. Chambers are isolated areas with no access to the internet or other chambers, making them ideal work environments for enterprises. In a complex project where isolation is needed, a chamber should be created for each independent work group or enterprise that requires confidentiality and control of its data. ++This article shows how to create, manage, and delete a chamber. ++## Prerequisites ++* A top-level Modeling and Simulation Workbench is created. +* A user account with Workbench Owner privileges (the Subscription Owner or Subscription Contributor role). ++## Create a chamber ++A Workbench Owner can create a chamber in an existing Workbench. Chambers can't be renamed or moved once created, nor can the location be specified. Chambers are deployed to the same location as the parent Workbench. ++1. From the Workbench overview page, select **Chamber** from the **Settings** menu in the left pane. +1. On the chamber page, select **Create** from the action bar. ++ :::image type="content" source="media/howtoguide-create-chamber/chamber-create-button.png" alt-text="Screenshot of chamber action bar with Create button annotated in red box."::: ++1. In the next dialog, only the name of a chamber is required. Enter a name and select **Next**. ++ :::image type="content" source="media/howtoguide-create-chamber/chamber-create-name.png" alt-text="Screenshot of chamber name dialog."::: ++1. If prevalidation checks are successful, select **Create**. A chamber typically takes around 15 minutes to deploy. ++## Manage a chamber ++Once a chamber is created, a Workbench Owner or Chamber Admin can administer it. A chamber can be stopped, started, or restarted. Chambers are the scope of user role assignments and the defining boundary for data. ++* [Manage users](./how-to-guide-manage-users.md) +* [Manage license servers or upload licenses](./how-to-guide-licenses.md) +* [Start, stop, or restart a chamber](./how-to-guide-start-stop-restart.md) +* [Upload data](./how-to-guide-upload-data.md) +* [Download data](./how-to-guide-download-data.md) ++## Delete a chamber ++If a chamber is no longer needed, it can be deleted only if it's empty. All nested resources under the chamber must first be deleted before the chamber can be deleted. A chamber's nested resources include virtual machines (VM), connectors, and chamber storage. Once a chamber is deleted, it can't be recovered. ++1. Navigate to the chamber. +1. Ensure that all nested resources are deleted. From the **Settings** menu at the left, visit each of the nested resources and ensure that they're empty. Visit the [Deleting nested resources](#deleting-nested-resources) section to learn how to delete each of those resources. +1. Select **Delete** from the action bar. Deleting a chamber can take up to 10 minutes.
++### Deleting nested resources ++Nested resources of a chamber must first be deleted before the top-level chamber can be deleted. A chamber can't be deleted if it still has a connector, chamber storage, or VM deployed within it. License servers are chamber infrastructure, aren't user deployable, and are exempt from this requirement. ++* [Manage connectors](./how-to-guide-set-up-networking.md) +* [Manage chamber storage](./how-to-guide-manage-chamber-storage.md) +* [Manage chamber VMs](./how-to-guide-chamber-vm.md) ++## Related content ++* [Manage license servers](./how-to-guide-licenses.md) |
modeling-simulation-workbench | How To Guide Configure Firewall Red Hat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-configure-firewall-red-hat.md | + + Title: "Configure Red Hat firewalls: Azure Modeling and Simulation Workbench" +description: Configure firewalls in Red Hat VMs in Azure Modeling and Simulation Workbench. ++++ Last updated : 08/18/2024++#CustomerIntent: As a Chamber Admin, I want to configure firewalls on individual VMs to allow applications to communicate within a chamber. ++# Configure firewalls in Red Hat ++Chamber VMs run Red Hat Enterprise Linux as the operating system. By default, the firewall is configured to deny all inbound connections except to managed services. To allow inbound communication, rules must be added to the firewall to allow traffic to pass. Similarly, if a rule is no longer needed, it should be removed. ++This article presents the most common firewall configuration commands. For full documentation or more complex scenarios, see [Chapter 40. Using and configuring `firewalld`](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/using-and-configuring-firewalld_configuring-and-managing-networking) of the Red Hat Enterprise Linux 8 documentation. ++All the operations referenced here require `sudo` privileges and thus need the Chamber Admin role. ++> [!IMPORTANT]
+> VMs can only communicate with other VMs in the same chamber. Chamber-to-chamber traffic is never permitted and modifying firewall rules won't enable inter-chamber traffic. ++## Prerequisites +++## List all open ports ++List all currently open ports and associated protocols. +
+```bash
+$ sudo firewall-cmd --list-all
+public (active)
+  target: default
+  icmp-block-inversion: no
+  interfaces: eth0
+  sources:
+  ports: 6817-6819/tcp 60001-63000/tcp
+  protocols:
+  forward: no
+  masquerade: no
+  forward-ports:
+  source-ports:
+  icmp-blocks:
+  rich rules:
+```
++## Open ports for traffic ++You can open a single or consecutive range of ports for network traffic. Changes to `firewalld` are temporary and don't persist if the service is restarted or reloaded unless committed. ++### Open a single port ++Open a single port with `firewalld` for a given protocol using the `--add-port=portnumber/porttype` option. This example opens port 33500/TCP. +
+```bash
+$ sudo firewall-cmd --add-port=33500/tcp
+success
+```
++Commit the rule to the permanent set: +
+```bash
+$ sudo firewall-cmd --runtime-to-permanent
+success
+```
++### Open a range of ports ++Open a range of ports with `firewalld` for a specified protocol with the `--add-port=startport-endport/porttype` option. This command is useful in distributed computing scenarios where workers are dispatched to a large number of nodes and multiple workers might be on the same physical node. This example opens 100 consecutive ports starting at port 5000 with the UDP protocol. +
+```bash
+$ sudo firewall-cmd --add-port=5000-5099/udp
+success
+```
++Commit the rule to the permanent set: +
+```bash
+$ sudo firewall-cmd --runtime-to-permanent
+success
+```
++## Remove port rules ++If rules are no longer needed, they can be removed with the same notation as adding, using the `--remove-port=portnumber/porttype` option.
This example removes a single port: +
+```bash
+$ sudo firewall-cmd --remove-port=33500/tcp
+success
+```
++Commit the rule to the permanent set: +
+```bash
+$ sudo firewall-cmd --runtime-to-permanent
+success
+```
++## Related content ++* [Upload data to a chamber](./how-to-guide-upload-data.md) +* [Download data from a chamber](./how-to-guide-download-data.md) |
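To confirm that a rule was applied or removed, `firewalld` can be queried directly; a short sketch using the same example port:

```bash
# Returns "yes" if the port is open in the runtime rule set, "no" otherwise.
sudo firewall-cmd --query-port=33500/tcp

# Check the permanent rule set independently of the runtime set.
sudo firewall-cmd --permanent --query-port=33500/tcp
```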
modeling-simulation-workbench | How To Guide Download Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-download-data.md | Title: Export data from Azure Modeling and Simulation Workbench + Title: "Export data: Azure Modeling and Simulation Workbench" description: Learn how to export data from a chamber in Azure Modeling and Simulation Workbench. This article explains the steps to export data from Azure Modeling and Simulatio - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An instance of Azure Modeling and Simulation Design Workbench installed with at least one chamber.-- A user who's a Workbench Owner (Subscription Owner or Subscription Contributor), and a user who's provisioned as a Chamber Admin or Chamber User.+- A user who's a Workbench Owner (Subscription Owner or Subscription Contributor), and a user provisioned as a Chamber Admin or Chamber User. - [AzCopy](/azure/storage/common/storage-ref-azcopy) installed on the machine, with access to the configured network for the target chamber. Only machines on the specified network path for the chamber can export files. ## Sign in to the Azure portal To export a file, you first need to copy the file to the data-out folder in the 1. On the left menu, select **Settings** > **Chamber**. A resource list appears. Select the chamber that you want to export data from. -1. On the left menu, select **Settings** > **Connector**. In the resource list, select the displayed connector. +1. On the left menu, select **Settings** > **Connector**. In the resource list, select the displayed connector. 1. Select the **Dashboard URL** link to open the ETX dashboard. Complete the following steps to download an approved export file from a chamber: ## Next steps -To learn how to manage chamber storage in Azure Modeling and Simulation Workbench, see [Manage chamber storage](./how-to-guide-manage-storage.md). +To learn how to manage chamber storage in Azure Modeling and Simulation Workbench, see [Manage chamber storage](./how-to-guide-manage-chamber-storage.md). |
modeling-simulation-workbench | How To Guide Enable Copy Paste | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-enable-copy-paste.md | + + Title: "Enable copy/paste: Azure Modeling and Simulation Workbench" +description: Enable copy/paste functionality in Azure Modeling and Simulation Workbench. ++++ Last updated : 08/25/2024++#CustomerIntent: As a Workbench administrator, I want to enable copy/paste functionality to allow users to copy and paste into and out of a Workbench VM. ++# Enable copy/paste in Azure Modeling and Simulation Workbench ++Copy/paste functionality is disabled by default for all chambers created in the Azure Modeling and Simulation Workbench. Workbench Owners can enable copy/paste for an entire chamber. Enabling copy/paste allows users to move text data between their local workstations and chamber VMs. Enabling copy/paste changes the security boundary of the service, since data can be copied out directly, bypassing the data pipeline controls. ++The Workbench Owner can enable copy/paste when the connector is first created, or later when needed. This article shows how to manage copy/paste configuration using OpenText Exceed TurboX (ETX), the remote client solution. ++> [!WARNING] +> If copy/paste is enabled, users can export data through clipboard operations without having to request file downloads. Only enable copy/paste if these additional controls aren't needed. ++## Prerequisites ++++## View the current setting of copy/paste ++The current setting of the control can be viewed on the connector overview page. ++1. Navigate to the connector of the chamber to be checked. +1. On the Overview page, check the **Copy/paste** status in the right column. ++ :::image type="content" source="media/howtoguide-enable-copy-paste/copy-paste-status.png" alt-text="Screenshot of connector overview with copy/paste status outlined in red."::: ++## Enable or disable copy/paste ++1. Navigate to the connector of the chamber to be configured. +1. On the Overview page, select **Configure copy/paste** from the action bar. ++ :::image type="content" source="media/howtoguide-enable-copy-paste/copy-paste-configure-button.png" alt-text="Screenshot of connector overview with copy/paste configuration button highlighted in red."::: +The copy/paste control dialog appears. ++1. Select the desired setting, then select **Save**. ++ :::image type="content" source="media/howtoguide-enable-copy-paste/copy-paste-control.png" alt-text="Screenshot of copy/paste control dialog showing enable and disable radio buttons."::: ++## Copy and paste using the client ++When copying from or pasting to a virtual machine (VM), you must use the ETX client's controls. ++#### [Windows client](#tab/windows) ++In the Windows native ETX client, the copy/paste menu can be accessed from the application menu in the upper left. ++1. Select the application icon at the far left of the title bar. +1. Select **Edit**, then either **Copy X Selection** or **Paste to X Selection**. +1. Highlighting either option produces another flyout menu of sources or destinations. ++ :::image type="content" source="media/howtoguide-enable-copy-paste/etx-windows-copy-paste-menu.png" alt-text="Screenshot of Windows ETX copy/paste menu."::: ++#### [Web client](#tab/web) ++In the web client, the menu is accessed from the main screen. ++1. Select the blue box and white arrow in the left corner. The menu flies out and a menu icon is displayed. +1.
Select the menu icon to reveal copy/paste actions. ++ :::image type="content" source="media/howtoguide-enable-copy-paste/etx-web-client-copy-paste.png" alt-text="Screenshot of ETX web client copy/paste menu."::: ++++## Related content ++* [Manage connectors](./how-to-guide-set-up-networking.md) +* [Upload data](./how-to-guide-upload-data.md) +* [Download data](./how-to-guide-download-data.md) |
modeling-simulation-workbench | How To Guide Licenses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-licenses.md | Title: Manage a license service for Azure Modeling and Simulation Workbench -description: In this how-to guide, you learn how to upload a license file to activate a license service for an Azure Modeling and Simulation Workbench chamber. + Title: "Manage licenses: Azure Modeling and Simulation Workbench" +description: Learn how to upload a license file to the chamber license service in Azure Modeling and Simulation Workbench. This article shows you how to upload a license file and activate a license service. This section lists the steps to upload a license for a FLEXlm-based tool. First, you get the FLEXlm host ID or the virtual machine (VM) universally unique ID (UUID) from the chamber. Then you provide that value to the license vendor to get the license file. After you get the license file from the vendor, you upload it to the chamber and activate it. 1. Open your web browser and go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal.-1. Search for **Modeling and Simulation Workbench**. Select the workbench that you want to provision from the resource list. +1. Search for **Modeling and Simulation Workbench**. From the resource list, select the workbench in which you want to update licenses. 1. On the left menu, select **Settings** > **Chamber**. A resource list appears. Select the chamber that you want to upload the data to. 1. In the **Settings** section, select the **License** pane. 1. On the **License Overview** page, copy the **FLEXlm host ID** or **VM UUID** value. Provide this value to your license vendor to get a license file. 1. After the vendor sends you the license file, select **Update** on the **License Overview** page. The **Update license** window appears. 1. Select the chamber license service for the license file that you're uploading. Select **Enable** to enable the service. Then upload the license file from your storage space.-1. In the **Update license** pop-up dialog, select the **Update** button to activate your license service. -1. Azure Modeling and Simulation Workbench applies the new license to the license service and prompts a restart that might affect actively running jobs. +1. In the **Update license** pop-up dialog, select the **Update** button to activate your license service. The new license is loaded, causing the service to restart. ++> [!IMPORTANT] +> Loading a new license causes the license server to restart. This could affect actively running jobs. ## Next steps -To learn how to import data into an Azure Modeling and Simulation Workbench chamber, see [Import data](./how-to-guide-upload-data.md). +> [!div class="nextstepaction"] +> [Import data](./how-to-guide-upload-data.md) |
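For orientation, a FLEXlm license file ties the vendor's grant to the host ID copied from the **License Overview** page. A purely illustrative sample (every value is a placeholder; your tool vendor supplies the real file):

```
SERVER licenseserver1 0A1B2C3D4E5F 27000   # hostname, FLEXlm host ID, TCP port
DAEMON vendord /path/to/vendord            # vendor daemon binary
FEATURE tool_name vendord 1.0 31-dec-2025 10 SIGN="<vendor signature>"
```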
modeling-simulation-workbench | How To Guide Manage Chamber Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-manage-chamber-storage.md | + + Title: "Manage chamber storage: Azure Modeling and Simulation Workbench" +description: Learn how to manage chamber storage in Azure Modeling and Simulation Workbench. +++++ Last updated : 01/01/2023+# Customer intent: As a Chamber Admin in Azure Modeling and Simulation Workbench, I want to manage chamber storage. +++# Manage chamber storage in Azure Modeling and Simulation Workbench ++Chamber Admins and Workbench Owners can manage the storage capacity in Azure Modeling and Simulation Workbench to fit their organization's specific needs. For example, they can increase or decrease the amount of chamber storage. They can also change the performance tier. ++This article explains how Chamber Admins and Workbench Owners manage chamber storage. ++## Prerequisites +++++## Access storage options in a chamber ++If you're a Workbench Owner or Chamber Admin, complete the following steps to access the chamber storage options: ++1. Enter **Modeling and Simulation Workbench** in the global search. Then, under **Services**, select **Modeling and Simulation Workbench**. ++1. Select your workbench from the resource list. ++1. On the left menu, select **Settings** > **Chamber**. A resource list appears. Select the chamber where you want to manage the storage. ++1. On the left menu, select **Settings** > **Storage**. In the resource list, select the displayed storage. ++### Resize chamber storage ++If you're a Workbench Owner or Chamber Admin, you can increase or decrease a chamber's storage capacity by changing the storage size. ++You can't change the storage size to less than what you're currently using for that storage instance. In addition, you can't change the storage size to more than the available capacity for the region where your workbench is installed. The default storage quota limit is 25 TB across all workbenches installed in your subscription per region. For more information about resource capacity limits, contact your Microsoft account manager. ++Complete the following steps to increase or decrease the storage size: ++1. In the storage overview, select **Resize**. +1. In the **Resize** pop-up dialog, enter the desired storage size. +1. Select the **Change** button to confirm the resize request. +1. Select **Refresh** to show the new size in the storage overview. ++> [!IMPORTANT] +> Azure NetApp Files capacity availability is limited per region. Azure NetApp Files quota availability is limited per region and customer subscription. To request an increase in storage quota, contact your Microsoft account manager. ++### Change the performance tier ++If you're a Workbench Owner or a Chamber Admin, you can change the performance tier for storage. ++You can change the storage performance tier to a higher tier, such as from standard to ultra, at any time. You can change the storage performance tier to a lower tier, such as from ultra to standard, after the cool-down period. The Azure NetApp Files cool-down period is one week from when you created the storage or one week from the last time that you increased the storage tier. ++Complete the following steps to change the performance tier: ++1. In the chamber storage overview, select **Change tier**. +1. In the **Change tier** pop-up dialog, select the desired storage tier from the combo box. +1.
Select the **Update** button to confirm the request to change the tier. +1. Select **Refresh** to show the new tier in the storage overview. |
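Because a volume can't be resized below its current usage, it's worth checking consumption from inside a chamber VM before requesting a smaller size. A minimal sketch (the volume directory name is a placeholder):

```bash
# Show current usage of a chamber storage volume before resizing it.
df -h /mount/chamberstorages/<volume-name>
```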
modeling-simulation-workbench | How To Guide Manage Shared Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-manage-shared-storage.md | + + Title: "Manage shared storage: Azure Modeling and Simulation Workbench" +description: Learn how to manage shared storage in Azure Modeling and Simulation Workbench. ++++ Last updated : 08/22/2024+# Customer intent: As a Chamber Admin in Azure Modeling and Simulation Workbench, I want to manage shared storage. +++# Manage shared storage in Azure Modeling and Simulation Workbench ++Shared storage is accessible between one or more chambers and is the only means by which data can be exchanged between chambers. Chamber Admins and Workbench Owners can manage shared storage in Azure Modeling and Simulation Workbench. For example, they can increase or decrease the amount of shared storage. They can also change the performance tier. ++This article explains how Chamber Admins and Workbench Owners manage shared storage. ++## Prerequisites +++++## Sign in to the Azure portal ++Open your web browser and go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. ++## Create shared storage ++If you're a Workbench Owner or Chamber Admin, complete the following steps to access the shared storage options: ++1. Enter **Modeling and Simulation Workbench** in the global search. Under **Services**, select **Modeling and Simulation Workbench**. +1. Select your workbench from the resource list. +1. On the left menu, select **Settings** > **Shared storage**. +1. Select **Create** from the action bar. The Create shared storage configuration appears. +1. Fill in a name, set the capacity in 4-TB increments, and select the chambers that the shared storage should be accessible to. ++ :::image type="content" source="media/howtoguide-shared-storage/shared-storage-create.png" alt-text="Screenshot of shared storage create dialog."::: ++1. Select **Review + Create**. If the validation checks pass, select **Create**. ++## Manage shared storage ++If you're a Workbench Owner or Chamber Admin, complete the following steps to access the shared storage options: ++1. Enter **Modeling and Simulation Workbench** in the global search. Then, under **Services**, select **Modeling and Simulation Workbench**. +1. Select your workbench from the resource list. +1. On the left menu, select **Settings** > **Shared storage**. In the resource list, select the storage to be managed. ++### Resize shared storage ++If you're a Workbench Owner or Chamber Admin, you can increase or decrease a shared storage volume's capacity by changing the storage size. ++You can't change the storage size to less than what you're currently using for that storage instance. In addition, you can't change the storage size to more than the available capacity for the region where your workbench is installed. The default storage quota limit is 25 TB across all workbenches installed in your subscription per region. For more information about resource capacity limits, contact your Microsoft account manager. ++Complete the following steps to increase or decrease the storage size: ++1. In the storage overview, select **Resize**. +1. In the **Resize** pop-up dialog, enter the desired storage size. +1. Select the **Change** button to confirm the resize request. +1. Select **Refresh** to show the new size in the storage overview. ++> [!IMPORTANT] +> Azure NetApp Files capacity availability is limited per region.
Azure NetApp Files quota availability is limited per region and customer subscription. To request an increase in storage quota, contact your Microsoft account manager. ++### Change the performance tier ++If you're a Workbench Owner or a Chamber Admin, you can change the performance tier for storage. ++You can change the storage performance tier to a higher tier, such as from standard to ultra, at any time. You can change the storage performance tier to a lower tier, such as from ultra to standard, after the cool-down period. The Azure NetApp Files cool-down period is one week from when you created the storage or one week from the last time that you increased the storage tier. ++Complete the following steps to change the performance tier: ++1. In the shared storage overview, select **Change tier**. +1. In the **Change tier** pop-up dialog, select the desired storage tier from the combo box. +1. Select the **Update** button to confirm the request to change the tier. +1. Select **Refresh** to show the new tier in the storage overview. |
modeling-simulation-workbench | How To Guide Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-manage-users.md | Title: Manage users in Azure Modeling and Simulation Workbench -description: In this how-to guide, you learn how to manage users' access to Azure Modeling and Simulation Workbench. + Title: "Manage users: Azure Modeling and Simulation Workbench" +description: Learn how to manage users' access to Azure Modeling and Simulation Workbench. This article describes how to grant or remove user access to your chamber. ## Prerequisites -- To provision users in a chamber, make sure that those users exist in your company's Microsoft Entra tenant. If you want to invite guests to collaborate in your chamber, you must add them to your Microsoft Entra tenant.+* Users to be added must already exist in your company's Microsoft Entra ID tenant. If you want to invite guests to collaborate in your chamber, you must add them to your Microsoft Entra ID tenant. -- You use email aliases to identify and enable users' access to the chamber workloads. Each user must have an email account set in the user profile. The email alias must exactly match the user's Microsoft Entra sign-in alias. For example, a Microsoft Entra sign-in alias of jane.doe@contoso.com must also have email alias of jane.doe@contoso.com.+* Email fields for users must be populated in the Microsoft Entra ID user profile. The email alias must exactly match the user's Microsoft Entra sign-in alias. For example, a Microsoft Entra sign-in alias of <jane.doe@contoso.com> must also have an email alias of <jane.doe@contoso.com>. ## Assign user roles You can assign user roles at either of these levels: -- Users assigned at the *resource group level* can see Azure Modeling and Simulation Workbench resources and create workloads in a chamber.-- Users assigned at the *chamber level* can perform Azure Modeling and Simulation Workbench operations in the Azure portal and access the chamber workloads.+* Users assigned at the *resource group level* can see Azure Modeling and Simulation Workbench resources and create workloads in a chamber. +* Users assigned at the *chamber level* can perform Azure Modeling and Simulation Workbench operations in the Azure portal and access the chamber workloads. ### Assign access to read and create workloads When you want to remove user access to your chamber, you need to remove the Cham 1. When you're prompted to confirm role assignment removal, select **Yes**. -> [!NOTE] -> This procedure won't immediately interrupt active remote desktop dashboard sessions, but it will block future logins. To interrupt or block any active sessions, you must restart the connector. A connector restart will affect all active users and sessions, so use it with caution. It won't stop any active jobs that are running on the workloads. + > [!NOTE] + > This procedure won't immediately interrupt active remote desktop dashboard sessions, but it will block future logins. To interrupt or block any active sessions, you must restart the connector. A connector restart will affect all active users and sessions, so use it with caution. It won't stop any active jobs that are running on the workloads. ## Next steps |
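Chamber-level role assignments like the ones described in this article can also be scripted with the standard Azure CLI role-assignment command. A minimal sketch; the assignee, resource names, and the chamber scope path are placeholders and assumptions based on the resource hierarchy these articles describe:

```bash
# Assign the Chamber User role directly at the chamber scope
# (the assignee and the scope path are placeholders/assumptions).
az role assignment create \
  --assignee "jane.doe@contoso.com" \
  --role "Chamber User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ModSimWorkbench/workbenches/myWorkbench/chambers/myChamber"
```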
modeling-simulation-workbench | How To Guide Register Resource Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-register-resource-provider.md | + + Title: "Register resource provider: Azure Modeling and Simulation Workbench" +description: Register the Azure Modeling and Simulation Workbench resource provider. ++++ Last updated : 08/20/2024++#CustomerIntent: As an administrator, I want to register the resource provider so I can install Azure Modeling and Simulation Workbench ++# Register Azure Modeling and Simulation Workbench resource provider ++To install the Azure Modeling and Simulation Workbench, the resource provider must be registered with the target subscription. Registering the resource provider gives the subscription access to the application. You should only register the resource providers you intend to use with the subscription. ++## Prerequisites ++* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++* The Azure account must have permission to manage resource providers and to manage resources for the subscription. The permission is included in the Contributor and Owner roles. ++* The Azure account must have permission to manage applications in Microsoft Entra ID. The following Microsoft Entra roles include the required permissions: + * [Application administrator](/azure/active-directory/roles/permissions-reference#application-administrator) + * [Application developer](/azure/active-directory/roles/permissions-reference#application-developer) + * [Cloud application administrator](/azure/active-directory/roles/permissions-reference#cloud-application-administrator) ++* A Microsoft Entra tenant. ++## Register the resource provider +++## Re-register the resource provider ++Some application issues and certain updates require the resource provider to be re-registered. ++1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options. ++ :::image type="content" source="/azure/azure-resource-manager/management/media/resource-providers-and-types/search-subscriptions.png" alt-text="Screenshot of the Azure portal in a web browser, showing search subscriptions."::: ++1. On the **Subscriptions** page, select the subscription you want to view. 'Documentation Testing 1' is shown as an example. ++ :::image type="content" source="/azure/azure-resource-manager/management/media/resource-providers-and-types/select-subscription.png" alt-text="Screenshot of the Azure portal in a web browser, showing select subscriptions."::: ++1. On the left menu, under **Settings**, select **Resource providers**. ++ :::image type="content" source="/azure/azure-resource-manager/management/media/resource-providers-and-types/select-resource-providers.png" alt-text="Screenshot of the Azure portal in a web browser, showing select resource providers."::: ++1. Select the *Microsoft.ModSimWorkbench* resource provider. Then select **Unregister**, wait for the operation to complete, and then select **Register**. ++ :::image type="content" source="./media/quickstart-create-portal/register-resource-provider.png" alt-text="Screenshot of the Azure portal in a web browser, showing register resource providers."::: |
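The same registration can be performed with the Azure CLI instead of the portal. A minimal sketch; only the `Microsoft.ModSimWorkbench` namespace comes from this article:

```bash
# Register the resource provider, then confirm its state.
az provider register --namespace Microsoft.ModSimWorkbench
az provider show --namespace Microsoft.ModSimWorkbench \
  --query registrationState --output tsv

# To re-register, unregister first, wait for completion, then register again.
az provider unregister --namespace Microsoft.ModSimWorkbench
az provider register --namespace Microsoft.ModSimWorkbench
```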
modeling-simulation-workbench | How To Guide Set Up Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-set-up-networking.md | Title: Set up networking in Azure Modeling and Simulation Workbench -description: In this how-to guide, you learn how to set up networking for an Azure Modeling and Simulation Workbench connector. + Title: "Set up networking: Azure Modeling and Simulation Workbench" +description: Learn how to set up networking for an Azure Modeling and Simulation Workbench connector. Each chamber has a dedicated connector. Each connector can support either of the ## Add a VPN or ExpressRoute connection -If your organization set up an Azure network to oversee user access to the workbench, you can enforce stringent controls over the virtual network and subnet addresses employed to establish connections to the chamber. +If your organization has a presence in Azure or requires that connections to the Workbench be over a VPN, the VPN or ExpressRoute connector should be used. -When you create the connector, the Workbench Owner (Subscription Owner) can link a virtual network with a VPN gateway and/or ExpressRoute gateway. This link provides a secure connection between your on-premises network and the chamber. +When a connector is created, the Workbench Owner (Subscription Owner) can link an existing virtual network with a VPN gateway and/or ExpressRoute gateway. This link provides a secure connection between your on-premises network and the chamber. -To add a VPN or ExpressRoute connection: +### Create a VPN or ExpressRoute connector -1. Before you create a [connector](./concept-connector.md) for private IP networking via VPN or ExpressRoute, perform this role assignment. Azure Modeling and Simulation Workbench needs the **Network Contributor** role set for the resource group in which you're hosting your virtual network connected with ExpressRoute or VPN. +1. Before you create a [Connector](./concept-connector.md) for private IP networking via VPN or ExpressRoute, the Workbench needs a role assignment. Azure Modeling and Simulation Workbench requires the **Network Contributor** role set for the resource group in which you're hosting your virtual network connected with ExpressRoute or VPN. - | Setting | Value | - | : | :-- | - | **Role** | **Network Contributor** | - | **Assign access to** | **User, group, or service principal** | - | **Members** | **Azure Modeling and Simulation Workbench** | + | Setting | Value | + |:|:--| + | **Role** | **Network Contributor** | + | **Assign access to** | **User, group, or service principal** | + | **Members** | **Azure Modeling and Simulation Workbench** | ++ [!INCLUDE [azure-hpc-workbench-alert](includes/azure-hpc-workbench-alert.md)] 1. When you create your connector, specify **VPN** or **ExpressRoute** as your method to connect to your on-premises network. -1. A list of available virtual network subnets within your subscription appears. Select a non-gateway subnet within the same virtual network that has the gateway subnet for the VPN gateway or ExpressRoute gateway. +1. A list of available virtual network subnets within your subscription appears. Select a subnet other than the gateway subnet within the same virtual network for the VPN gateway or ExpressRoute gateway. ## Edit allowed public IP addresses -For organizations that don't have an Azure network set up or that prefer to use a public IP, the Azure portal allows IP addresses to be allowlisted to connect into the chamber. 
To use this connectivity method, you need to specify at least one IP address for the connector object when you create the workbench. Workbench Owners and Chamber Admins can add to and edit the allowlisted public addresses for a connector after the connector object is created. +IP addresses can be allowlisted in the Azure portal to allow connections to a chamber. Only one IP address can be specified for a Public IP connector when creating a new Workbench. After the connector is created, you can specify other IP addresses. Standard [CIDR (Classless Inter-Domain Routing)](/azure/virtual-network/virtual-networks-faq) mask notation can be used to allow ranges of IP addresses across a subnet. ++Workbench Owners and Chamber Admins can add to and edit the allowlisted public addresses for a connector after the connector object is created. To edit the list of allowed IP addresses: To edit the list of allowed IP addresses: 1. Select **Submit** to save your changes. 1. Refresh the view for connector networking and confirm that your changes appear. - :::image type="content" source="./media/resources-troubleshoot/chamber-connector-networking-network-allowlist.png" alt-text="Screenshot of the Azure portal in a web browser, showing the allowlist for chamber connector networking."::: + :::image type="content" source="./media/resources-troubleshoot/chamber-connector-networking-network-allowlist.png" alt-text="Screenshot of the Azure portal in a web browser, showing the allowlist for chamber connector networking."::: -## Add redirect URIs for the application in Microsoft Entra ID +## Redirect URIs A *redirect URI* is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication. Each time you create a new connector, you need to register the redirect URIs for your application registration in Microsoft Entra ID. -Follow these steps to get redirect URIs: +To find redirect URIs: 1. On the page for your new workbench in Azure Modeling and Simulation Workbench, select **Connector** on the left menu. Then select the connector in the resource list. Follow these steps to get redirect URIs: - **Dashboard reply URL**: For example, https://<*dashboardFqdn*>/etx/oauth2/code - **Authentication reply URL**: For example, https://<*authenticationFqdn*>/otdsws/login?authhandler=AzureOIDC - :::image type="content" source="./media/quickstart-create-portal/update-aad-app-01.png" alt-text="Screenshot of the connector overview page that shows where you select the reply URLs."::: + :::image type="content" source="./media/quickstart-create-portal/update-aad-app-01.png" alt-text="Screenshot of the connector overview page that shows where you select the reply URLs."::: -Follow these steps to add redirect URIs: +The redirect URIs must be registered with the application registration to properly authenticate and redirect users to the workbench. To learn how to add redirect URIs, see [How to add redirect URIs](./how-to-guide-add-redirect-uris.md). -1. In the Azure portal, in **Microsoft Entra ID** > **App registrations**, select the application that you created in your Microsoft Entra instance. +## Ports and IP addresses -1. Under **Manage**, select **Authentication**. +### Ports -1. Under **Platform configurations**, select **Add a platform**. +The Azure Modeling and Simulation Workbench requires certain ports to be accessible from users' workstations.
Firewalls and VPNs might block access on these ports to certain destinations, when accessed from certain applications, or when connected to different networks. Check with your system administrator to ensure your client can access the service from all your work locations. -1. Under **Configure platforms**, select the **Web** tile. +- **53/TCP** and **53/UDP**: DNS queries. +- **443/TCP**: Standard HTTPS port for accessing the VM dashboard and any Azure portal page. +- **5510/TCP**: Used by the ETX client to provide VDI access for both the native and web-based client. +- **8443/TCP**: Used by the ETX client to negotiate and authenticate to ETX management nodes. -1. On the **Configure Web** pane, paste the **Dashboard reply URL** value that you documented earlier. Then select **Configure**. +### IP addresses -1. Under **Platform configurations** > **Web** > **Redirect URIs**, select **Add URI**. +For the Public IP connector, Azure IP addresses are taken from Azure's IP ranges for the location in which the Workbench was deployed. A list of all Azure IP addresses and Service tags is available at [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519&msockid=1b155eb894cc6c3600a84ac5959a6d3f). It's not possible to list all Workbench IP addresses when the public IP connector is implemented. -1. Paste the **Authentication reply URL** value that you documented earlier. Then select **Save**. +> [!CAUTION] +> The pool of IP addresses can increase not only when VMs are added, but also when users are added. Connection nodes are scaled when more users are added to the chamber. Discovery of endpoint IP addresses may be incomplete once the user base increases. - :::image type="content" source="./media/quickstart-create-portal/update-aad-app-02.png" alt-text="Screenshot of the Microsoft Entra app authentication page that shows where you select redirect URIs."::: +For more control over destination IP addresses and to minimize changes to corporate firewalls, a VPN or ExpressRoute connector is recommended. When using a VPN Gateway, the access point of the workbench is limited only to the gateway IP address. ## Next steps -To learn how to import data into an Azure Modeling and Simulation Workbench chamber, see [Import data](./how-to-guide-upload-data.md). +> [!div class="nextstepaction"] +> [Import data](./how-to-guide-upload-data.md) |
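To confirm that a workstation can reach the ports listed above, a quick client-side probe can help before raising a support case. A minimal sketch using `nc` (netcat); the dashboard FQDN is a placeholder:

```bash
# Probe the TCP ports the service uses (443, 5510, 8443) against the
# connector's dashboard FQDN (placeholder hostname).
for port in 443 5510 8443; do
  nc -zv -w 5 dashboard.example.contoso.com "$port"
done

# DNS on port 53 can be checked with a simple lookup.
nslookup dashboard.example.contoso.com
```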
modeling-simulation-workbench | How To Guide Start Stop Restart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-start-stop-restart.md | + + Title: "Start, stop, and restart chambers, connectors, and VMs: Azure Modeling and Simulation Workbench" +description: How to start, stop, and restart chambers, connectors, and VMs in the Azure Modeling and Simulation Workbench. ++++ Last updated : 08/18/2024++#CustomerIntent: As a workbench user, I want to control chambers, VMs, and connectors. ++# Start, stop, and restart chambers, connectors, and VMs ++**Applies to:** :heavy_check_mark: Chambers :heavy_check_mark: Connectors :heavy_check_mark: Chamber VMs ++Chambers, connectors, and virtual machines (VM) in the Azure Modeling and Simulation Workbench can be started, stopped, or restarted as needed. Idle mode, a cost optimization feature, requires that chambers and connectors be stopped to realize cost savings. These resources are running after creation and don't need to be started. ++License servers are controlled by the chamber in which they reside and don't have their own start/stop controls. License servers can only be enabled or disabled. See [Manage license servers](./how-to-guide-licenses.md) to learn how to manage chamber license servers. ++## Prerequisites ++* An instance of the Azure Modeling and Simulation Workbench with a chamber, connector, or VM. +* A user role with either Chamber Admin or Workbench Owner privileges. ++## Start a chamber, connector, or VM ++If a resource is stopped, use the following procedure to start it. This procedure applies to chambers, connectors, and VMs. The action bar is located on the main page of the selected resource and is identical for each of these resources. ++1. Navigate to the resource to be started. For chambers, select **Chamber** from the **Settings** menu in the Workbench overview. Connectors and VMs are listed in the **Settings** menu of their respective chamber. +1. Select **Start** from the action bar at the top of the overview page. The start operation can take up to 8 minutes. +1. Verify that the operation succeeded by checking the **Power state** field on the overview page of the resource. The status should be **Running** if the resource started successfully. ++## Stop a chamber, connector, or VM ++If a chamber, connector, or VM is running, it can be stopped with the following procedure. Stopping properly shuts down and releases resources and doesn't destroy any user data. ++1. Navigate to the resource to be stopped. For chambers, select **Chamber** from the **Settings** menu in the Workbench overview. Connectors and VMs are listed in the **Settings** menu of their respective chamber. +1. Select **Stop** from the action bar at the top of the overview page. The stop operation can take up to 8 minutes. +1. Verify that the operation succeeded by checking the **Power state** field on the overview page of the resource. The status should be **Stopped** if the resource stopped successfully. ++## Restart a chamber, connector, or VM ++A chamber, connector, or VM can be restarted in a single action. Restarting a resource may be necessary after certain updates or if a resource is malfunctioning. ++1. Navigate to the resource to be restarted. For chambers, select **Chamber** from the **Settings** menu in the Workbench overview. Connectors and VMs are listed in the **Settings** menu of their respective chamber. +1. Select **Restart** from the action bar at the top of the overview page.
The restart operation can take up to 8 minutes. +1. Verify that the operation succeeded by checking the **Power state** field on the overview page of the resource. The status should be **Running** if the resource restarted successfully. |
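The **Power state** field can also be read outside the portal with the generic Azure CLI. A minimal sketch; the resource ID is a placeholder, and the `properties.powerState` property name is an assumption based on the field described above, not a documented contract:

```bash
# Read the power state of a chamber by resource ID (the ID is a placeholder;
# the property name is an assumption to verify against your resource's JSON).
az resource show \
  --ids "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ModSimWorkbench/workbenches/myWorkbench/chambers/myChamber" \
  --query "properties.powerState" --output tsv
```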
modeling-simulation-workbench | How To Guide Upload Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/how-to-guide-upload-data.md | Title: Import data into Azure Modeling and Simulation Workbench + Title: "Import data: Azure Modeling and Simulation Workbench" description: Learn how to import data into a chamber in Azure Modeling and Simulation Workbench. You can use Azure Modeling and Simulation Workbench to run your design applicati - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An instance of Azure Modeling and Simulation Design Workbench installed with at least one chamber.-- A user who's provisioned as a Chamber Admin or Chamber User.+- A user provisioned as a Chamber Admin or Chamber User. - [AzCopy](/azure/storage/common/storage-ref-azcopy) installed on the machine, with access to the configured network for the target chamber. Only machines on the specified network path for the chamber can upload files. ## Sign in to the Azure portal Open your web browser and go to the [Azure portal](https://portal.azure.com/). E A Chamber Admin or Chamber User can access the uploaded file from the chamber by accessing the following path: */mount/datapipeline/datain*. > [!IMPORTANT]-> If you're importing multiple smaller files, we recommend that you zip or tarball them into a single file. Gigabyte-sized tarballs and zipped files are supported, depending on your connection type and network speed. -> The /mount/datapipeline/datain directory has a file size of 1TB, so if the imported dataset is larger than this, then free up space by moving the files over to /mount/chamberstorages/"Workbench chamber storage" -> Note that the /datapipeline directory is Azure Files based, whereas the /chamberstorages directory is high-performance Azure NetApp Files. Always copy over the tools/binaries/IP from the /datapipeline/datain folder /chamberstorages directory under the specific chamber's private storage. +> If you're importing multiple smaller files, we recommend that you zip or tarball them into a single file. Gigabyte-sized tarballs and zipped files are supported, depending on your connection type and network speed. The `/mount/datapipeline/datain` directory has a volume size of 1 TB, so if the imported dataset is larger than this, free up space by moving the files over to `/mount/chamberstorages/`. +> +> Note that the `/mount/datapipeline` volume is Azure Files based, whereas the `/mount/chamberstorages` volume is high-performance Azure NetApp Files. Always copy over the tools, binaries, and IP from the `/mount/datapipeline/datain` folder to the `/mount/chamberstorages` volume under the specific chamber's private storage. ## Next steps |
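Bundling small files before import, as the note above recommends, can be done with `tar`, and the upload itself with AzCopy. A minimal sketch; the data pipeline URL and SAS token are placeholders for the values your workbench provides:

```bash
# Bundle many small files into one archive, then upload it with AzCopy
# (the destination URL and SAS token are placeholders).
tar -czf design-data.tar.gz ./design-data/
azcopy copy "design-data.tar.gz" \
  "https://<datapipeline-storage>.file.core.windows.net/<share>/datain/design-data.tar.gz?<SAS-token>"
```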
modeling-simulation-workbench | Modeling Simulation Workbench Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/modeling-simulation-workbench-overview.md | Title: What is Azure Modeling and Simulation Workbench? -description: A brief overview of Azure Modeling and Simulation Workbench +description: A brief overview of Azure Modeling and Simulation Workbench. Last updated 08/05/2024 # What is Azure Modeling and Simulation Workbench? -The Azure Modeling and Simulation Workbench is a secure, on-demand service that provides a fully managed engineering design and simulation environment for secure and efficient user collaboration. The service incorporates infrastructure services required to build a successful environment for engineering development, such as: workload specific VMs, scheduler, orchestration, license server, remote connectivity, high performance storage, network configurations, security, and access controls. +The Azure Modeling and Simulation Workbench is a secure, on-demand service that provides a fully managed engineering design and simulation environment for secure and efficient user collaboration. The service incorporates infrastructure services required to build a successful environment for engineering development, such as: workload specific virtual machines (VM), scheduler, orchestration, license server, remote connectivity, high performance storage, network configurations, security, and access controls. - A chamber environment enables primary development teams to onboard their collaborators (customers, partners, ISVs, service/IP providers) for joint analysis/debug activity within the same chamber. - Multi-layered security and access controls allow users to monitor, scale, and optimize the compute and storage capacity as needed. - Automated provisioning reduces setup time of the design environment from weeks to hours. After providing an initial set of configurations, all resources, identity management, access controls, VMs, configured network, and partitioned storage are automatically provisioned. - Fully scalable to workload demands. For infra management and cost control, users can scale workloads up or down with push button controls, as well as change the storage performance tier and size. Chambers and workloads can be stopped while not in use, to further control costs. -<! Multi-Chamber collaboration allows these dev teams and their collaborators to have their own private workspaces, while allowing them to share data across chamber boundaries through Shared Storage +<! Multi-Chamber collaboration allows these dev teams and their collaborators to have their own private workspaces, while allowing them to share data across chamber boundaries through shared storage > ## Isolated chambers -The Modeling and Simulation [Workbench](./concept-workbench.md) can be created with one or more isolated chambers, where access can be provided to a group of users to work with complete privacy. These isolated chambers allow intellectual property (IP) owners to operate within a private environment to retain full control of their IP and limit who can access it. RBAC [(Role Based Access Control)](/azure/role-based-access-control/overview) allows only provisioned [Chamber](./concept-chamber.md) Users and Chamber Admins to have access to the chamber, through multifactor authentication using [Microsoft Entra ID](https://azure.microsoft.com/services/active-directory/) services. 
Once in the chamber, users have access to all the resources within that specific isolated Chamber environment, including private storage and workload VMs. +The Modeling and Simulation [Workbench](./concept-workbench.md) can be created with one or more isolated [chambers](./concept-chamber.md), where access can be provided to a group of users to work with complete privacy. These isolated chambers allow intellectual property (IP) owners to operate within a private environment to retain full control of their IP and limit who can access it. [Role Based Access Control (RBAC)](/azure/role-based-access-control/overview) allows only provisioned Chamber Users and Chamber Admins to have access to the chamber, through multifactor authentication using [Microsoft Entra ID](https://azure.microsoft.com/services/active-directory/) services. Once in the chamber, users have access to all the resources within that specific isolated chamber environment, including private storage and workload VMs. ## Compute capabilities -The Azure Modeling and Simulation Workbench supports a wide variety of VM sizes suitable for most engineering development type of workloads, and are made available on-demand and scale elastically. These include General purpose VMs such as the D and E series VMs, as well as specialized VMs such as the HB and Fx series (for silicon EDA). Each virtual machine comes with its own virtual hardware including CPU cores, memory, hard drives (local storage), network interfaces, and operating system (OS) services. An easy to use interface helps to provision and deprovision these VMs as needed. +The Azure Modeling and Simulation Workbench supports a wide variety of VM sizes suitable for most engineering development workloads. These VMs are available on demand and scale elastically. They include general-purpose VMs such as the D- and E-series, as well as specialized VMs such as the HB- and Fx-series, ideal for silicon Electronic Design Automation (EDA). Each virtual machine comes with its own virtual hardware including CPU cores, memory, hard drives (some with local storage), network interfaces, and operating system (OS) services. An easy-to-use interface helps provision and deprovision these VMs as needed. A prebuilt job scheduler helps access these compute resources. With the flexible pay-as-you-go model, users only pay for the compute time utilized in the workbench environment. To use the Modeling and Simulation Workbench APIs, you must create your Azure Mo - USGov Arizona - USGov Virginia - ## Contact us [Email us](mailto:azuremswb@microsoft.com) or use the feedback widget on the upper right of any docs page if you have feedback for us. |
modeling-simulation-workbench | Quickstart Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/quickstart-create-portal.md | Title: Quickstart - Create an Azure Modeling and Simulation Workbench (preview) in the Azure portal -description: In this quickstart, you learn how to use the Azure portal to create an Azure Modeling and Simulation Workbench. +description: Learn how to use the Azure portal to create an Azure Modeling and Simulation Workbench. Open your web browser and go to the [Azure portal](https://portal.azure.com/). E ## Register Azure Modeling and Simulation Workbench resource provider -1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options. -- :::image type="content" source="/azure/azure-resource-manager/management/media/resource-providers-and-types/search-subscriptions.png" alt-text="Screenshot of the Azure portal in a web browser, showing search subscriptions."::: --1. On the **Subscriptions** page, select the subscription you want to view. In the screenshot below, 'Documentation Testing 1' is shown as an example. -- :::image type="content" source="/azure/azure-resource-manager/management/media/resource-providers-and-types/select-subscription.png" alt-text="Screenshot of the Azure portal in a web browser, showing select subscriptions."::: --1. On the left menu, under **Settings**, select **Resource providers**. -- :::image type="content" source="/azure/azure-resource-manager/management/media/resource-providers-and-types/select-resource-providers.png" alt-text="Screenshot of the Azure portal in a web browser, showing select resource providers."::: --1. Select the *Microsoft.ModSimWorkbench* resource provider. Then select **Register**. -- :::image type="content" source="./media/quickstart-create-portal/register-resource-provider.png" alt-text="Screenshot of the Azure portal in a web browser, showing register resource providers."::: --> [!IMPORTANT] -> -> To maintain the least privileges in your subscription, only register the resource providers you're ready to use. -> -> To allow your application to continue sooner than waiting for all regions to complete, don't block the creation of resources for a resource provider in the registering state. --<a name='create-an-application-in-azure-active-directory'></a> ## Create an application in Microsoft Entra ID -To create an application in Microsoft Entra ID, you first register the application and add a client secret. Then you create a Key Vault, set up Key Vault role assignments, and add client secrets to the Key Vault. +To create an application in Microsoft Entra ID, you first register the application and add a client secret. Then you create a Key Vault, set up Key Vault role assignments, and add client secrets to the Key Vault. ### Register an application Registering your application establishes a trust relationship between Modeling a Follow these steps to create the app registration: -1. If you have access to multiple tenants, use the Directories + subscriptions** filter :::image type="content" source="/azure/active-directory/develop/media/common/portal-directory-subscription-filter.png" alt-text="Showing filter icon."::: in the top menu to switch to the tenant in which you want to register the application. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the tenant in which you want to register the application. 
++ :::image type="icon" source="/azure/active-directory/develop/media/common/portal-directory-subscription-filter.png" alt-text=""::: 1. Search for and select **Microsoft Entra ID**. Creating a client secret allows the Azure Modeling and Simulation Workbench to r 1. In **App registrations**, select your application *QuickstartModSimWorkbenchApp*. 1. Select **Certificates & secrets** > **Client secrets** > **New client secret**. 1. Add a description for your client secret.-1. Select ** 6 months** for the **Expires**. -1. Select **Add**. -1. The application properties display. Locate the **Client secret value** and document it. You need the Client secret value when you create your Key Vault. Make sure you write it down now, as it will never be displayed again once you leave this page. +1. Select **6 months** for the **Expires**. +1. Select **Add**. The application properties display. +1. Locate the **Client secret value** and document it. You need the client secret value when you create your Key Vault. Make sure you write it down now, as it will never be displayed again after you leave this page. ### Create a Key Vault Creating a client secret allows the Azure Modeling and Simulation Workbench to r - In **Access policy**, select **Azure role-based access control** under **Permission model**. - :::image type="content" source="/azure/key-vault/media/rbac/image-1.png" alt-text="Enable Azure RBAC permissions - new vault"::: + :::image type="content" source="/azure/key-vault/media/rbac/image-1.png" alt-text="Screenshot of new Key Vault settings with RBAC permissions."::: - Leave the other options to their defaults. 1. After providing the information as instructed, select **Create**. Creating a client secret allows the Azure Modeling and Simulation Workbench to r | Assign access to | User, group, or service principal | | Members | Azure Modeling and Simulation Workbench | - In case the 'Azure Modeling and Simulation Workbench' is not discoverable, please search for 'Azure HPC Workbench'. + If 'Azure Modeling and Simulation Workbench' isn't discoverable, search for 'Azure HPC Workbench'. | Setting | Value | | : | :-- | Creating a client secret allows the Azure Modeling and Simulation Workbench to r To create an Azure Modeling and Simulation Workbench, you first fill out the Azure portal wizard fields for naming and connectivity preferences. Then you submit the form to create a Workbench. -1. While you're signed in the Azure portal, go to https://*\<AzurePortalUrl\>*/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.ModSimWorkbench%2Fworkbenches. For example, go to https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.ModSimWorkbench%2Fworkbenches for Azure public cloud. +1. While you're signed in to the Azure portal, go to https://*\<AzurePortalUrl\>*/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.ModSimWorkbench%2Fworkbenches. For example, go to <https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.ModSimWorkbench%2Fworkbenches> for Azure public cloud. 1. On the **Modeling and Simulation Workbenches (preview)** page, select **Create**. The **Create an Azure Modeling and Simulation Workbench** page opens. To create an Azure Modeling and Simulation Workbench, you first fill out the Azu 1. Select the **Review + create** button at the bottom of the page.
- :::image type="content" source="./media/quickstart-create-portal/create-03.png" alt-text="Screenshot of the Chamber details section showing where you type and select the values."::: + :::image type="content" source="./media/quickstart-create-portal/create-03.png" alt-text="Screenshot of the chamber details section showing where you type and select the values."::: 1. On the **Review + create** page, you can see the details about the Azure Modeling and Simulation Workbench you're about to create. When you're ready, select **Create**. To create an Azure Modeling and Simulation Workbench, you first fill out the Azu :::image type="content" source="./media/quickstart-create-portal/chamber-iam-02.png" alt-text="Screenshot of the Role assignments page showing where you select the Add role assignment command."::: -1. The Add role assignment pane opens. Search or scroll for the **Chamber Admin** role in the role list and **select** it. Then select **Next**. +1. The **Add role assignment** pane opens. Search or scroll for the **Chamber Admin** role in the role list and **select** it. Then select **Next**. :::image type="content" source="./media/quickstart-create-portal/chamber-iam-03.png" alt-text="Screenshot of the Add role assignment page showing where you select the Role."::: 1. Leave the **Assign access to** default **User, group, or service principal**. Select **+ Select members**. In the **Select members** blade on the left side of the screen, search for your security principal by entering a string or scrolling through the list. Select your security principal. Select **Select** to save the selections. - > [!NOTE] + > [!NOTE] > Chamber Admins and Chamber Users *MUST* have an alias set within their Microsoft Entra profile email field, or they can't log into the environment. To check this, go to Microsoft Entra ID in your Azure portal and under Manage -> Select Users, search for the user by name. Under the Properties tab, look for the email field and ensure it has the email address of the user populated. Also, the role assignment must be done ONLY at the chamber resource level, not at any other resource level. Duplicate and/or multiple role assignments are not allowed and will result in a failed connection. :::image type="content" source="./media/quickstart-create-portal/chamber-iam-04.png" alt-text="Screenshot of the Add role assignment page showing where you select the security principal."::: 1. Select **Review + assign** to assign the selected role. -1. Repeat steps 3-6 to assign the **Chamber User** role to other users who need to work on the chamber. Also, remember to assign any provisioned chamber admins/users the 'Reader' and 'Classic storage account contributor' role at the Resource Group level to enable permissions to access workbench resources and deploy workload VMs respectively. +1. Repeat steps 3-6 to assign the **Chamber User** role to other users who need to work on the chamber. -<a name='add-redirect-uris-for-the-application-in-azure-active-directory'></a> +> [!NOTE] +> Assign any provisioned Chamber Admins or Chamber Users the 'Reader' and 'Classic storage account contributor' roles at the Resource Group level to enable permissions to access workbench resources and deploy workload VMs. ## Add redirect URIs for the application in Microsoft Entra ID -A *redirect URI* is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication. --Follow these steps to get redirect URIs: --1.
On the page for your new Modeling and Simulation Workbench workbench, **myModSimWorkbench**, select the left side menu **Connector**. Then select **myfirstconnector** from the right side resource list. --1. On the **Overview** page, locate and document the two connector properties, **Dashboard reply URL** and **Authentication reply URL**, using the copy to clipboard icon. If these properties aren't visible, select the **See More** button on page to expand the window. - - **Dashboard reply URL**: For example, https://<*dashboardFqdn*>/etx/oauth2/code - - **Authentication reply URL**: For example, https://<*authenticationFqdn*>/otdsws/login?authhandler=AzureOIDC -- :::image type="content" source="./media/quickstart-create-portal/update-aad-app-01.png" alt-text="Screenshot of the connector overview page showing where you select the reply URLs."::: --Follow these steps to add redirect URIs: --1. In the Azure portal, in **Microsoft Entra ID** > **App registrations**, select your application created in **Register an application** step. --1. Under **Manage**, select **Authentication**. --1. Under **Platform configurations**, select **Add a platform**. --1. Under **Configure platforms**, select **Web** tile. --1. On the **Configure Web** pane, paste the **Dashboard reply URL** you documented in the previous step in the Redirect URI field. Then select **Configure**. -- :::image type="content" source="./media/quickstart-create-portal/update-aad-app-02.png" alt-text="Screenshot of the Microsoft Entra app Authentication page showing where you configure web authentication."::: --1. Under **Platform configurations** > **Web** > **Redirect URIs**, select **Add URI**. --1. Paste the **Authentication reply URL** you documented in the previous step. Then select **Save**. -- :::image type="content" source="./media/quickstart-create-portal/update-aad-app-03.png" alt-text="Screenshot of the Microsoft Entra app Authentication page showing where you set the second Redirect URI."::: ## Connect to chamber with remote desktop -Chamber Admins and Chamber Users can now connect into the chamber with remote desktop access. The remote desktop dashboard URL is available in the connector overview page. These users must be on the appropriate network set up for the connector, so for this quickstart their Public IP address needs to be included in the connector Network ACLs range. +Chamber Admins and Chamber Users can now connect into the chamber with remote desktop access. The remote desktop dashboard URL is available in the connector overview page. These users must be on the appropriate network set up for the connector, so for this quickstart their Public IP address needs to be included in the connector Network Access Control Lists (ACL) range. 1. On the page for your new Modeling and Simulation Workbench workbench, **myModSimWorkbench**, select the left side menu **Connector**. Then select **myfirstconnector** from the right side resource list. Or instead, to delete the newly created Modeling and Simulation Workbench resour 1. Open your Modeling and Simulation Workbench in the Azure portal and select **All resources** from the left side menu. Then search for the Modeling and Simulation Workbench you created. For example, in this quickstart, we used *myModSimWorkbench* as the name of the Workbench. -1. To delete a resource, select the **Delete** button located on the top pane of the **Overview page** for each resource. You must delete the child resources before deleting their parents. 
For example, first delete any Chamber VMs and Connectors. Then delete Storages. Then delete Chambers. Delete Workbenches last. +1. To delete a resource, select the **Delete** button located on the top pane of the **Overview page** for each resource. You must delete the child resources before deleting their parents. For example, first delete any chamber VMs and connectors, then delete shared storage, then delete chambers, and delete the workbench last. ## Next steps |
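The child-before-parent deletion order in the cleanup steps above can be scripted with the generic resource command. A minimal sketch; every resource ID is a placeholder, and the order mirrors the quickstart:

```bash
# Delete children before parents: VMs and connectors, then shared storage,
# then chambers, and the workbench last (all IDs are placeholders).
az resource delete --ids "<chamber-vm-resource-id>"
az resource delete --ids "<connector-resource-id>"
az resource delete --ids "<shared-storage-resource-id>"
az resource delete --ids "<chamber-resource-id>"
az resource delete --ids "<workbench-resource-id>"
```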
modeling-simulation-workbench | Refresh Remote Connection Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/refresh-remote-connection-keys.md | + + Title: Refresh remote connection keys in Azure Modeling and Simulation Workbench +description: Learn how to refresh remote connection keys in Azure Modeling and Simulation Workbench. ++++ Last updated : 09/05/2024+# Customer intent: As a Chamber User in Azure Modeling and Simulation Workbench, I want to refresh remote connection keys. +++# Refresh remote connection keys ++The Modeling and Simulation Workbench offers provisioned users secure remote connectivity to all chamber workloads. To set up remote access, register a new application in Microsoft Entra ID and add a client secret as described in [Create an Azure Modeling and Simulation Workbench](/azure/modeling-simulation-workbench/quickstart-create-portal#add-a-client-secret). Registering your application establishes a trust relationship between the Modeling and Simulation Workbench remote desktop authentication and the Microsoft identity platform. Creating a client secret allows the Modeling and Simulation Workbench to redirect Microsoft Entra sign-in requests directly to your organization's Microsoft Entra ID and enables a single sign-on experience for onboarded users. ++## Client secret lifetime ++The client secret lifetime is typically set to 12 months. If the workbench's lifetime extends beyond the secret's lifespan, the app secrets will expire, resulting in users losing access to the chambers. Expired client secrets disrupt remote authentication, causing a blue screen. ++To address an expired client secret, you need to create a new secret by following the steps outlined in [Create an application in Microsoft Entra ID](/azure/modeling-simulation-workbench/quickstart-create-portal#create-an-application-in-microsoft-entra-id). After creating a new secret, you need to update the Modeling and Simulation Workbench with that new client secret URL. ++## Update the client secret URL ++On the Azure portal workbench resource page, you can update the authentication URLs for the new client secrets linked to the original app registration. ++1. In the Azure portal, navigate to the Modeling and Simulation Workbenches (preview) page. ++1. Select **Auth URL**. ++ ![Screenshot of the Azure portal pane where the authentication URL can be updated.](./media/refresh-remote-connection-keys/auth-url.png) ++1. Enter the new client secret URL. ++ ![Screenshot of the Azure portal pane where a new client secret URL can be entered.](./media/refresh-remote-connection-keys/client-secret-url.png) ++These actions can only be performed by the Workbench Owner, and allow rotation of client IDs and secrets on deployed workbenches prior to expiration, which ensures business continuity for the engineering team. ++> [!NOTE] +> Restart the connector before attempting to make a remote connection with the updated authentication URLs. |
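Rotating the secret before it expires can be scripted with the Microsoft Entra and Key Vault commands. A minimal sketch; the application ID, vault name, and secret name are placeholders, and storing the new value in Key Vault mirrors the quickstart flow referenced above:

```bash
# Create a new client secret on the existing app registration;
# --append keeps the current secret valid during rotation.
az ad app credential reset --id <application-client-id> \
  --display-name rotated-secret --years 1 --append

# Store the new secret value in the Key Vault the workbench uses.
az keyvault secret set --vault-name <my-key-vault> \
  --name <client-secret-name> --value "<new-secret-value>"
```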
modeling-simulation-workbench | Resources Get Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/resources-get-support.md | Title: Get support for Modeling and Simulation Workbench -description: In this article, learn how to get support for Modeling and Simulation Workbench deployment. +description: Learn how to get support for Modeling and Simulation Workbench deployment. |
modeling-simulation-workbench | Resources Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/resources-troubleshoot.md | Title: Troubleshoot Azure Modeling and Simulation Workbench -description: In this article, learn how to troubleshoot some issues with an Azure Modeling and Simulation Workbench +description: Learn how to troubleshoot issues with an Azure Modeling and Simulation Workbench. A *not authorized error* while accessing the remote desktop dashboard URL indica #### Failing for all users -- Review the [Create an application in Microsoft Entra ID](./quickstart-create-portal.md#create-an-application-in-azure-active-directory) article to verify your application registration is set up correctly.-- Review the [Update the application in Microsoft Entra ID](./quickstart-create-portal.md#add-redirect-uris-for-the-application-in-azure-active-directory) article to confirm your chamber connector's redirect URIs are set up correctly.+- Review the [Create an application in Microsoft Entra ID](./quickstart-create-portal.md#create-an-application-in-microsoft-entra-id) article to verify your application registration is set up correctly. +- Review the redirect URI registrations for the specific chamber and confirm the connector's redirects match those found with the application. If they don't match, [re-register the redirect URIs](./how-to-guide-add-redirect-uris.md). - Review the application registration secrets for Modeling and Simulation Workbench and check to see if your application client secret has expired. Complete the following steps if it's expired. 1. Generate a new secret and make note of the client secret value. 1. Update your Key Vault app secret value with the newly generated client secret value. A *not authorized error* while accessing the remote desktop dashboard URL indica #### Failing for some users 1. Ensure the user is provisioned as a Chamber User or a Chamber Admin on the **chamber** resource. The role should be assigned directly on that chamber, not inherited from a parent resource.-1. Ensure the user has a valid email set for their Microsoft Entra profile, and that their Microsoft Entra alias matches their email alias. For example, a Microsoft Entra sign-in alias of _jane.doe_ must also have an email alias of _jane.doe_. Jane Doe can't sign in to Microsoft Entra ID with jadoe or any other variation. -1. Validate your /mount/sharehome folder has available space. The /mount/sharedhome directory is set up to store user keys to establish a secure connection. Don't store uploaded tarballs/binaries in this folder or install tools and use disk capacity, as it may create system connection errors causing an outage. Use /mount/chamberstorages/\<storage name\> directory instead for all your data storage and tool installation needs. +1. Ensure the user has a valid email set for their Microsoft Entra profile, and that their Microsoft Entra alias matches their email alias. For example, a Microsoft Entra sign-in alias of *jane.doe* must also have an email alias of *jane.doe*. Jane Doe can't sign in to Microsoft Entra ID with jadoe or any other variation. +1. Validate your `/mount/sharedhome` folder has available space. The `/mount/sharedhome` directory is set up to store user keys to establish a secure connection. Don't store uploaded tarballs/binaries in this folder or install tools that use disk capacity, as doing so may create system connection errors causing an outage.
Use the `/mount/chamberstorages/\<storage name\>` directory instead for all your data storage and tool installation needs. 1. Validate your folder permission settings are correct within your chamber. User provisioning may not work properly if the folder permission settings aren't correct. You can check folder permissions in a terminal session using the *ls -al* command for each /mount/sharedhome/\<useralias\>/.ssh folder; the results should match the following expectations: ```text |
modeling-simulation-workbench | Shared Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/shared-storage.md | Title: Shared storage for Modeling and Simulation Workbench + Title: "Shared storage: Azure Modeling and Simulation Workbench" description: This article provides an overview of shared storage for Azure Modeling and Simulation Workbench workbench component. Last updated 08/21/2024 # Shared storage for Modeling and Simulation Workbench -To enable cross team and/or cross-organization collaboration in a secure manner within the workbench, a shared storage resource allows for selective data sharing between collaborating parties. It's an Azure NetApp Files based storage volume and is available to deploy in multiples of 4 TBs. Workbench owners can create multiple shared storage instances on demand and dynamically link them to existing chambers to facilitate secure collaboration. +To enable secure cross-team and cross-organization collaboration within the workbench, a shared storage resource allows for selective data sharing between collaborating parties. It's an Azure NetApp Files based storage volume and is available to deploy in multiples of 4 TB. Workbench owners can create multiple shared storage instances on demand and dynamically link them to existing chambers to facilitate secure collaboration. Users who are provisioned to a specific chamber can access all shared storage volumes linked to that chamber. Once users are deprovisioned from a chamber, or that chamber is deleted, they lose access to any linked shared storage volumes. |
operational-excellence | Relocation Netapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-netapp.md | Title: Relocate Azure NetApp Files volume to another region + Title: Relocate an Azure NetApp Files volume to another region description: Learn how to relocate an Azure NetApp Files volume to another region Previously updated : 08/14/2024 Last updated : 09/04/2024 - subject-relocation -# Relocate Azure NetApp Files volume to another region +# Relocate an Azure NetApp Files volume to another region This article covers guidance for relocating [Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) volumes to another region. [!INCLUDE [relocate-reasons](./includes/service-relocation-reason-include.md)] ## Prerequisites -Before you begin the relocation planning stage, first review the following prerequisites: +Before you begin the relocation planning stage, review the following prerequisites: -- The target NetApp account instance should already be created.+- The target NetApp account should already be created. - Source and target regions must be paired regions. To see if they're paired, see [Supported cross-region replication pairs](../azure-netapp-files/cross-region-replication-introduction.md?#supported-region-pairs). - Understand all dependent resources. Some of the resources could be:- - Microsoft Entra ID + - [Microsoft Entra ID](../azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md) - [Virtual Network](./relocation-virtual-network.md) - Azure DNS - [Storage services](./relocation-storage-account.md) Before you begin the relocation planning stage, first review the following prere ## Prepare -Before you begin the relocation process, make sure to complete the following preparations: +Before you begin the relocation process, complete the following preparations: - The target Microsoft Entra ID connection must have access to the DNS servers, AD DS Domain Controllers, or Microsoft Entra Domain Services Domain Controllers that are reachable from the delegated subnet in the target region. -- The network configurations (including separate subnets if needed and IP ranges) should already be planned and prepared+- The network configurations (including separate subnets if needed and IP ranges) should already be planned and prepared. -- Turn off replication procedures to disaster recovery region. If you've established a disaster recovery (DR) solution using replication to a DR region, turn off replication to the DR site before initiating relocation procedures.+- Disable replication to the disaster recovery region. If you've established a disaster recovery (DR) solution using replication to a DR region, turn off replication to the DR site before initiating relocation procedures. - Understand the following considerations regarding replication: Before you begin the relocation process, make sure to complete the following pre ## Cleanup -Once the replication is complete, you can then safely delete the replication peering the source volume. +Once the replication is complete, you can safely delete the replication peering for the source volume. To learn how to clean up a replication, see [Delete volume replications or volumes](/azure/azure-netapp-files/cross-region-replication-delete).
- ## Related content - - [Cross-region replication of Azure NetApp Files volumes](../azure-netapp-files/cross-region-replication-introduction.md) To learn more about moving resources between regions and disaster recovery in Azure, refer to: |
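Before deleting the replication peering, it can help to confirm that the volumes now exist in the target region. A minimal sketch using the Azure NetApp Files CLI; the resource group, account, and pool names are placeholders:

```bash
# List volumes in the target-region NetApp account to confirm the
# replicated data landed (all names are placeholders).
az netappfiles volume list \
  --resource-group targetResourceGroup \
  --account-name targetNetAppAccount \
  --pool-name targetCapacityPool \
  --output table
```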
operator-nexus | Concepts Security Access Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-security-access-identity.md | Azure Operator Nexus provides the following built-in roles. [Operator Nexus Keyset Administrator Role (Preview)](#operator-nexus-keyset-administrator-role-preview) +[Operator Nexus Owner Role (Preview)](#operator-nexus-owner-role-preview) + > [!NOTE] > Preview roles are subject to change. and updating baremetal machine (BMM) and baseboard management controller (BMC) keysets. | Microsoft.NetworkCloud/clusters/bareMetalMachineKeySets/write | Create a new or update an existing bare metal machine key set of the provided cluster | | Microsoft.NetworkCloud/clusters/bmcKeySets/read | Get baseboard management controller key set of the provided cluster | | Microsoft.NetworkCloud/clusters/bmcKeySets/write | Create a new or update an existing baseboard management controller key set of the provided cluster |-| Microsoft.NetworkCloud/clusters/bmcKeySets/delete | Delete a baseboard management controller key set of the provided cluster +| Microsoft.NetworkCloud/clusters/bmcKeySets/delete | Delete a baseboard management controller key set of the provided cluster | ++### Operator Nexus Owner Role (Preview) ++The user with this role has access to perform all actions on any Microsoft.NetworkCloud resource within the assigned scope. ++| Actions | Description | +|--|--| +| Microsoft.NetworkCloud/* | Perform any action on a Microsoft.NetworkCloud resource | |
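Built-in roles such as these are assigned the same way as any other Azure role. A minimal sketch; the principal and scope are placeholders, and the role name comes from the table above:

```bash
# Assign the preview Operator Nexus Owner role at resource group scope
# (the assignee object ID and scope are placeholders).
az role assignment create \
  --assignee "<user-or-group-object-id>" \
  --role "Operator Nexus Owner Role (Preview)" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myNexusResourceGroup"
```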
operator-nexus | Howto Baremetal Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md | This article describes how to perform lifecycle management operations on bare me - **Replace the BMM** > [!IMPORTANT]-> Disruptive command requests against a Kubernetes Control Plane (KCP) node are rejected if there is another disruptive action command already running against another KCP node or if the full KCP is not available. This check is done to maintain the integrity of the Nexus instance and ensure multiple KCP nodes don't go down at once due to simultaneous disruptive actions. If multiple nodes go down, it will break the healthy quorum threshold of the Kubernetes Control Plane. +> Disruptive command requests against a Kubernetes Control Plane (KCP) node are rejected if there is another disruptive action command already running against another KCP node or if the full KCP is not available. This check is done to maintain the integrity of the Nexus instance and ensure multiple KCP nodes don't become non-operational at once due to simultaneous disruptive actions. If multiple nodes become non-operational, it will break the healthy quorum threshold of the Kubernetes Control Plane. > > The bolded actions in the above list are considered disruptive (Power off, Restart, Reimage, Replace). Cordon without evacuate is not considered disruptive. Cordon with evacuate is considered disruptive. > |
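For context, these lifecycle actions are normally issued through the `networkcloud` CLI extension. A minimal sketch of one disruptive action, with placeholder names (the exact parameter set may vary by extension version):

```azurecli
# Restart a bare metal machine (a disruptive action subject to the KCP check above).
az networkcloud baremetalmachine restart \
    --name "<bare-metal-machine-name>" \
    --resource-group "<cluster-managed-resource-group>"
```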
operator-nexus | Howto Baremetal Nexusctl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-nexusctl.md | + + Title: "Azure Operator Nexus: Running bare metal actions directly with nexusctl" +description: Learn how to bypass Azure and run bare metal actions directly in an emergency using nexusctl. ++++ Last updated : 08/26/2024++++# Run emergency bare metal actions outside of Azure using nexusctl ++This article describes the `nexusctl` utility, which can be used in break-glass (emergency) situations to +run simple actions on bare metal machines without using the Azure console or command-line interface (CLI). ++> [!CAUTION] +> Do not perform any action against management servers without first consulting with Microsoft support personnel. Doing so could affect the integrity of the Operator Nexus Cluster. ++> [!IMPORTANT] +> Disruptive command requests against a Kubernetes Control Plane (KCP) node are rejected if there is another disruptive action command already running against another KCP node or if the full KCP is not available. This check is done to maintain the integrity of the Nexus instance and ensure multiple KCP nodes don't become non-operational at once due to simultaneous disruptive actions. If multiple nodes become non-operational, it will break the healthy quorum threshold of the Kubernetes Control Plane. +> +> Powering off a KCP node is the only nexusctl action considered disruptive in the context of this check. ++## Prerequisites ++- A [BareMetalMachineKeySet](./howto-baremetal-bmm-ssh.md) must be available to allow ssh access to the bare metal machines. The user must have superuser privilege level. +- The platform Kubernetes must be up and running on site. ++## Overview ++`nexusctl` is a stand-alone program that can be run using `nc-toolbox` from an `ssh` session on any control-plane or management-plane node. Since `nexusctl` is contained in the `nc-toolbox-breakglass` container image and isn't installed directly on the host, it must be run with a command line like: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl <command> [subcommand] [options] +``` ++(`nc-toolbox` must always be run as root or with `sudo`.) ++Like most other command-line programs, the `--help` option can be used with any command or subcommand to see more information: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl --help +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal --help +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal power-off --help +``` ++and so on. ++> [!NOTE] +> +> There is no bulk execution against multiple machines. Commands are executed on a machine-by-machine basis. ++## Power off a bare metal machine ++A single bare metal machine can be powered off by connecting to a control-plane or management-plane node via ssh and running the command: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal power-off --name <machine name> +``` ++If the command is accepted, `nexusctl` responds with another command line that can be used to view the status of the long-running operation. Prefix this command with `sudo nc-toolbox nc-toolbox-breakglass`, as follows: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal power-off --status --name <machine name> --operation-id <operation-id> +``` ++The status is blank until the operation completes and reaches either a "succeeded" or "failed" state. While it's blank, assume that the operation is still in progress. 
++## Start a bare metal machine ++A single bare metal machine can be started by connecting to a control-plane or management-plane node via ssh and running the command: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal start --name <machine name> +``` ++If the command is accepted, `nexusctl` responds with another command line that can be used to view the status of the long-running operation. Prefix this command with `sudo nc-toolbox nc-toolbox-breakglass`, as follows: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal start --status --name <machine name> --operation-id <operation-id> +``` ++The status is blank until the operation completes and reaches either a "succeeded" or "failed" state. While it's blank, assume that the operation is still in progress. ++## Unmanage a bare metal machine (set to unmanaged state) ++A single bare metal machine can be switched to an unmanaged state by connecting to a control-plane or management-plane node via ssh and running the command: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal unmanage --name <machine name> +``` ++While a machine is in an unmanaged state, no actions are permitted on it, except for returning it to a managed state (see the next section). This function can be used to keep a bare metal machine powered off if it's in a rebooting crash loop. ++`unmanage` isn't a long-running command, so there's no associated command to check operation status. ++## Manage a bare metal machine (set to managed state) ++A single bare metal machine can be switched to a managed state by connecting to a control-plane or management-plane node via ssh and running the command: ++``` +sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal manage --name <machine name> +``` ++`manage` isn't a long-running command, so there's no associated command to check operation status. |
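For orientation, the commands above compose into the crash-loop containment flow that the `unmanage` section describes. A sketch using a hypothetical machine name (`rack1compute01`) and a placeholder operation ID:

```
# Power the machine off, then keep it off by unmanaging it (hypothetical names).
sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal power-off --name rack1compute01
sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal power-off --status --name rack1compute01 --operation-id <operation-id>
sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal unmanage --name rack1compute01

# After remediation, return the machine to service.
sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal manage --name rack1compute01
sudo nc-toolbox nc-toolbox-breakglass nexusctl baremetal start --name rack1compute01
```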
operator-nexus | Howto Install Cli Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md | Name Version -- - monitor-control-service 0.4.1 connectedmachine 0.7.0-connectedk8s 1.7.3 +connectedk8s 1.9.2 k8s-extension 1.4.3 networkcloud 1.1.0 k8s-configuration 2.0.0-managednetworkfabric 6.2.0 +managednetworkfabric 6.4.0 customlocation 0.1.3-ssh 2.0.4 +ssh 2.0.5 ``` <!-- LINKS - External --> |
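If your installed versions are older than the ones listed, a sketch of the upgrade path using the standard extension commands:

```azurecli
# Update the extensions whose versions changed, then verify.
az extension update --name connectedk8s
az extension update --name managednetworkfabric
az extension update --name ssh
az extension list --output table
```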
operator-service-manager | Get Started With Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/get-started-with-private-link.md | + + Title: Get started with Azure Operator Service Manager Private Link +description: Secure backhaul connectivity of on-premises artifact store hosted on Azure Operator Nexus ++ Last updated : 09/04/2024+++++# Get started with private link ++## Overview +This guide describes the Azure Operator Service Manager (AOSM) private link (PL) feature for artifact stores hosted on Azure Operator Nexus. As part of the AOSM edge registry initiative, PL uses Azure private endpoints and the Azure Private Link service to securely backhaul Nexus on-premises artifact store traffic. This traffic is never exposed to the internet, instead exclusively traversing Microsoft's private network. ++## Introduction +This document provides a quick start guide to enabling the private link feature for an AOSM artifact store by using the AOSM Publisher APIs. ++### Required permissions +The operations that link and manage a private endpoint with a Nexus fabric controller (NFC) require the following nondefault role privileges. ++#### Permissions for linking and managing a manual private endpoint +Remove private endpoint +``` +"Microsoft.HybridNetwork/publishers/artifactStores/removePrivateEndPoints/action" +``` +Approve private endpoint +``` +"Microsoft.HybridNetwork/publishers/artifactStores/approvePrivateEndPoints/action" +``` +#### Permissions for linking and managing a private endpoint with NFC +Add NFC private endpoints +``` +"Microsoft.HybridNetwork/publishers/artifactStores/addNetworkFabricControllerEndPoints/action" +"Microsoft.ManagedNetworkFabric/networkFabricControllers/joinartifactstore/action" +``` +List NFC private endpoints +``` +"Microsoft.HybridNetwork/publishers/artifactStores/listNetworkFabricControllerPrivateEndPoints/action" +``` +Delete NFC private endpoints +``` +"Microsoft.HybridNetwork/publishers/artifactStores/deleteNetworkFabricControllerEndPoints/action" +"Microsoft.ManagedNetworkFabric/networkFabricControllers/disjoinartifactstore/action" +``` ++> [!NOTE] +> As new NFC permissions are introduced, the recommended role privileges will be updated. ++## Use AOSM APIs to set up private link +Before resources can be uploaded securely, the following sequence of operations establishes a PL connection to the artifact store. ++### Create publisher and artifact store +* Create a new publisher resource with identity type set to 'SystemAssigned.' + - If the publisher was already created without this property, use a reput operation to update. +* Use the new property 'backingResourcePublicNetworkAccess' to disable artifact store public access. + - The property was first added in the 2024-04-15 API version. + - If the ArtifactResource was already created without this property, use a reput operation to update. 
++#### Sample publisher bicep script ++``` +param location string = resourceGroup().location +param publisherName string +param acrArtifactStoreName string ++/* AOSM publisher resource creation +*/ +var publisherNameWithLocation = concat(publisherName, uniqueString(resourceGroup().id)) +resource publisher 'Microsoft.HybridNetwork/publishers@2023-09-01' = { + name: publisherNameWithLocation + location: location + identity: { + type: 'SystemAssigned' + } + properties: { + scope: 'Private' + } +} ++/* AOSM artifact store resource creation +*/ +resource acrArtifactStore 'Microsoft.HybridNetwork/publishers/artifactStores@2024-04-15' = { + parent: publisher + name: acrArtifactStoreName + location: location + properties: { + storeType: 'AzureContainerRegistry' + backingResourcePublicNetworkAccess: 'Disabled' + } +} +``` ++## Manual endpoint operations +The following operations enable manual management of an artifact store once the PL is established. ++### Manage private endpoint access +By default, when the artifact store is connected to the vnet, the user doesn't have permissions to the ACR, so the private endpoint winds up in a pending state. The following `az rest` commands and payload enable a user to approve, reject, and list these endpoints. ++> [!NOTE] +> In this workflow, the vnet is managed by the customer. +> ++#### Sample JSON payload: +``` +{ + "manualPrivateEndPointConnections": [ + { + "id":"/subscriptions/<subscriptionId>/resourceGroups/<ResourceGroup>/providers/Microsoft.Network/privateEndpoints/peName" + } + ] +} +``` ++#### Sample private endpoint commands +``` +# approve private endpoints +az rest --method post --url https://management.azure.com/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.HybridNetwork/publishers/<Publisher>/artifactStores/<ArtifactStore>/approveprivateendpoints?api-version=2024-04-15 --body '{ \"manualPrivateEndPointConnections\" : [ { \"id\" : \"/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.Network/privateEndpoints/peName\" } ] }' +``` +``` +# remove private endpoints +az rest --method post --url https://management.azure.com/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.HybridNetwork/publishers/<Publisher>/artifactStores/<ArtifactStore>/removeprivateendpoints?api-version=2024-04-15 --body '{ \"manualPrivateEndPointConnections\" : [ { \"id\" : \"/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.Network/privateEndpoints/peName\" } ] }' +``` +``` +# list private endpoints +az rest --method post --url https://management.azure.com/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.HybridNetwork/publishers/<Publisher>/artifactStores/<artifactStore>/listPrivateEndPoints?api-version=2024-04-15 --body '{}' +``` ++### Add private endpoints to NFC +The following `az rest` commands enable a user to create, remove, and list the associations between the private endpoint, the ACR, and the Nexus managed vnets. 
++#### Sample private endpoint commands +``` +# add nfc private endpoints +az rest --method post --url https://management.azure.com/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.HybridNetwork/publishers/<Publisher>/artifactStores/<artifactStore>/addnetworkfabriccontrollerendpoints?api-version=2024-04-15 --body '{ \"networkFabricControllerIds\":[{\"id\": \"/subscriptions/<Subscription>/resourceGroups/op2lab-nfc-useop1/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/op2labnfc01\"}] }' +``` +``` +# list nfc private endpoints +az rest --method post --url https://management.azure.com/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.HybridNetwork/publishers/<Publisher>/artifactStores/<artifactStore>/listnetworkfabriccontrollerprivateendpoints?api-version=2024-04-15 --body '{}' +``` +``` +# delete nfc private endpoints +az rest --method post --url https://management.azure.com/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.HybridNetwork/publishers/<publisher>/artifactStores/<artifactStore>/deletenetworkfabriccontrollerendpoints?api-version=2024-04-15 --body '{ \"networkFabricControllerIds\":[{\"id\": \"/subscriptions/<Subscription>/resourceGroups/op2lab-nfc-useop1/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/op2labnfc01\"}] }' +``` |
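To exercise the Bicep sample shown earlier, a deployment sketch — the file name, resource group, and parameter values are hypothetical placeholders:

```azurecli
# Deploy the publisher and artifact store defined in the sample Bicep file.
az deployment group create \
    --resource-group <ResourceGroup> \
    --template-file publisher.bicep \
    --parameters publisherName=<PublisherName> acrArtifactStoreName=<ArtifactStoreName>
```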
sentinel | Create Codeless Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md | Research the following components and verify support for them in the [Data Conne 1. Pagination options to the data source -We also recommend testing your components with an API testing tool like one of the following: +### Testing APIs ++We recommend testing your components with an API testing tool like one of the following: - [Visual Studio Code](https://code.visualstudio.com/download) with an [extension from Visual Studio Marketplace](https://marketplace.visualstudio.com/vscode) - [PowerShell Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) Notes: 2) Since this is a type of API polling connector, set the `connectivityCriteria` type to `hasDataConnectors` 3) The example `instructionsSteps` include a button of type `ConnectionToggleButton`. This button helps trigger the deployment of data connector rules based on the connection parameters specified. -Use Postman to call the data connector definitions API to create the data connector UI in order to validate it in the data connectors gallery. +Use an [API testing tool](#testing-apis) to call the data connector definitions API to create the data connector UI in order to validate it in the data connectors gallery. To learn from an example, see the [Data connector definitions reference example section](data-connector-ui-definitions-reference.md#example-data-connector-definition). For more information on building this section, see the [Data connector connectio To learn from an example, see the [Data connector connection rules reference example](data-connector-connection-rules-reference.md#example-ccp-data-connector). -Use Postman to call the data connector API to create the data connector which combines the connection rules and previous components. Verify the connector is now connected in the UI. +Use an [API testing tool](#testing-apis) to call the data connector API to create the data connector which combines the connection rules and previous components. Verify the connector is now connected in the UI. ## Secure confidential input The following DCR defines a single stream `Custom-ExampleConnectorInput` using t For more information on the structure of this example, see [Structure of a data collection rule](../azure-monitor/essentials/data-collection-rule-structure.md). -To create this DCR in a test environment, follow the [Data Collection Rules API](/rest/api/monitor/data-collection-rules/create). Elements of the example in `{{double curly braces}}` indicate variables that require values with ease of use for Postman. When you create this resource in the ARM template, the variables expressed here are exchanged for parameters. +To create this DCR in a test environment, follow the [Data Collection Rules API](/rest/api/monitor/data-collection-rules/create). Elements of the example in `{{double curly braces}}` indicate variables that require values for ease of use with an [API testing tool](#testing-apis). When you create this resource in the ARM template, the variables expressed here are exchanged for parameters. ```json { |
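Where the excerpt above says to call the data connector definitions API with an API testing tool, the same PUT can be issued from the command line. The following `az rest` sketch is illustrative only: the workspace path and definition name are placeholders, and the API version shown is an assumption — take the authoritative URL, API version, and payload from the data connector definitions reference.

```azurecli
# Hypothetical sketch: PUT a CCP data connector definition for validation in the gallery.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<Subscription>/resourceGroups/<ResourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<Workspace>/providers/Microsoft.SecurityInsights/dataConnectorDefinitions/<DefinitionName>?api-version=2023-02-01-preview" \
    --body @dataConnectorDefinition.json
```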
site-recovery | Hyper V Azure Common Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-common-questions.md | Title: Common questions for Hyper-V disaster recovery with Azure Site Recovery description: This article summarizes common questions about setting up disaster recovery for on-premises Hyper-V VMs to Azure using the Azure Site Recovery service. Previously updated : 05/26/2023 Last updated : 07/10/2024 During replication, data is replicated to Azure storage, and you don't pay any V You will typically see an increase in the transaction cost incurred on GPv2 storage accounts since Azure Site Recovery is transaction heavy. [Read more](../storage/common/storage-account-upgrade.md#pricing-and-billing) to estimate the change. +### Does Site Recovery work with reserved instances? ++Yes, you can purchase [reserved Azure virtual machines](https://azure.microsoft.com/pricing/reserved-vm-instances/) in the disaster recovery region, and Site Recovery failover operations use them. No additional configuration is needed. + ## Azure ### What do I need in Hyper-V to orchestrate replication with Site Recovery? |
site-recovery | Vmware Azure Common Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-common-questions.md | Title: Common questions about VMware disaster recovery with Azure Site Recovery description: Get answers to common questions about disaster recovery of on-premises VMware VMs to Azure by using Azure Site Recovery. Previously updated : 04/10/2024 Last updated : 07/10/2024 Managed disks are charged slightly differently from storage accounts. [Learn mor You'll typically see an increase in the transaction cost incurred on GPv2 storage accounts since Azure Site Recovery is transaction heavy. [Read more](../storage/common/storage-account-upgrade.md#pricing-and-billing) to estimate the change. +### Does Site Recovery work with reserved instances? ++Yes, you can purchase [reserved Azure virtual machines](https://azure.microsoft.com/pricing/reserved-vm-instances/) in the disaster recovery region, and Site Recovery failover operations use them. No additional configuration is needed. + ## Mobility service ### Where can I find the Mobility service installers? |
spring-apps | Quickstart Logs Metrics Tracing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-logs-metrics-tracing.md | There are two ways to see logs on Azure Spring Apps: **Log Streaming** of real-t ### Log streaming -You can use log streaming in the Azure CLI with the following command. +#### [Azure portal](#tab/azure-portal-1) +++#### [Azure CLI](#tab/Azure-CLI-1) ++You can use log streaming in the Azure CLI with the following command: ```azurecli az spring app logs --name solar-system-weather --follow Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal > [!TIP] > Use `az spring app logs -h` to explore more parameters and log stream functionality. ++ ### Log Analytics 1. In the Azure portal, go to the **service | Overview** page and select **Logs** in the **Monitoring** section. Select **Run** on one of the sample queries for Azure Spring Apps. There are two ways to see logs on Azure Spring Apps: **Log Streaming** of real-t ### Log streaming -#### [CLI](#tab/Azure-CLI) +#### [Azure portal](#tab/azure-portal) +++#### [Azure CLI](#tab/Azure-CLI) -You can use log streaming in the Azure CLI with the following command. +You can use log streaming in the Azure CLI with the following command: ```azurecli az spring app logs \ To learn more about the query language that's used in Log Analytics, see [Azure #### [IntelliJ](#tab/IntelliJ) -To get the logs using Azure Toolkit for IntelliJ: +Use the following steps to get the logs using the Azure Toolkit for IntelliJ: 1. Select **Azure Explorer**, then **Spring Cloud**. |
spring-apps | Diagnostic Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/diagnostic-services.md | This article shows you how to analyze diagnostics data in Azure Spring Apps. Using the diagnostics functionality of Azure Spring Apps, you can analyze logs and metrics with any of the following -* Use Azure Log Analytics. There is a delay when exporting logs to Log Analytics. +* Use Azure Log Analytics. There's a delay when exporting logs to Log Analytics. * Save logs to a storage account for auditing or manual inspection. You can specify the retention time (in days). * Stream logs to your event hub for ingestion by a third-party service or custom analytics solution. To get started, enable one of these services to receive the data. To learn about 1. In the Azure portal, go to your Azure Spring Apps instance. 1. Select **diagnostics settings** option, and then select **Add diagnostics setting**.-1. Enter a name for the setting, and then choose where you want to send the logs. You can select any combination of the following three options: - * **Archive to a storage account** - * **Stream to an event hub** - * **Send to Log Analytics** - * **Send to partner solution** +1. Enter a name for the setting, and then choose where you want to send the logs. You can select any combination of the following options: ++ * **Archive to a storage account** + * **Stream to an event hub** + * **Send to Log Analytics** + * **Send to partner solution** 1. Choose which log category and metric category you want to monitor, and then specify the retention time (in days). The retention time applies only to the storage account. 1. Select **Save**. To get started, enable one of these services to receive the data. To learn about There are various methods to view logs and metrics as described under the following headings. -### Use the Logs blade +### Use the Logs pane 1. In the Azure portal, go to your Azure Spring Apps instance. 1. To open the **Log Search** pane, select **Logs**.-1. In the **Tables** search box - * To view logs, enter a simple query such as: +1. In the **Tables** search box, use one of the following queries: ++ * To view logs, enter a query such as the following example: - ```sql - AppPlatformLogsforSpring - | limit 50 - ``` + ```kusto + AppPlatformLogsforSpring + | limit 50 + ``` - * To view metrics, enter a simple query such as: + * To view metrics, enter a query such as the following example: - ```sql - AzureMetrics - | limit 50 - ``` + ```kusto + AzureMetrics + | limit 50 + ``` 1. To view the search result, select **Run**. There are various methods to view logs and metrics as described under the follow 1. In the Azure portal, in the left pane, select **Log Analytics**. 1. Select the Log Analytics workspace that you chose when you added your diagnostics settings. 1. To open the **Log Search** pane, select **Logs**.-1. In the **Tables** search box, - * to view logs, enter a simple query such as: +1. In the **Tables** search box, use one of the following queries: ++ * To view logs, enter a query such as the following example: - ```sql - AppPlatformLogsforSpring - | limit 50 - ``` + ```kusto + AppPlatformLogsforSpring + | limit 50 + ``` - * to view metrics, enter a simple query such as: + * To view metrics, enter a query such as the following example: - ```sql - AzureMetrics - | limit 50 - ``` + ```kusto + AzureMetrics + | limit 50 + ``` 1. To view the search result, select **Run**.-1. 
You can search the logs of the specific application or instance by setting a filter condition: +1. You can search the logs of the specific application or instance by setting a filter condition, as shown in the following example: - ```sql - AppPlatformLogsforSpring - | where ServiceName == "YourServiceName" and AppName == "YourAppName" and InstanceName == "YourInstanceName" - | limit 50 - ``` + ```kusto + AppPlatformLogsforSpring + | where ServiceName == "YourServiceName" and AppName == "YourAppName" and InstanceName == "YourInstanceName" + | limit 50 + ``` - > [!NOTE] - > `==` is case sensitive, but `=~` is not. + > [!NOTE] + > `==` is case sensitive, but `=~` is not. To learn more about the query language that's used in Log Analytics, see [Azure Monitor log queries](/azure/data-explorer/kusto/query/). To query all your Log Analytics logs from a centralized client, check out [Azure Data Explorer](/azure/data-explorer/query-monitor-data). Application logs provide critical information and verbose logs about your applic To review a list of application logs from Azure Spring Apps, sorted by time with the most recent logs shown first, run the following query: -```sql +```kusto AppPlatformLogsforSpring | project TimeGenerated , ServiceName , AppName , InstanceName , Log | sort by TimeGenerated desc AppPlatformLogsforSpring To review unsorted log entries that mention an error or exception, run the following query: -```sql +```kusto AppPlatformLogsforSpring | project TimeGenerated , ServiceName , AppName , InstanceName , Log | where Log contains "error" or Log contains "exception" Use this query to find errors, or modify the query terms to find specific error To create a pie chart that displays the number of errors and exceptions logged by your application in the last hour, run the following query: -```sql +```kusto AppPlatformLogsforSpring | where TimeGenerated > ago(1h) | where Log contains "error" or Log contains "exception" AppPlatformLogsforSpring ### Show ingress log entries containing a specific host -To review log entries that are generated by a specific host, run the following query: +To review log entries generated by a specific host, run the following query: -```sql +```kusto AppPlatformIngressLogs | where TimeGenerated > ago(1h) and Host == "ingress-asc.test.azuremicroservices.io" | project TimeGenerated, RemoteIP, Host, Request, Status, BodyBytesSent, RequestTime, ReqId, RequestHeaders Use this query to find response `Status`, `RequestTime`, and other properties of To review log entries for a specific `requestId` value *\<request_ID>*, run the following query: -```sql +```kusto AppPlatformIngressLogs | where TimeGenerated > ago(1h) and ReqId == "<request_ID>" | project TimeGenerated, RemoteIP, Host, Request, Status, BodyBytesSent, RequestTime, ReqId, RequestHeaders AppPlatformIngressLogs To review log entries for a specific app during the build process, run the following query: -```sql +```kusto AppPlatformBuildLogs | where TimeGenerated > ago(1h) and PodName contains "<app-name>" | sort by TimeGenerated AppPlatformBuildLogs To review log entries for a specific app in a specific build stage, run the following query. Replace the *`<app-name>`* placeholder with your application name. Replace the *`<build-stage>`* placeholder with one of the following values, which represent the stages of the build process: `prepare`, `detect`, `restore`, `analyze`, `build`, `export`, or `completion`. 
-```sql +```kusto AppPlatformBuildLogs | where TimeGenerated > ago(1h) and PodName contains "<app-name>" and ContainerName == "<build-stage>" | sort by TimeGenerated AppPlatformBuildLogs To review log entries for VMware Spring Cloud Gateway logs in the Enterprise plan, run the following query: -```sql +```kusto AppPlatformSystemLogs  | where LogType == "SpringCloudGateway" | project TimeGenerated , LogType, Level , ServiceName , Thread , Stack , Log , _ResourceId  AppPlatformSystemLogs  Another component, named Spring Cloud Gateway Operator, controls the lifecycle of Spring Cloud Gateway and routes. If you encounter any issues with the route not taking effect, check the logs for this component. To review log entries for VMware Spring Cloud Gateway Operator in the Enterprise plan, run the following query: -```sql +```kusto AppPlatformSystemLogs  | where LogType == "SpringCloudGatewayOperator" | project TimeGenerated , LogType, Level , ServiceName , Thread , Stack , Log , _ResourceId  AppPlatformSystemLogs  To review log entries for Application Configuration Service for Tanzu logs in the Enterprise plan, run the following query: -```sql +```kusto AppPlatformSystemLogs  | where LogType == "ApplicationConfigurationService" | project TimeGenerated , LogType, Level , ServiceName , Thread , Stack , Log , _ResourceId  AppPlatformSystemLogs  To review log entries for Tanzu Service Registry logs in the Enterprise plan, run the following query: -```sql +```kusto AppPlatformSystemLogs  | where LogType == "ServiceRegistry" | project TimeGenerated , LogType, Level , ServiceName , Thread , Stack , Log , _ResourceId  AppPlatformSystemLogs  To review log entries for API portal for VMware Tanzu logs in the Enterprise plan, run the following query: -```sql +```kusto AppPlatformSystemLogs  | where LogType == "ApiPortal" | project TimeGenerated , LogType, Level , ServiceName , Thread , Stack , Log , _ResourceId  AppPlatformSystemLogs  Azure Monitor provides extensive support for querying application logs by using Log Analytics. To learn more about this service, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md). For more information about building queries to analyze your application logs, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md). +### Convenient entry points in Azure portal ++Use following steps to navigate to the **Log Analytics** pane with predefined queries: ++1. Go to the **Overview** page for your Azure Spring Apps service instance and then select **Apps** in the navigation pane. ++1. Find your target app and then select the context menu. ++1. In the pop-up context menu, select **View logs**. ++ :::image type="content" source="media/diagnostic-services/view-logs.png" alt-text="Screenshot of the Azure portal that shows the Apps page with the View logs context menu item highlighted." lightbox="media/diagnostic-services/view-logs.png"::: ++ This action navigates you to the **Log Analytics** pane with predefined queries. ++There are other entry points to view logs. You can also find the **View logs** button for managed components such as Build Service and Service Registry. + ## Frequently asked questions (FAQ) ### How do I convert multi-line Java stack traces into a single line? -There is a workaround to convert your multi-line stack traces into a single line. You can modify the Java log output to reformat stack trace messages, replacing newline characters with a token. 
If you use Java Logback library, you can reformat stack trace messages by adding `%replace(%ex){'[\r\n]+', '\\n'}%nopex` as follows: +There's a workaround to convert your multi-line stack traces into a single line. You can modify the Java log output to reformat stack trace messages, replacing newline characters with a token. If you use Java Logback library, you can reformat stack trace messages by adding `%replace(%ex){'[\r\n]+', '\\n'}%nopex` as follows: ```xml <configuration> There is a workaround to convert your multi-line stack traces into a single line </configuration> ``` -You can then replace the token with newline characters in Log Analytics as below: +You can then replace the token with newline characters in Log Analytics, as shown in the following example: -```sql +```kusto AppPlatformLogsforSpring | extend Log = array_strcat(split(Log, '\\n'), '\n') ``` -You may be able to use the same strategy for other Java log libraries. +You might be able to use the same strategy for other Java log libraries. ## Next steps |
spring-apps | How To Deploy In Azure Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-in-azure-virtual-network.md | If you already have a virtual network to host an Azure Spring Apps instance, ski ## Grant service permission to the virtual network -This section shows you to grant Azure Spring Apps the [Owner](../../role-based-access-control/built-in-roles.md#owner) permission on your virtual network. This permission enables you to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. +This section shows you how to grant Azure Spring Apps the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) and [Network Contributor](../../role-based-access-control/built-in-roles.md#network-contributor) permissions on your virtual network. These permissions enable you to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. + > [!NOTE]-> The minimal required permissions are [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) and [Network Contributor](../../role-based-access-control/built-in-roles.md#network-contributor). You can grant role assignments to both of them if you can't grant `Owner` permission. -> > If you're using your own route table or a user defined route feature, you also need to grant Azure Spring Apps the same role assignments to your route tables. For more information, see the [Bring your own route table](#bring-your-own-route-table) section and [Control egress traffic for an Azure Spring Apps instance](how-to-create-user-defined-route-instance.md). ### [Azure portal](#tab/azure-portal) Use the following steps to grant permission: :::image type="content" source="media/how-to-deploy-in-azure-virtual-network/access-control.png" alt-text="Screenshot of the Azure portal Access Control (IAM) page showing the Check access tab with the Add role assignment button highlighted." lightbox="media/how-to-deploy-in-azure-virtual-network/access-control.png"::: -1. Assign the `Owner` role to the Azure Spring Cloud Resource Provider. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml). --- :::image type="content" source="./media/how-to-deploy-in-azure-virtual-network/assign-owner-resource-provider.png" alt-text="Screenshot of the Azure portal Access Control page with Add role assignment pane and Select box with Azure Spring Cloud Resource Provider highlighted." lightbox="./media/how-to-deploy-in-azure-virtual-network/assign-owner-resource-provider.png"::: +1. Assign the `Network Contributor` and `User Access Administrator` roles to the Azure Spring Cloud Resource Provider. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml). + > [!NOTE] + > The `User Access Administrator` role is listed under **Privileged administrator roles**, and the `Network Contributor` role is listed under **Job function roles**. 
### [Azure CLI](#tab/azure-CLI) export VIRTUAL_NETWORK_RESOURCE_ID=$(az network vnet show \ --output tsv) az role assignment create \- --role "Owner" \ + --role "User Access Administrator" \ + --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \ + --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2 ++az role assignment create \ + --role "Network Contributor" \ --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \ --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2 ``` If your custom subnets don't contain route tables, Azure Spring Apps creates the ### Route table requirements -The route tables to which your custom vnet is associated must meet the following requirements: +The route tables to which your custom virtual network is associated must meet the following requirements: -* You can associate your Azure route tables with your vnet only when you create a new Azure Spring Apps service instance. You can't change to use another route table after Azure Spring Apps has been created. +* You can associate your Azure route tables with your virtual network only when you create a new Azure Spring Apps service instance. You can't switch to another route table after you create an Azure Spring Apps instance. * The Spring application subnet and the service runtime subnet must each be associated with a different route table, or neither subnet should be associated with one.-* Permissions must be assigned before instance creation. Be sure to grant Azure Spring Cloud Resource Provider the `Owner` permission (or `User Access Administrator` and `Network Contributor` permissions) on your route tables. +* Permissions must be assigned before instance creation. Be sure to grant Azure Spring Cloud Resource Provider the `User Access Administrator` and `Network Contributor` permissions on your route tables. * You can't update the associated route table resource after cluster creation. While you can't update the route table resource, you can modify custom rules on the route table. * You can't reuse a route table with multiple instances due to potential conflicting routing rules. If your custom DNS server can't add Azure DNS IP `168.63.129.16` as the upstream ## Next steps -* [Troubleshooting Azure Spring Apps in VNET](troubleshooting-vnet.md) -* [Customer Responsibilities for Running Azure Spring Apps in VNET](vnet-customer-responsibilities.md) +* [Troubleshooting Azure Spring Apps in virtual networks](troubleshooting-vnet.md) +* [Customer responsibilities for running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md) |
spring-apps | How To Log Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-log-streaming.md | This article describes how to enable log streaming in the Azure CLI to get real- - [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension, version 1.0.0 or higher. You can install the extension by using the following command: `az extension add --name spring` - An instance of Azure Spring Apps with a running application. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md). -## Use the Azure CLI to produce tail logs +## Stream logs ++### [Azure portal](#tab/azure-portal) +++### [Azure CLI](#tab/azure-CLI) This section provides examples of using the Azure CLI to produce tail logs. To avoid repeatedly specifying your resource group and service instance name, use the following commands to set your default resource group name and cluster name: Single vip registry refresh property : null > {timestamp} {level:>5} [{thread:>15.15}] {logger{39}:<40.40}: {message}{n}{stackTrace} > ``` ++ ## Stream an Azure Spring Apps app log in a virtual network injection instance For an Azure Spring Apps instance deployed in a custom virtual network, you can access log streaming by default from a private network. For more information, see [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md) |
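For reference, a minimal sketch of the defaults setup the excerpt mentions, followed by a streaming call; the names are placeholders:

```azurecli
# Set defaults so later commands can omit --resource-group and --service.
az configure --defaults group=<resource-group-name> spring=<service-instance-name>

# Stream (tail) logs for an app.
az spring app logs --name <app-name> --follow
```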
spring-apps | Quickstart Deploy Infrastructure Vnet Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-azure-cli.md | The Enterprise deployment plan includes the following Tanzu components: * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md). * Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges won't be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.-* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). +* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires `User Access Administrator` and `Network Contributor` permissions to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. 
For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). |
spring-apps | Quickstart Deploy Infrastructure Vnet Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-bicep.md | The Enterprise deployment plan includes the following Tanzu components: * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md). * Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges won't be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.-* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). +* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires `User Access Administrator` and `Network Contributor` permissions to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). |
spring-apps | Quickstart Deploy Infrastructure Vnet Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-terraform.md | For more customization including custom domain support, see the [Azure Spring Ap * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md). * Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges won't be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Azure Spring Apps CIDR. Clusters also may not use any IP ranges included within the cluster virtual network address range.-* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). +* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires `User Access Administrator` and `Network Contributor` permissions to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. 
For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). |
spring-apps | Quickstart Deploy Infrastructure Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet.md | The Enterprise deployment plan includes the following Tanzu components: * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md). * Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges aren't directly routable and are used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Azure Spring Apps CIDR ranges. Clusters also may not use any IP ranges included within the cluster virtual network address range.-* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). +* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires `User Access Administrator` and `Network Contributor` permissions to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). |
storage | Immutable Storage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md | A time-based retention policy stores blob data in a WORM format for a specified A time-based retention policy can be configured at the following scopes: -- Version-level WORM policy: A time-based retention policy can be configured at the account, container, or version level. If it's configured at the account or container level, it will be inherited by all blobs in the respective account or container.+- Version-level WORM policy: A time-based retention policy can be configured at the account, container, or version level. If it's configured at the account or container level, it will be inherited by all blobs in the respective account or container. If there is a legal hold on a container, a version-level WORM policy can't be created for that container, because the legal hold prevents versions from being generated. - Container-level WORM policy: A time-based retention policy configured at the container level applies to all blobs in that container. Individual blobs can't be configured with their own immutability policies. ### Retention interval for a time-based policy |
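As a sketch of configuring the container-level scope described above — the account, container, and retention period are placeholders:

```azurecli
# Create a container-level time-based retention policy (unlocked until explicitly locked).
az storage container immutability-policy create \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --container-name <container> \
    --period 30
```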
storage | Storage Metrics Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-metrics-migration.md | To transition to metrics in Azure Monitor, we recommend the following approach. > [!NOTE] > Metrics in Azure Monitor are enabled by default, so there is nothing you need to do to begin capturing metrics. You must however, create charts or dashboards to view those metrics. -5. If you've created alert rules that are based on classic storage metrics, then [create alert rules](../../azure-monitor/alerts/alerts-overview.md) that are based on metrics in Azure Monitor. --6. After you're able to see all of your metrics in Azure Monitor, you can turn off classic logging. +1. If you've created alert rules that are based on classic storage metrics, then [create alert rules](../../azure-monitor/alerts/alerts-overview.md) that are based on metrics in Azure Monitor. <a id="key-differences-between-classic-metrics-and-metrics-in-azure-monitor"></a> |
storage | Storage Files Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md | +* <a id="afs-avrecalls"></a> + **Why is the antivirus software on the Azure File Sync (AFS) server recalling tiered files?** + When users access tiered files, some antivirus (AV) software may cause unintended file recalls. This occurs if the AV software isn't configured to ignore tiered files (those with the RECALL_ON_DATA_ACCESS attribute). + Here's what happens: + 1. A user attempts to access a tiered file. + 2. The AV software blocks the read handle. + 3. The AV application then performs its own read to scan the file for viruses. + + This process may appear as if the AV software is recalling the tiered files, but it's actually triggered by the user's access attempt. To prevent this issue, ensure that your AV vendor configures their software to skip scanning tiered files with the RECALL_ON_DATA_ACCESS attribute. ++* <a id="afs-networkconnect"></a> + **Can SSL inspection software block access to AFS servers?** + Make sure your SSL inspection software (such as Zscaler or FortiGate) allows Azure File Sync (AFS) server endpoints to access Azure. These SSL inspection tools can override firewall settings and selectively allow traffic. Contact your network administrator to resolve this issue. Use the "testnet" command to determine if your AFS server is experiencing this problem. + ## Security, authentication, and access control * <a id="file-auditing"></a> |
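To check for the connectivity issue described in the second question, a PowerShell sketch run on the server; the endpoint shown is an assumed example, so substitute the Azure File Sync endpoints for your environment:

```powershell
# Test outbound HTTPS connectivity from the Azure File Sync server (assumed endpoint).
Test-NetConnection -ComputerName management.azure.com -Port 443
```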
trusted-signing | How To Device Guard Signing Service Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-device-guard-signing-service-migration.md | Title: Device Guard Signing Service migration to Trusted Signing description: Learn how to migrate from Device Guard Signing Service (DGSSv2) to Trusted Signing for code integrity policy -+ |
trusted-signing | How To Signing Integrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-signing-integrations.md | To complete the steps in this article, you need: 1. [Download and install the .NET 8 Runtime](#download-and-install-net-80-runtime). 1. [Download and install the Trusted Signing dlib package](#download-and-install-the-trusted-signing-dlib-package). 1. [Create a JSON file to provide your Trusted Signing account and a certificate profile](#create-a-json-file).-1. [Invoke SignTool to sign a file](#use-signtool-to-sign-a-file). +1. [To sign a file, invoke SignTool](#use-signtool-to-sign-a-file). ### Download and install SignTool To download and install SignTool: 1. Download the latest version of SignTool and Windows Build Tools NuGet at [Microsoft.Windows.SDK.BuildTools](https://www.nuget.org/packages/Microsoft.Windows.SDK.BuildTools/). -1. Install SignTool from the Windows SDK (minimum version: 10.0.2261.755, 20348 Windows SDK version is not supported with our dlib). +1. Install SignTool from the Windows SDK (minimum version: 10.0.2261.755; the 20348 Windows SDK version isn't supported with our dlib). Another option is to use the latest *nuget.exe* file to download and extract the latest Windows SDK Build Tools NuGet package by using PowerShell: To download and install the Trusted Signing dlib package (a .zip file): 1. Download the [Trusted Signing dlib package](https://www.nuget.org/packages/Microsoft.Trusted.Signing.Client). -1. Extract the Trusted Signing dlib zipped content and install it on your signing node in your choice of directory. The node must be the node where you'll use SignTool to sign files. +1. Extract the Trusted Signing dlib zipped content and install it on your signing node in a directory of your choice. The node must be the node where you use SignTool to sign files. Another option is to download the [Trusted Signing dlib package](https://www.nuget.org/packages/Microsoft.Trusted.Signing.Client) via NuGet, similar to the Windows SDK Build Tools NuGet package: To sign by using Trusted Signing, you need to provide the details of your Truste <sup>1</sup> The optional `"CorrelationId"` field is an opaque string value that you can provide to correlate sign requests with your own workflows, such as build identifiers or machine names. +### Authentication ++This task performs authentication by using [DefaultAzureCredential](https://learn.microsoft.com/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet), which attempts a series of authentication methods in order. If one method fails, it attempts the next one until authentication is successful. ++Each authentication method can be disabled individually to avoid unnecessary attempts. 
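As background (not from the article itself), [EnvironmentCredential](https://learn.microsoft.com/dotnet/api/azure.identity.environmentcredential?view=azure-dotnet) reads a service principal from well-known environment variables; a minimal sketch with placeholder values:

```powershell
# Sketch: EnvironmentCredential picks up a service principal from these
# variables. The values are placeholders; set them before invoking SignTool.
$env:AZURE_TENANT_ID     = "<tenant-id>"
$env:AZURE_CLIENT_ID     = "<client-id>"
$env:AZURE_CLIENT_SECRET = "<client-secret>"
```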
++For example, when authenticating with [EnvironmentCredential](https://learn.microsoft.com/dotnet/api/azure.identity.environmentcredential?view=azure-dotnet) specifically, disable the other credentials with the following inputs: ++ExcludeEnvironmentCredential: false +ExcludeManagedIdentityCredential: true +ExcludeSharedTokenCacheCredential: true +ExcludeVisualStudioCredential: true +ExcludeVisualStudioCodeCredential: true +ExcludeAzureCliCredential: true +ExcludeAzurePowershellCredential: true +ExcludeInteractiveBrowserCredential: true ++Similarly, to authenticate with an [AzureCliCredential](https://learn.microsoft.com/dotnet/api/azure.identity.azureclicredential?view=azure-dotnet), disable each credential type that precedes it in the chain. ++ ### Use SignTool to sign a file To invoke SignTool to sign a file: |
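Continuing from that step, a hedged sketch of what the SignTool invocation typically looks like with the dlib and JSON metadata file; the extraction directory, timestamp URL, and target file are placeholder assumptions, not values from the article:

```powershell
# Sketch only; all paths, the timestamp URL, and the file to sign are placeholders.
& signtool.exe sign /v /fd SHA256 `
    /tr "http://timestamp.acs.microsoft.com" /td SHA256 `
    /dlib "C:\Trusted-Signing\bin\x64\Azure.CodeSigning.Dlib.dll" `
    /dmdf "C:\Trusted-Signing\metadata.json" `
    "C:\builds\app.exe"
```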
update-manager | Migration Key Points | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-key-points.md | description: A summary of important pointers while migrating using Azure portal Previously updated : 08/13/2024 Last updated : 09/05/2024 This article lists the significant details that you must note when you're migrat - In the end, you can resolve more machines from Azure Resource Graph as in Azure Update Manager. You can't check if Hybrid Runbook Worker is reporting or not, unlike in Automation Update Management where it was an intersection of Dynamic Queries and Hybrid Runbook Worker. - Machines that are unsupported in Azure Update Manager aren't migrated. Schedules that include such machines are partially migrated; only the supported machines in the software update configuration are moved to Azure Update Manager. To prevent patching by both Automation Update Management and Azure Update Manager, remove migrated machines from deployment schedules in Automation Update Management. - **Post Deboarding**: - - Remove the user managed identity created for migration that is linked with the automation account. For more information, see [Remove user-assigned managed identity for Azure Automation account](../automation/remove-user-assigned-identity.md#remove-using-the-azure-portal). - - [Delete the user managed identity](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#delete-a-user-assigned-managed-identity). + - Make sure you run the [script](https://github.com/azureautomation/Post-Migration-from-Azure-Automation-Update-Management-to-Azure-Update-Manager-Preqrequisite-Cleanup/blob/main/MigrationPrerequisitesCleanup.ps1), which does the following: + - Deletes the automation account variable `AzureAutomationAccountEnvironment` created for migration. + - Removes the user-managed identity created for migration from the automation account. + - Deletes the role assignments for the user-managed identity created for migration. + - Deletes the user-managed identity created for migration. + - To run this script, you must have Microsoft.Authorization/roleAssignments/write permissions on all the subscriptions that contain Automation Update Management resources such as machines, schedules, the Log Analytics workspace, and the automation account. For more information, see [how to assign an Azure role](../role-based-access-control/role-assignments-rest.md). + - Run the script in the same manner as the [prerequisite](migration-using-runbook-scripts.md#prerequisite-2-create-user-identity-and-role-assignments-by-running-powershell-script) script. Post-migration, a Software Update Configuration can have any one of the following four migration statuses: |
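One possible way to fetch and run that cleanup script, sketched using the standard GitHub raw-content URL pattern for the repository linked above; any parameters the script expects are documented in the script itself:

```powershell
# Sketch: download the cleanup script referenced above and run it after
# signing in with an account that has roleAssignments/write permissions.
$uri = "https://raw.githubusercontent.com/azureautomation/Post-Migration-from-Azure-Automation-Update-Management-to-Azure-Update-Manager-Preqrequisite-Cleanup/main/MigrationPrerequisitesCleanup.ps1"
Invoke-WebRequest -Uri $uri -OutFile ".\MigrationPrerequisitesCleanup.ps1"
Connect-AzAccount
.\MigrationPrerequisitesCleanup.ps1
```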
virtual-desktop | Troubleshoot Client Windows Basic Shared | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows-basic-shared.md | |
virtual-desktop | Client Features Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows.md | |
virtual-desktop | Connect Macos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-macos.md | description: Learn how to connect to Azure Virtual Desktop using the Remote Desk Last updated 02/26/2024+ |
virtual-desktop | Connect Web | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-web.md | description: Learn how to connect to Azure Virtual Desktop using the Remote Desk Last updated 10/04/2022+ |
virtual-desktop | Connect Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md | |
virtual-desktop | Whats New Documentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md | description: Learn about new and updated articles to the Azure Virtual Desktop d Previously updated : 07/30/2024 Last updated : 09/05/2024 # What's new in documentation for Azure Virtual Desktop -We update documentation for Azure Virtual Desktop regularly. In this article, we highlight articles for new features and where there are important updates to existing articles. +We update documentation for Azure Virtual Desktop regularly. In this article, we highlight articles for new features and where there are significant updates to existing articles. To learn what's new in the service, see [What's new for Azure Virtual Desktop](whats-new.md). ++## August 2024 ++In August 2024, we made the following changes to the documentation: ++- Published a new set of documentation to learn about peripheral and resource redirection and how to configure different classes of redirection: + - [Peripheral and resource redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md) + - [Configure audio and video redirection over the Remote Desktop Protocol](redirection-configure-audio-video.md). + - [Configure camera, webcam, and video capture redirection over the Remote Desktop Protocol](redirection-configure-camera-webcam-video-capture.md). + - [Configure clipboard redirection over the Remote Desktop Protocol](redirection-configure-clipboard.md). + - [Configure fixed, removable, and network drive redirection over the Remote Desktop Protocol](redirection-configure-drives-storage.md). + - [Configure location redirection over the Remote Desktop Protocol](redirection-configure-location.md). + - [Configure Media Transfer Protocol and Picture Transfer Protocol redirection on Windows over the Remote Desktop Protocol](redirection-configure-plug-play-mtp-ptp.md). + - [Configure printer redirection over the Remote Desktop Protocol](redirection-configure-printers.md). + - [Configure serial or COM port redirection over the Remote Desktop Protocol](redirection-configure-serial-com-ports.md). + - [Configure smart card redirection over the Remote Desktop Protocol](redirection-configure-smart-cards.md). + - [Configure USB redirection on Windows over the Remote Desktop Protocol](redirection-configure-usb.md). + - [Configure WebAuthn redirection over the Remote Desktop Protocol](redirection-configure-webauthn.md). ++- Updated [Set custom Remote Desktop Protocol (RDP) properties on a host pool in Azure Virtual Desktop](customize-rdp-properties.md) to include rewritten steps for Azure PowerShell and added steps for Azure CLI. ++- Updated [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md) to include information on how to publish new Teams as a RemoteApp. ++- Published a new article for [Azure Virtual Desktop on Azure Extended Zones](azure-extended-zones.md). ++- Published a new article to [Configure the session lock behavior for Azure Virtual Desktop](configure-session-lock-behavior.md) and updated [Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID](configure-single-sign-on.md) to include the relevant information. ++- Published a new article to [Onboard Azure Virtual Desktop session hosts to forensic evidence from Microsoft Purview Insider Risk Management](purview-forensic-evidence.md). 
++- Updated [Configure the clipboard transfer direction and data types that can be copied in Azure Virtual Desktop](clipboard-transfer-direction-data-types.md?tabs=intune) to include the steps for using the Microsoft Intune settings catalog. ## July 2024 |
virtual-wan | Create Bgp Peering Hub Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-powershell.md | Update an existing hub BGP peer connection. ```azurepowershell-interactive Update-AzVirtualHubBgpConnection -ResourceGroupName "[resource group name]" -VirtualHubName "westushub" -PeerIp 192.168.1.6 -PeerAsn 20000 -Name "testBgpConnection" -VirtualHubVnetConnection $hubVnetConnection ```-## BGP learned route in HUB +## Check BGP learned routes -Check BGP learned route in HUB. +Check the BGP learned routes in a hub. ```azurepowershell-interactive Get-AzRouteServerPeerLearnedRoute -ResourceGroupName "[resource group name]" -RouteServerName "[hub name]" -PeerName "[peer name]" |
vpn-gateway | Vpn Gateway Troubleshoot Vpn Point To Site Connection Problems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md | Update the NIC driver: 1. If Windows doesn't find a new driver, you can try looking for one on the device manufacturer's website and follow their instructions. 1. Restart the computer and try the connection again. -## <a name="entra-expired"></a>VPN client error: Your authentication with Microsoft Entra has expired +## <a name="entra-expired"></a>VPN client error: Your authentication with Microsoft Entra expired -If you're using Microsoft Entra ID authentication, you might encounter the following error: +If you're using Microsoft Entra ID authentication, you might encounter one of the following errors: ++**Your authentication with Microsoft Entra is expired. You need to re-authenticate in Entra to acquire a new token. Authentication timeout can be tuned by your administrator.** ++or **Your authentication with Microsoft Entra has expired so you need to re-authenticate to acquire a new token. Please try connecting again. Authentication policies and timeout are configured by your administrator in Entra tenant.** |