Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md | Watch this video to learn how to configure monitoring for Azure AD B2C using Azu ## Deployment overview Azure AD B2C uses [Microsoft Entra monitoring](../active-directory/reports-monitoring/overview-monitoring-health.md). Unlike Microsoft Entra tenants, an Azure AD B2C tenant can't have a subscription associated with it. So, we need to take extra steps to enable the integration between Azure AD B2C and Log Analytics, which is where we send the logs.-To enable _Diagnostic settings_ in Microsoft Entra ID within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/overview.md) to [delegate a resource](../lighthouse/concepts/architecture.md), which allows your Azure AD B2C (the **Service Provider**) to manage a Microsoft Entra ID (the **Customer**) resource. +To enable _Diagnostic settings_ in Microsoft Entra ID within your Azure AD B2C tenant, you use [Azure Lighthouse](/azure/lighthouse/overview) to [delegate a resource](/azure/lighthouse/concepts/architecture), which allows your Azure AD B2C (the **Service Provider**) to manage a Microsoft Entra ID (the **Customer**) resource. > [!TIP] > Azure Lighthouse is typically used to manage resources for multiple customers. However, it can also be used to manage resources **within an enterprise that has multiple Microsoft Entra tenants of its own**, which is what we are doing here, except that we are only delegating the management of single resource group. To create the custom authorization and delegation in Azure Lighthouse, we use an 1. Sign in to the [Azure portal](https://portal.azure.com). 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.-1. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](../lighthouse/how-to/onboard-customer.md#create-an-azure-resource-manager-template). +1. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](/azure/lighthouse/how-to/onboard-customer#create-an-azure-resource-manager-template). [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-ad-b2c%2Fsiem%2Fmaster%2Ftemplates%2FrgDelegatedResourceManagement.json) To create the custom authorization and delegation in Azure Lighthouse, we use an ] ``` -After you deploy the template, it can take a few minutes (typically no more than five) for the resource projection to complete. You can verify the deployment in your Microsoft Entra tenant and get the details of the resource projection. For more information, see [View and manage service providers](../lighthouse/how-to/view-manage-service-providers.md). +After you deploy the template, it can take a few minutes (typically no more than five) for the resource projection to complete. You can verify the deployment in your Microsoft Entra tenant and get the details of the resource projection. For more information, see [View and manage service providers](/azure/lighthouse/how-to/view-manage-service-providers). ## 4. Select your subscription |
api-management | Api Management Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md | API Management is offered in a variety of pricing tiers to meet the needs of dif API Management integrates with many complementary Azure services to create enterprise solutions, including: * **[Azure API Center](../api-center/overview.md)** to build a complete inventory of APIs in the organization - regardless of their type, lifecycle stage, or deployment location - for API discovery, reuse, and governance-* **[Copilot in Azure](../copilot/overview.md)** to help author API Management policies or explain already configured policies +* **[Copilot in Azure](/azure/copilot/overview)** to help author API Management policies or explain already configured policies * **[Azure Key Vault](/azure/key-vault/general/overview)** for secure safekeeping and management of [client certificates](api-management-howto-mutual-certificates.md) and [secrets](api-management-howto-properties.md) * **[Azure Monitor](api-management-howto-use-azure-monitor.md)** for logging, reporting, and alerting on management operations, system events, and API requests * **[Application Insights](api-management-howto-app-insights.md)** for live metrics, end-to-end tracing, and troubleshooting |
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: + [Policy overview](api-management-howto-policies.md) + [Set or edit policies](set-edit-policies.md) + [Policy expressions](api-management-policy-expressions.md)-+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) > [!IMPORTANT] > [Limit call rate by subscription](rate-limit-policy.md) and [Set usage quota by subscription](quota-policy.md) have a dependency on the subscription key. A subscription key isn't required when other policies are applied. |
api-management | Api Management Policy Expressions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md | For more information about working with policies, see: + [Tutorial: Transform and protect APIs](transform-api.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings + [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) -+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) For more information: |
api-management | How To Configure Local Metrics Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md | The self-hosted gateway supports [StatsD](https://github.com/statsd/statsd), whi The following sample YAML configuration deploys StatsD and Prometheus to the Kubernetes cluster where a self-hosted gateway is deployed. It also creates a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) for each. The self-hosted gateway then publishes metrics to the StatsD Service. We'll access the Prometheus dashboard via its Service. > [!NOTE]-> The following example pulls public container images from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the images in a private Azure container registry. [Learn more about working with public images.](../container-registry/buffer-gate-public-content.md) +> The following example pulls public container images from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the images in a private Azure container registry. [Learn more about working with public images.](/azure/container-registry/buffer-gate-public-content) ```yaml apiVersion: v1 |
api-management | How To Deploy Self Hosted Gateway Azure Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md | Last updated 06/12/2023 [!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)] -With the integration between Azure API Management and [Azure Arc on Kubernetes](../azure-arc/kubernetes/overview.md), you can deploy the API Management gateway component as an [extension in an Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/extensions.md). +With the integration between Azure API Management and [Azure Arc on Kubernetes](/azure/azure-arc/kubernetes/overview), you can deploy the API Management gateway component as an [extension in an Azure Arc-enabled Kubernetes cluster](/azure/azure-arc/kubernetes/extensions). Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster expands API Management support for hybrid and multicloud environments. Enable the deployment using a cluster extension to make managing and applying policies to your Azure Arc-enabled cluster a consistent experience. Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster ## Prerequisites -* [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within a supported Azure Arc region. +* [Connect your Kubernetes cluster](/azure/azure-arc/kubernetes/quickstart-connect-cluster) within a supported Azure Arc region. * Install the `k8s-extension` Azure CLI extension: ```azurecli To enable monitoring of the self-hosted gateway, configure the following Log Ana * To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).-* Discover all [Azure Arc-enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md). -* Learn more about [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). +* Discover all [Azure Arc-enabled Kubernetes extensions](/azure/azure-arc/kubernetes/extensions). +* Learn more about [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview). * Learn more about guidance to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md). * For configuration options, see the self-hosted gateway extension [reference](self-hosted-gateway-arc-reference.md). |
api-management | How To Deploy Self Hosted Gateway Azure Kubernetes Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md | This article provides the steps for deploying self-hosted gateway component of A [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] > [!NOTE]-> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). +> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](/azure/azure-arc/kubernetes/extensions). ## Prerequisites |
api-management | How To Deploy Self Hosted Gateway Kubernetes Helm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md | This article provides the steps for deploying self-hosted gateway component of A [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] > [!NOTE]-> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). +> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](/azure/azure-arc/kubernetes/extensions). ## Prerequisites |
api-management | How To Deploy Self Hosted Gateway Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md | This article describes the steps for deploying the self-hosted gateway component [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] > [!NOTE]-> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). +> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](/azure/azure-arc/kubernetes/extensions). ## Prerequisites |
api-management | Policy Fragments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md | For more information about working with policies, see: + [Set or edit policies](set-edit-policies.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements + [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) -+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) |
api-management | Set Edit Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md | For more information about working with policies, see: + [Set or edit policies](set-edit-policies.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings + [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) -+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) |
api-management | Upgrade And Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md | If you're scaling from or to the **Developer** tier, there will be downtime. Oth ## Compute isolation -If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, [create a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Related content |
app-service | App Service Sql Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md | To run the create Azure resources workflow: ## Build, push, and deploy your image -The build, push, and deploy workflow builds a container with the latest app changes, pushes the container to [Azure Container Registry](../container-registry/index.yml) and, updates the web application staging slot to point to the latest container pushed. The workflow containers a build and deploy job: +The build, push, and deploy workflow builds a container with the latest app changes, pushes the container to [Azure Container Registry](/azure/container-registry/) and updates the web application staging slot to point to the latest container pushed. The workflow contains a build job and a deploy job: - The build job checks out source code with the [Checkout action](https://github.com/marketplace/actions/checkout). The job then uses the [Docker login action](https://github.com/marketplace/actions/docker-login) and a custom script to authenticate with Azure Container Registry, build a container image, and deploy it to Azure Container Registry. - The deployment job logs into Azure with the [Azure Login action](https://github.com/marketplace/actions/azure-login) and gathers environment and Azure resource information. The job then updates Web App Settings with the [Azure App Service Settings action](https://github.com/marketplace/actions/azure-app-service-settings) and deploys to an App Service staging slot with the [Azure Web Deploy action](https://github.com/marketplace/actions/azure-webapp). Last, the job runs a custom script to update the SQL database and swaps the staging slot to production. |
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | description: Create a free certificate, import an App Service certificate, impor tags: buy-ssl-certificates Previously updated : 07/28/2023 Last updated : 09/19/2024 -You can add digital security certificates to [use in your application code](configure-ssl-certificate-in-code.md) or to [secure custom DNS names](configure-ssl-bindings.md) in [Azure App Service](overview.md), which provides a highly scalable, self-patching web hosting service. Currently called Transport Layer Security (TLS) certificates, also previously known as Secure Socket Layer (SSL) certificates, these private or public certificates help you secure internet connections by encrypting data sent between your browser, websites that you visit, and the website server. +You can add digital security certificates to [use in your application code](configure-ssl-certificate-in-code.md) or to [help secure custom DNS names](configure-ssl-bindings.md) in [Azure App Service](overview.md), which provides a highly scalable, self-patching web hosting service. Currently called Transport Layer Security (TLS) certificates, also previously known as Secure Socket Layer (SSL) certificates, these private or public certificates help you secure internet connections by encrypting data sent between your browser, websites that you visit, and the website server. The following table lists the options for you to add certificates in App Service: |Option|Description| |-|-|-| Create a free App Service managed certificate | A private certificate that's free of charge and easy to use if you just need to secure your [custom domain](app-service-web-tutorial-custom-domain.md) in App Service. | +| Create a free App Service managed certificate | A private certificate that's free of charge and easy to use if you just need to improve security for your [custom domain](app-service-web-tutorial-custom-domain.md) in App Service. | | Import an App Service certificate | A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options. | | Import a certificate from Key Vault | Useful if you use [Azure Key Vault](/azure/key-vault/) to manage your [PKCS12 certificates](https://wikipedia.org/wiki/PKCS_12). See [Private certificate requirements](#private-certificate-requirements). | | Upload a private certificate | If you already have a private certificate from a third-party provider, you can upload it. See [Private certificate requirements](#private-certificate-requirements). | The following table lists the options for you to add certificates in App Service - Map the domain where you want the certificate to App Service. For information, see [Tutorial: Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md). - - For a root domain (like contoso.com), make sure your app doesn't have any [IP restrictions](app-service-ip-restrictions.md) configured. Both certificate creation and its periodic renewal for a root domain depends on your app being reachable from the internet. + - For a root domain (like contoso.com), make sure your app doesn't have any [IP restrictions](app-service-ip-restrictions.md) configured. Both certificate creation and its periodic renewal for a root domain depend on your app being reachable from the internet. 
## Private certificate requirements The [free App Service managed certificate](#create-a-free-managed-certificate) and the [App Service certificate](configure-ssl-app-service-certificate.md) already satisfy the requirements of App Service. If you choose to upload or import a private certificate to App Service, your certificate must meet the following requirements: -* Exported as a [password-protected PFX file](https://en.wikipedia.org/w/index.php?title=X.509&section=4#Certificate_filename_extensions), encrypted using triple DES. +* Exported as a [password-protected PFX file](https://en.wikipedia.org/w/index.php?title=X.509&section=4#Certificate_filename_extensions), encrypted using triple DES * Contains private key at least 2048 bits long-* Contains all intermediate certificates and the root certificate in the certificate chain. +* Contains all intermediate certificates and the root certificate in the certificate chain -To secure a custom domain in a TLS binding, the certificate has more requirements: +If you want to help secure a custom domain in a TLS binding, the certificate must meet these additional requirements: * Contains an [Extended Key Usage](https://en.wikipedia.org/w/index.php?title=X.509&section=4#Extensions_informing_a_specific_usage_of_a_certificate) for server authentication (OID = 1.3.6.1.5.5.7.3.1) * Signed by a trusted certificate authority To secure a custom domain in a TLS binding, the certificate has more requirement ## Create a free managed certificate -The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. Without any action from you, this TLS/SSL server certificate is fully managed by App Service and is automatically renewed continuously in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest. +The free App Service managed certificate is a turn-key solution for helping to secure your custom DNS name in App Service. Without any action from you, this TLS/SSL server certificate is fully managed by App Service and is automatically renewed continuously in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest. > [!IMPORTANT] > Before you create a free managed certificate, make sure you have [met the prerequisites](#prerequisites) for your app. The free certificate comes with the following limitations: - Doesn't support usage as a client certificate by using certificate thumbprint, which is planned for deprecation and removal. - Doesn't support private DNS. - Isn't exportable.-- Isn't supported in an App Service Environment (ASE).+- Isn't supported in an App Service Environment. - Only supports alphanumeric characters, dashes (-), and periods (.). - Only custom domains of length up to 64 characters are supported. The free certificate comes with the following limitations: - Must have an A record pointing to your web app's IP address. - Must be on apps that are publicly accessible.
- Isn't supported with root domains that are integrated with Traffic Manager.-- Must meet all the above for successful certificate issuances and renewals.+- Must meet all of the above for successful certificate issuances and renewals. ### [Subdomain](#tab/subdomain) - Must have CNAME mapped _directly_ to `<app-name>.azurewebsites.net` or [trafficmanager.net](configure-domain-traffic-manager.md#enable-custom-domain). Mapping to an intermediate CNAME value blocks certificate issuance and renewal. The free certificate comes with the following limitations: When the operation completes, the certificate appears in the **Managed certificates** list. - :::image type="content" source="media/configure-ssl-certificate/create-free-cert-finished.png" alt-text="Screenshot of 'Managed certificates' pane with newly created certificate listed."::: + :::image type="content" source="media/configure-ssl-certificate/create-free-cert-finished.png" alt-text="Screenshot of the Managed certificates pane with the new certificate listed."::: -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To provide security for a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Import an App Service certificate -To import an App Service certificate, first [buy and configure an App Service certificate](configure-ssl-app-service-certificate.md#buy-and-configure-an-app-service-certificate), then follow the steps here. +To import an App Service certificate, first [buy and configure an App Service certificate](configure-ssl-app-service-certificate.md#buy-and-configure-an-app-service-certificate), and then follow the steps here. 1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**. To import an App Service certificate, first [buy and configure an App Service ce :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of 'Bring your own certificates (.pfx)' pane with purchased certificate listed."::: -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To help secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Import a certificate from Key Vault By default, the App Service resource provider doesn't have access to your key va |--|--|--| | **Microsoft Azure App Service** or **Microsoft.Azure.WebSites** | - `abfa0a7c-a6b6-4736-8310-5855508787cd` for public Azure cloud environment <br><br>- `6a02c803-dafd-4136-b4c3-5a6f318b4714` for Azure Government cloud environment | Certificate User | -The service principal app ID or assignee value is the ID for App Service resource provider. 
To learn how to authorize key vault permissions for App Service resource provider using access policy refer to the [provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control documentation](/azure/key-vault/general/rbac-guide?tabs=azure-portal#key-vault-scope-role-assignment). +The service principal app ID or assignee value is the ID for the App Service resource provider. To learn how to authorize key vault permissions for the App Service resource provider using an access policy, see the [provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control documentation](/azure/key-vault/general/rbac-guide?tabs=azure-portal#key-vault-scope-role-assignment). > [!NOTE]-> Do not delete these RBAC permissions from key vault, otherwise App Service will not be able to sync your web app with the latest key vault certificate version. +> Do not delete these RBAC permissions from key vault. If you do, App Service will not be able to sync your web app with the latest key vault certificate version. ### [Access policy permissions](#tab/accesspolicy) The service principal app ID or assignee value is the ID for App Service resourc |--|--|--|--| | **Microsoft Azure App Service** or **Microsoft.Azure.WebSites** | - `abfa0a7c-a6b6-4736-8310-5855508787cd` for public Azure cloud environment <br><br>- `6a02c803-dafd-4136-b4c3-5a6f318b4714` for Azure Government cloud environment | Get | Get | -The service principal app ID or assignee value is the ID for App Service resource provider. To learn how to authorize key vault permissions for App Service resource provider using access policy refer to the [assign a Key Vault access policy documentation](/azure/key-vault/general/assign-access-policy?tabs=azure-portal). +The service principal app ID or assignee value is the ID for the App Service resource provider. To learn how to authorize key vault permissions for the App Service resource provider using an access policy, see the [assign a Key Vault access policy documentation](/azure/key-vault/general/assign-access-policy?tabs=azure-portal). > [!NOTE]-> Do not delete these access policy permissions from key vault, otherwise App Service will not be able to sync your web app with the latest key vault certificate version. +> Do not delete these access policy permissions from key vault. If you do, App Service will not be able to sync your web app with the latest key vault certificate version. The service principal app ID or assignee value is the ID for App Service resourc 1. Select **Select key vault certificate**. - :::image type="content" source="media/configure-ssl-certificate/import-key-vault-cert.png" alt-text="Screenshot of app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Import from Key Vault' selected"::: + :::image type="content" source="media/configure-ssl-certificate/import-key-vault-cert.png" alt-text="Screenshot of the app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Import from Key Vault' selected."::: 1. To help you select the certificate, use the following table: The service principal app ID or assignee value is the ID for App Service resourc | **Key vault** | The key vault that has the certificate you want to import. | | **Certificate** | From this list, select a PKCS12 certificate that's in the vault. All PKCS12 certificates in the vault are listed with their thumbprints, but not all are supported in App Service. | -1. 
When finished with your selection, select **Select**, **Validate**, then **Add**. +1. When finished with your selection, select **Select**, **Validate**, and then **Add**. When the operation completes, the certificate appears in the **Bring your own certificates** list. If the import fails with an error, the certificate doesn't meet the [requirements for App Service](#private-certificate-requirements). The service principal app ID or assignee value is the ID for App Service resourc > [!NOTE] > If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24 hours. -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To help secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Upload a private certificate If your certificate authority gives you multiple certificates in the certificate --END CERTIFICATE-- ``` -#### Export merged private certificate to PFX +#### Export the merged private certificate to PFX Now, export your merged TLS/SSL certificate with the private key that was used to generate your certificate request. If you generated your certificate request using OpenSSL, then you created a private key file. > [!NOTE]-> OpenSSL v3 changed default cipher from 3DES to AES256, but this can be overridden on the command line -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg SHA1. -> OpenSSL v1 uses 3DES as default, so the PFX files generated are supported without any special modifications. +> OpenSSL v3 changed the default cipher from 3DES to AES256, but this can be overridden on the command line: -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg SHA1. +> OpenSSL v1 uses 3DES as the default, so the PFX files generated are supported without any special modifications. 1. To export your certificate to a PFX file, run the following command, but replace the placeholders _<private-key-file>_ and _<merged-certificate-file>_ with the paths to your private key and your merged certificate file. Now, export your merged TLS/SSL certificate with the private key that was used t 1. If you used IIS or _Certreq.exe_ to generate your certificate request, install the certificate to your local computer, and then [export the certificate to a PFX file](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754329(v=ws.11)). -#### Upload certificate to App Service +#### Upload the certificate to App Service You're now ready to upload the certificate to App Service. You're now ready to upload the certificate to App Service. 1. From your app's navigation menu, select **Certificates** > **Bring your own certificates (.pfx)** > **Upload Certificate**. - :::image type="content" source="media/configure-ssl-certificate/upload-private-cert.png" alt-text="Screenshot of 'Certificates', 'Bring your own certificates (.pfx)', 'Upload Certificate' selected."::: + :::image type="content" source="media/configure-ssl-certificate/upload-private-cert.png" alt-text="Screenshot of the app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Upload Certificate' selected."::: 1. To help you upload the .pfx certificate, use the following table: You're now ready to upload the certificate to App Service.
| **Certificate password** | Enter the password that you created when you exported the PFX file. | | **Certificate friendly name** | The certificate name that will be shown in your web app. | -1. When finished with your selection, select **Select**, **Validate**, then **Add**. +1. When finished with your selection, select **Select**, **Validate**, and then **Add**. When the operation completes, the certificate appears in the **Bring your own certificates** list. - :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of 'Bring your own certificates' pane with uploaded certificate listed."::: + :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of the 'Bring your own certificates' pane with the uploaded certificate listed."::: -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To provide security for a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Upload a public certificate Public certificates are supported in the *.cer* format. > [!NOTE]-> After you upload a public certificate to an app, it is only accessible by the app it is uploaded to. Public certificates must be uploaded to each individual web app that needs access. For App Service Environment specific scenarios, refer to [the documentation for certificates and the App Service Environment](../app-service/environment/overview-certificates.md) +> After you upload a public certificate to an app, it's only accessible by the app it's uploaded to. Public certificates must be uploaded to each individual web app that needs access. For App Service Environment specific scenarios, refer to [the documentation for certificates and the App Service Environment](../app-service/environment/overview-certificates.md). > > You can upload up to 1000 public certificates per App Service Plan. Public certificates are supported in the *.cer* format. 1. When you're done, select **Add**. - :::image type="content" source="media/configure-ssl-certificate/upload-public-cert.png" alt-text="Screenshot of name and public key certificate to upload."::: + :::image type="content" source="media/configure-ssl-certificate/upload-public-cert.png" alt-text="Screenshot of the app management page. It shows the public key certificate to upload and its name."::: 1. After the certificate is uploaded, copy the certificate thumbprint, and then review [Make the certificate accessible](configure-ssl-certificate-in-code.md#make-the-certificate-accessible). Public certificates are supported in the *.cer* format. Before a certificate expires, make sure to add the renewed certificate to App Service, and update any certificate bindings where the process depends on the certificate type. For example, a [certificate imported from Key Vault](#import-a-certificate-from-key-vault), including an [App Service certificate](configure-ssl-app-service-certificate.md), automatically syncs to App Service every 24 hours and updates the TLS/SSL binding when you renew the certificate. For an [uploaded certificate](#upload-a-private-certificate), there's no automatic binding update. 
Based on your scenario, review the corresponding section: -- [Renew an uploaded certificate](#renew-uploaded-certificate)+- [Renew an uploaded certificate](#renew-an-uploaded-certificate) - [Renew an App Service certificate](configure-ssl-app-service-certificate.md#renew-an-app-service-certificate) - [Renew a certificate imported from Key Vault](#renew-a-certificate-imported-from-key-vault) -#### Renew uploaded certificate +#### Renew an uploaded certificate -When you replace an expiring certificate, the way you update the certificate binding with the new certificate might adversely affect user experience. For example, your inbound IP address might change when you delete a binding, even if that binding is IP-based. This result is especially impactful when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in the specified sequence: +When you replace an expiring certificate, the way you update the certificate binding with the new certificate might adversely affect the user experience. For example, your inbound IP address might change when you delete a binding, even if that binding is IP-based. This result is especially impactful when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in the specified sequence: 1. [Upload the new certificate](#upload-a-private-certificate). -1. Go to the **Custom domains** page for your app, select the **...** actions button, and select **Update binding**. +1. Go to the **Custom domains** page for your app, select the **...** button, and then select **Update binding**. -1. Select the new certificate and select **Update**. +1. Select the new certificate and then select **Update**. 1. Delete the existing certificate. When you replace an expiring certificate, the way you update the certificate bin To renew a certificate that you imported into App Service from Key Vault, review [Renew your Azure Key Vault certificate](/azure/key-vault/certificates/overview-renew-certificate). -After the certificate renews inside your key vault, App Service automatically syncs the new certificate, and updates any applicable certificate binding within 24 hours. To sync manually, follow these steps: +After the certificate renews in your key vault, App Service automatically syncs the new certificate and updates any applicable certificate binding within 24 hours. To sync manually, follow these steps: 1. Go to your app's **Certificate** page. -1. Under **Bring your own certificates (.pfx)**, select the **...** details button for the imported key vault certificate, and then select **Sync**. +1. Under **Bring your own certificates (.pfx)**, select the **...** button for the imported key vault certificate, and then select **Sync**. ## Frequently asked questions -### How can I automate adding a bring-your-owncertificate to an app? +### How can I automate adding a bring-your-own certificate to an app? 
- [Azure CLI: Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md)-- [Azure PowerShell Bind a custom TLS/SSL certificate to a web app using PowerShell](scripts/powershell-configure-ssl-certificate.md)+- [Azure PowerShell: Bind a custom TLS/SSL certificate to a web app using PowerShell](scripts/powershell-configure-ssl-certificate.md) ### Can I use a private CA (certificate authority) certificate for inbound TLS on my app?-You can use a private CA certificate for inbound TLS in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). This isn't possible in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). +You can use a private CA certificate for inbound TLS in [App Service Environment version 3](./environment/overview-certificates.md). This isn't possible in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). -### Can I make outbound calls using a private CA (certificate authority) client certificate from my app? -This is only supported for Windows container apps in multi-tenant App Service. In addition, you can make outbound calls using a private CA client certificate with both code-based and container-based apps in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). +### Can I make outbound calls using a private CA client certificate from my app? +This is only supported for Windows container apps in multi-tenant App Service. In addition, you can make outbound calls using a private CA client certificate with both code-based and container-based apps in [App Service Environment version 3](./environment/overview-certificates.md). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). -### Can I load a private CA (certificate authority) certificate in my App Service Trusted Root Store? -You can load your own CA certificate into the Trusted Root Store in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). You can't modify the list of Trusted Root Certificates in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). +### Can I load a private CA certificate in my App Service Trusted Root Store? +You can load your own CA certificate into the Trusted Root Store in [App Service Environment version 3](./environment/overview-certificates.md). You can't modify the list of Trusted Root Certificates in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md).
## More resources You can load your own CA certificate into the Trusted Root Store in an [App Serv * [Enforce HTTPS](configure-ssl-bindings.md#enforce-https) * [Enforce TLS 1.1/1.2](configure-ssl-bindings.md#enforce-tls-versions) * [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md)-* [FAQ : App Service Certificates](./faq-configuration-and-management.yml) +* [FAQ: App Service Certificates](./faq-configuration-and-management.yml) |
app-service | Deploy Ci Cd Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ci-cd-custom-container.md | When you enable this option, App Service adds a webhook to your repository in Az ::: zone pivot="container-linux" > [!NOTE] > Support for multi-container (Docker Compose) apps is limited: -> - For Azure Container Registry, App Service creates a webhook in the selected registry with the registry as the scope. A `docker push` to any repository in the registry (including the ones not referenced by your Docker Compose file) triggers an app restart. You may want to [modify the webhook](../container-registry/container-registry-webhook.md) to a narrower scope. +> - For Azure Container Registry, App Service creates a webhook in the selected registry with the registry as the scope. A `docker push` to any repository in the registry (including the ones not referenced by your Docker Compose file) triggers an app restart. You may want to [modify the webhook](/azure/container-registry/container-registry-webhook) to a narrower scope. > - Docker Hub doesn't support webhooks at the registry level. You must **add** the webhooks manually to the images specified in your Docker Compose file. ::: zone-end |
app-service | Manage Create Arc Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md | Last updated 03/24/2023 # Set up an Azure Arc-enabled Kubernetes cluster to run App Service, Functions, and Logic Apps (Preview) -If you have an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md), you can use it to create an [App Service enabled custom location](overview-arc-integration.md) and deploy web apps, function apps, and logic apps to it. +If you have an [Azure Arc-enabled Kubernetes cluster](/azure/azure-arc/kubernetes/overview), you can use it to create an [App Service enabled custom location](overview-arc-integration.md) and deploy web apps, function apps, and logic apps to it. Azure Arc-enabled Kubernetes lets you make your on-premises or cloud Kubernetes cluster visible to App Service, Functions, and Logic Apps in Azure. You can create an app and deploy to it just like another Azure region. az extension add --upgrade --yes --name appservice-kube ## Create a connected cluster > [!NOTE]-> This tutorial uses [Azure Kubernetes Service (AKS)](/azure/aks/) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you will likely not want to enable Azure Arc on an AKS cluster as it is already managed in Azure. The steps below will help you get started understanding the service, but for production deployments, they should be viewed as illustrative, not prescriptive. See [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) for general instructions on creating an Azure Arc-enabled Kubernetes cluster. +> This tutorial uses [Azure Kubernetes Service (AKS)](/azure/aks/) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you will likely not want to enable Azure Arc on an AKS cluster as it is already managed in Azure. The steps below will help you get started understanding the service, but for production deployments, they should be viewed as illustrative, not prescriptive. See [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster) for general instructions on creating an Azure Arc-enabled Kubernetes cluster. 1. Create a cluster in Azure Kubernetes Service with a public IP address. Replace `<group-name>` with the resource group name you want. You can learn more about these pods and their role in the system from [Pods crea ## Create a custom location -The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is used to assign the App Service Kubernetes environment. +The [custom location](/azure/azure-arc/kubernetes/custom-locations) in Azure is used to assign the App Service Kubernetes environment. <!-- https://github.com/MicrosoftDocs/azure-docs-pr/pull/156618 --> The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is u <!-- --kubeconfig ~/.kube/config # needed for non-Azure --> > [!NOTE]- > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](../azure-arc/kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster). This is required if logged into the CLI using a Service Principal or if you are logged in with a Microsoft Entra user with restricted permissions on the cluster resource. 
+ > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](/azure/azure-arc/kubernetes/custom-locations#enable-custom-locations-on-your-cluster). This is required if logged into the CLI using a Service Principal or if you are logged in with a Microsoft Entra user with restricted permissions on the cluster resource. > 3. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, run it again after a minute. |
app-service | Overview Arc Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md | You can run App Service, Functions, and Logic Apps on an Azure Arc-enabled Kuber In most cases, app developers need to know nothing more than how to deploy to the correct Azure region that represents the deployed Kubernetes environment. For operators who provide the environment and maintain the underlying Kubernetes infrastructure, you must be aware of the following Azure resources: -- The connected cluster, which is an Azure projection of your Kubernetes infrastructure. For more information, see [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md).-- A cluster extension, which is a subresource of the connected cluster resource. The App Service extension [installs the required pods into your connected cluster](#pods-created-by-the-app-service-extension). For more information about cluster extensions, see [Cluster extensions on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-extensions.md).-- A custom location, which bundles together a group of extensions and maps them to a namespace for created resources. For more information, see [Custom locations on top of Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-custom-locations.md).+- The connected cluster, which is an Azure projection of your Kubernetes infrastructure. For more information, see [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview). +- A cluster extension, which is a subresource of the connected cluster resource. The App Service extension [installs the required pods into your connected cluster](#pods-created-by-the-app-service-extension). For more information about cluster extensions, see [Cluster extensions on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-extensions). +- A custom location, which bundles together a group of extensions and maps them to a namespace for created resources. For more information, see [Custom locations on top of Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-custom-locations). - An App Service Kubernetes environment, which enables configuration common across apps but not related to cluster operations. Conceptually, it's deployed into the custom location resource, and app developers create apps into this environment. This resource is described in greater detail in [App Service Kubernetes environment](#app-service-kubernetes-environment). ## Public preview limitations |
app-service | Routine Maintenance Downtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/routine-maintenance-downtime.md | + + Title: Routine maintenance, restarts, and downtime +description: Learn about common reasons for restarts and downtime during Routine Maintenance and options to minimize disruptions. ++++ Last updated : 09/10/2024+++# Routine maintenance for Azure App Service, restarts, and downtime +++Azure App Service is a Platform as a Service (PaaS) for hosting web applications, REST APIs, and mobile back ends. One of the benefits of the offering is that planned maintenance is performed behind the scenes. Our customers can focus on deploying, running, and maintaining their application code instead of worrying about maintenance activities for the underlying infrastructure. Azure App Service maintenance is a robust process designed to avoid or minimize downtime to hosted applications. This process remains largely invisible to the users of hosted applications. However, our customers are often curious if downtime that they experience is a result of our planned maintenance, especially if they seem to coincide in time. ++## Background ++Our planned maintenance mechanism revolves around the architecture of the scale units that host the servers on which deployed applications run. Any given scale unit contains several different types of roles that all work together. The two roles that are most relevant to our planned maintenance update mechanism are the Worker and File Server roles. For a more detailed description of all the different roles and other details about the App Service architecture, review [Inside the Azure App Service Architecture](/archive/msdn-magazine/2017/february/azure-inside-the-azure-app-service-architecture) + +There are different ways that an update strategy could be designed and those different designs would each have their own benefits and downsides. One of the strategies that we use for major updates is that these updates don't run on servers / roles that are currently used by our customers. Instead, our update process updates instances in waves and the instances undergoing updates aren't used by applications. Instances being used by applications are gradually swapped out and replaced by updated instances. The resulting effect on an application is that the application experiences a start, or restart. From a statistical perspective and from empirical observations, applications restarts are much less disruptive than performing maintenance on servers that are actively being used by applications. ++## Instance update details + +There are two slightly different scenarios that play out during every Planned Maintenance cycle. These two scenarios are related to the updates performed on the Worker and File Server roles. At a high level, both these scenarios appear similar from an end-user perspective but there are some important differences that can sometimes cause some unexpected behavior. + +When a File Server role needs to be updated, the storage volume used by the application needs to be migrated from one File Server instance to another. During this change, an updated File Server role is added to the application. This causes a worker process restart simultaneously on all worker instances in that App Service Plan. The worker process restart is overlapped - the update mechanism starts the new worker process first, lets it complete its start-up, sends new requests to the new worker process. 
Once the new worker process is responding, the existing requests have 30 seconds by default to complete in the old worker process, and then the old worker process is stopped. + +When a Worker role is updated, the update mechanism similarly swaps in a new updated Worker role. The worker is swapped as follows: an updated Worker is added to the App Service plan (ASP), the application is started on the new Worker, our infrastructure waits for the application to start up, new requests are sent to the new worker instance, requests are allowed to complete on the old instance, and then the old worker instance is removed from the ASP. This sequence usually occurs once for each worker instance in the ASP and is spread out over minutes or hours depending on the size of the plan and scale unit. + +The main differences between these two scenarios are: + +- A File Server role change results in a simultaneous overlapped worker process restart on all instances, whereas a Worker change results in an application start on a single instance. +- A File Server role change means that the application restarts on the same instance as it was running before, whereas a Worker change results in the application running on a different instance after start-up. + +The overlapped restart mechanism results in zero downtime for most applications, and planned maintenance often isn't even noticed. If the application takes some time to start, it can experience minimal downtime in the form of slowness or failures during or shortly after the process starts. Our platform keeps attempting to start the application until successful, but if the application fails to start altogether, a longer downtime can occur. The downtime persists until some corrective action is taken, such as manually restarting the application on that instance. + +## Unexpected failure handling + +While this article focuses largely on planned maintenance activities, it's worth mentioning that similar behavior can occur as a result of the platform recovering from unexpected failures. If an unexpected hardware failure occurs that affects a Worker role, the platform similarly replaces it with a new Worker. The application starts on this new Worker role. When a failure or latency affects a File Server role that is associated with the application, a new File Server role replaces it. A worker process restart occurs on all the Worker roles. This fact is important to consider when evaluating strategies for improving uptime for your applications. + +## Strategies for increased uptime + +Most of our hosted applications experience limited or no downtime during planned maintenance. However, this isn't much help if your specific applications have more complicated start-up behavior and are therefore susceptible to downtime when restarted. If applications are experiencing downtime every time they're restarted, addressing the downtime is even more pressing. There are several features available in our App Service product offering that are designed to further minimize downtime in these scenarios. Broadly speaking, there are two categories of strategies that can be employed: + +- Improving application start-up consistency +- Minimizing application restarts + +Improving application start-up speed and ensuring that start-up is consistently successful is statistically the more effective approach, so we recommend reviewing the options in this area first. Some of them are fairly easy to implement and can yield large improvements. Start-up consistency strategies utilize both App Service features and techniques related to application code or configuration. Minimizing restarts is a group of options that can be used if application start-up can't be made consistent enough. These options are typically more expensive and less reliable because they usually protect against only a subset of restarts. Avoiding all restarts isn't possible. Using both types of strategies together is highly effective. +++### Strategies for start-up consistency + +#### Application Initialization (AppInit) + +When an application starts on a Windows Worker, the Azure App Service infrastructure tries to determine when the application is ready to serve requests before external requests are routed to this worker. By default, a successful request to the root (/) of the application is a signal that the application is ready to serve requests. For some applications, this default behavior isn't sufficient to ensure that the application is fully warmed up. Typically, that happens if the root of the application has limited dependencies but other paths rely on more libraries or external dependencies to work. The [IIS Application Initialization Module](/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization) works well to fine-tune warm-up behavior. At a high level, it allows the application owner to define which path or paths serve as indicators that the application is in fact ready to serve requests. For a detailed discussion of how to implement this mechanism, review [App Service Warm-Up Demystified](https://michaelcandido.com/app-service-warm-up-demystified/). When correctly implemented, this feature can result in zero downtime even if the application start-up is more complex. ++Linux applications can utilize a similar mechanism by using the WEBSITE_WARMUP_PATH application setting, as shown in the following sketch. 
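As a minimal sketch, the following Az PowerShell snippet sets `WEBSITE_WARMUP_PATH` on an app; the resource group name, app name, and `/warmup` path are placeholder assumptions, not values from this article. Because `Set-AzWebApp -AppSettings` replaces the whole app settings collection, the existing settings are copied first. Keep in mind that changing app settings itself restarts the app, so apply it at a convenient time.

```powershell
# Placeholder values - replace with your own resource group, app, and warm-up path.
$resourceGroup = "my-rg"
$appName       = "my-linux-app"
$warmupPath    = "/warmup"

# Set-AzWebApp -AppSettings replaces the entire collection,
# so copy the existing settings before adding the new one.
$app      = Get-AzWebApp -ResourceGroupName $resourceGroup -Name $appName
$settings = @{}
foreach ($pair in $app.SiteConfig.AppSettings) { $settings[$pair.Name] = $pair.Value }
$settings["WEBSITE_WARMUP_PATH"] = $warmupPath

Set-AzWebApp -ResourceGroupName $resourceGroup -Name $appName -AppSettings $settings | Out-Null
```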
++#### Health Check + +[Health Check](monitor-instances-health-check.md) is a feature that is designed to handle unexpected code and platform failures during normal application execution but can also be helpful to augment start-up resiliency. Health Check performs two different healing functions - removing a failing instance from the load balancer, and replacing an entire instance. We can utilize the removal of an instance from the load balancer to handle intermittent start-up failures. If an instance returns failures after start-up despite employing all other strategies, Health Check can remove that instance from the load balancer until that instance starts returning a 200 status code to health check requests again. This feature therefore acts as a fail-safe to minimize any post-start-up downtime that occurs. This feature can be useful if the post-start-up failures are transient and don't require a process restart. 
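The health check probe path is a site configuration property. The sketch below sets it with the generic Az resource cmdlets, following the common `Microsoft.Web/sites/config` pattern; the resource group name, app name, `/healthz` path, and API version are placeholder assumptions rather than values taken from this article. The same setting can also be configured in the portal on the app's **Health check** page.

```powershell
# Placeholder values - replace with your own resource group, app, and probe path.
$resourceGroup = "my-rg"
$appName       = "my-app"
$probePath     = "/healthz"

# healthCheckPath lives on the Microsoft.Web/sites/config ("web") child resource.
$config = Get-AzResource -ResourceGroupName $resourceGroup `
    -ResourceType "Microsoft.Web/sites/config" -ResourceName "$appName/web" `
    -ApiVersion "2022-03-01"
$config.Properties.healthCheckPath = $probePath
$config | Set-AzResource -ApiVersion "2022-03-01" -Force | Out-Null
```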
++#### Auto-Heal + +Auto-Heal for [Windows](https://azure.github.io/AppService/2018/09/10/Announcing-the-New-Auto-Healing-Experience-in-App-Service-Diagnostics.html) and [Linux](https://azure.github.io/AppService/2021/04/21/Announcing-Autoheal-for-Azure-App-Service-Linux.html) is another feature that is designed for normal application execution but can be used for improving start-up behavior as well. If we know that the application sometimes enters an unrecoverable state after start-up, Health Check won't be suitable. However, Auto-Heal can automatically restart the worker process, which can be useful in that scenario. We can configure an Auto-Heal rule that monitors failed requests and triggers a process restart on a single instance. ++#### Application start-up testing + +Exhaustively testing the start-up of an application is often overlooked. Start-up testing in combination with other factors, such as dependency failures, library load failures, and network issues, poses a bigger challenge. A relatively small failure rate for start-up can go unnoticed but can result in a high failure rate when there are multiple instances being restarted every update cycle. A plan with 20 instances and an application with a five-percent failure rate in start-up results on average in three instances failing to start every update cycle. There are usually three application restarts per instance each cycle (one instance move per instance, plus two File Server-related restarts per instance). + +We recommend testing several scenarios: + +- General start-up testing (one instance at a time) to establish the start-up success rate of an individual instance. This simplest scenario should approach a 100-percent success rate before you move on to more complicated scenarios. +- Simulate start-up dependency failure. If the app has any dependency on other Azure or non-Azure services, simulate downtime in those dependencies to reveal application behavior under those conditions. +- Simultaneous start-up of many instances - preferably more instances than in production. Testing with many instances often reveals failures in dependencies that are used during start-up only, such as Key Vault references, App Configuration, and databases. These dependencies should be tested against the burst of requests that a simultaneous instance restart generates. +- Adding an instance under full load - making sure AppInit is configured correctly and the application can be initialized fully before requests are sent to the new instance. Manually scaling out is an easy way to replicate an instance move during maintenance (see the sketch after this list). +- Overlapped worker process restart - again testing whether AppInit is configured correctly and whether requests can complete successfully as the old worker process drains and the new worker process starts up. Changing an environment variable under load can simulate what a File Server change does. +- Multiple apps in a plan - if there are multiple apps in the same plan, perform all these tests simultaneously across all apps. 
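As a rough illustration of the last two scale-related scenarios, the following Az PowerShell sketch scales the plan out by one instance and then toggles a throwaway app setting to force a simultaneous worker process restart. The resource group, plan, and app names are placeholder assumptions; run something like this only against a test environment.

```powershell
# Placeholder names - replace with your own test resources.
$resourceGroup = "my-rg"
$planName      = "my-plan"
$appName       = "my-app"

# Scenario: adding an instance under full load.
# Scaling out by one instance replicates an instance move during maintenance.
$plan = Get-AzAppServicePlan -ResourceGroupName $resourceGroup -Name $planName
Set-AzAppServicePlan -ResourceGroupName $resourceGroup -Name $planName `
    -NumberofWorkers ($plan.Sku.Capacity + 1) | Out-Null

# Scenario: overlapped worker process restart.
# Changing an app setting restarts the worker process on every instance at once,
# similar to what a File Server change does. Existing settings are copied first
# because Set-AzWebApp -AppSettings replaces the whole collection.
$app      = Get-AzWebApp -ResourceGroupName $resourceGroup -Name $appName
$settings = @{}
foreach ($pair in $app.SiteConfig.AppSettings) { $settings[$pair.Name] = $pair.Value }
$settings["RESTART_TEST_MARKER"] = (Get-Date -Format "o")
Set-AzWebApp -ResourceGroupName $resourceGroup -Name $appName -AppSettings $settings | Out-Null
```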
+++#### Start-up logging + +Having the ability to retroactively troubleshoot start-up failures in production is a consideration that is separate from using testing to improve start-up consistency. However, it's equally or even more important because, despite all our efforts, we might not be able to simulate all types of real-world failures in a test or QA environment. It's also commonly the weakest area for logging, because initializing the logging infrastructure is itself another start-up activity that must be performed. The order of operations for initializing the application is an important consideration for this reason and can become a chicken-and-egg problem. For example, if we need to configure logging based on a Key Vault reference and we fail to obtain the Key Vault value, how do we log this failure? We might want to consider duplicating start-up logging using a separate logging mechanism that doesn't depend on any other external factors. For example, logging these types of start-up failures to the local disk. Simply turning on a general logging feature, such as [.NET Core stdout logging](/aspnet/core/test/troubleshoot-azure-iis#aspnet-core-module-stdout-log-azure-app-service), can be counterproductive because this logging keeps generating log data even after start-up, and that can fill up the disk over time. This feature can be used strategically for troubleshooting reproducible start-up failures. ++### Strategies for minimizing restarts + +The following strategies can significantly reduce the number of restarts that an application experiences during planned maintenance. Some of the strategies in this section can also give more control over when these restarts occur. In general, these strategies, while effective, can't avoid restarts altogether. The main reason is that some restarts occur due to unexpected failures rather than planned maintenance. ++> [!IMPORTANT] +> Completely avoiding restarts is not possible. The following strategies can help reduce the number of restarts. + +#### Local Cache + +[Local Cache](overview-local-cache.md) is a feature that is designed to improve resiliency against external storage failures. At a high level, it creates a copy of the application content on the local disk of the instance on which it runs. This isolates the application from unexpected storage failures but also prevents restarts due to File Server changes. Utilizing this feature can vastly reduce the number of restarts during planned maintenance - typically it can remove about two-thirds of those restarts. Since it primarily avoids simultaneous worker process restarts, the observed improvement in application start-up consistency can be even bigger. Local Cache does have some design implications and changes to application behavior, so it's important to fully test the application to ensure that it's compatible with this feature. ++#### Planned maintenance notifications and paired regions + +If we want to reduce the risk of update-related restarts in production, we can utilize [Planned Maintenance Notifications](https://azure.github.io/AppService/2022/02/01/App-Service-Planned-Notification-Feature.html) to find out when any given application will be updated. We can then set up a copy of the application in a [Paired Region](https://azure.github.io/AppService/2022/02/01/App-Service-Planned-Notification-Feature.html) and route traffic to our secondary application copy during maintenance in the primary copy. This option can be costly because the maintenance window is fairly wide, so the secondary application copy needs to run on sufficient instances for at least several days. This option can be less costly if we already have a secondary application set up for general resiliency. This option can reduce the number of restarts but, like other options in this category, can't eliminate all restarts. + +#### Controlling planned maintenance window in ASE v3 + +Controlling the maintenance window is only available in our isolated App Service Environment (ASE) v3 environments. If we're already using an ASE, or it's feasible to use one, doing so allows our customers to [Control Planned Maintenance](https://azure.github.io/AppService/2022/09/15/Configure-automation-for-upgrade-preferences-in-App-Service-Environment.html) behavior to a high degree. It isn't possible to control the timing of planned maintenance in a multitenant environment. |
app-service | Tutorial Custom Container Sidecar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container-sidecar.md | First you create the resources that the tutorial uses (for more information, see > `azd provision` uses the included templates to create the following Azure resources: > > - A resource group- > - A [container registry](../container-registry/container-registry-intro.md) with two images deployed: + > - A [container registry](/azure/container-registry/container-registry-intro) with two images deployed: > - An Nginx image with the OpenTelemetry module. > - An OpenTelemetry collector image, configured to export to [Azure Monitor](/azure/azure-monitor/overview). > - A [log analytics workspace](/azure/azure-monitor/logs/log-analytics-overview) |
app-service | Tutorial Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md | zone_pivot_groups: app-service-containers-windows-linux For more information, see [Operating system functionality on Azure App Service](operating-system-functionality.md). -You can deploy a custom-configured Windows image from Visual Studio to make OS changes that your app needs. This makes it easy to migrate an on-premises app that requires a custom OS and software configuration. This tutorial demonstrates how to migrate to App Service an ASP.NET app that uses custom fonts installed in the Windows font library. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](../container-registry/index.yml) and then run it in App Service. +You can deploy a custom-configured Windows image from Visual Studio to make OS changes that your app needs. This makes it easy to migrate an on-premises app that requires a custom OS and software configuration. This tutorial demonstrates how to migrate to App Service an ASP.NET app that uses custom fonts installed in the Windows font library. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](/azure/container-registry/) and then run it in App Service. :::image type="content" source="media/tutorial-custom-container/app-running-newupdate.png" alt-text="Shows the web app running in a Windows container."::: You can find *InstallFont.ps1* in the **CustomFontSample** project. It's a simpl ## Publish to Azure Container Registry -[Azure Container Registry](../container-registry/index.yml) can store your images for container deployments. You can configure App Service to use images that are hosted in Azure Container Registry. +[Azure Container Registry](/azure/container-registry/) can store your images for container deployments. You can configure App Service to use images that are hosted in Azure Container Registry. ### Open the publish wizard |
application-gateway | Http Response Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md | An HTTP 499 response is presented if a client request that is sent to applicatio #### 500 – Internal Server Error -Azure Application Gateway shouldn't exhibit 500 response codes. Open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +Azure Application Gateway shouldn't exhibit 500 response codes. Open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). #### 502 – Bad Gateway |
automation | Automation Dsc Cd Chocolatey | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-cd-chocolatey.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). In a DevOps world, there are many tools to assist with various points in the continuous integration pipeline. Azure Automation [State Configuration](automation-dsc-overview.md) is a welcome new addition to the options that DevOps teams can employ. |
automation | Automation Dsc Compile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-compile.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). You can compile Desired State Configuration (DSC) configurations in Azure Automation State Configuration in the following ways: |
automation | Automation Dsc Config Data At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-data-at-scale.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, and not from Microsoft. |
automation | Automation Dsc Config From Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-from-server.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). > [!IMPORTANT] > The article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, not from Microsoft. |
automation | Automation Dsc Configuration Based On Stig | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-configuration-based-on-stig.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). Creating configuration content for the first time can be challenging. In many cases, the goal is to automate configuration of servers following a "baseline" that hopefully aligns to an industry recommendation. |
automation | Automation Dsc Create Composite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-create-composite.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community and support is only available in the form of GitHub collaboration, not from Microsoft. |
automation | Automation Dsc Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-diagnostics.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Automation State Configuration retains node status data for 30 days. You can send node status data to [Azure Monitor Logs](/azure/azure-monitor/logs/data-platform-logs) if you prefer to retain this data for a longer period. Compliance status is visible in the Azure portal or with PowerShell, for nodes and for individual DSC resources in node configurations. |
automation | Automation Dsc Extension History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-extension-history.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). The Azure Desired State Configuration (DSC) VM [extension](/azure/virtual-machines/extensions/dsc-overview) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell. |
automation | Automation Dsc Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-getting-started.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview). |
automation | Automation Dsc Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-onboarding.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). This topic describes how you can set up your machines for management with Azure Automation State Configuration. For details of this service, see [Azure Automation State Configuration overview](automation-dsc-overview.md). |
automation | Automation Dsc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-overview.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Automation State Configuration is an Azure configuration management service that allows you to write, manage, and compile PowerShell Desired State Configuration (DSC) [configurations](/powershell/dsc/configurations/configurations) for nodes in any cloud or on-premises datacenter. The service also imports [DSC Resources](/powershell/dsc/resources/resources), and assigns configurations to target nodes, all in the cloud. You can access Azure Automation State Configuration in the Azure portal by selecting **State configuration (DSC)** under **Configuration Management**. |
automation | Automation Dsc Remediate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-remediate.md | Last updated 07/17/2019 # Remediate noncompliant Azure Automation State Configuration servers > [!NOTE]-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). When servers are registered with Azure Automation State Configuration, the configuration mode is set to `ApplyOnly`, `ApplyAndMonitor`, or `ApplyAndAutoCorrect`. If the mode isn't set to `ApplyAndAutoCorrect`, |
automation | Automation Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md | -Azure Automation provides native integration of the Hybrid Runbook Worker role through the Azure virtual machine (VM) extension framework. The Azure VM agent is responsible for management of the extension on Azure VMs on Windows and Linux VMs, and [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) on Non-Azure machines including [Azure Arc-enabled Servers](../azure-arc/servers/overview.md) and [Azure Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/overview.md). Now there are two Hybrid Runbook Workers installation platforms supported by Azure Automation. +Azure Automation provides native integration of the Hybrid Runbook Worker role through the Azure virtual machine (VM) extension framework. The Azure VM agent is responsible for management of the extension on Azure VMs on Windows and Linux VMs, and [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) on Non-Azure machines including [Azure Arc-enabled Servers](/azure/azure-arc/servers/overview) and [Azure Arc-enabled VMware vSphere (preview)](/azure/azure-arc/vmware-vsphere/overview). Now there are two Hybrid Runbook Workers installation platforms supported by Azure Automation. | Platform | Description | ||| After the Update Management feature is enabled on Windows or Linux machines, you If you have more than 2,000 hybrid workers, to get a list of all of them, you can run the following PowerShell script: ```powershell-"Get-AzSubscription -SubscriptionName "<subscriptionName>" | Set-AzContext +Get-AzSubscription -SubscriptionName "<subscriptionName>" | Set-AzContext $workersList = (Get-AzAutomationHybridWorkerGroup -ResourceGroupName "<resourceGroupName>" -AutomationAccountName "<automationAccountName>").Runbookworker-$workersList | export-csv -Path "<Path>\output.csv" -NoClobber -NoTypeInformation" +$workersList | export-csv -Path "<Path>\output.csv" -NoClobber -NoTypeInformation ``` ## Next steps |
automation | Automation Linux Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md | -You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly it and against resources in the environment to manage those local resources. +You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. The Linux Hybrid Runbook Worker executes runbooks as a special user that can be elevated for running commands that need elevation. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. This article describes how to install the Hybrid Runbook Worker on a Linux machine, remove the worker, and remove a Hybrid Runbook Worker group. For User Hybrid Runbook Workers, see also [Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Automation](./extension-based-hybrid-runbook-worker-install.md) If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Mo ### Log Analytics agent -The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Linux operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). The agent is installed with certain service accounts that execute commands requiring root permissions. For more information, see [Service accounts](./automation-hrw-run-runbooks.md#service-accounts). +The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Linux operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). The agent is installed with certain service accounts that execute commands requiring root permissions. For more information, see [Service accounts](./automation-hrw-run-runbooks.md#service-accounts). ### Supported Linux operating systems To install and configure a Linux Hybrid Runbook Worker, perform the following st - For Azure VMs, install the Log Analytics agent for Linux using the [virtual machine extension for Linux](/azure/virtual-machines/extensions/oms-linux). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, the Azure CLI, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account. - - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). 
Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: + - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: - Using the VM extensions framework. This feature in Azure Arc-enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows and/or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Azure Arc-enabled servers: - - The [Azure portal](../azure-arc/servers/manage-vm-extensions-portal.md) - - The [Azure CLI](../azure-arc/servers/manage-vm-extensions-cli.md) - - [Azure PowerShell](../azure-arc/servers/manage-vm-extensions-powershell.md) - - Azure [Resource Manager templates](../azure-arc/servers/manage-vm-extensions-template.md) + - The [Azure portal](/azure/azure-arc/servers/manage-vm-extensions-portal) + - The [Azure CLI](/azure/azure-arc/servers/manage-vm-extensions-cli) + - [Azure PowerShell](/azure/azure-arc/servers/manage-vm-extensions-powershell) + - Azure [Resource Manager templates](/azure/azure-arc/servers/manage-vm-extensions-template) - Using Azure Policy. |
automation | Automation Manage Send Joblogs Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md | Azure Automation can send runbook job status and job streams to your Log Analyti - Trigger an email or alert based on your runbook job status (for example, failed or suspended). - Write advanced queries across your job streams. - Correlate jobs across Automation accounts.- - Use customized views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics through an [Azure dashboard](../azure-portal/azure-portal-dashboards.md). + - Use customized views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics through an [Azure dashboard](/azure/azure-portal/azure-portal-dashboards). - Get the audit logs related to create, modify, and delete operations on Automation accounts, runbooks, and other assets. Using Azure Monitor logs, you can consolidate logs from different resources in the same workspace, where they can be analyzed with [queries](/azure/azure-monitor/logs/log-query-overview) to quickly retrieve, consolidate, and analyze the collected data. You can create and test queries using [Log Analytics](/azure/azure-monitor/logs/log-query-overview) in the Azure portal and then either directly analyze the data using these tools or save queries for use with [visualization](/azure/azure-monitor/best-practices-analysis) or [alert rules](/azure/azure-monitor/alerts/alerts-overview). |
automation | Automation Windows Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md | -You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. +You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. This article describes how to deploy a user Hybrid Runbook Worker on a Windows machine, how to remove the worker, and how to remove a Hybrid Runbook Worker group. For user Hybrid Runbook Workers, see also [Deploy an extension-based Windows or Linux user Hybrid Runbook Worker in Automation](./extension-based-hybrid-runbook-worker-install.md) If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Mo ### Log Analytics agent -The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Windows operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). +The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Windows operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). ### Supported Windows operating system To install and configure a Windows Hybrid Runbook Worker, perform the following - For Azure VMs, install the Log Analytics agent for Windows using the [virtual machine extension for Windows](/azure/virtual-machines/extensions/oms-windows). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, PowerShell, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account. - - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: + - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: - Using the VM extensions framework. 
This feature in Azure Arc-enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers: - - The [Azure portal](../azure-arc/servers/manage-vm-extensions-portal.md) - - The [Azure CLI](../azure-arc/servers/manage-vm-extensions-cli.md) - - [Azure PowerShell](../azure-arc/servers/manage-vm-extensions-powershell.md) - - Azure [Resource Manager templates](../azure-arc/servers/manage-vm-extensions-template.md) + - The [Azure portal](/azure/azure-arc/servers/manage-vm-extensions-portal) + - The [Azure CLI](/azure/azure-arc/servers/manage-vm-extensions-cli) + - [Azure PowerShell](/azure/azure-arc/servers/manage-vm-extensions-powershell) + - Azure [Resource Manager templates](/azure/azure-arc/servers/manage-vm-extensions-template) - Using Azure Policy. |
automation | Enable From Automation Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-automation-account.md | Sign in to the [Azure portal](https://portal.azure.com). ## Enable non-Azure VMs -Machines not in Azure need to be added manually. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you also plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +Machines not in Azure need to be added manually. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you also plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. 1. From your Automation account select **Inventory** or **Change tracking** under **Configuration Management**. |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md | Machines connected to the Log Analytics workspace use the [Log Analytics agent]( > [!NOTE] > Change Tracking and Inventory requires linking a Log Analytics workspace to your Automation account. For a definitive list of supported regions, see [Azure Workspace mappings](../how-to/region-mappings.md). The region mappings don't affect the ability to manage VMs in a separate region from your Automation account. -As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../../lighthouse/overview.md). Azure Lighthouse allows you to perform operations at scale across several Microsoft Entra tenants at once, making management tasks like Change Tracking and Inventory more efficient across those tenants you're responsible for. Change Tracking and Inventory can manage machines in multiple subscriptions in the same tenant, or across tenants using [Azure delegated resource management](../../lighthouse/concepts/architecture.md). +As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](/azure/lighthouse/overview). Azure Lighthouse allows you to perform operations at scale across several Microsoft Entra tenants at once, making management tasks like Change Tracking and Inventory more efficient across those tenants you're responsible for. Change Tracking and Inventory can manage machines in multiple subscriptions in the same tenant, or across tenants using [Azure delegated resource management](/azure/lighthouse/concepts/architecture). ## Current limitations You can enable Change Tracking and Inventory in the following ways: - From your [Automation account](enable-from-automation-account.md) for one or more Azure and non-Azure machines. -- Manually for non-Azure machines, including machines or servers registered with [Azure Arc-enabled servers](../../azure-arc/servers/overview.md). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.+- Manually for non-Azure machines, including machines or servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. - For a single Azure VM from the [Virtual machine page](enable-from-vm.md) in the Azure portal. This scenario is available for Linux and Windows VMs. |
automation | Extension Based Hybrid Runbook Worker Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md | The extension-based onboarding is only for **User** Hybrid Runbook Workers. This For **System** Hybrid Runbook Worker onboarding, see [Deploy an agent-based Windows Hybrid Runbook Worker in Automation](./automation-windows-hrw-install.md) or [Deploy an agent-based Linux Hybrid Runbook Worker in Automation](./automation-linux-hrw-install.md). -You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including [Azure Arc-enabled servers](../azure-arc/servers/overview.md), [Arc-enabled VMware vSphere](../azure-arc/vmware-vsphere/overview.md), and [Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. +You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), [Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview), and [Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/overview). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment. Azure Automation stores and manages runbooks and then delivers them to one or mo - Two cores - 4 GB of RAM-- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMware VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled VMware vSphere VMs and install [Arc agent for Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled SCVMM VMs.+- **Non-Azure machines** must have the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMware VMs](/azure/azure-arc/vmware-vsphere/enable-guest-management-at-scale) to enable guest management for Arc-enabled VMware vSphere VMs and install [Arc agent for Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale) to enable guest management for Arc-enabled SCVMM VMs. - The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server, Arc-enabled VMware vSphere VM or Arc-enabled SCVMM VM. 
If the system-assigned managed identity isn't enabled, it will be enabled as part of the adding process. ### Supported operating systems You can also add machines to an existing hybrid worker group. 1. Select the checkbox next to the machine(s) you want to add to the hybrid worker group. - If you don't see your non-Azure machine listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent` see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled VMware vSphere and [Install Arc agent for Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled SCVMM VMs. + If you don't see your non-Azure machine listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent` see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMs](/azure/azure-arc/vmware-vsphere/enable-guest-management-at-scale) to enable guest management for Arc-enabled VMware vSphere and [Install Arc agent for Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale) to enable guest management for Arc-enabled SCVMM VMs. 1. Select **Add** to add the machine to the group. Review the parameters used in this template. **Prerequisites** -You would require an Azure VM or Arc-enabled server. You can follow the steps [here](../azure-arc/servers/onboard-portal.md) to create an Arc connected machine. +You would require an Azure VM or Arc-enabled server. You can follow the steps [here](/azure/azure-arc/servers/onboard-portal) to create an Arc connected machine. **Install and use Hybrid Worker extension** To check the version of the extension-based Hybrid Runbook Worker: Using [VM insights](/azure/azure-monitor/vm/vminsights-overview), you can monitor the performance of Azure VMs and Arc-enabled Servers deployed as Hybrid Runbook workers. Among multiple elements that are considered during performances, the VM insights monitors the key operating system performance indicators related to processor, memory, network adapter, and disk utilization. - For Azure VMs, see [How to chart performance with VM insights](/azure/azure-monitor/vm/vminsights-performance).-- For Arc-enabled servers, see [Tutorial: Monitor a hybrid machine with VM insights](../azure-arc/servers/learn/tutorial-enable-vm-insights.md).+- For Arc-enabled servers, see [Tutorial: Monitor a hybrid machine with VM insights](/azure/azure-arc/servers/learn/tutorial-enable-vm-insights). ## Next steps Using [VM insights](/azure/azure-monitor/vm/vminsights-overview), you can monito - To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](/azure/virtual-machines/extensions/features-windows) and [Azure VM extensions and features for Linux](/azure/virtual-machines/extensions/features-linux). 
-- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).+- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](/azure/azure-arc/servers/manage-vm-extensions). -- To learn about Azure management services for Arc-enabled VMware VMs, see [Install Arc agents at scale for your VMware VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md).+- To learn about Azure management services for Arc-enabled VMware VMs, see [Install Arc agents at scale for your VMware VMs](/azure/azure-arc/vmware-vsphere/enable-guest-management-at-scale). -- To learn about Azure management services for Arc-enabled SCVMM VMs, see [Install Arc agents at scale for Arc-enabled SCVMM VMs](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md).+- To learn about Azure management services for Arc-enabled SCVMM VMs, see [Install Arc agents at scale for Arc-enabled SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale). |
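To complement the portal and template flows described above, here is a minimal Azure CLI sketch for installing the extension-based Hybrid Runbook Worker on a Windows Azure VM. The resource names (`rg-automation`, `vm-worker-01`, `aa-contoso`) are placeholders, and it assumes the machine has already been added to a hybrid worker group and that the account exposes its hybrid service URL as `automationHybridServiceUrl`; verify both against your environment.

```azurecli
# The extension registers the machine against the Automation account's hybrid service URL.
url=$(az automation account show \
  --resource-group rg-automation --name aa-contoso \
  --query automationHybridServiceUrl -o tsv)

az vm extension set \
  --resource-group rg-automation --vm-name vm-worker-01 \
  --publisher Microsoft.Azure.Automation.HybridWorker \
  --name HybridWorkerForWindows \
  --settings "{\"AutomationAccountURL\": \"$url\"}"
```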
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | The purpose of the Extension-based approach is to simplify the installation and - Two cores - 4 GB of RAM-- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs.+- **Non-Azure machines** must have the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](/azure/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. - The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the installation process through the Azure portal. ### Supported operating systems To install Hybrid worker extension on an existing agent based hybrid worker, ens 1. Under **Process Automation**, select **Hybrid worker groups**, and then select your existing hybrid worker group to go to the **Hybrid worker group** page. 1. Under **Hybrid worker group**, select **Hybrid Workers** > **+ Add** to go to the **Add machines as hybrid worker** page.-1. Select the checkbox next to the existing Agent based (V1) Hybrid worker. If you don't see your agent-based Hybrid Worker listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers, or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. +1. Select the checkbox next to the existing Agent based (V1) Hybrid worker. If you don't see your agent-based Hybrid Worker listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers, or see [Manage VMware virtual machines Azure Arc](/azure/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. :::image type="content" source="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/add-machines-hybrid-worker-inline.png" alt-text="Screenshot of adding machines as hybrid worker." lightbox="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/add-machines-hybrid-worker-expanded.png"::: Review the parameters used in this template. 
**Prerequisites** -You would require an Azure VM or Arc-enabled server. You can follow the steps [here](../azure-arc/servers/onboard-portal.md) to create an Arc connected machine. +You would require an Azure VM or Arc-enabled server. You can follow the steps [here](/azure/azure-arc/servers/onboard-portal) to create an Arc connected machine. **Install and use Hybrid Worker extension** |
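If you still need to create the Arc connected machine called out in the prerequisites, the sketch below shows a typical `azcmagent connect` call run on the target machine after the Connected Machine agent package is installed. All values are placeholders.

```bash
# Run on the non-Azure machine itself; prompts for interactive Azure sign-in by default.
azcmagent connect \
  --resource-group "rg-hybrid-workers" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --location "westeurope"
```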
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md | Azure Automation supports [source control integration](source-control-integratio Automation is designed to work across Windows and Linux physical servers and virtual machines outside of Azure, on your corporate network, or other cloud provider. It delivers a consistent way to automate and configure deployed workloads and the operating systems that run them. The Hybrid Runbook Worker feature of Azure Automation enables running runbooks directly on the non-Azure physical server or virtual machine hosting the role, and against resources in the environment to manage those local resources. -Through [Arc-enabled servers](../azure-arc/servers/overview.md), it provides a consistent deployment and management experience for your non-Azure machines. It enables integration with the Automation service using the VM extension framework to deploy the Hybrid Runbook Worker role, and simplify onboarding to Update Management and Change Tracking and Inventory. +Through [Arc-enabled servers](/azure/azure-arc/servers/overview), it provides a consistent deployment and management experience for your non-Azure machines. It enables integration with the Automation service using the VM extension framework to deploy the Hybrid Runbook Worker role, and simplify onboarding to Update Management and Change Tracking and Inventory. ## Common scenarios Azure Automation supports management throughout the lifecycle of your infrastruc Depending on your requirements, one or more of the following Azure services integrate with or complement Azure Automation to help fulfill them: -* [Azure Arc-enabled servers](../azure-arc/servers/overview.md) enables simplified onboarding of hybrid machines to Update Management, Change Tracking and Inventory, and the Hybrid Runbook Worker role. +* [Azure Arc-enabled servers](/azure/azure-arc/servers/overview) enables simplified onboarding of hybrid machines to Update Management, Change Tracking and Inventory, and the Hybrid Runbook Worker role. * [Azure Alerts action groups](/azure/azure-monitor/alerts/action-groups) can initiate an Automation runbook when an alert is raised. * [Azure Monitor](/azure/azure-monitor/overview) to collect metrics and log data from your Automation account for further analysis and take action on the telemetry. Automation features such as Update Management and Change Tracking and Inventory rely on the Log Analytics workspace to deliver elements of their functionality. * [Azure Policy](../governance/policy/samples/built-in-policies.md) includes initiative definitions to help establish and maintain compliance with different security standards for your Automation account. |
automation | Dsc Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md | -> Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md). +> Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure VM and deploying a LAMP stack using Azure Automation State Configuration. |
automation | Install Hybrid Worker Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/install-hybrid-worker-extension.md | -The Azure Automation User Hybrid Worker enables the execution of PowerShell and Python scripts directly on machines for managing guest workloads or as a gateway to environments that aren't accessible from Azure. You can configure Windows and Linux Azure Virtual Machine. [Azure Arc-enabled Server](../../azure-arc/servers/overview.md), [Arc-enabled VMware vSphere VM](../../azure-arc/vmware-vsphere/overview.md), and [Azure Arc-enabled SCVMM](../../azure-arc/system-center-virtual-machine-manager/overview.md) as User Hybrid Worker by installing Hybrid Worker extension. +The Azure Automation User Hybrid Worker enables the execution of PowerShell and Python scripts directly on machines for managing guest workloads or as a gateway to environments that aren't accessible from Azure. You can configure Windows and Linux Azure Virtual Machine. [Azure Arc-enabled Server](/azure/azure-arc/servers/overview), [Arc-enabled VMware vSphere VM](/azure/azure-arc/vmware-vsphere/overview), and [Azure Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/overview) as User Hybrid Worker by installing Hybrid Worker extension. This quickstart shows you how to install Azure Automation Hybrid Worker extension on an Azure Virtual Machine through the Extensions blade on Azure portal. |
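For an Arc-enabled server (rather than an Azure VM), the equivalent of the portal flow is the connected machine extension API. A rough Azure CLI sketch follows, with placeholder names and the same `AutomationAccountURL` setting assumption as in the Azure VM case.

```azurecli
az connectedmachine extension create \
  --resource-group rg-automation \
  --machine-name my-arc-server \
  --location westeurope \
  --name HybridWorkerExtension \
  --publisher Microsoft.Azure.Automation.HybridWorker \
  --type HybridWorkerForWindows \
  --settings '{"AutomationAccountURL": "<automation-hybrid-service-url>"}'
```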
automation | Extension Based Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md | You are deploying an extension-based Hybrid Runbook Worker on a VM and it fails You are deploying the extension-based Hybrid Worker on a non-Azure VM that does not have the Arc connected machine agent installed. ### Resolution-Non-Azure machines must have the Arc connected machine agent installed before you deploy them as extension-based Hybrid Runbook Workers. To install the `AzureConnectedMachineAgent`, see [connect hybrid machines to Azure from the Azure portal](../../azure-arc/servers/onboard-portal.md) -for Arc-enabled servers or [Manage VMware virtual machines Azure Arc](../../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware VMs. +Non-Azure machines must have the Arc connected machine agent installed before you deploy them as extension-based Hybrid Runbook Workers. To install the `AzureConnectedMachineAgent`, see [connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) +for Arc-enabled servers or [Manage VMware virtual machines Azure Arc](/azure/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure#enable-guest-management) to enable guest management for Arc-enabled VMware VMs. ### Scenario: Hybrid Worker deployment fails due to System assigned identity not enabled |
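Before retrying the deployment after the Arc agent resolution above, it can help to confirm the machine really is Arc-connected. A quick check, using a hypothetical resource group and machine name:

```azurecli
# On the machine itself: show the local agent's status and the Azure resource it maps to.
azcmagent show

# From Azure: the machine resource should exist and report a "Connected" status.
az connectedmachine show \
  --resource-group rg-hybrid-workers --name my-onprem-vm \
  --query status -o tsv
```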
automation | Enable From Automation Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-automation-account.md | -This article describes how you can use your Automation account to enable the [Update Management](overview.md) feature for VMs in your environment, including machines or servers registered with [Azure Arc-enabled servers](../../azure-arc/servers/overview.md). To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management. +This article describes how you can use your Automation account to enable the [Update Management](overview.md) feature for VMs in your environment, including machines or servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management. > [!NOTE] > When enabling Update Management, only certain regions are supported for linking a Log Analytics workspace and an Automation account. For a list of the supported mapping pairs, see [Region mapping for Automation account and Log Analytics workspace](../how-to/region-mappings.md). This article describes how you can use your Automation account to enable the [Up * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [Automation account](../automation-security-overview.md) to manage machines.-* An [Azure virtual machine](/azure/virtual-machines/windows/quick-create-portal), or VM or server registered with Azure Arc-enabled servers. Non-Azure VMs or servers need to have the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for Windows or Linux installed and reporting to the workspace linked to the Automation account where Update Management is enabled. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +* An [Azure virtual machine](/azure/virtual-machines/windows/quick-create-portal), or VM or server registered with Azure Arc-enabled servers. Non-Azure VMs or servers need to have the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for Windows or Linux installed and reporting to the workspace linked to the Automation account where Update Management is enabled. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. ## Sign in to Azure |
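Because Update Management depends on the Automation account being linked to a Log Analytics workspace in a supported region pair, you may want to script that link. The sketch below uses placeholder names and assumes the conventional linked-service name `Automation` for this link; confirm against your environment.

```azurecli
automation_id=$(az automation account show \
  --resource-group rg-automation --name aa-contoso --query id -o tsv)

# Link the workspace to the Automation account (linked service named "Automation").
az monitor log-analytics workspace linked-service create \
  --resource-group rg-monitoring \
  --workspace-name law-automation \
  --name Automation \
  --write-access-resource-id "$automation_id"
```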
automation | Operating System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md | The section describes operating system-specific requirements. For additional gui - Windows PowerShell 5.1 is required ([Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).) - The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-windows-hrw-install.md#prerequisites). -Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Microsoft Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to Microsoft Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. You can use Update Management with Microsoft Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](/azure/azure-monitor/agents/agent-windows) is required for Windows servers managed by sites in your Configuration Manager environment. By default, Windows VMs that are deployed from Azure Marketplace are set to rece > [!NOTE] > Update assessment of Linux machines is supported in certain regions only. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation). -For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, to monitor the machines, use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative instead. 
+For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, to monitor the machines, use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative instead. ## Next steps |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md | -As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../../lighthouse/overview.md). Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Microsoft Entra tenant, or across tenants using Azure Lighthouse. +As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](/azure/lighthouse/overview). Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Microsoft Entra tenant, or across tenants using Azure Lighthouse. Microsoft offers other capabilities to help you manage updates for your Azure VMs or Azure virtual machine scale sets that you should consider as part of your overall update management strategy. |
automation | Plan Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md | The [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for W On Azure VMs, if the Log Analytics agent isn't already installed, when you enable Update Management for the VM it is automatically installed using the Log Analytics VM extension for [Windows](/azure/virtual-machines/extensions/oms-windows) or [Linux](/azure/virtual-machines/extensions/oms-linux). The agent is configured to report to the Log Analytics workspace linked to the Automation account Update Management is enabled in. -Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with [VM insights](/azure/azure-monitor/vm/vminsights-overview), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with [VM insights](/azure/azure-monitor/vm/vminsights-overview), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. If you're enabling a machine that's currently managed by Operations Manager, a new agent isn't required. The workspace information is added to the agents configuration when you connect the management group to the Log Analytics workspace. |
automation | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md | Start/Stop VM runbooks have been updated to use Az modules in place of Azure Res **Type:** New feature -Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc enabled servers DSC VM extension. For more information, read [Arc enabled servers VM extensions overview](../azure-arc/servers/manage-vm-extensions.md). +Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc enabled servers DSC VM extension. For more information, read [Arc enabled servers VM extensions overview](/azure/azure-arc/servers/manage-vm-extensions). ### July 2020 |
automation | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md | On **31 August 2024**, Azure Automation will [retire](https://azure.microsoft.co ### General Availability: Azure Automation User Hybrid Runbook Worker Extension -User Hybrid Worker enables execution of the scripts directly on the machines for managing guest workloads or as a gateway to environments that are not accessible from Azure. Azure Automation announces **General Availability of User Hybrid Worker extension**, that is based on Virtual Machine extensions framework and provides a **seamless and integrated** installation experience. It is supported for Windows & Linux Azure VMs and [Azure Arc-enabled Servers](../azure-arc/servers/overview.md). It is also available for [Azure Arc-enabled VMware vSphere VMs](../azure-arc/vmware-vsphere/overview.md) in preview. +User Hybrid Worker enables execution of the scripts directly on the machines for managing guest workloads or as a gateway to environments that are not accessible from Azure. Azure Automation announces **General Availability of User Hybrid Worker extension**, that is based on Virtual Machine extensions framework and provides a **seamless and integrated** installation experience. It is supported for Windows & Linux Azure VMs and [Azure Arc-enabled Servers](/azure/azure-arc/servers/overview). It is also available for [Azure Arc-enabled VMware vSphere VMs](/azure/azure-arc/vmware-vsphere/overview) in preview. ## October 2022 |
avere-vfxt | Avere Vfxt Open Ticket | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-open-ticket.md | Follow these steps to make sure that your support ticket is tagged with a resour ## Request a quota increase -Read [Quota for the vFXT cluster](avere-vfxt-prereqs.md#quota-for-the-vfxt-cluster) to learn what components are needed to deploy the Avere vFXT for Azure. You can [request a quota increase](../azure-portal/supportability/regional-quota-requests.md) from the Azure portal. +Read [Quota for the vFXT cluster](avere-vfxt-prereqs.md#quota-for-the-vfxt-cluster) to learn what components are needed to deploy the Avere vFXT for Azure. You can [request a quota increase](/azure/azure-portal/supportability/regional-quota-requests) from the Azure portal. |
avere-vfxt | Avere Vfxt Prereqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-prereqs.md | There are some workarounds to allow a non-owner to create an Avere vFXT for Azur ## Quota for the vFXT cluster -Check that you have sufficient quota for the following Azure components. If needed, [request a quota increase](../azure-portal/supportability/regional-quota-requests.md). +Check that you have sufficient quota for the following Azure components. If needed, [request a quota increase](/azure/azure-portal/supportability/regional-quota-requests). > [!NOTE] > The virtual machines and SSD components listed here are for the vFXT cluster itself. Remember that you also need quota for the VMs and SSDs you will use for your compute farm. |
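Before filing the quota request, you can check your current regional vCPU usage against the limits with the Azure CLI; `eastus` below is only an example region.

```azurecli
# Lists compute quotas for the region; look at the vCPU families the vFXT cluster and compute farm will use.
az vm list-usage --location eastus --output table
```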
azure-app-configuration | Integrate Ci Cd Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md | This article explains how to use data from Azure App Configuration in a continuo ## Use App Configuration in your Azure DevOps Pipeline -If you have an Azure DevOps Pipeline, you can fetch key-values from App Configuration and set them as task variables. The [Azure App Configuration DevOps extension](https://go.microsoft.com/fwlink/?linkid=2091063) is an add-on module that provides this functionality. Follow its instructions to use the extension in a build or release task sequence. +If you have an Azure DevOps Pipeline, you can fetch key-values from App Configuration and set them as task variables. The Azure App Configuration DevOps extension is an add-on module that provides this functionality. [Get this module](https://go.microsoft.com/fwlink/?linkid=2091063) and refer to [Pull settings from App Configuration with Azure Pipelines](./pull-key-value-devops-pipeline.md) for instructions to use it in your Azure Pipelines. ## Deploy App Configuration data with your application |
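If you prefer not to install the DevOps extension, a plain Azure CLI step can fetch a key-value and surface it as a pipeline variable. The store name, key, and label below are hypothetical, and the step assumes the pipeline's identity or service connection has data-plane access to the store.

```azurecli
# Read one key-value from App Configuration and expose it to later pipeline steps.
value=$(az appconfig kv show \
  --name my-appconfig-store --key "App:Title" --label prod \
  --query value -o tsv)
echo "##vso[task.setvariable variable=AppTitle]$value"
```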
azure-app-configuration | Quickstart Container Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-container-apps.md | Create an Azure Container Registry (ACR). ACR enables you to build, store, and m #### [Portal](#tab/azure-portal) -1. To create the container registry, follow the [Azure Container Registry quickstart](../container-registry/container-registry-get-started-portal.md). +1. To create the container registry, follow the [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-portal). 1. Once the deployment is complete, open your ACR instance and from the left menu, select **Settings > Access keys**. 1. Take note of the **Login server** value listed on this page. You'll use this information in a later step. 1. Switch **Admin user** to *Enabled*. This option lets you connect the ACR to Azure Container Apps using admin user credentials. Alternatively, you can leave it disabled and configure the container app to [pull images from the registry with a managed identity](../container-apps/managed-identity-image-pull.md). #### [Azure CLI](#tab/azure-cli) -1. Create an ACR instance using the following command. It creates a basic tier registry named *myregistry* with admin user enabled that allows the container app to connect to the registry using admin user credentials. For more information, see [Azure Container Registry quickstart](../container-registry/container-registry-get-started-azure-cli.md). +1. Create an ACR instance using the following command. It creates a basic tier registry named *myregistry* with admin user enabled that allows the container app to connect to the registry using admin user credentials. For more information, see [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-azure-cli). ```azurecli az acr create In this quickstart, you: - Added the container image to Azure Container Apps - Browsed to the URL of the Azure Container Apps instance updated with the settings you configured in your App Configuration store. -The managed identity enables one Azure resource to access another without you maintaining secrets. You can streamline access from Container Apps to other Azure resources. For more information, see how to [access App Configuration using the managed identity](howto-integrate-azure-managed-service-identity.md) and how to [[access Container Registry using the managed identity](../container-registry/container-registry-authentication-managed-identity.md)]. +The managed identity enables one Azure resource to access another without you maintaining secrets. You can streamline access from Container Apps to other Azure resources. For more information, see how to [access App Configuration using the managed identity](howto-integrate-azure-managed-service-identity.md) and how to [[access Container Registry using the managed identity](/azure/container-registry/container-registry-authentication-managed-identity)]. To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial. |
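The CLI command quoted above is truncated in this excerpt; a complete form of the registry creation step might look like the following, using the same example registry name and the admin-user choice described in the article.

```azurecli
# Basic-tier registry with the admin user enabled; names are examples.
az acr create \
  --resource-group my-resource-group \
  --name myregistry \
  --sku Basic \
  --admin-enabled true

# Note the login server; the container app configuration references it in a later step.
az acr show --name myregistry --query loginServer -o tsv
```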
azure-app-configuration | Quickstart Feature Flag Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md | Follow the documents to create an ASP.NET Core app with dynamic configuration. ## Create a feature flag -Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./manage-feature-flags.md#create-a-feature-flag). +Add a feature flag called *Beta* to the App Configuration store (created in the [Prerequisites](./quickstart-feature-flag-aspnet-core.md#prerequisites) steps), and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./manage-feature-flags.md#create-a-feature-flag). > [!div class="mx-imgBorder"] > ![Enable feature flag named Beta](./media/add-beta-feature-flag.png) ## Use a feature flag -1. Navigate into the project's directory, and run the following command to add a reference to the [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet package. +1. Navigate into the project's directory (created in the [Prerequisites](./quickstart-feature-flag-aspnet-core.md#prerequisites) steps), and run the following command to add a reference to the [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet package. ```dotnetcli dotnet add package Microsoft.FeatureManagement.AspNetCore |
azure-arc | Choose Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/choose-service.md | - Title: Choosing the right Azure Arc service for machines -description: Learn about the different services offered by Azure Arc and how to choose the right one for your machines. Previously updated : 06/19/2024----# Choosing the right Azure Arc service for machines --Azure Arc offers different services based on your existing IT infrastructure and management needs. Before onboarding your resources to Azure Arc-enabled servers, you should investigate the different Azure Arc offerings to determine which best suits your requirements. Choosing the right Azure Arc service provides the best possible inventorying and management of your resources. --There are several different ways you can connect your existing Windows and Linux machines to Azure Arc: --- Azure Arc-enabled servers-- Azure Arc-enabled VMware vSphere-- Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)-- Azure Stack HCI--Each of these services extends the Azure control plane to your existing infrastructure and enables the use of [Azure security, governance, and management capabilities using the Connected Machine agent](/azure/azure-arc/servers/overview). Other services besides Azure Arc-enabled servers also use an [Azure Arc resource bridge](/azure/azure-arc/resource-bridge/overview), a part of the core Azure Arc platform that provides self-servicing and additional management capabilities. --General recommendations about the right service to use are as follows: --|If your machine is a... |...connect to Azure with... | -||| -|VMware VM (not running on AVS) |[Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) | -|Azure VMware Solution (AVS) VM |[Azure Arc-enabled VMware vSphere for Azure VMware Solution](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) | -|VM managed by System Center Virtual Machine Manager |[Azure Arc-enabled SCVMM](system-center-virtual-machine-manager/overview.md) | -|Azure Stack HCI VM |[Azure Stack HCI](/azure-stack/hci/overview) | -|Physical server |[Azure Arc-enabled servers](servers/overview.md) | -|VM on another hypervisor |[Azure Arc-enabled servers](servers/overview.md) | -|VM on another cloud provider |[Azure Arc-enabled servers](servers/overview.md) | --If you're unsure about which of these services to use, you can start with Azure Arc-enabled servers and add a resource bridge for additional management capabilities later. Azure Arc-enabled servers allows you to connect servers containing all of the types of VMs supported by the other services and provides a wide range of capabilities such as Azure Policy and monitoring, while adding resource bridge can extend additional capabilities. --Region availability also varies between Azure Arc services, so you may need to use Azure Arc-enabled servers if a more specialized version of Azure Arc is unavailable in your preferred region. See [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc®ions=all&rar=true) to learn more about region availability for Azure Arc services. --Where your machine runs determines the best Azure Arc service to use. Organizations with diverse infrastructure may end up using more than one Azure Arc service; this is alright. The core set of features remains the same no matter which Azure Arc service you use. 
--## Azure Arc-enabled servers --[Azure Arc-enabled servers](servers/overview.md) lets you manage Windows and Linux physical servers and virtual machines hosted outside of Azure, on your corporate network, or other cloud provider. When connecting your machine to Azure Arc-enabled servers, you can perform various operational functions similar to native Azure virtual machines. --### Capabilities --- Govern: Assign Azure Automanage machine configurations to audit settings within the machine. Utilize Azure Policy pricing guide for cost understanding.--- Protect: Safeguard non-Azure servers with Microsoft Defender for Endpoint, integrated through Microsoft Defender for Cloud. This includes threat detection, vulnerability management, and proactive security monitoring. Utilize Microsoft Sentinel for collecting security events and correlating them with other data sources.--- Configure: Employ Azure Automation for managing tasks using PowerShell and Python runbooks. Use Change Tracking and Inventory for assessing configuration changes. Utilize Update Management for handling OS updates. Perform post-deployment configuration and automation tasks using supported Azure Arc-enabled servers VM extensions.--- Monitor: Utilize VM insights for monitoring OS performance and discovering application components. Collect log data, such as performance data and events, through the Log Analytics agent, storing it in a Log Analytics workspace.--- Procure Extended Security Updates (ESUs) at scale for your Windows Server 2012 and 2012R2 machines running on vCenter managed estate.--> [!IMPORTANT] -> Azure Arc-enabled VMware vSphere and Azure Arc-enabled SCVMM have all the capabilities of Azure Arc-enabled servers, but also provide specific, additional capabilities. -> -## Azure Arc-enabled VMware vSphere --[Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) simplifies the management of hybrid IT resources distributed across VMware vSphere and Azure. --Running software in Azure VMware Solution, as a private cloud in Azure, offers some benefits not realized by operating your environment outside of Azure. For software running in a VM, such as SQL Server and Windows Server, running in Azure VMware Solution provides additional value such as free Extended Security Updates (ESUs). --To take advantage of these benefits if you're running in an Azure VMware Solution, it's important to follow respective [onboarding](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) processes to fully integrate the experience with the AVS private cloud. --Additionally, when a VM in Azure VMware Solution private cloud is Azure Arc-enabled using a method distinct from the one outlined in the AVS public document, the steps are provided in the [document](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) to refresh the integration between the Azure Arc-enabled VMs and Azure VMware Solution. 
--### Capabilities --- Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Azure Arc at scale.--- Perform various virtual machine (VM) operations directly from Azure, such as create, resize, delete, and power cycle operations such as start/stop/restart on VMware VMs consistently with Azure.--- Empower developers and application teams to self-serve VM operations on-demand using Azure role-based access control (RBAC).--- Install the Azure Arc-connected machine agent at scale on VMware VMs to govern, protect, configure, and monitor them.--- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.--## Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) --[Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md) (SCVMM) empowers System Center customers to connect their VMM environment to Azure and perform VM self-service operations from Azure portal. --Azure Arc-enabled System Center Virtual Machine Manager also allows you to manage your hybrid environment consistently and perform self-service VM operations through Azure portal. For Microsoft Azure Pack customers, this solution is intended as an alternative to perform VM self-service operations. --### Capabilities --- Discover and onboard existing SCVMM managed VMs to Azure.--- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on SCVMM managed VMs directly from Azure.--- Empower developers and application teams to self-serve VM operations on demand using Azure role-based access control (RBAC).--- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.--- Install the Azure Arc-connected machine agents at scale on SCVMM VMs to govern, protect, configure, and monitor them.--## Azure Stack HCI --[Azure Stack HCI](/azure-stack/hci/overview) is a hyperconverged infrastructure operating system delivered as an Azure service. This is a hybrid solution that is designed to host virtualized Windows and Linux VM or containerized workloads and their storage. Azure Stack HCI is a hybrid product that is offered on validated hardware and connects on-premises estates to Azure, enabling cloud-based services, monitoring and management. This helps customers manage their infrastructure from Azure and run virtualized workloads on-premises, making it easy for them to consolidate aging infrastructure and connect to Azure. --> [!NOTE] -> Azure Stack HCI comes with Azure resource bridge installed and uses the Azure Arc control plane for infrastructure and workload management, allowing you to monitor, update, and secure your HCI infrastructure from the Azure portal. 
-> --### Capabilities --- Deploy and manage workloads, including VMs and Kubernetes clusters from Azure through the Azure Arc resource bridge.--- Manage VM lifecycle operations such as start, stop, delete from Azure control plane.--- Manage Kubernetes lifecycle operations such as scale, update, upgrade, and delete clusters from Azure control plane.--- Install Azure connected machine agent and Azure Arc-enabled Kubernetes agent on your VM and Kubernetes clusters to use Azure services (i.e., Azure Monitor, Azure Defender for cloud, etc.).--- Leverage Azure Virtual Desktop for Azure Stack HCI to deploy session hosts on to your on-premises infrastructure to better meet your performance or data locality requirements.--- Empower developers and application teams to self-serve VM and Kubernetes cluster operations on demand using Azure role-based access control (RBAC).--- Monitor, update, and secure your Azure Stack HCI infrastructure and workloads across fleets of locations directly from the Azure portal.--- Deploy and manage static and DHCP-based logical networks on-premises to host your workloads.--- VM image management with Azure Marketplace integration and ability to bring your own images from Azure storage account and cluster shared volumes.--- Create and manage storage paths to store your VM disks and config files.--## Capabilities at a glance --The following table provides a quick way to see the major capabilities of the three Azure Arc services that connect your existing Windows and Linux machines to Azure Arc. --| _ |Arc-enabled servers |Arc-enabled VMware vSphere |Arc-enabled SCVMM |Azure Stack HCI | -||||||| -|Microsoft Defender for Cloud |✓ |✓ |✓ |✓ | -|Microsoft Sentinel | ✓ |✓ |✓ |✓ | -|Azure Automation |✓ |✓ |✓ |✓ | -|Azure Update Manager |✓ |✓ |✓ |✓ | -|VM extensions |✓ |✓ |✓ |✓ | -|Azure Monitor |✓ |✓ |✓ |✓ | -|Extended Security Updates for Windows Server 2012/2012R2 and SQL Server 2012 (11.x) |✓ |✓ |✓ |✓ | -|Discover & onboard VMs to Azure | |✓ |✓ |✗ | -|Lifecycle operations (start/stop VMs, etc.) | |✓ |✓ |✓ | -|Self-serve VM provisioning | |✓ |✓ |✓ | -|SQL Server enabled by Azure Arc |✓ |✓ |✓ |✓ | --## Switching from Arc-enabled servers to another service --If you currently use Azure Arc-enabled servers, you can get the additional capabilities that come with Arc-enabled VMware vSphere or Arc-enabled SCVMM: --- [Enable virtual hardware and VM CRUD capabilities in a VMware machine with Azure Arc agent installed](/azure/azure-arc/vmware-vsphere/enable-virtual-hardware)--- [Enable virtual hardware and VM CRUD capabilities in an SCVMM machine with Azure Arc agent installed](/azure/azure-arc/system-center-virtual-machine-manager/enable-virtual-hardware-scvmm)- |
azure-arc | Alternate Key Based | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/alternate-key-based.md | - Title: Alternate key-based configuration for Cloud Ingest Edge Volumes -description: Learn about an alternate key-based configuration for Cloud Ingest Edge Volumes. ---- Previously updated : 08/26/2024---# Alternate: Key-based authentication configuration for Cloud Ingest Edge Volumes --This article describes an alternate configuration for [Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) (blob upload with local purge) with key-based authentication. --This configuration is an alternative option for use with key-based authentication methods. You should review the recommended configuration using system-assigned managed identities in [Cloud Ingest Edge Volumes configuration](cloud-ingest-edge-volume-configuration.md). --## Prerequisites --1. Create a storage account [following these instructions](/azure/storage/common/storage-account-create?tabs=azure-portal). -- > [!NOTE] - > When you create a storage account, it's recommended that you create it under the same resource group and region/location as your Kubernetes cluster. --1. Create a container in the storage account that you created in the previous step, [following these instructions](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container). --## Create a Kubernetes secret --Edge Volumes supports the following three authentication methods: --- Shared Access Signature (SAS) Authentication (recommended)-- Connection String Authentication-- Storage Key Authentication--After you complete authentication for one of these methods, proceed to the [Create a Cloud Ingest Persistent Volume Claim (PVC)](#create-a-cloud-ingest-persistent-volume-claim-pvc) section. --### [Shared Access Signature (SAS) authentication](#tab/sas) --### Create a Kubernetes secret using Shared Access Signature (SAS) authentication --You can configure SAS authentication using YAML and `kubectl`, or by using the Azure CLI. --To find your `storageaccountsas`, perform the following procedure: --1. Navigate to your storage account in the Azure portal. -1. Expand **Security + networking** on the left blade and then select **Shared access signature**. -1. Under **Allowed resource types**, select **Service > Container > Object**. -1. Under **Allowed permissions**, unselect **Immutable storage** and **Permanent delete**. -1. Under **Start and expiry date/time**, choose your desired end date and time. -1. At the bottom, select **Generate SAS and connection string**. -1. The values listed under **SAS token** are used for the `storageaccountsas` variables in the next section. --#### Shared Access Signature (SAS) authentication using YAML and `kubectl` --1. Create a file named `sas.yaml` with the following contents. Replace `metadata::name`, `metadata::namespace`, and `storageaccountconnectionstring` with your own values. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: v1 - kind: Secret - metadata: - ### This name should look similar to "kharrisStorageAccount-secret" where "kharrisStorageAccount" is replaced with your storage account name - name: <your-storage-acct-name-secret> - # Use a namespace that matches your intended consuming pod, or "default" - namespace: <your-intended-consuming-pod-or-default> - stringData: - authType: SAS - # Container level SAS (must have ? prefixed) - storageaccountsas: "?..." - type: Opaque - ``` --1. 
To apply `sas.yaml`, run: -- ```bash - kubectl apply -f "sas.yaml" - ``` --#### Shared Access Signature (SAS) authentication using CLI --- If you want to scope SAS authentication at the container level, use the following commands. You must update `YOUR_CONTAINER_NAME` from the first command and `YOUR_NAMESPACE`, `YOUR_STORAGE_ACCT_NAME`, and `YOUR_SECRET` from the second command:-- ```bash - az storage container generate-sas [OPTIONAL auth via --connection-string "..."] --name YOUR_CONTAINER_NAME --permissions acdrw --expiry '2025-02-02T01:01:01Z' - kubectl create secret generic -n "YOUR_NAMESPACE" "YOUR_STORAGE_ACCT_NAME"-secret --from-literal=storageaccountsas="YOUR_SAS" - ``` --### [Connection string authentication](#tab/connectionstring) --### Create a Kubernetes secret using connection string authentication --You can configure connection string authentication using YAML and `kubectl`, or by using Azure CLI. --To find your `storageaccountconnectionstring`, perform the following procedure: --1. Navigate to your storage account in the Azure portal. -1. Expand **Security + networking** on the left blade and then select **Shared access signature**. -1. Under **Allowed resource types**, select **Service > Container > Object**. -1. Under **Allowed permissions**, unselect **Immutable storage** and **Permanent delete**. -1. Under **Start and expiry date/time**, choose your desired end date and time. -1. At the bottom, select **Generate SAS and connection string**. -1. The values listed under **Connection string** are used for the `storageaccountconnectionstring` variables in the next section.. --For more information, see [Create a connection string using a shared access signature](/azure/storage/common/storage-configure-connection-string?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#create-a-connection-string-using-a-shared-access-signature). --#### Connection string authentication using YAML and `kubectl` --1. Create a file named `connectionString.yaml` with the following contents. Replace `metadata::name`, `metadata::namespace`, and `storageaccountconnectionstring` with your own values. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: v1 - kind: Secret - metadata: - ### This name should look similar to "kharrisStorageAccount-secret" where "kharrisStorageAccount" is replaced with your storage account name - name: <your-storage-acct-name-secret> - # Use a namespace that matches your intended consuming pod or "default" - namespace: <your-intended-consuming-pod-or-default> - stringData: - authType: CONNECTION_STRING - # Connection string which can contain a storage key or SAS. - # Depending on your decision on using storage key or SAS, comment out the undesired storageaccoutnconnectionstring. - # - Storage key example - - storageaccountconnectionstring: "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCT_NAME_HERE;AccountKey=YOUR_ACCT_KEY_HERE;EndpointSuffix=core.windows.net" - # - SAS example - - storageaccountconnectionstring: "BlobEndpoint=https://YOUR_BLOB_ENDPOINT_HERE;SharedAccessSignature=YOUR_SHARED_ACCESS_SIG_HERE" - type: Opaque - ``` --1. To apply `connectionString.yaml`, run: -- ```bash - kubectl apply -f "connectionString.yaml" - ``` --#### Connection string authentication using CLI --A connection string can contain a storage key or SAS. --- For a storage key connection string, run the following commands. 
You must update the `your_storage_acct_name` value from the first command, and the `your_namespace`, `your_storage_acct_name`, and `your_secret` values from the second command:-- ```bash - az storage account show-connection-string --name YOUR_STORAGE_ACCT_NAME --output tsv - kubectl create secret generic -n "your_namespace" "your_storage_acct_name"-secret --from-literal=storageaccountconnectionstring="your_secret" - ``` --- For a SAS connection string, run the following commands. You must update the `your_storage_acct_name` and `your_sas_token` values from the first command, and the `your_namespace`, `your_storage_acct_name`, and `your_secret` values from the second command:-- ```bash - az storage account show-connection-string --name your_storage_acct_name --sas-token "your_sas_token" -output tsv - kubectl create secret generic -n "your_namespace" "your_storage_acct_name"-secret --from-literal=storageaccountconnectionstring="your_secret" - ``` --### [Storage key authentication](#tab/storagekey) --### Create a Kubernetes secret using storage key authentication --1. Create a file named `add-key.sh` with the following contents. No edits to the contents are necessary: -- ```bash - #!/usr/bin/env bash - - while getopts g:n:s: flag - do - case "${flag}" in - g) RESOURCE_GROUP=${OPTARG};; - s) STORAGE_ACCOUNT=${OPTARG};; - n) NAMESPACE=${OPTARG};; - esac - done - - SECRET=$(az storage account keys list -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query [0].value --output tsv) - - kubectl create secret generic -n "${NAMESPACE}" "${STORAGE_ACCOUNT}"-secret --from-literal=storageaccountkey="${SECRET}" --from-literal=storageaccountname="${STORAGE_ACCOUNT}" - ``` --1. Once you create the file, change the write permissions on the file and execute the shell script using the following commands. Running these commands creates a secret named `{your_storage_account}-secret`. This secret name is used for the `secretName` value when you configure the Persistent Volume (PV). -- ```bash - chmod +x add-key.sh - ./add-key.sh -g "$your_resource_group_name" -s "$your_storage_account_name" -n "$your_kubernetes_namespace" - ``` ----## Create a Cloud Ingest Persistent Volume Claim (PVC) --1. Create a file named `cloudIngestPVC.yaml` with the following contents. You must edit the `metadata::name` value, and add a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. You must also update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a name for the PVC ### - name: <your-storage-acct-name-secret> - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: <your-intended-consuming-pod-or-default> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: cloud-backed-sc - ``` --2. To apply `cloudIngestPVC.yaml`, run: -- ```bash - kubectl apply -f "cloudIngestPVC.yaml" - ``` --## Attach sub-volume to Edge Volume --1. Get the name of your Edge Volume using the following command: -- ```bash - kubectl get edgevolumes - ``` --1. Create a file named `edgeSubvolume.yaml` and copy the following contents. 
Update the variables with your information: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your sub-volume. - - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`. - - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash. - - `spec::auth::authType`: Depends on what authentication method you used in the previous steps. Accepted inputs include `sas`, `connection_string`, and `key`. - - `spec::auth::secretName`: If you used storage key authentication, your `secretName` is `{your_storage_account_name}-secret`. If you used connection string or SAS authentication, your `secretName` was specified by you. - - `spec::auth::secretNamespace`: Matches your intended consuming pod, or `default`. - - `spec::container`: The container name in your storage account. - - `spec::storageaccountendpoint`: Navigate to your storage account in the Azure portal. On the **Overview** page, near the top right of the screen, select **JSON View**. You can find the `storageaccountendpoint` link under **properties::primaryEndpoints::blob**. Copy the entire link (for example, `https://mytest.blob.core.windows.net/`). -- ```yaml - apiVersion: "arccontainerstorage.azure.net/v1" - kind: EdgeSubvolume - metadata: - name: <create-a-subvolume-name-here> - spec: - edgevolume: <your-edge-volume-name-here> - path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash. - auth: - authType: MANAGED_IDENTITY - secretName: <your-secret-name> - secretNamespace: <your_namespace> - storageaccountendpoint: <your_storage_account_endpoint> - container: <your-blob-storage-account-container-name> - ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration - ``` --2. To apply `edgeSubvolume.yaml`, run: -- ```bash - kubectl apply -f "edgeSubvolume.yaml" - ``` --### Optional: Modify the `ingestPolicy` from the default --1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. Update the following variables with your preferences. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your **ingestPolicy**. This name must be updated and referenced in the spec::ingestPolicy section of your `edgeSubvolume.yaml`. - - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to **oldest-first**). Options for order are: **oldest-first** or **newest-first**. - - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000. - - `spec::eviction::order`: How files are evicted (defaults to **unordered**). Options for eviction order are: **unordered** or **never**. - - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000. 
-- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeIngestPolicy - metadata: - name: <create-a-policy-name-here> # This will need to be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml - spec: - ingest: - order: <your-ingest-order> - minDelaySec: <your-min-delay-sec> - eviction: - order: <your-eviction-order> - minDelaySec: <your-min-delay-sec> - ``` --1. To apply `myedgeingest-policy.yaml`, run: -- ```bash - kubectl apply -f "myedgeingest-policy.yaml" - ``` --## Attach your app (Kubernetes native application) --1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Replace `containers::name` and `volumes::persistentVolumeClaim::claimName` with your values. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: cloudingestedgevol-deployment ### This will need to be unique for every volume you choose to create - spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the busy box. ### - - name: <create-a-container-name-here> - image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09 - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data &> || break; sleep 1; done" - volumeMounts: - ### This name must match the following volumes::name attribute ### - - name: wyvern-volume - ### This mountPath is where the PVC will be attached to the pod's filesystem ### - mountPath: "/data" - volumes: - ### User-defined 'name' that is used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. To apply `deploymentExample.yaml`, run: -- ```bash - kubectl apply -f "deploymentExample.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name; you use it in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods will appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the last step: -- ```bash - kubectl exec -it pod_name_here -- sh - ``` --1. Change directories (`cd`) into the `/data` mount path as specified in your `deploymentExample.yaml`. --1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Now, `cd` into `/your_path_name_here`, and replace `your_path_name_here` with your respective details. --1. 
As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --1. In the Azure portal, navigate to your storage account and find the container specified in Step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should see `file1.txt` populated within the container. If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading. --## Next steps --After completing these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana. --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
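Optionally, you can verify the upload from the command line instead of the portal. The following sketch lists the blobs in your ingest container with Azure CLI; the account name, container name, and the use of key-based authorization (`--auth-mode key`) are placeholders and assumptions to adapt to your own environment and preferred auth mode:

```bash
# List blobs in the ingest container to confirm that file1.txt was uploaded.
# YOUR_STORAGE_ACCT_NAME and YOUR_CONTAINER_NAME are placeholders.
az storage blob list \
  --account-name "YOUR_STORAGE_ACCT_NAME" \
  --container-name "YOUR_CONTAINER_NAME" \
  --auth-mode key \
  --query "[].name" \
  --output tsv
```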
azure-arc | Alternate Onelake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/alternate-onelake.md | - Title: Alternate OneLake configuration for Cloud Ingest Edge Volumes -description: Learn about an alternate Cloud Ingest Edge Volumes configuration. ---- Previously updated : 08/26/2024---# Alternate: OneLake configuration for Cloud Ingest Edge Volumes --This article describes an alternate configuration for [Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) (blob upload with local purge) for OneLake Lakehouses. --This configuration is an alternative option that you can use with key-based authentication methods. You should review the recommended configuration using the system-assigned managed identities described in [Cloud Ingest Edge Volumes configuration](cloud-ingest-edge-volume-configuration.md). --## Configure OneLake for Extension Identity --### Add Extension Identity to OneLake workspace --1. Navigate to your OneLake portal; for example, `https://youraccount.powerbi.com`. -1. Create or navigate to your workspace. - :::image type="content" source="media/onelake-workspace.png" alt-text="Screenshot showing workspace ribbon in portal." lightbox="media/onelake-workspace.png"::: -1. Select **Manage Access**. - :::image type="content" source="media/onelake-manage-access.png" alt-text="Screenshot showing manage access screen in portal." lightbox="media/onelake-manage-access.png"::: -1. Select **Add people or groups**. -1. Enter your extension name from your Azure Container Storage enabled by Azure Arc installation. This must be unique within your tenant. - :::image type="content" source="media/add-extension-name.png" alt-text="Screenshot showing add extension name screen." lightbox="media/add-extension-name.png"::: -1. Change the drop-down for permissions from **Viewer** to **Contributor**. - :::image type="content" source="media/onelake-set-contributor.png" alt-text="Screenshot showing set contributor screen." lightbox="media/onelake-set-contributor.png"::: -1. Select **Add**. --### Create a Cloud Ingest Persistent Volume Claim (PVC) --1. Create a file named `cloudIngestPVC.yaml` with the following contents. Modify the `metadata::name` value with a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. You must also update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a nane for your PVC ### - name: <create-a-pvc-name-here> - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: <intended-consuming-pod-or-default-here> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: cloud-backed-sc - ``` --1. To apply `cloudIngestPVC.yaml`, run: -- ```bash - kubectl apply -f "cloudIngestPVC.yaml" - ``` --### Attach sub-volume to Edge Volume --You can use the following process to create a sub-volume using Extension Identity to connect to your OneLake LakeHouse. --1. Get the name of your Edge Volume using the following command: -- ```bash - kubectl get edgevolumes - ``` --1. Create a file named `edgeSubvolume.yaml` and copy/paste the following contents. 
The following variables must be updated with your information: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your sub-volume. - - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`. - - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash. - - `spec::container`: Details of your One Lake Data Lake Lakehouse (for example, `<WORKSPACE>/<DATA_LAKE>/Files`). - - `spec::storageaccountendpoint`: Your storage account endpoint is the prefix of your Power BI web link. For example, if your OneLake page is `https://contoso-motors.powerbi.com/`, then your endpoint is `https://contoso-motors.dfs.fabric.microsoft.com`. -- ```yaml - apiVersion: "arccontainerstorage.azure.net/v1" - kind: EdgeSubvolume - metadata: - name: <create-a-subvolume-name-here> - spec: - edgevolume: <your-edge-volume-name-here> - path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must to be updated. Don't use a preceding slash. - auth: - authType: MANAGED_IDENTITY - storageaccountendpoint: "https://<Your AZ Site>.dfs.fabric.microsoft.com/" # Your AZ site is the root of your Power BI OneLake interface URI, such as https://contoso-motors.powerbi.com - container: "<WORKSPACE>/<DATA_LAKE>/Files" # Details of your One Lake Data Lake Lakehouse - ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration - ``` --2. To apply `edgeSubvolume.yaml`, run: -- ```bash - kubectl apply -f "edgeSubvolume.yaml" - ``` --#### Optional: Modify the `ingestPolicy` from the default --1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. The following variables must be updated with your preferences: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your `ingestPolicy`. This name must be updated and referenced in the `spec::ingestPolicy` section of your `edgeSubvolume.yaml`. - - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to `oldest-first`). Options for order are: `oldest-first` or `newest-first`. - - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000. - - `spec::eviction::order`: How files are evicted (defaults to `unordered`). Options for eviction order are: `unordered` or `never`. - - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000. -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeIngestPolicy - metadata: - name: <create-a-policy-name-here> # This will need to be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml - spec: - ingest: - order: <your-ingest-order> - minDelaySec: <your-min-delay-sec> - eviction: - order: <your-eviction-order> - minDelaySec: <your-min-delay-sec> - ``` --1. 
To apply `myedgeingest-policy.yaml`, run: -- ```bash - kubectl apply -f "myedgeingest-policy.yaml" - ``` --## Attach your app (Kubernetes native application) --1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Replace the values for `containers::name` and `volumes::persistentVolumeClaim::claimName` with your own. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: cloudingestedgevol-deployment ### This must be unique for each deployment you choose to create. - spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the busy box. ### - - name: <create-a-container-name-here> - image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09 - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data &> || break; sleep 1; done" - volumeMounts: - ### This name must match the following volumes::name attribute ### - - name: wyvern-volume - ### This mountPath is where the PVC is attached to the pod's filesystem ### - mountPath: "/data" - volumes: - ### User-defined name that's used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. To apply `deploymentExample.yaml`, run: -- ```bash - kubectl apply -f "deploymentExample.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name, as you need it in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step: -- ```bash - kubectl exec -it POD_NAME_HERE -- sh - ``` --1. Change directories into the `/data` mount path as specified in `deploymentExample.yaml`. --1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Now, `cd` into `/YOUR_PATH_NAME_HERE`, replacing `YOUR_PATH_NAME_HERE` with your details. --1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --1. In the Azure portal, navigate to your storage account and find the container specified from step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should find `file1.txt` populated within the container. 
If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading. --## Next steps --After you complete these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana. --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
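Optionally, before you test an upload, you can confirm from the cluster that the sub-volume points at your OneLake endpoint. A minimal sketch with `kubectl`; the `edgesubvolumes`/`edgesubvolume` resource names are assumptions based on the `EdgeSubvolume` kind shown above, and `<your-subvolume-name>` is a placeholder:

```bash
# List the EdgeSubvolume resources registered on the cluster.
kubectl get edgesubvolumes

# Inspect a specific sub-volume; the spec should show your OneLake
# storageaccountendpoint and the <WORKSPACE>/<DATA_LAKE>/Files container value.
kubectl describe edgesubvolume <your-subvolume-name>
```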
azure-arc | Attach App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/attach-app.md | - Title: Attach your application using the Azure IoT Operations data processor or Kubernetes native application (preview) -description: Learn how to attach your app using the Azure IoT Operations data processor or Kubernetes native application in Azure Container Storage enabled by Azure Arc Cache Volumes. --- Previously updated : 08/26/2024-zone_pivot_groups: attach-app ---# Attach your application (preview) --This article assumes you created a Persistent Volume (PV) and a Persistent Volume Claim (PVC). For information about creating a PV, see [Create a persistent volume](create-persistent-volume.md). For information about creating a PVC, see [Create a Persistent Volume Claim](create-persistent-volume-claim.md). --## Configure the Azure IoT Operations data processor --When you use Azure IoT Operations (AIO), the Data Processor is spawned without any mounts for Cache Volumes. You can perform the following tasks: --- Add a mount for the Cache Volumes PVC you created previously.-- Reconfigure all pipelines' output stage to output to the Cache Volumes mount you just created. --## Add Cache Volumes to your aio-dp-runner-worker-0 pods --These pods are part of a **statefulSet**. You can't edit the statefulSet in place to add mount points. Instead, follow this procedure: --1. Dump the statefulSet to yaml: -- ```bash - kubectl get statefulset -o yaml -n azure-iot-operations aio-dp-runner-worker > stateful_worker.yaml - ``` --1. Edit the statefulSet to include the new mounts for Cache Volumes in volumeMounts and volumes: -- ```yaml - volumeMounts: - - mountPath: /etc/bluefin/config - name: config-volume - readOnly: true - - mountPath: /var/lib/bluefin/registry - name: nfs-volume - - mountPath: /var/lib/bluefin/local - name: runner-local - ### Add the next 2 lines ### - - mountPath: /mnt/esa - name: esa4 - - volumes: - - configMap: - defaultMode: 420 - name: file-config - name: config-volume - - name: nfs-volume - persistentVolumeClaim: - claimName: nfs-provisioner - ### Add the next 3 lines ### - - name: esa4 - persistentVolumeClaim: - claimName: esa4 - ``` --1. Delete the existing statefulSet: -- ```bash - kubectl delete statefulset -n azure-iot-operations aio-dp-runner-worker - ``` -- This deletes all `aio-dp-runner-worker-n` pods. This is an outage-level event. --1. Create a new statefulSet of aio-dp-runner-worker(s) with the Cache Volumes mounts: -- ```bash - kubectl apply -f stateful_worker.yaml -n azure-iot-operations - ``` -- When the `aio-dp-runner-worker-n` pods start, they include mounts to Cache Volumes. The PVC should convey this in the state. --1. Once you reconfigure your Data Processor workers to have access to the Cache Volumes, you must manually update the pipeline configuration to use a local path that corresponds to the mounted location of your Cache Volume on the worker PODs. -- In order to modify the pipeline, use `kubectl edit pipeline <name of your pipeline>`. In that pipeline, replace your output stage with the following YAML: -- ```yaml - output: - batch: - path: .payload - time: 60s - description: An example file output stage - displayName: Sample File output - filePath: '{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}' - format: - type: jsonStream - rootDirectory: /mnt/esa - type: output/file@v1 - ``` ---## Configure a Kubernetes native application --1. 
To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `configPod.yaml` with the following contents: -- ```yaml - kind: Deployment - apiVersion: apps/v1 - metadata: - name: example-static - labels: - app: example-static - ### Uncomment the next line and add your namespace only if you are not using the default namespace (if you are using azure-iot-operations) as specified from Line 6 of your pvc.yaml. If you are not using the default namespace, all future kubectl commands require "-n YOUR_NAMESPACE" to be added to the end of your command. - # namespace: YOUR_NAMESPACE - spec: - replicas: 1 - selector: - matchLabels: - app: example-static - template: - metadata: - labels: - app: example-static - spec: - containers: - - image: mcr.microsoft.com/cbl-mariner/base/core:2.0 - name: mariner - command: - - sleep - - infinity - volumeMounts: - ### This name must match the 'volumes.name' attribute in the next section. ### - - name: blob - ### This mountPath is where the PVC is attached to the pod's filesystem. ### - mountPath: "/mnt/blob" - volumes: - ### User-defined 'name' that's used to link the volumeMounts. This name must match 'volumeMounts.name' as specified in the previous section. ### - - name: blob - persistentVolumeClaim: - ### This claimName must refer to the PVC resource 'name' as defined in the PVC config. This name must match what your PVC resource was actually named. ### - claimName: YOUR_CLAIM_NAME_FROM_YOUR_PVC - ``` -- > [!NOTE] - > If you are using your own namespace, all future `kubectl` commands require `-n YOUR_NAMESPACE` to be appended to the command. For example, you must use `kubectl get pods -n YOUR_NAMESPACE` instead of the standard `kubectl get pods`. --1. To apply this .yaml file, run the following command: -- ```bash - kubectl apply -f "configPod.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name, as you need it for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step: -- ```bash - kubectl exec -it POD_NAME_HERE -- bash - ``` --1. Change directories into the `/mnt/blob` mount path as specified from your `configPod.yaml`. --1. As an example, to write a file, run `touch file.txt`. --1. In the Azure portal, navigate to your storage account and find the container. This is the same container you specified in your `pv.yaml` file. When you select your container, you see `file.txt` populated within the container. ---## Next steps --After you complete these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring or third-party monitoring with Prometheus and Grafana: --[Third-party monitoring](third-party-monitoring.md) |
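Before you exec into the pod, it can also help to confirm that the claim bound and that the deployment rolled out. A minimal sketch, assuming the `example-static` names from `configPod.yaml` and the default namespace (append `-n YOUR_NAMESPACE` to each command if you used your own namespace):

```bash
# The PVC STATUS column should read 'Bound'.
kubectl get pvc

# Wait for the example-static deployment to become available, then list its pod.
kubectl rollout status deployment/example-static
kubectl get pods -l app=example-static
```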
azure-arc | Azure Monitor Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/azure-monitor-kubernetes.md | - Title: Azure Monitor and Kubernetes monitoring (preview) -description: Learn how to monitor your deployment using Azure Monitor and Kubernetes monitoring in Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Azure Monitor and Kubernetes monitoring (preview) --This article describes how to monitor your deployment using Azure Monitor and Kubernetes monitoring. --## Azure Monitor --[Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) is a full-stack monitoring service that you can use to monitor Azure resources for their availability, performance, and operation. --## Azure Monitor metrics --[Azure Monitor metrics](/azure/azure-monitor/essentials/data-platform-metrics) is a feature of Azure Monitor that collects data from monitored resources into a time-series database. --These metrics can originate from a number of different sources, including native platform metrics, native custom metrics via [Azure Monitor agent Application Insights](/azure/azure-monitor/insights/insights-overview), and [Azure Managed Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview). --Prometheus metrics can be stored in an [Azure Monitor workspace](/azure/azure-monitor/essentials/azure-monitor-workspace-overview) for subsequent visualization via [Azure Managed Grafana](/azure/managed-grafana/overview). --### Metrics configuration --To configure the scraping of Prometheus metrics data into Azure Monitor, see the [Azure Monitor managed service for Prometheus scrape configuration](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration#enable-pod-annotation-based-scraping) article, which builds upon [this configmap](https://aka.ms/azureprometheus-addon-settings-configmap). Azure Container Storage enabled by Azure Arc specifies the `prometheus.io/scrape:true` and `prometheus.io/port` values, and relies on the default of `prometheus.io/path: '/metrics'`. You must specify the Azure Container Storage enabled by Azure Arc installation namespace under `pod-annotation-based-scraping` to properly scope your metrics' ingestion. --Once the Prometheus configuration has been completed, follow the [Azure Managed Grafana instructions](/azure/managed-grafana/overview) to create an [Azure Managed Grafana instance](/azure/managed-grafana/quickstart-managed-grafana-portal). --## Azure Monitor logs --[Azure Monitor logs](/azure/azure-monitor/logs/data-platform-logs) is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources, and can be used to [analyze this data in many ways](/azure/azure-monitor/logs/data-platform-logs#what-can-you-do-with-azure-monitor-logs). --### Logs configuration --If you want to access log data via Azure Monitor, you must enable [Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-overview) on your Arc-enabled Kubernetes cluster, and then analyze the collected data with [a collection of views](/azure/azure-monitor/containers/container-insights-analyze) and [workbooks](/azure/azure-monitor/containers/container-insights-reports). --Additionally, you can use [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-tutorial) to query collected log data. --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
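As a concrete starting point for the logs configuration described above, the following sketch enables the Container Insights extension on an Arc-enabled cluster with Azure CLI. Treat it as an outline: the placeholder names and the `logAnalyticsWorkspaceResourceID` setting should be checked against the linked Container Insights documentation for your environment:

```bash
# Enable Azure Monitor Container Insights on an Arc-enabled Kubernetes cluster.
# Placeholders: YOUR_CLUSTER_NAME, YOUR_RESOURCE_GROUP, and the workspace resource ID.
az k8s-extension create \
  --name azuremonitor-containers \
  --cluster-name "YOUR_CLUSTER_NAME" \
  --resource-group "YOUR_RESOURCE_GROUP" \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureMonitor.Containers \
  --configuration-settings logAnalyticsWorkspaceResourceID="YOUR_WORKSPACE_RESOURCE_ID"
```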
azure-arc | Blob Index Metadata Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/blob-index-metadata-tags.md | - Title: Blob index and metadata tags -description: Learn about blob index and metadata tags in Edge Volumes. ---- Previously updated : 08/26/2024---# Blob index and metadata tags --Cloud Ingest Edge Volumes now supports the ability to generate blob index tags and blob metadata tags directly from Azure Container Storage enabled by Azure Arc. This process involves incorporating extended attributes to the files within your Cloud Ingest Edge Volume, where Edge Volumes translates that into your selected index or metadata tag. --## Blob index tags --To generate a blob index tag, create an extended attribute using the prefix `azindex`, followed by the desired key and its corresponding value for the index tag. Edge Volumes subsequently propagates these values to the blob, appearing as the key matching the value. --> [!NOTE] -> Index tags are only supported for non-hierarchical namespace (HNS) accounts. --### Example 1: index tags --The following example creates the blob index tag `location=chicagoplant2` on `logfile1`: --```bash -$ attr -s azindex.location -V chicagoplant2 logfile1 -Attribute "azindex.location" set to a 13 byte value for logfile1: -chicagoplant2 -``` --### Example 2: index tags --The following example creates the blob index tag `datecreated=1705523841` on `logfile2`: --```bash -$ attr -s azindex.datecreated -V $(date +%s) logfile2 -Attribute " azindex.datecreated " set to a 10 byte value for logfile2: -1705523841 -``` --## Blob metadata tags --To generate a blob metadata tag, create an extended attribute using the prefix `azmeta`, followed by the desired key and its corresponding value for the metadata tag. Edge Volumes subsequently propagates these values to the blob, appearing as the key matching the value. --> [!NOTE] -> Metadata tags are supported for HNS and non-HNS accounts. --> [!NOTE] -> HNS blobs also receive `x-ms-meta-is_adls=true` to indicate that the blob was created with Datalake APIs. --### Example 1: metadata tags --The following example creates the blob metadata tag `x-ms-meta-location=chicagoplant2` on `logfile1`: --```bash -$ attr -s azmeta.location -V chicagoplant2 logfile1 -Attribute "azmeta.location" set to a 13 byte value for logfile1: -chicagoplant2 -``` --### Example 2: metadata tags --The following example creates the blob metadata tag `x-ms-meta-datecreated=1705523841` on `logfile2`: --```bash -$ attr -s azmeta.datecreated -V $(date +%s) logfile2 -Attribute " azmeta.datecreated " set to a 10 byte value for logfile2: -1705523841 -``` --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
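To check the tagging end to end, you can read the extended attributes back on the edge side and, after the file uploads, list the index tags on the blob. The account, container, and blob names below are placeholders, and the `az storage blob tag list` check assumes a non-HNS account (index tags aren't supported on HNS accounts):

```bash
# Read back the extended attributes set on the local file.
attr -g azindex.location logfile1
attr -g azmeta.location logfile1

# After upload, list the index tags that Edge Volumes propagated to the blob.
az storage blob tag list \
  --account-name "YOUR_STORAGE_ACCT_NAME" \
  --container-name "YOUR_CONTAINER_NAME" \
  --name "logfile1" \
  --auth-mode login
```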
azure-arc | Cache Volumes Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/cache-volumes-overview.md | - Title: Cache Volumes overview -description: Learn about the Cache Volumes offering from Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Overview of Cache Volumes --This article describes the Cache Volumes offering from Azure Container Storage enabled by Azure Arc. --## How does Cache Volumes work? ---Cache Volumes works by performing the following operations: --- **Write** - Your file is processed locally and saved in the cache. If the file doesn't change within 3 seconds, Cache Volumes automatically uploads it to your chosen blob destination.-- **Read** - If the file is already in the cache, the file is served from the cache memory. If it isn't available in the cache, the file is pulled from your chosen blob storage target.--## Next steps --- [Prepare Linux](prepare-linux.md)-- [How to install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md)-- [Create a persistent volume](create-persistent-volume.md)-- [Monitor your deployment](azure-monitor-kubernetes.md) |
azure-arc | Cloud Ingest Edge Volume Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/cloud-ingest-edge-volume-configuration.md | - Title: Cloud Ingest Edge Volumes configuration -description: Learn about Cloud Ingest Edge Volumes configuration for Edge Volumes. ---- Previously updated : 08/26/2024---# Cloud Ingest Edge Volumes configuration --This article describes the configuration for *Cloud Ingest Edge Volumes* (blob upload with local purge). --## What is Cloud Ingest Edge Volumes? --*Cloud Ingest Edge Volumes* facilitates limitless data ingestion from edge to blob, including ADLSgen2. Files written to this storage type are seamlessly transferred to blob storage and once confirmed uploaded, are subsequently purged locally. This removal ensures space availability for new data. Moreover, this storage option supports data integrity in disconnected environments, which enables local storage and synchronization upon reconnection to the network. --For example, you can write a file to your cloud ingest PVC, and a process runs a scan to check for new files every minute. Once identified, the file is sent for uploading to your designated blob destination. Following confirmation of a successful upload, Cloud Ingest Edge Volume waits for five minutes, and then deletes the local version of your file. --## Prerequisites --1. Create a storage account [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal). -- > [!NOTE] - > When you create your storage account, it's recommended that you create it under the same resource group and region/location as your Kubernetes cluster. --1. Create a container in the storage account that you created previously, [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container). --## Configure Extension Identity --Edge Volumes allows the use of a system-assigned extension identity for access to blob storage. This section describes how to use the system-assigned extension identity to grant access to your storage account, allowing you to upload cloud ingest volumes to these storage systems. --It's recommended that you use Extension Identity. If your final destination is blob storage or ADLSgen2, see the following instructions. If your final destination is OneLake, follow the instructions in [Configure OneLake for Extension Identity](alternate-onelake.md). --While it's not recommended, if you prefer to use key-based authentication, follow the instructions in [Key-based authentication](alternate-key-based.md). --### Obtain Extension Identity --#### [Azure portal](#tab/portal) --#### Azure portal --1. Navigate to your Arc-connected cluster. -1. Select **Extensions**. -1. Select your Azure Container Storage enabled by Azure Arc extension. -1. Note the Principal ID under **Cluster Extension Details**. 
- -#### [Azure CLI](#tab/cli) --#### Azure CLI --In Azure CLI, enter your values for the exports (`CLUSTER_NAME`, `RESOURCE_GROUP`) and run the following command: --```bash -export CLUSTER_NAME=<your-cluster-name-here> -export RESOURCE_GROUP=<your-resource-group-here> -export EXTENSION_TYPE=${1:-"microsoft.arc.containerstorage"} -az k8s-extension list --cluster-name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --cluster-type connectedClusters | jq --arg extType ${EXTENSION_TYPE} 'map(select(.extensionType == $extType)) | .[] | .identity.principalId' -r -``` ----### Configure blob storage account for Extension Identity --#### Add Extension Identity permissions to a storage account --1. Navigate to your storage account in the Azure portal. -1. Select **Access Control (IAM)**. -1. Select **Add** > **Add role assignment**. -1. Select **Storage Blob Data Owner**, then select **Next**. -1. Select **+Select Members**. -1. To add your principal ID to the **Selected Members:** list, paste the ID and select **+** next to the identity. -1. Click **Select**. -1. To review and assign permissions, select **Next**, then select **Review + Assign**. --## Create a Cloud Ingest Persistent Volume Claim (PVC) --1. Create a file named `cloudIngestPVC.yaml` with the following contents. Edit the `metadata::name` line and create a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. Also, update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a name for your PVC ### - name: <create-persistent-volume-claim-name-here> - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: <intended-consuming-pod-or-default-here> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: cloud-backed-sc - ``` --1. To apply `cloudIngestPVC.yaml`, run: -- ```bash - kubectl apply -f "cloudIngestPVC.yaml" - ``` --## Attach sub-volume to Edge Volume --To create a sub-volume using extension identity to connect to your storage account container, use the following process: --1. Get the name of your Edge Volume using the following command: -- ```bash - kubectl get edgevolumes - ``` --1. Create a file named `edgeSubvolume.yaml` and copy the following contents. These variables must be updated with your information: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your sub-volume. - - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`. - - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash. - - `spec::container`: The container name in your storage account. - - `spec::storageaccountendpoint`: Navigate to your storage account in the Azure portal. On the **Overview** page, near the top right of the screen, select **JSON View**. You can find the `storageaccountendpoint` link under **properties::primaryEndpoints::blob**.
Copy the entire link (for example, `https://mytest.blob.core.windows.net/`). -- ```yaml - apiVersion: "arccontainerstorage.azure.net/v1" - kind: EdgeSubvolume - metadata: - name: <create-a-subvolume-name-here> - spec: - edgevolume: <your-edge-volume-name-here> - path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash. - auth: - authType: MANAGED_IDENTITY - storageaccountendpoint: "https://<STORAGE ACCOUNT NAME>.blob.core.windows.net/" - container: <your-blob-storage-account-container-name> - ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration - ``` --2. To apply `edgeSubvolume.yaml`, run: -- ```bash - kubectl apply -f "edgeSubvolume.yaml" - ``` --### Optional: Modify the `ingestPolicy` from the default --1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. The following variables must be updated with your preferences: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your **ingestPolicy**. This name must be updated and referenced in the `spec::ingestPolicy` section of your `edgeSubvolume.yaml`. - - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to **oldest-first**). Options for order are: **oldest-first** or **newest-first**. - - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000. - - `spec::eviction::order`: How files are evicted (defaults to **unordered**). Options for eviction order are: **unordered** or **never**. - - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000. -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeIngestPolicy - metadata: - name: <create-a-policy-name-here> # This must be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml - spec: - ingest: - order: <your-ingest-order> - minDelaySec: <your-min-delay-sec> - eviction: - order: <your-eviction-order> - minDelaySec: <your-min-delay-sec> - ``` --1. To apply `myedgeingest-policy.yaml`, run: -- ```bash - kubectl apply -f "myedgeingest-policy.yaml" - ``` --## Attach your app (Kubernetes native application) --1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Modify the `containers::name` and `volumes::persistentVolumeClaim::claimName` values. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: cloudingestedgevol-deployment ### This must be unique for each deployment you choose to create. 
- spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the busy box. ### - - name: <create-a-container-name-here> - image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09 - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data &> || break; sleep 1; done" - volumeMounts: - ### This name must match the volumes::name attribute below ### - - name: wyvern-volume - ### This mountPath is where the PVC is attached to the pod's filesystem ### - mountPath: "/data" - volumes: - ### User-defined 'name' that's used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name (Line 5) - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. To apply `deploymentExample.yaml`, run: -- ```bash - kubectl apply -f "deploymentExample.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name to use in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the last step: -- ```bash - kubectl exec -it POD_NAME_HERE -- sh - ``` --1. Change directories into the `/data` mount path as specified from your `deploymentExample.yaml`. --1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Change directories into `/YOUR_PATH_NAME_HERE`, replacing the `YOUR_PATH_NAME_HERE` value with your details. --1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --1. In the Azure portal, navigate to your storage account and find the container specified from Step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should find `file1.txt` populated within the container. If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading. --## Next steps --After you complete these steps, you can begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring or 3rd-party monitoring with Prometheus and Grafana. --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
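Because Cloud Ingest Edge Volumes purges files locally after a successful upload, another practical check is to watch the test file disappear from the local mount once it lands in blob storage. A short sketch, assuming the `/data/exampleSubDir` path used earlier and a pod name copied from `kubectl get pods`:

```bash
# Replace POD_NAME_HERE with one of the deployment's pod names.
# The test file should vanish from the local mount a few minutes after a successful upload.
kubectl exec -it POD_NAME_HERE -- sh -c 'ls -l /data/exampleSubDir'
```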
azure-arc | Create Persistent Volume Claim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/create-persistent-volume-claim.md | - Title: Create a Persistent Volume Claim (PVC) (preview) -description: Learn how to create a Persistent Volume Claim (PVC) in Cache Volumes. --- Previously updated : 08/26/2024----# Create a Persistent Volume Claim (PVC) (preview) --A PVC is a claim against the persistent volume (PV) that a Kubernetes pod can mount. --The storage size you request in the PVC doesn't affect the ceiling of blob storage used in the cloud to support this local cache. Make a note of the name of this PVC, as you need it when you create your application pod. --## Create PVC --1. Create a file named **pvc.yaml** with the following contents: -- ```yaml - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - ### Create a name for your PVC ### - name: CREATE_A_NAME_HERE - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: INTENDED_CONSUMING_POD_OR_DEFAULT_HERE - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 5Gi - storageClassName: esa - volumeMode: Filesystem - ### This name references your PV name in your PV config ### - volumeName: INSERT_YOUR_PV_NAME - ``` -- > [!NOTE] - > If you intend to use your PVC with the Azure IoT Operations Data Processor, use `azure-iot-operations` as the `namespace` on line 7. --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "pvc.yaml" - ``` --## Next steps --After you create a Persistent Volume Claim (PVC), attach your app (Azure IoT Operations Data Processor or Kubernetes Native Application): --[Attach your app](attach-app.md) |
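After you apply `pvc.yaml`, you can confirm that the claim bound to the persistent volume you named in `volumeName`. A quick check, assuming the names you chose above as placeholders (append `-n YOUR_NAMESPACE` if you didn't use the default namespace):

```bash
# STATUS should read 'Bound' and VOLUME should show the PV name you referenced.
kubectl get pvc YOUR_PVC_NAME_HERE
kubectl describe pvc YOUR_PVC_NAME_HERE
```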
azure-arc | Create Persistent Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/create-persistent-volume.md | - Title: Create a persistent volume (preview) -description: Learn about creating persistent volumes in Cache Volumes. --- Previously updated : 08/26/2024----# Create a persistent volume (preview) --This article describes how to create a persistent volume using storage key authentication. --## Prerequisites --This section describes the prerequisites for creating a persistent volume (PV). --1. Create a storage account [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal). -- > [!NOTE] - > When you create your storage account, create it under the same resource group as your Kubernetes cluster. It is recommended that you also create it under the same region/location as your Kubernetes cluster. --1. Create a container in the storage account that you created in the previous step, [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container). --## Storage key authentication configuration --1. Create a file named **add-key.sh** with the following contents. No edits or changes are necessary: -- ```bash - #!/usr/bin/env bash - - while getopts g:n:s: flag - do - case "${flag}" in - g) RESOURCE_GROUP=${OPTARG};; - s) STORAGE_ACCOUNT=${OPTARG};; - n) NAMESPACE=${OPTARG};; - esac - done - - SECRET=$(az storage account keys list -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query [0].value --output tsv) - - kubectl create secret generic -n "${NAMESPACE}" "${STORAGE_ACCOUNT}"-secret --from-literal=azurestorageaccountkey="${SECRET}" --from-literal=azurestorageaccountname="${STORAGE_ACCOUNT}" - ``` --1. After you create the file, change the write permissions on the file and execute the shell script using the following commands. Running these commands creates a secret named `{YOUR_STORAGE_ACCOUNT}-secret`. This secret name is used for the `secretName` value when configuring your PV: -- ```bash - chmod +x add-key.sh - ./add-key.sh -g "$YOUR_RESOURCE_GROUP_NAME" -s "$YOUR_STORAGE_ACCOUNT_NAME" -n "$YOUR_KUBERNETES_NAMESPACE" - ``` --## Create Persistent Volume (PV) --You must create a Persistent Volume (PV) for Cache Volumes to create a local instance and bind to a remote BLOB storage account. --Make a note of the `metadata: name:` as you must specify it in the `spec: volumeName` of the PVC that binds to it. Use your storage account and container that you created as part of the [prerequisites](#prerequisites). --1. Create a file named **pv.yaml**: -- ```yaml - apiVersion: v1 - kind: PersistentVolume - metadata: - ### Create a name here ### - name: CREATE_A_NAME_HERE - spec: - capacity: - ### This storage capacity value is not enforced at this layer. ### - storage: 10Gi - accessModes: - - ReadWriteMany - persistentVolumeReclaimPolicy: Retain - storageClassName: esa - csi: - driver: edgecache.csi.azure.com - readOnly: false - ### Make sure this volumeid is unique in the cluster. You must specify it in the spec:volumeName of the PVC. ### - volumeHandle: YOUR_NAME_FROM_METADATA_NAME_IN_LINE_4_HERE - volumeAttributes: - protocol: edgecache - edgecache-storage-auth: AccountKey - ### Fill in the next two/three values with your information. ### - secretName: YOUR_SECRET_NAME_HERE ### From the previous step, this name is "{YOUR_STORAGE_ACCOUNT}-secret" ### - ### If you use a non-default namespace, uncomment the following line and add your namespace. 
### - ### secretNamespace: YOUR_NAMESPACE_HERE - containerName: YOUR_CONTAINER_NAME_HERE - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "pv.yaml" - ``` --## Next steps --- [Create a persistent volume claim](create-persistent-volume-claim.md)-- [Azure Container Storage enabled by Azure Arc overview](overview.md) |
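Before you move on to the PVC, it can save time to confirm that the secret and the persistent volume were created as expected. A minimal sketch, with the secret name produced by `add-key.sh` and your namespace and PV name as placeholders:

```bash
# Confirm the storage account name stored in the secret (secret values are base64-encoded).
kubectl get secret "YOUR_STORAGE_ACCOUNT-secret" -n "YOUR_NAMESPACE" \
  -o jsonpath='{.data.azurestorageaccountname}' | base64 --decode; echo

# The PV should show an 'Available' status until a PVC binds to it.
kubectl get pv YOUR_PV_NAME_HERE
```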
azure-arc | Install Cache Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/install-cache-volumes.md | - Title: Install Cache Volumes (preview) -description: Learn how to install the Cache Volumes offering from Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Install Azure Container Storage enabled by Azure Arc Cache Volumes (preview) --This article describes the steps to install the Azure Container Storage enabled by Azure Arc extension. --## Optional: increase cache disk size --Currently, the cache disk size defaults to 8 GB. If you're satisfied with the cache disk size, see the next section, [Install the Azure Container Storage enabled by Azure Arc extension](#install-the-azure-container-storage-enabled-by-azure-arc-extension). --If you use Edge Essentials, require a larger cache disk size, and already created a **config.json** file, append the key and value pair (`"cachedStorageSize": "20Gi"`) to your existing **config.json**. Don't erase the previous contents of **config.json**. --If you require a larger cache disk size, create **config.json** with the following contents: --```json -{ - "cachedStorageSize": "20Gi" -} -``` --## Prepare the `azure-arc-containerstorage` namespace --In this step, you prepare a namespace in Kubernetes for `azure-arc-containerstorage` and add it to your Open Service Mesh (OSM) configuration for link security. If you want to use a namespace other than `azure-arc-containerstorage`, substitute it in the `export extension_namespace`: --```bash -export extension_namespace=azure-arc-containerstorage -kubectl create namespace "${extension_namespace}" -kubectl label namespace "${extension_namespace}" openservicemesh.io/monitored-by=osm -kubectl annotate namespace "${extension_namespace}" openservicemesh.io/sidecar-injection=enabled -# Disable OSM permissive mode. -kubectl patch meshconfig osm-mesh-config \ - -n "arc-osm-system" \ - -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":'"false"'}}}' \ - --type=merge -``` --## Install the Azure Container Storage enabled by Azure Arc extension --Install the Azure Container Storage enabled by Azure Arc extension using the following command: --> [!NOTE] -> If you created a **config.json** file from the previous steps in [Prepare Linux](prepare-linux.md), append `--config-file "config.json"` to the following `az k8s-extension create` command. Any values set at installation time persist throughout the installation lifetime (including manual and auto-upgrades). --```bash -az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name hydraext --extension-type microsoft.arc.containerstorage -``` --## Next steps --Once you complete these steps, you can begin to [create a Persistent Volume (PV) with Storage Key Authentication](create-persistent-volume.md). |
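Once the `az k8s-extension create` command returns, you can confirm that the extension provisioned successfully and that its pods are running. A sketch, assuming the `hydraext` extension name used above and the `azure-arc-containerstorage` namespace; the resource group and cluster name are placeholders:

```bash
# Check the extension's provisioning state on the Arc-connected cluster.
az k8s-extension show \
  --resource-group "YOUR_RESOURCE_GROUP" \
  --cluster-name "YOUR_CLUSTER_NAME" \
  --cluster-type connectedClusters \
  --name hydraext \
  --query provisioningState \
  --output tsv

# Verify that the extension's pods are up in the extension namespace.
kubectl get pods -n azure-arc-containerstorage
```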
azure-arc | Install Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/install-edge-volumes.md | - Title: Install Edge Volumes (preview) -description: Learn how to install the Edge Volumes offering from Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024---# Install Azure Container Storage enabled by Azure Arc Edge Volumes (preview) --This article describes the steps to install the Azure Container Storage enabled by Azure Arc extension. --## Prepare the `azure-arc-containerstorage` namespace --In this step, you prepare a namespace in Kubernetes for `azure-arc-containerstorage` and add it to your Open Service Mesh (OSM) configuration for link security. If you want to use a namespace other than `azure-arc-containerstorage`, substitute it in the `export extension_namespace`: --```bash -export extension_namespace=azure-arc-containerstorage -kubectl create namespace "${extension_namespace}" -kubectl label namespace "${extension_namespace}" openservicemesh.io/monitored-by=osm -kubectl annotate namespace "${extension_namespace}" openservicemesh.io/sidecar-injection=enabled -# Disable OSM permissive mode. -kubectl patch meshconfig osm-mesh-config \ - -n "arc-osm-system" \ - -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":'"false"'}}}' \ - --type=merge -``` --## Install the Azure Container Storage enabled by Azure Arc extension --Install the Azure Container Storage enabled by Azure Arc extension using the following command: --```azurecli -az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name azure-arc-containerstorage --extension-type microsoft.arc.containerstorage -``` --> [!NOTE] -> By default, the `--release-namespace` parameter is set to `azure-arc-containerstorage`. If you want to override this setting, add the `--release-namespace` flag to the following command and populate it with your details. Any values set at installation time persist throughout the installation lifetime (including manual and auto-upgrades). --> [!IMPORTANT] -> If you use OneLake, you must use a unique extension name for the `--name` variable in the `az k8s-extension create` command. --## Configuration operator --### Configuration CRD --The Azure Container Storage enabled by Azure Arc extension uses a Custom Resource Definition (CRD) in Kubernetes to configure the storage service. Before you publish this CRD on your Kubernetes cluster, the Azure Container Storage enabled by Azure Arc extension is dormant and uses minimal resources. Once your CRD is applied with the configuration options, the appropriate storage classes, CSI driver, and service PODs are deployed to provide services. In this way, you can customize Azure Container Storage enabled by Azure Arc to meet your needs, and it can be reconfigured without reinstalling the Arc Kubernetes Extension. Common configurations are contained here, however this CRD offers the capability to configure non-standard configurations for Kubernetes clusters with differing storage capabilities. --#### [Single node or 2-node cluster](#tab/single) --#### Single node or 2-node cluster with Ubuntu or Edge Essentials --If you run a single node or 2-node cluster with **Ubuntu** or **Edge Essentials**, follow these instructions: --1. 
Create a file named **edgeConfig.yaml** with the following contents: -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeStorageConfiguration - metadata: - name: edge-storage-configuration - spec: - defaultDiskStorageClasses: - - "default" - - "local-path" - serviceMesh: "osm" - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "edgeConfig.yaml" - ``` --#### [Multi-node cluster](#tab/multi) --#### Multi-node cluster with Ubuntu or Edge Essentials --If you run a 3 or more node Kubernetes cluster with **Ubuntu** or **Edge Essentials**, follow these instructions. This configuration installs the ACStor storage subsystem to provide fault-tolerant, replicated storage for Kubernetes clusters with 3 or more nodes: --1. Create a file named **edgeConfig.yaml** with the following contents: -- > [!NOTE] - > To relocate storage to a different location on disk, update `diskMountPoint` with your desired path. -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeStorageConfiguration - metadata: - name: edge-storage-configuration - spec: - defaultDiskStorageClasses: - - acstor-arccontainerstorage-storage-pool - serviceMesh: "osm" - - apiVersion: arccontainerstorage.azure.net/v1 - kind: ACStorConfiguration - metadata: - name: acstor-configuration - spec: - diskMountPoint: /mnt - diskCapacity: 10Gi - createStoragePool: - enabled: true - replicas: 3 - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "edgeConfig.yaml" - ``` --#### [Arc-connected AKS/AKS Arc](#tab/arc) --#### Arc-connected AKS or AKS Arc --If you run a single-node or multi-node cluster with **Arc-connected AKS** or **AKS enabled by Arc**, follow these instructions: --1. Create a file named **edgeConfig.yaml** with the following contents: -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeStorageConfiguration - metadata: - name: edge-storage-configuration - spec: - defaultDiskStorageClasses: - - "default" - - "local-path" - serviceMesh: "osm" - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "edgeConfig.yaml" - ``` ----## Next steps --- [Configure your Local Shared Edge volumes](local-shared-edge-volumes.md)-- [Configure your Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) |
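After you apply `edgeConfig.yaml`, a quick way to confirm that the configuration took effect is to check that the configuration resource exists and that the extension published its storage classes. The `edgestorageconfigurations` resource name is an assumption based on the `EdgeStorageConfiguration` kind shown above:

```bash
# Confirm the storage configuration resource was created.
kubectl get edgestorageconfigurations

# The extension should now expose storage classes such as cloud-backed-sc.
kubectl get storageclass

# Check that the extension pods are running in the extension namespace.
kubectl get pods -n azure-arc-containerstorage
```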
azure-arc | Jumpstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/jumpstart.md | - Title: Azure Container Storage enabled by Azure Arc using Azure Arc Jumpstart (preview) -description: Learn about Azure Arc Jumpstart and Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Azure Arc Jumpstart and Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc partnered with [Azure Arc Jumpstart](https://azurearcjumpstart.com/) to produce both a new Arc Jumpstart scenario and Azure Arc Jumpstart Drops, furthering the capabilities of edge computing solutions. This partnership led to an innovative scenario in which a computer vision AI model detects defects in bolts from real-time video streams, with the identified defects securely stored using Azure Container Storage enabled by Azure Arc on an AKS Edge Essentials instance. This scenario showcases the powerful integration of Azure Arc with AI and edge storage technologies. --Additionally, Azure Container Storage enabled by Azure Arc contributed to Azure Arc Jumpstart Drops, a curated collection of resources that simplify deployment and management for developers and IT professionals. These tools, including Kubernetes files and scripts, are designed to streamline edge storage solutions and demonstrate the practical applications of Microsoft's cutting-edge technology. --## Azure Arc Jumpstart scenario using Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc collaborated with the [Azure Arc Jumpstart](https://azurearcjumpstart.com/) team to implement a scenario in which a computer vision AI model detects defects in bolts by analyzing video from a supply line video feed streamed over Real-Time Streaming Protocol (RTSP). The identified defects are then stored in a container within a storage account using Azure Container Storage enabled by Azure Arc. --In this automated setup, Azure Container Storage enabled by Azure Arc is deployed on an [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview) single-node instance, running in an Azure virtual machine. An Azure Resource Manager template is provided to create the necessary Azure resources and configure the **LogonScript.ps1** custom script extension. This extension handles AKS Edge Essentials cluster creation, Azure Arc onboarding for the Azure VM and AKS Edge Essentials cluster, and Azure Container Storage enabled by Azure Arc deployment. Once AKS Edge Essentials is deployed, Azure Container Storage enabled by Azure Arc is installed as a Kubernetes service that exposes a CSI driven storage class for use by applications in the Edge Essentials Kubernetes cluster. --For more information, see the following articles: --- [Watch the Jumpstart scenario on YouTube](https://youtu.be/Qnh2UH1g6Q4).-- [See the Jumpstart documentation](https://aka.ms/esajumpstart).-- [See the Jumpstart architecture diagrams](https://aka.ms/arcposters).--## Azure Arc Jumpstart Drops for Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc created Jumpstart Drops as part of another collaboration with [Azure Arc Jumpstart](https://azurearcjumpstart.com/). --[Jumpstart Drops](https://aka.ms/jumpstartdrops) is a curated online collection of tools, scripts, and other assets that simplify the daily tasks of developers, IT, OT, and day-2 operations professionals. 
Jumpstart Drops is designed to showcase the power of Microsoft's products and services and promote mutual support and knowledge sharing among community members. --For more information, see the article [Create an Azure Container Storage enabled by Azure Arc instance on a Single Node Ubuntu K3s system](https://arcjumpstart.com/create_an_edge_storage_accelerator_(esa)_instance_on_a_single_node_ubuntu_k3s_system). --This Jumpstart Drop provides Kubernetes files to create an Azure Container Storage enabled by Azure Arc Cache Volumes instance on an install on Ubuntu with K3s. --## Next steps --- [Azure Container Storage enabled by Azure Arc overview](overview.md)-- [AKS Edge Essentials overview](/azure/aks/hybrid/aks-edge-overview) |
azure-arc | Local Shared Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/local-shared-edge-volumes.md | - Title: Local Shared Edge Volume configuration for Edge Volumes -description: Learn about Local Shared Edge Volume configuration for Edge Volumes. ---- Previously updated : 08/26/2024---# Local Shared Edge Volumes --This article describes the configuration for Local Shared Edge Volumes (highly available, durable local storage). --## What is a Local Shared Edge Volume? --The *Local Shared Edge Volumes* feature provides highly available, failover-capable storage, local to your Kubernetes cluster. This shared storage type remains independent of cloud infrastructure, making it ideal for scratch space, temporary storage, and locally persistent data that might be unsuitable for cloud destinations. --## Create a Local Shared Edge Volumes Persistent Volume Claim (PVC) and configure a pod against the PVC --1. Create a file named `localSharedPVC.yaml` with the following contents. Modify the `metadata::name` value with a name for your Persistent Volume Claim. Then, in line 8, specify the namespace that matches your intended consuming pod. The `metadata::name` value is referenced on the last line of `deploymentExample.yaml` in the next step. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a name for your PVC ### - name: <create-a-pvc-name-here> - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: <intended-consuming-pod-or-default-here> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: unbacked-sc - ``` --1. Create a file named `deploymentExample.yaml` with the following contents. Add the values for `containers::name` and `volumes::persistentVolumeClaim::claimName`: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: localsharededgevol-deployment ### This will need to be unique for every volume you choose to create - spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the busybox. ### - - name: <create-a-container-name-here> - image: 'mcr.microsoft.com/mirror/docker/library/busybox:1.35' - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/esalocalsharedtestfile count=16 bs=1M && while true; do ls /data &> /dev/null || break; sleep 1; done" - volumeMounts: - ### This name must match the following volumes::name attribute ### - - name: wyvern-volume - ### This mountPath is where the PVC will be attached to the pod's filesystem ### - mountPath: /data - volumes: - ### User-defined name that is used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name from localSharedPVC.yaml. - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. 
To apply these YAML files, run: -- ```bash - kubectl apply -f "localSharedPVC.yaml" - kubectl apply -f "deploymentExample.yaml" - ``` --1. Run `kubectl get pods` to find the name of your pod. Copy this name, as it's needed in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step: -- ```bash - kubectl exec -it pod_name_here -- sh - ``` --1. Change directories to the `/data` mount path, as specified in `deploymentExample.yaml`. --1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --After you complete the previous steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana. --## Next steps --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
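Because `deploymentExample.yaml` runs two replicas against the same `ReadWriteMany` claim, a quick way to see the shared behavior is to write from one pod and read from the other. This is an optional sketch, assuming the `name=wyvern-testclientdeployment` label and `/data` mount path from the example; add `-n <your-namespace>` to each command if you didn't deploy to the default namespace.

```bash
# Collect the two replica pod names from the example deployment.
PODS=($(kubectl get pods -l name=wyvern-testclientdeployment \
  -o jsonpath='{.items[*].metadata.name}'))

# Write a file through the shared volume from the first pod...
kubectl exec "${PODS[0]}" -- sh -c 'echo "Hello World" > /data/file1.txt'

# ...and confirm the second pod sees the same file and contents.
kubectl exec "${PODS[1]}" -- ls /data
kubectl exec "${PODS[1]}" -- cat /data/file1.txt
```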
azure-arc | Monitor Deployment Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/monitor-deployment-edge-volumes.md | - Title: Monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment (preview) -description: Learn how to monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment. --- Previously updated : 08/26/2024----# Monitor your Edge Volumes deployment (preview) --This article describes how to monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment. --## Deployment monitoring overviews --For information about how to monitor your Edge Volumes deployment using Azure Monitor and Kubernetes Monitoring, or using third-party monitoring with Prometheus and Grafana, see the following Azure Container Storage enabled by Azure Arc articles: --- [Third-party monitoring with Prometheus and Grafana](third-party-monitoring.md)-- [Azure Monitor and Kubernetes Monitoring](azure-monitor-kubernetes.md)--## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
azure-arc | Multi Node Cluster Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/multi-node-cluster-edge-volumes.md | - Title: Prepare Linux for Edge Volumes using a multi-node cluster (preview) -description: Learn how to prepare Linux for Edge Volumes with a multi-node cluster using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux for Edge Volumes using a multi-node cluster (preview) --This article describes how to prepare Linux using a multi-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --Install and configure Open Service Mesh (OSM) using the following commands: --```azurecli -az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm -``` -----## Prepare Linux with Ubuntu --This section describes how to prepare Linux with Ubuntu if you run a multi-node cluster. --First, install and configure Open Service Mesh (OSM) using the following command: --```azurecli -az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm -``` ---## Next steps --[Install Extension](install-edge-volumes.md) |
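Before you move on to [Install Extension](install-edge-volumes.md), you can optionally check that the OSM extension finished provisioning and that its control plane is running. A short sketch, assuming the extension name `osm` used above; replace the placeholders with your own values.

```bash
RESOURCE_GROUP="<your-resource-group>"
CLUSTER_NAME="<your-cluster-name>"

# Confirm the Open Service Mesh extension provisioned successfully.
az k8s-extension show \
  --resource-group "$RESOURCE_GROUP" \
  --cluster-name "$CLUSTER_NAME" \
  --cluster-type connectedClusters \
  --name osm \
  --query "provisioningState" -o tsv

# The OSM control plane and its mesh configuration live in arc-osm-system.
kubectl get pods -n arc-osm-system
kubectl get meshconfig osm-mesh-config -n arc-osm-system
```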
azure-arc | Multi Node Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/multi-node-cluster.md | - Title: Prepare Linux for Cache Volumes using a multi-node cluster (preview) -description: Learn how to prepare Linux for Cache Volumes with a multi-node cluster using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux using a multi-node cluster (preview) --This article describes how to prepare Linux using a multi-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --Install and configure Open Service Mesh (OSM) using the following commands: --```azurecli -az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm -kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge -``` ----5. Create a file named **config.json** with the following contents: -- ```json - { - "acstor.capacityProvisioner.tempDiskMountPoint": "/var" - } - ``` -- > [!NOTE] - > The location/path of this file is referenced later, when you install the Cache Volumes Arc extension. ---## Prepare Linux with Ubuntu --This section describes how to prepare Linux with Ubuntu if you run a multi-node cluster. --1. Install and configure Open Service Mesh (OSM) using the following commands: -- ```azurecli - az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm - kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge - ``` ----## Next steps --[Install Azure Container Storage enabled by Azure Arc](install-cache-volumes.md) |
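Because the **config.json** file above is consumed later when you install the Cache Volumes extension, it must be valid JSON (note that the mount path value is a quoted string). As an optional sanity check, you can parse the file before the install step; this sketch assumes `python3` is available on the machine where you run the Azure CLI.

```bash
# Fails with a parse error if config.json isn't valid JSON.
python3 -m json.tool config.json
```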
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/overview.md | - Title: What is Azure Container Storage enabled by Azure Arc? (preview) -description: Learn about Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024-----# What is Azure Container Storage enabled by Azure Arc? (preview) --> [!IMPORTANT] -> Azure Container Storage enabled by Azure Arc is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --Azure Container Storage enabled by Azure Arc is a first-party storage system designed for Arc-connected Kubernetes clusters. Azure Container Storage enabled by Azure Arc can be deployed to write files to a "ReadWriteMany" persistent volume claim (PVC) where they are then transferred to Azure Blob Storage. Azure Container Storage enabled by Azure Arc offers a range of features to support Azure IoT Operations and other Arc services. Azure Container Storage enabled by Azure Arc with high availability and fault-tolerance will be fully supported and generally available (GA) in the second half of 2024. --## What does Azure Container Storage enabled by Azure Arc do? --Azure Container Storage enabled by Azure Arc serves as a native persistent storage system for Arc-connected Kubernetes clusters. Its primary role is to provide a reliable, fault-tolerant file system that allows data to be tiered to Azure. For Azure IoT Operations (AIO) and other Arc Services, Azure Container Storage enabled by Azure Arc is crucial in making Kubernetes clusters stateful. Key features of Azure Container Storage enabled by Azure Arc for Arc-connected K8s clusters include: --- **Tolerance to node failures:** When configured as a 3 node cluster, Azure Container Storage enabled by Azure Arc replicates data between nodes (triplication) to ensure high availability and tolerance to single node failures.-- **Data synchronization to Azure:** Azure Container Storage enabled by Azure Arc is configured with a storage target, so data written to volumes is automatically tiered to Azure Blob (block blob, ADLSgen-2 or OneLake) in the cloud.-- **Low latency operations:** Arc services, such as AIO, can expect low latency for read and write operations.-- **Simple connection:** Customers can easily connect to an Azure Container Storage enabled by Azure Arc volume using a CSI driver to start making Persistent Volume Claims against their storage.-- **Flexibility in deployment:** Azure Container Storage enabled by Azure Arc can be deployed as part of AIO or as a standalone solution.-- **Observable:** Azure Container Storage enabled by Azure Arc supports industry standard Kubernetes monitoring logs and metrics facilities, and supports Azure Monitor Agent observability.-- **Designed with integration in mind:** Azure Container Storage enabled by Azure Arc integrates seamlessly with AIO's Data Processor to ease the shuttling of data from your edge to Azure. -- **Platform neutrality:** Azure Container Storage enabled by Azure Arc is a Kubernetes storage system that can run on any Arc Kubernetes supported platform. 
Validation was done for specific platforms, including Ubuntu + CNCF K3s/K8s, Windows IoT + AKS-EE, and Azure Stack HCI + AKS-HCI.--## What are the different Azure Container Storage enabled by Azure Arc offerings? --The original Azure Container Storage enabled by Azure Arc offering is [*Cache Volumes*](cache-volumes-overview.md). The newest offering is [*Edge Volumes*](install-edge-volumes.md). --## What are Azure Container Storage enabled by Azure Arc Edge Volumes? --The first addition to the Edge Volumes offering is *Local Shared Edge Volumes*, providing highly available, failover-capable storage, local to your Kubernetes cluster. This shared storage type remains independent of cloud infrastructure, making it ideal for scratch space, temporary storage, and locally persistent data unsuitable for cloud destinations. --The second new offering is *Cloud Ingest Edge Volumes*, which facilitates limitless data ingestion from edge to Blob, including ADLSgen2 and OneLake. Files written to this storage type are seamlessly transferred to Blob storage and subsequently purged from the local cache once confirmed uploaded, ensuring space availability for new data. Moreover, this storage option supports data integrity in disconnected environments, enabling local storage and synchronization upon reconnection to the network. --Tailored for IoT applications, Edge Volumes not only eliminates local storage concerns and ingest limitations, but also optimizes local resource utilization and reduces storage requirements. --### How does Edge Volumes work? --You write to Edge Volumes as if it was your local file system. For a Local Shared Edge Volume, your data is stored and left untouched. For a Cloud Ingest Edge Volume, the volume checks for new data to mark for upload every minute, and then uploads that new data to your specified cloud destination. Five minutes after the confirmed upload to the cloud, the local copy is purged, allowing you to keep your local volume clear of old data and continue to receive new data. --Get started with [Edge Volumes](prepare-linux-edge-volumes.md). --### Supported Azure regions for Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc is only available in the following Azure regions: --- East US-- East US 2-- West US-- West US 2-- West US 3-- North Europe-- West Europe--## Next steps --- [Prepare Linux](prepare-linux-edge-volumes.md)-- [How to install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md) |
azure-arc | Prepare Linux Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/prepare-linux-edge-volumes.md | - Title: Prepare Linux for Edge Volumes (preview) -description: Learn how to prepare Linux in Azure Container Storage enabled by Azure Arc Edge Volumes using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/30/2024----# Prepare Linux for Edge Volumes (preview) --The article describes how to prepare Linux for Edge Volumes using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. --> [!NOTE] -> The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2. --## Prerequisites --> [!NOTE] -> Azure Container Storage enabled by Azure Arc is only available in the following regions: East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe. --### Uninstall previous instance of Azure Container Storage enabled by Azure Arc extension --If you previously installed a version of Azure Container Storage enabled by Azure Arc earlier than **2.1.0-preview**, you must uninstall that previous instance in order to install the newer version. If you installed the **1.2.0-preview** release or earlier, [use these instructions](release-notes.md#if-i-installed-the-120-preview-or-any-earlier-release-how-do-i-uninstall-the-extension). Versions after **2.1.0-preview** are upgradeable and do not require this uninstall. --1. In order to delete the old version of the extension, the Kubernetes resources holding references to old version of the extension must be cleaned up. Any pending resources can delay the clean-up of the extension. There are at least two ways to clean up these resources: either using `kubectl delete <resource_type> <resource_name>`, or by "unapplying" the YAML files used to create the resources. The resources that need to be deleted are typically the pods, the PVC referenced, and the subvolume CRD (if Cloud Ingest Edge Volume was configured). Alternatively, the following four YAML files can be passed to `kubectl delete -f` using the following commands in the specified order. These variables must be updated with your information: -- - `YOUR_DEPLOYMENT_FILE_NAME_HERE`: Add your deployment file names. In the example in this article, the file name used was `deploymentExample.yaml`. If you created multiple deployments, each one must be deleted on a separate line. - - `YOUR_PVC_FILE_NAME_HERE`: Add your Persistent Volume Claim file names. In the example in this article, if you used the Cloud Ingest Edge Volume, the file name used was `cloudIngestPVC.yaml`. If you used the Local Shared Edge Volume, the file name used was `localSharedPVC.yaml`. If you created multiple PVCs, each one must be deleted on a separate line. - - `YOUR_EDGE_SUBVOLUME_FILE_NAME_HERE`: Add your Edge subvolume file names. In the example in this article, the file name used was `edgeSubvolume.yaml`. If you created multiple subvolumes, each one must be deleted on a separate line. - - `YOUR_EDGE_STORAGE_CONFIGURATION_FILE_NAME_HERE`: Add your Edge storage configuration file name here. In the example in this article, the file name used was `edgeConfig.yaml`. -- ```bash - kubectl delete -f "<YOUR_DEPLOYMENT_FILE_NAME_HERE.yaml>" - kubectl delete -f "<YOUR_PVC_FILE_NAME_HERE.yaml>" - kubectl delete -f "<YOUR_EDGE_SUBVOLUME_FILE_NAME_HERE.yaml>" - kubectl delete -f "<YOUR_EDGE_STORAGE_CONFIGURATION_FILE_NAME_HERE.yaml>" - ``` --1. 
After you delete the files for your deployments, PVCs, Edge subvolumes, and Edge storage configuration from the previous step, you can uninstall the extension using the following command. Replace `YOUR_RESOURCE_GROUP_NAME_HERE`, `YOUR_CLUSTER_NAME_HERE`, and `YOUR_EXTENSION_NAME_HERE` with your respective information: -- ```azurecli - az k8s-extension delete --resource-group YOUR_RESOURCE_GROUP_NAME_HERE --cluster-name YOUR_CLUSTER_NAME_HERE --cluster-type connectedClusters --name YOUR_EXTENSION_NAME_HERE - ``` ---## Next steps --- [Prepare Linux using a single-node cluster](single-node-cluster-edge-volumes.md)-- [Prepare Linux using a multi-node cluster](multi-node-cluster-edge-volumes.md) |
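Before reinstalling, it can help to confirm which extension instance and version are currently on the cluster, and that nothing is left behind after the delete. A minimal sketch, assuming the `connectedClusters` cluster type used throughout these articles; extension and namespace names can differ if you overrode the defaults.

```bash
RESOURCE_GROUP="<your-resource-group>"
CLUSTER_NAME="<your-cluster-name>"

# List Arc extensions and note the Azure Container Storage (or older
# Edge Storage Accelerator) entry, its name, and its version.
az k8s-extension list \
  --resource-group "$RESOURCE_GROUP" \
  --cluster-name "$CLUSTER_NAME" \
  --cluster-type connectedClusters \
  --query "[].{name:name, type:extensionType, version:version}" -o table

# After 'az k8s-extension delete' completes, confirm no pods remain.
kubectl get pods -n azure-arc-containerstorage
```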
azure-arc | Prepare Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/prepare-linux.md | - Title: Prepare Linux (preview) -description: Learn how to prepare Linux in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024----# Prepare Linux (preview) --The article describes how to prepare Linux using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. --> [!NOTE] -> The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2. --## Prerequisites --> [!NOTE] -> Azure Container Storage enabled by Azure Arc is only available in the following regions: East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe. --### Arc-connected Kubernetes cluster --These instructions assume that you already have an Arc-connected Kubernetes cluster. To connect an existing Kubernetes cluster to Azure Arc, [see these instructions](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli). --If you want to use Azure Container Storage enabled by Azure Arc with Azure IoT Operations, follow the [instructions to create a cluster for Azure IoT Operations](/azure/iot-operations/get-started/quickstart-deploy?tabs=linux). --Use Ubuntu 22.04 on Standard D8s v3 machines with three SSDs attached for more storage. --## Single-node and multi-node clusters --A single-node cluster is commonly used for development or testing purposes due to its simplicity in setup and minimal resource requirements. These clusters offer a lightweight and straightforward environment for developers to experiment with Kubernetes without the complexity of a multi-node setup. Additionally, in situations where resources such as CPU, memory, and storage are limited, a single-node cluster is more practical. Its ease of setup and minimal resource requirements make it a suitable choice in resource-constrained environments. --However, single-node clusters come with limitations, mostly in the form of missing features, including their lack of high availability, fault tolerance, scalability, and performance. --A multi-node Kubernetes configuration is typically used for production, staging, or large-scale scenarios because of features such as high availability, fault tolerance, scalability, and performance. A multi-node cluster also introduces challenges and trade-offs, including complexity, overhead, cost, and efficiency considerations. For example, setting up and maintaining a multi-node cluster requires extra knowledge, skills, tools, and resources (network, storage, compute). The cluster must handle coordination and communication among nodes, leading to potential latency and errors. Additionally, running a multi-node cluster is more resource-intensive and is costlier than a single-node cluster. Optimization of resource usage among nodes is crucial for maintaining cluster and application efficiency and performance. --In summary, a [single-node Kubernetes cluster](single-node-cluster.md) might be suitable for development, testing, and resource-constrained environments. A [multi-node cluster](multi-node-cluster.md) is more appropriate for production deployments, high availability, scalability, and scenarios in which distributed applications are a requirement. This choice ultimately depends on your specific needs and goals for your deployment. 
--## Minimum hardware requirements --### Single-node or 2-node cluster --- Standard_D8ds_v5 VM recommended-- Equivalent specifications per node:- - 4 CPUs - - 16 GB RAM --### Multi-node cluster --- Standard_D8as_v5 VM recommended-- Equivalent specifications per node:- - 8 CPUs - - 32 GB RAM --32 GB RAM serves as a buffer; however, 16 GB RAM should suffice. Edge Essentials configurations require 8 CPUs with 10 GB RAM per node, making 16 GB RAM the minimum requirement. --## Minimum storage requirements --### Edge Volumes requirements --When you use the fault tolerant storage option, Edge Volumes allocates disk space out of a fault tolerant storage pool, which is made up of the storage exported by each node in the cluster. --The storage pool is configured to use 3-way replication to ensure fault tolerance. When an Edge Volume is provisioned, it allocates disk space from the storage pool, and allocates storage on 3 of the replicas. --For example, in a 3-node cluster with 20 GB of disk space per node, the cluster has a storage pool of 60 GB. However, due to replication, it has an effective storage size of 20 GB. --When an Edge Volume is provisioned with a requested size of 10 GB, it allocates a reserved system volume (statically sized to 1 GB) and a data volume (sized to the requested volume size, for example 10 GB). The reserved system volume consumes 3 GB (3 x 1 GB) of disk space in the storage pool, and the data volume will consume 30 GB (3 x 10 GB) of disk space in the storage pool, for a total of 33 GB. --### Cache Volumes requirements --Cache Volumes requires at least 4 GB per node of storage. For example, if you have a 3-node cluster, you need at least 12 GB of storage. --## Next steps --To continue preparing Linux, see the following instructions for single-node or multi-node clusters: --- [Single-node clusters](single-node-cluster.md)-- [Multi-node clusters](multi-node-cluster.md) |
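If it helps for capacity planning, the replication math above can be restated as a small calculation: each Edge Volume consumes the requested size plus the 1 GB reserved system volume, multiplied by the three replicas. This is an illustrative sketch of the example figures only, not an official sizing tool.

```bash
# Rough storage pool consumption for one fault-tolerant Edge Volume.
REQUESTED_GB=10   # requested data volume size
SYSTEM_GB=1       # statically sized reserved system volume
REPLICAS=3        # 3-way replication in the storage pool

POOL_GB=$(( (REQUESTED_GB + SYSTEM_GB) * REPLICAS ))
echo "A ${REQUESTED_GB} GB volume consumes about ${POOL_GB} GB of raw storage pool capacity."
# With the example values: (10 + 1) * 3 = 33 GB, matching the text above.
```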
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/release-notes.md | - Title: Azure Container Storage enabled by Azure Arc FAQ and release notes (preview) -description: Learn about new features and known issues in Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/30/2024----# Azure Container Storage enabled by Azure Arc FAQ and release notes (preview) --This article provides information about new features and known issues in Azure Container Storage enabled by Azure Arc, and answers some frequently asked questions. --## Release notes --### Version 2.1.0-preview --- CRD operator-- Cloud Ingest Tunable Timers-- Uninstall during version updates-- Added regions: West US, West US 2, North Europe--### Version 1.2.0-preview --- Extension identity and OneLake support: Azure Container Storage enabled by Azure Arc now allows use of a system-assigned extension identity for access to blob storage or OneLake lake houses.-- Security fixes: security maintenance (package/module version updates).--### Version 1.1.0-preview --- Kernel versions: the minimum supported Linux kernel version is 5.1. Currently there are known issues with 6.4 and 6.2.--## FAQ --### Uninstall previous instance of the Azure Container Storage enabled by Azure Arc extension --#### If I installed the 1.2.0-preview or any earlier release, how do I uninstall the extension? --If you previously installed a version of Azure Container Storage enabled by Azure Arc earlier than **2.1.0-preview**, you must uninstall that previous instance in order to install the newer version. --> [!NOTE] -> The extension name for Azure Container Storage enabled by Azure Arc was previously **Edge Storage Accelerator**. If you still have this instance installed, the extension is referred to as **microsoft.edgestorageaccelerator** in the Azure portal. --1. Before you can delete the extension, you must delete your configPods, Persistent Volume Claims, and Persistent Volumes using the following commands in this order. Replace `YOUR_POD_FILE_NAME_HERE`, `YOUR_PVC_FILE_NAME_HERE`, and `YOUR_PV_FILE_NAME_HERE` with your respective file names. If you have more than one of each type, add one line per instance: -- ```bash - kubectl delete -f "YOUR_POD_FILE_NAME_HERE.yaml" - kubectl delete -f "YOUR_PVC_FILE_NAME_HERE.yaml" - kubectl delete -f "YOUR_PV_FILE_NAME_HERE.yaml" - ``` --1. After you delete your configPods, PVCs, and PVs in the previous step, you can uninstall the extension using the following command. Replace `YOUR_RESOURCE_GROUP_NAME_HERE`, `YOUR_CLUSTER_NAME_HERE`, and `YOUR_EXTENSION_NAME_HERE` with your respective information: -- ```azurecli - az k8s-extension delete --resource-group YOUR_RESOURCE_GROUP_NAME_HERE --cluster-name YOUR_CLUSTER_NAME_HERE --cluster-type connectedClusters --name YOUR_EXTENSION_NAME_HERE - ``` --1. If you installed the extension before the **1.1.0-preview** release (released on 4/19/24) and have a pre-existing `config.json` file, the `config.json` schema changed. Remove the old `config.json` file using `rm config.json`. --### Encryption --#### What types of encryption are used by Azure Container Storage enabled by Azure Arc? --There are three types of encryption that might be interesting for an Azure Container Storage enabled by Azure Arc customer: --- **Cluster to Blob Encryption**: Data in transit from the cluster to blob is encrypted using standard HTTPS protocols. 
Data is decrypted once it reaches the cloud.-- **Encryption Between Nodes**: This encryption is covered by Open Service Mesh (OSM), which is installed as part of setting up your Azure Container Storage enabled by Azure Arc cluster. It uses standard TLS encryption protocols.-- **On Disk Encryption**: Encryption at rest. Not currently supported by Azure Container Storage enabled by Azure Arc.--#### Is data encrypted in transit? --Yes, data in transit is encrypted using standard HTTPS protocols. Data is decrypted once it reaches the cloud. --#### Is data encrypted at rest? --Data persisted by the Azure Container Storage enabled by Azure Arc extension is encrypted at rest if the underlying platform provides encrypted disks. --### ACStor Triplication --#### What is ACStor triplication? --ACStor triplication stores data across three different nodes, each with its own hard drive. This intended behavior ensures data redundancy and reliability. --#### Can ACStor triplication occur on a single physical device? --No, ACStor triplication isn't designed to operate on a single physical device with three attached hard drives. --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
azure-arc | Single Node Cluster Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/single-node-cluster-edge-volumes.md | - Title: Prepare Linux for Edge Volumes using a single-node or 2-node cluster (preview) -description: Learn how to prepare Linux for Edge Volumes with a single-node or 2-node cluster in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux for Edge Volumes using a single-node or two-node cluster (preview) --This article describes how to prepare Linux using a single-node or two-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux-edge-volumes.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --This section describes how to prepare Linux with AKS enabled by Azure Arc if you run a single-node or two-node cluster. --1. Install Open Service Mesh (OSM) using the following command: -- ```azurecli - az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm - ``` -----## Next steps --[Install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md) |
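The namespace preparation step in the install article disables OSM's permissive traffic policy mode. If you want to check the current value on your cluster before installing the Edge Volumes extension, a small sketch (assuming the `osm-mesh-config` name and `arc-osm-system` namespace used elsewhere in these articles):

```bash
# Prints "true" or "false" for OSM's permissive traffic policy mode.
kubectl get meshconfig osm-mesh-config -n arc-osm-system \
  -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}{"\n"}'
```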
azure-arc | Single Node Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/single-node-cluster.md | - Title: Prepare Linux for Cache Volumes using a single-node or 2-node cluster (preview) -description: Learn how to prepare Linux for Cache Volumes with a single-node or 2-node cluster in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux for Cache Volumes using a single-node or 2-node cluster (preview) --This article describes how to prepare Linux using a single-node or 2-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --This section describes how to prepare Linux with AKS enabled by Azure Arc if you run a single-node or 2-node cluster. --1. Install Open Service Mesh (OSM) using the following command: -- ```azurecli - az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm - ``` --1. Disable **ACStor** by creating a file named **config.json** with the following contents: -- ```json - { - "feature.diskStorageClass": "default", - "acstorController.enabled": false - } - ``` ----5. Disable **ACStor** by creating a file named **config.json** with the following contents: -- ```json - { - "acstorController.enabled": false, - "feature.diskStorageClass": "local-path" - } - ``` ----3. Disable **ACStor** by creating a file named **config.json** with the following contents: -- ```json - { - "acstorController.enabled": false, - "feature.diskStorageClass": "local-path" - } - ``` ---## Next steps --[Install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md) |
azure-arc | Support Feedback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/support-feedback.md | - Title: Support and feedback for Azure Container Storage enabled by Azure Arc (preview) -description: Learn how to get support and provide feedback on Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Support and feedback for Azure Container Storage enabled by Azure Arc (preview) --If you experience an issue or need support during the preview, see the following video and steps to request support for Azure Container Storage enabled by Azure Arc in the Azure portal: --> [!VIDEO f477de99-2036-41a3-979a-586a39b1854f] --1. Navigate to the desired Arc-connected Kubernetes cluster with the Azure Container Storage enabled by Azure Arc extension that you are experiencing issues with. -1. To expand the menu, select **Settings** on the left blade. -1. Select **Extensions**. -1. Select the name for **Type**: `microsoft.arc.containerstorage`. In this example, the name is `hydraext`. -1. Select **Help** on the left blade to expand the menu. -1. Select **Support + Troubleshooting**. -1. In the search text box, describe the issue you are facing in a few words. -1. Select "Go" to the right of the search text box. -1. For **Which service you are having an issue with**, make sure that **Edge Storage Accelerator - Preview** is selected. If not, you might need to search for **Edge Storage Accelerator - Preview** in the drop-down. -1. Select **Next** after you select **Edge Storage Accelerator - Preview**. -1. **Subscription** should already be populated with the subscription that you used to set up your Kubernetes cluster. If not, select the subscription to which your Arc-connected Kubernetes cluster is linked. -1. For **Resource**, select **General question** from the drop-down menu. -1. Select **Next**. -1. For **Problem type**, from the drop-down menu, select the problem type that best describes your issue. -1. For **Problem subtype**, from the drop-down menu, select the subtype that best describes your issue. The subtype options vary based on your selected **Problem type**. -1. Select **Next**. -1. Based on the issue, there might be documentation available to help you triage your issue. If these articles are not relevant or don't solve the issue, select **Create a support request** at the top. -1. After you select **Create a support request at the top**, the fields in the **Problem description** section should already be populated with the details that you provided earlier. If you want to change anything, you can do so in this window. -1. Select **Next** once you verify that the information in the **Problem description** section is accurate. -1. In the **Recommended solution** section, recommended solutions appear based on the information you entered. If the recommended solutions are not helpful, select **Next** to continue filing a support request. -1. In the **Additional details** section, populate the **Problem details** with your information. -1. Once all required fields are complete, select **Next**. -1. Review your information from the previous sections, then select **Create**. --## Release notes --See the [release notes for Azure Container Storage enabled by Azure Arc](release-notes.md) for information about new features and known issues. --## Next steps --[What is Azure Container Storage enabled by Azure Arc?](overview.md) |
azure-arc | Third Party Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/third-party-monitoring.md | - Title: Third-party monitoring with Prometheus and Grafana (preview) -description: Learn how to monitor your Azure Container Storage enabled by Azure Arc deployment using third-party monitoring with Prometheus and Grafana. --- Previously updated : 08/26/2024----# Third-party monitoring with Prometheus and Grafana (preview) --This article describes how to monitor your deployment using third-party monitoring with Prometheus and Grafana. --## Metrics --### Configure an existing Prometheus instance for use with Azure Container Storage enabled by Azure Arc --This guidance assumes that you previously worked with and/or configured Prometheus for Kubernetes. If you haven't previously done so, [see this overview](/azure/azure-monitor/containers/kubernetes-monitoring-enable#enable-prometheus-and-grafana) for more information about how to enable Prometheus and Grafana. --[See the metrics configuration section](azure-monitor-kubernetes.md#metrics-configuration) for information about the required Prometheus scrape configuration. Once you configure Prometheus metrics, you can deploy [Grafana](/azure/azure-monitor/visualize/grafana-plugin) to monitor and visualize your Azure services and applications. --## Logs --The Azure Container Storage enabled by Azure Arc logs are accessible through the Azure Kubernetes Service [kubelet logs](/azure/aks/kubelet-logs). You can also collect this log data using the [syslog collection feature in Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-syslog). --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
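Independent of Prometheus, Grafana, or Azure Monitor, you can also inspect the extension's container logs directly with kubectl while troubleshooting. This is an optional sketch, assuming the default `azure-arc-containerstorage` extension namespace; it isn't a replacement for the collection options described above.

```bash
# Tail recent log lines from every pod in the extension namespace.
for pod in $(kubectl get pods -n azure-arc-containerstorage -o name); do
  echo "==== ${pod} ===="
  kubectl logs -n azure-arc-containerstorage "${pod}" --all-containers --tail=50
done
```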
azure-arc | About Arcdata Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/about-arcdata-extension.md | - Title: Reference for `az arcdata` extension- -description: Reference article for `az arcdata` commands. --- Previously updated : 06/17/2022------# Azure (`az`) CLI `arcdata` extension --The `arcdata` extension for Azure CLI provides tools for managing Azure Arc data services. --## Install extension --To install the extension, see [Install `arcdata` Azure CLI extension](install-arcdata-extension.md). --## Reference documentation --To access the latest reference documentation: --- [`az arcdata`](/cli/azure/arcdata)-- [`az sql mi-arc`](/cli/azure/sql/mi-arc)-- [`az sql midb-arc`](/cli/azure/sql/midb-arc)-- [`sql instance-failover-group-arc`](/cli/azure/sql/instance-failover-group-arc)-- [`az postgres server-arc`](/cli/azure/postgres/server-arc)--## Related content --[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) |
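For reference, the `arcdata` extension is delivered through the standard Azure CLI extension mechanism; the linked install article covers prerequisites and offline options. A short sketch:

```bash
# Install the arcdata extension (or upgrade it if it's already present).
az extension add --upgrade --name arcdata

# Confirm the installed version.
az extension show --name arcdata --query "version" -o tsv
```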
azure-arc | Active Directory Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md | - Title: Introduction to Azure Arc-enabled data services with Active Directory authentication -description: Introduction to Azure Arc-enabled data services with Active Directory authentication ------ Previously updated : 10/11/2022----# SQL Managed Instance enabled by Azure Arc with Active Directory authentication --Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). SQL Managed Instance enabled by Azure Arc uses an existing on-premises Active Directory (AD) domain for authentication. --This article describes how to enable SQL Managed Instance enabled by Azure Arc with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes: -- Customer-managed keytab (CMK) -- Service-managed keytab (SMK) --The notion of Active Directory(AD) integration mode describes the process for keytab management including: -- Creating AD account used by SQL Managed Instance-- Registering Service Principal Names (SPNs) under the above AD account.-- Generating keytab file --## Background -To enable Active Directory authentication for SQL Server on Linux and Linux containers, use a [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file). The keytab file is a cryptographic file containing service principal names (SPNs), account names and hostnames. SQL Server uses the keytab file for authenticating itself to the Active Directory (AD) domain and authenticating its clients using Active Directory (AD). Do the following steps to enable Active Directory authentication for Arc-enabled SQL Managed Instance: --- [Deploy data controller](create-data-controller-indirect-cli.md) -- [Deploy a customer-managed keytab AD connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a service-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md)-- [Deploy SQL managed instances](deploy-active-directory-sql-managed-instance.md)--The following diagram shows how to enable Active Directory authentication for SQL Managed Instance enabled by Azure Arc: --![Actice Directory Deployment User journey](media/active-directory-deployment/active-directory-user-journey.png) ---## What is an Active Directory (AD) connector? --In order to enable Active Directory authentication for SQL Managed Instance, the instance must be deployed in an environment that allows it to communicate with the Active Directory domain. --To facilitate this, Azure Arc-enabled data services introduces a new Kubernetes-native [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`. It provides instances running on the same data controller the ability to perform Active Directory authentication. --## Compare AD integration modes --What is the difference between the two Active Directory integration modes? --To enable Active Directory authentication for SQL Managed Instance enabled by Azure Arc, you need an Active Directory connector where you specify the Active Directory integration deployment mode. The two Active Directory integration modes are: --- Customer-managed keytab-- Service-managed keytab --The following section compares these modes. 
--| |Customer-managed keytabΓÇï|System-managed keytab| -|||--| -|**Use cases**|Small and medium size businesses who are familiar with managing Active Directory objects and want flexibility in their automation process |All sizes of businesses - seeking to highly automated Active Directory management experience| -|**User provides**|An Active Directory account and SPNs under that account, and a [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file) for Active Directory authentication |An [Organizational Unit (OU)](../../active-directory-domain-services/create-ou.md) and a domain service account has [sufficient permissions](deploy-system-managed-keytab-active-directory-connector.md?#prerequisites) on that OU in Active Directory.| -|**Characteristics**|User managed. Users bring the Active Directory account, which impersonates the identity of the managed instance and the keytab file. |System managed. The system creates a domain service account for each managed instance and sets SPNs automatically on that account. It also, creates and delivers a keytab file to the managed instance. | -|**Deployment process**| 1. Deploy data controller <br/> 2. Create keytab file <br/>3. Set up keytab information to Kubernetes secret<br/> 4. Deploy AD connector, deploy SQL managed instance<br/><br/>For more information, see [Deploy a customer-managed keytab Active Directory connector](deploy-customer-managed-keytab-active-directory-connector.md) | 1. Deploy data controller, deploy AD connector<br/>2. Deploy SQL managed instance<br/><br/>For more information, see [Deploy a system-managed keytab Active Directory connector](deploy-system-managed-keytab-active-directory-connector.md) | -|**Manageability**|You can create the keytab file by following the instructions from [Active Directory utility (`adutil`)](/sql/linux/sql-server-linux-ad-auth-adutil-introduction). Manual keytab rotation. |Managed keytab rotation.| -|**Limitations**|We do not recommend sharing keytab files among services. Each service should have a specific keytab file. As the number of keytab files increases the level of effort and complexity increases. |Managed keytab generation and rotation. The service account will require sufficient permissions in Active Directory to manage the credentials. <br/> <br/> Distributed Availability Group is not supported.| --For either mode, you need a specific Active Directory account, keytab, and Kubernetes secret for each SQL managed instance. --## Enable Active Directory authentication --When you deploy an instance with the intention to enable Active Directory authentication, the deployment needs to reference an Active Directory connector instance to use. Referencing the Active Directory connector in managed instance specification automatically sets up the needed environment in instance container to authenticate with Active Directory. --## Related content --* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) -* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy SQL Managed Instance enabled by Azure Arc in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md) -* [Connect to SQL Managed Instance enabled by Azure Arc using Active Directory authentication](connect-active-directory-sql-managed-instance.md) |
azure-arc | Active Directory Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-prerequisites.md | - Title: Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites -description: Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites ------ Previously updated : 10/11/2022----# SQL Server enabled by Azure Arc in Active Directory authentication with system-managed keytab - prerequisites --This document explains how to prepare to deploy Azure Arc-enabled data services with Active Directory (AD) authentication. Specifically the article describes Active Directory objects you need to configure before the deployment of Kubernetes resources. --[The introduction](active-directory-introduction.md#compare-ad-integration-modes) describes two different integration modes: -- *System-managed keytab* mode allows the system to create and manage the AD accounts for each SQL Managed Instance.-- *Customer-managed keytab* mode allows you to create and manage the AD accounts for each SQL Managed Instance.--The requirements and recommendations are different for the two integration modes. ---|Active Directory Object|Customer-managed keytab |System-managed keytab | -|||| -|Organizational unit (OU) |Recommended|Required | -|Active Directory domain service account (DSA) for Active Directory Connector |Not required|Required | -|Active directory account for SQL Managed Instance |Created for each managed instance|System creates AD account for each managed instance| --### DSA account - system-managed keytab mode --To be able to create all the required objects in Active Directory automatically, AD Connector needs a domain service account (DSA). The DSA is an Active Directory account that has specific permissions to create, manage and delete users accounts inside the provided organizational unit (OU). This article explains how to configure the permission of this Active Directory account. The examples call the DSA account `arcdsa` as an example in this article. --### Auto generated Active Directory objects --An Arc-enabled SQL Managed Instance deployment automatically generates accounts in system-managed keytab mode. Each of the accounts represents a SQL Managed Instance and will be managed by the system throughout the lifetime of SQL. These accounts own the Service Principal Names (SPNs) required by each SQL. --The steps below assume you already have an Active Directory domain controller. If you don't have a domain controller, the following [guide](https://social.technet.microsoft.com/wiki/contents/articles/37528.create-and-configure-active-directory-domain-controller-in-azure-windows-server.aspx) includes steps that can be helpful. --## Create Active Directory objects --Do the following things before you deploy an Arc-enabled SQL Managed Instance with AD authentication: --1. Create an organizational unit (OU) for all Arc-enabled SQL Managed Instance related AD objects. Alternatively, you can choose an existing OU upon deployment. -1. Create an AD account for the AD Connector, or use an existing account, and provide this account the right permissions on the OU created in the previous step. --### Create an OU --System-managed keytab mode requires a designated OU. For customer-managed keytab mode an OU is recommended. --On the domain controller, open **Active Directory Users and Computers**. 
On the left panel, right-click the directory under which you want to create your OU and select **New**\> **Organizational Unit**, then follow the prompts from the wizard to create the OU. Alternatively, you can create an OU with PowerShell: --```powershell -New-ADOrganizationalUnit -Name "<name>" -Path "<Distinguished name of the directory you wish to create the OU in>" -``` --The examples in this article use `arcou` for the OU name. --![Screenshot of Active Directory Users and computers menu.](media/active-directory-deployment/start-new-organizational-unit.png) --![Screenshot of new object - organizational unit dialog.](media/active-directory-deployment/new-organizational-unit.png) --### Create the domain service account (DSA) --For system-managed keytab mode, you need an AD domain service account. --Create the Active Directory user that you will use as the domain service account. This account requires specific permissions. Make sure that you have an existing Active Directory account or create a new account, which Arc-enabled SQL Managed Instance can use to set up the necessary objects. --To create a new user in AD, you can right-click the domain or the OU and select **New** > **User**: --![Screenshot of user properties.](media/active-directory-deployment/start-ad-new-user.png) --This account will be referred to as *arcdsa* in this article. --### Set permissions for the DSA --For system-managed keytab mode, you need to set the permissions for the DSA. --Whether you have created a new account for the DSA or are using an existing Active Directory user account, there are certain permissions the account needs to have. The DSA needs to be able to create users, groups, and computer accounts in the OU. In the following steps, the Arc-enabled SQL Managed Instance domain service account name is `arcdsa`. --> [!IMPORTANT] -> You can choose any name for the DSA, but we do not recommend altering the account name once AD Connector is deployed. --1. On the domain controller, open **Active Directory Users and Computers**, click on **View**, select **Advanced Features** --1. In the left panel, navigate to your domain, then the OU which `arcou` will use --1. Right-click the OU, and select **Properties**. --> [!NOTE] -> Make sure that you have selected **Advanced Features** by right-clicking on the OU, and selecting **View** --1. Go to the Security tab. Select **Advanced Features** right-click on the OU, and select **View**. -- ![AD object properties](./media/active-directory-deployment/start-ad-new-user.png) --1. Select **Add...** and add the **arcdsa** user. -- ![Screenshot of add user dialog.](./media/active-directory-deployment/add-user.png) --1. Select the **arcdsa** user and clear all permissions, then select **Advanced**. --1. Select **Add** -- - Select **Select a Principal**, insert **arcdsa**, and select **Ok**. -- - Set **Type** to **Allow**. -- - Set **Applies To** to **This Object and all descendant objects**. -- ![Screenshot of permission entries.](./media/active-directory-deployment/set-permissions.png) -- - Scroll down to the bottom, and select **Clear all**. -- - Scroll back to the top, and select: - - **Read all properties** - - **Write all properties** - - **Create User objects** - - **Delete User objects** -- - Select **OK**. --1. Select **Add**. -- - Select **Select a Principal**, insert **arcdsa**, and select **Ok**. -- - Set **Type** to **Allow**. -- - Set **Applies To** to **Descendant User objects**. -- - Scroll down to the bottom, and select **Clear all**. 
-- - Scroll back to the top, and select **Reset password**. -- - Select **OK**. --- Select **OK** twice more to close open dialog boxes.--## Related content --* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) -* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy a SQL Managed Instance enabled by Azure Arc in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md) -* [Connect to SQL Managed Instance enabled by Azure Arc using Active Directory authentication](connect-active-directory-sql-managed-instance.md) |
azure-arc | Adding Exporters And Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/adding-exporters-and-pipelines.md | - Title: Adding Exporters and Pipelines | Azure Arc-enabled Data Services -description: Learn how to add exporters and pipelines to the telemetry router ---- Previously updated : 10/25/2022----# Add exporters and pipelines to your telemetry router deployment --> [!NOTE] -> -> - The telemetry router is in Public Preview and should be deployed for **testing purposes only**. -> - While the telemetry router is in Public Preview, be advised that future preview releases could include changes to CRD specs, CLI commands, and/or telemetry router messages. -> - The current preview does not support in-place upgrades of a data controller deployed with the Arc telemetry router enabled. In order to install or upgrade a data controller in a future release, you will need to uninstall the data controller and then re-install. --## What are Exporters and Pipelines? --Exporters and Pipelines are two of the main components of the telemetry router. Exporters describe how to send data to a destination system such as Kafka. When creating an exporter, you associate it with a pipeline in order to route that type of telemetry data to that destination. You can have multiple exporters for each pipeline. --This article provides examples of how you can set up your own exporters and pipelines to route monitoring telemetry data to your own supported exporter. --### Supported Exporters --| Exporter | Supported Pipeline Types | -|--|--| -| Kafka | logs, metrics | -| Elasticsearch | logs | --## Configurations --All configurations are specified through the telemetry router's custom resource specification and support the configuration of exporters and pipelines. --### Exporters --For the Public Preview, exporters are partially configurable and support the following solutions: --| Exporter | Supported Telemetry Types | -|--|--| -| Kafka | logs, metrics | -| Elasticsearch | logs | --The following properties are currently configurable during the Public Preview: --#### General Exporter Settings --| Setting | Description | -|--|--| -| certificateName | The client certificate in order to export to the monitoring solution | -| caCertificateName | The cluster's Certificate Authority or customer-provided certificate for the Exporter | --#### Kafka Exporter Settings --| Setting | Description | -|--|--| -| topic | Name of the topic to export | -| brokers | List of brokers to connect to | -| encoding | Encoding for the telemetry: otlp_json or otlp_proto | --#### Elasticsearch Exporter Settings --| Setting | Description | -|--|--| -| index | This setting can be the name of an index or datastream name to publish events | -| endpoint | Endpoint of the Elasticsearch to export to | --### Pipelines --The Telemetry Router supports logs and metrics pipelines. These pipelines are exposed in the custom resource specification of the Arc telemetry router and available for modification. --You can't remove the last pipeline from the telemetry router. If you apply a yaml file that removes the last pipeline, the service rejects the update. --#### Pipeline Settings --| Setting | Description | -|--|--| -| logs | Can only declare new logs pipelines | -| metrics | Can only declare new metrics pipelines | -| exporters | List of exporters. 
Can be multiple of the same type | --### Credentials --#### Credentials Settings --| Setting | Description | -|--|--| -| certificateName | Name of the certificate must correspond to the certificate name specified in the exporter declaration | -| secretName | Name of the secret provided through Kubernetes | -| secretNamespace | Namespace with secret provided through Kubernetes | --## Example TelemetryRouter Specification --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 -kind: TelemetryRouter -metadata: - name: arc-telemetry-router - namespace: <namespace> -spec: - credentials: - certificates: - - certificateName: arcdata-elasticsearch-exporter - - certificateName: cluster-ca-certificate - exporters: - elasticsearch: - - caCertificateName: cluster-ca-certificate - certificateName: arcdata-elasticsearch-exporter - endpoint: https://logsdb-svc:9200 - index: logstash-otel - name: arcdata - pipelines: - logs: - exporters: - - elasticsearch/arcdata -``` ---## Example 1: Adding a Kafka exporter for a metrics pipeline --You can test creating a Kafka exporter for a metrics pipeline that can send metrics data to your own instance of Kafka. You need to prefix the name of your metrics pipeline with `kafka/`. You can have one unnamed instance for each telemetry type. For example, "kafka" is a valid name for a metrics pipeline. - -1. Provide your client and CA certificates in the `credentials` section through Kubernetes secrets -2. Declare the new Exporter in the `exporters` section with the needed settings - name, certificates, broker, and index. Be sure to list the new exporter under the applicable type ("kakfa:") -3. List your exporter in the `pipelines` section of the spec as a metrics pipeline. The exporter name needs to be prefixed with the type of exporter. For example, `kafka/myMetrics` --In this example, we've added a metrics pipeline called "metrics" with a single exporter (`kafka/myMetrics`) that routes to your instance of Kafka. --**arc-telemetry-router.yaml** --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 -kind: TelemetryRouter -metadata: - name: arc-telemetry-router - namespace: <namespace> -spec: - credentials: - certificates: - # Step 1. Provide your client and ca certificates through Kubernetes secrets - # where the name of the secret and its namespace are specified. - - certificateName: <kafka-client-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - - certificateName: <ca-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - exporters: - kafka: - # Step 2. Declare your Kafka exporter with the needed settings - # (name, certificates, endpoint, and index to export to) - - name: myMetrics - # Provide your client and CA certificate names - # for the exporter as well as any additional settings needed - caCertificateName: <ca-certificate-name> - certificateName: <kafka-client-certificate-name> - broker: <kafka_broker> - # Index can be the name of an index or datastream name to publish events to - index: <kafka_index> - pipelines: - metrics: - exporters: - # Step 3. Assign your kafka exporter to the list - # of exporters for the metrics pipeline. - - kafka/myMetrics -``` --```bash -kubectl apply -f arc-telemetry-router.yaml -n <namespace> -``` --You've added a metrics pipeline that exports to your instance of Kafka. After you've applied the changes to the yaml file, the TelemetryRouter custom resource will go into an updating state, and the collector service will restart. 
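Before moving on, you can confirm that the update was applied and watch the collector restart. The following is a minimal sketch using standard `kubectl` commands; the custom resource name (`arc-telemetry-router`) matches the example above, and the exact pod names in your namespace may differ:

```bash
# Check the TelemetryRouter custom resource and its state
kubectl get telemetryrouters -n <namespace>

# Inspect the applied spec, including the exporters and pipelines you just added
kubectl get telemetryrouters arc-telemetry-router -n <namespace> -o yaml

# Watch the telemetry router / collector pods restart after the change
kubectl get pods -n <namespace> --watch
```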
--## Example 2: Adding an Elasticsearch exporter for a logs pipeline --Your telemetry router deployment can export to multiple destinations by configuring more exporters. Multiple types of exporters are supported on a given telemetry router deployment. This example demonstrates adding an Elasticsearch exporter as a second exporter. We activate this second exporter by adding it to a logs pipeline. --1. Provide your client and CA certificates in the `credentials` section through Kubernetes secrets -2. Declare the new Exporter beneath the `exporters` section with the needed settings - name, certificates, endpoint, and index. Be sure to list the new exporter under the applicable type ("Elasticsearch:"). -3. List your exporter in the `pipelines` section of the spec as a logs pipeline. The exporter name needs to be prefixed with the type of exporter. For example, `elasticsearch/myLogs` --This example builds on the previous example by adding a logs pipeline for an Elasticsearch exporter (`elasticsearch/myLogs`). At the end of the example, we have two exporters with each exporter added to a different pipeline. --**arc-telemetry-router.yaml** --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 -kind: TelemetryRouter -metadata: - name: arc-telemetry-router - namespace: <namespace> -spec: - credentials: - certificates: - # Step 1. Provide your client and ca certificates through Kubernetes secrets - # where the name of the secret and its namespace are specified. - - certificateName: <elasticsearch-client-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - - certificateName: <kafka-client-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - - certificateName: <ca-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - exporters: - Elasticsearch: - # Step 2. Declare your Elasticsearch exporter with the needed settings - # (certificates, endpoint, and index to export to) - - name: myLogs - # Provide your client and CA certificate names - # for the exporter as well as any additional settings needed - caCertificateName: <ca-certificate-name> - certificateName: <elasticsearch-client-certificate-name> - endpoint: <elasticsearch_endpoint> - # Index can be the name of an index or datastream name to publish events to - index: <elasticsearch_index> - kafka: - - name: myMetrics - caCertificateName: <ca-certificate-name> - certificateName: <kafka-client-certificate-name> - broker: <kafka_broker> - index: <kafka_index> - pipelines: - logs: - exporters: - # Step 3. Add your Elasticsearch exporter to - # the exporters list of a logs pipeline. - - elasticsearch/myLogs - metrics: - exporters: - - kafka/myMetrics -``` --```bash -kubectl apply -f arc-telemetry-router.yaml -n <namespace> -``` --You now have Kafka and Elasticsearch exporters, added to metrics and logs pipelines. After you apply the changes to the yaml file, the TelemetryRouter custom resource will go into an updating state, and the collector service will restart. |
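Both examples assume that the client and CA certificates referenced in the `credentials` section already exist as Kubernetes secrets in the namespace given by `secretNamespace`. A minimal sketch of creating them with `kubectl` is shown below; the secret names reuse the placeholders from the examples, and the data key names (`certificate.pem`, `privatekey.pem`) are assumptions for illustration - use whatever key names your telemetry router deployment expects:

```bash
# Client certificate and private key for the exporter (placeholder names from the examples above)
kubectl create secret generic <name_of_secret> \
  --from-file=certificate.pem=./client-cert.pem \
  --from-file=privatekey.pem=./client-key.pem \
  -n <namespace_with_secret>

# CA certificate used to validate the destination endpoint
kubectl create secret generic <ca_secret_name> \
  --from-file=certificate.pem=./ca-cert.pem \
  -n <namespace_with_secret>
```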
azure-arc | Automated Integration Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md | - Title: Azure Arc-enabled data services - Automated validation testing -description: Running containerized validation tests on any Kubernetes Cluster ------ Previously updated : 09/07/2022------# Tutorial: Automated validation testing --As part of each commit that builds up Arc-enabled data services, Microsoft runs automated CI/CD pipelines that perform end-to-end tests. These tests are orchestrated via two containers that are maintained alongside the core-product (Data Controller, SQL Managed Instance enabled by Azure Arc & PostgreSQL server). These containers are: --- `arc-ci-launcher`: Containing deployment dependencies (for example, CLI extensions), as well as product deployment code (using Azure CLI) for both Direct and Indirect connectivity modes. Once Kubernetes is onboarded with the Data Controller, the container leverages [Sonobuoy](https://sonobuoy.io/) to trigger parallel integration tests.-- `arc-sb-plugin`: A [Sonobuoy plugin](https://sonobuoy.io/plugins/) containing [Pytest](https://docs.pytest.org/en/7.1.x/)-based end-to-end integration tests, ranging from simple smoke-tests (deployments, deletes), to complex high-availability scenarios, chaos-tests (resource deletions) etc.--These testing containers are made publicly available for customers and partners to perform Arc-enabled data services validation testing in their own Kubernetes clusters running anywhere, to validate: -* Kubernetes distro/versions -* Host distro/versions -* Storage (`StorageClass`/CSI), networking (e.g. `LoadBalancer`s, DNS) -* Other Kubernetes or infrastructure specific setup --Customers intending to run Arc-enabled Data Services on an undocumented distribution must run these validation tests successfully to be considered supported. Additionally, Partners can use this approach to certify that their solution is compliant with Arc-enabled Data Services - see [Azure Arc-enabled data services Kubernetes validation](validation-program.md). --The following diagram outlines this high-level process: --![Diagram that shows the Arc-enabled data services Kube-native integration tests.](media/automated-integration-testing/integration-testing-overview.png) --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Deploy `arc-ci-launcher` using `kubectl` -> * Examine validation test results in your Azure Blob Storage account --## Prerequisites - -- **Credentials**: - * The [`test.env.tmpl`](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/test/launcher/base/configs/.test.env.tmpl) file contains the necessary credentials, and is a combination of the existing prerequisites required to onboard an [Azure Arc Connected Cluster](../kubernetes/quickstart-connect-cluster.md?tabs=azure-cli) and [Directly Connected Data Controller](plan-azure-arc-data-services.md). Setup of this file is explained below with samples.
- * A [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to the tested Kubernetes cluster with `cluster-admin` access (required for Connected Cluster onboarding at this time) --- **Client-tooling**: - * `kubectl` installed - minimum version (Major:"1", Minor:"21") - * `git` command line interface (or UI-based alternatives) --## Kubernetes manifest preparation --The launcher is made available as part of the [`microsoft/azure_arc`](https://github.com/microsoft/azure_arc) repository, as a [Kustomize](https://kustomize.io/) manifest - Kustomize is [built into `kubectl`](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/) - so no additional tooling is required. --1. Clone the repo locally: --```bash -git clone https://github.com/microsoft/azure_arc.git -``` --2. Navigate to `azure_arc/arc_data_services/test/launcher`, to see the following folder structure: --```text -├── base <- Common base for all Kubernetes Clusters -│ ├── configs -│ │ └── .test.env.tmpl <- To be converted into .test.env with credentials for a Kubernetes Secret -│ ├── kustomization.yaml <- Defines the generated resources as part of the launcher -│ └── launcher.yaml <- Defines the Kubernetes resources that make up the launcher -└── overlays <- Overlays for specific Kubernetes Clusters - ├── aks - │ ├── configs - │ │ └── patch.json.tmpl <- To be converted into patch.json, patch for Data Controller control.json - │ └── kustomization.yaml - ├── kubeadm - │ ├── configs - │ │ └── patch.json.tmpl - │ └── kustomization.yaml - └── openshift - ├── configs - │ └── patch.json.tmpl - ├── kustomization.yaml - └── scc.yaml -``` --In this tutorial, we're going to focus on steps for AKS, but the overlay structure above can be extended to include additional Kubernetes distributions. --The ready-to-deploy manifest will represent the following: -```text -├── base -│ ├── configs -│ │ ├── .test.env <- Config 1: For Kubernetes secret, see sample below -│ │ └── .test.env.tmpl -│ ├── kustomization.yaml -│ └── launcher.yaml -└── overlays - └── aks - ├── configs - │ ├── patch.json.tmpl - │ └── patch.json <- Config 2: For control.json patching, see sample below - └── kustomization.yaml -``` --There are two files that need to be generated to localize the launcher to run inside a specific environment. Each of these files can be generated by copy-pasting and filling out each of the template (`*.tmpl`) files above: -* `.test.env`: fill out from `.test.env.tmpl` -* `patch.json`: fill out from `patch.json.tmpl` --> [!TIP] -> The `.test.env` is a single set of environment variables that drives the launcher's behavior. Generating it with care for a given environment will ensure reproducibility of the launcher's behavior. --### Config 1: `.test.env` --A filled-out sample of the `.test.env` file, generated based on `.test.env.tmpl`, is shared below with inline commentary. --> [!IMPORTANT] -> The `export VAR="value"` syntax below is not meant to be run locally to source environment variables from your machine - but is there for the launcher.
The launcher mounts this `.test.env` file **as-is** as a Kubernetes `secret` using Kustomize's [`secretGenerator`](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/secretGeneratorPlugin.md#secret-values-from-local-files) (Kustomize takes a file, base64 encodes the entire file's content, and turns it into a Kubernetes secret). During initialization, the launcher runs bash's [`source`](https://ss64.com/bash/source.html) command, which imports the environment variables from the as-is mounted `.test.env` file into the launcher's environment. --In other words, after copy-pasting `.test.env.tmpl` and editing to create `.test.env`, the generated file should look similar to the sample below. The process to fill out the `.test.env` file is identical across operating systems and terminals. --> [!TIP] -> There are a handful of environment variables that require additional explanation for clarity in reproducibility. These will be commented with `see detailed explanation below [X]`. --> [!TIP] -> Note that the `.test.env` example below is for **direct** mode. Some of these variables, such as `ARC_DATASERVICES_EXTENSION_VERSION_TAG` do not apply to **indirect** mode. For simplicity, it's best to setup the `.test.env` file with **direct** mode variables in mind, switching `CONNECTIVITY_MODE=indirect` will have the launcher ignore **direct** mode specific-settings and use a subset from the list. -> -> In other words, planning for **direct** mode allows us to satisfy **indirect** mode variables. --Finished sample of `.test.env`: -```bash -# ====================================== -# Arc Data Services deployment version = -# ====================================== --# Controller deployment mode: direct, indirect -# For 'direct', the launcher will also onboard the Kubernetes Cluster to Azure Arc -# For 'indirect', the launcher will skip Azure Arc and extension onboarding, and proceed directly to Data Controller deployment - see `patch.json` file -export CONNECTIVITY_MODE="direct" --# The launcher supports deployment of both GA/pre-GA trains - see detailed explanation below [1] -export ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN="stable" -export ARC_DATASERVICES_EXTENSION_VERSION_TAG="1.11.0" --# Image version -export DOCKER_IMAGE_POLICY="Always" -export DOCKER_REGISTRY="mcr.microsoft.com" -export DOCKER_REPOSITORY="arcdata" -export DOCKER_TAG="v1.11.0_2022-09-13" --# "arcdata" Azure CLI extension version override - see detailed explanation below [2] -export ARC_DATASERVICES_WHL_OVERRIDE="" --# ================ -# ARM parameters = -# ================ --# Custom Location Resource Provider Azure AD Object ID - this is a single, unique value per Azure AD tenant - see detailed explanation below [3] -export CUSTOM_LOCATION_OID="..." --# A pre-rexisting Resource Group is used if found with the same name. Otherwise, launcher will attempt to create a Resource Group -# with the name specified, using the Service Principal specified below (which will require `Owner/Contributor` at the Subscription level to work) -export LOCATION="eastus" -export RESOURCE_GROUP_NAME="..." --# A Service Principal with "sufficient" privileges - see detailed explanation below [4] -export SPN_CLIENT_ID="..." -export SPN_CLIENT_SECRET="..." -export SPN_TENANT_ID="..." -export SUBSCRIPTION_ID="..." --# Optional: certain integration tests test upload to Log Analytics workspace: -# https://learn.microsoft.com/azure/azure-arc/data/upload-logs -export WORKSPACE_ID="..." -export WORKSPACE_SHARED_KEY="..." 
--# ==================================== -# Data Controller deployment profile = -# ==================================== --# Samples for AKS -# To see full list of CONTROLLER_PROFILE, run: az arcdata dc config list -export CONTROLLER_PROFILE="azure-arc-aks-default-storage" --# azure, aws, gcp, onpremises, alibaba, other -export DEPLOYMENT_INFRASTRUCTURE="azure" --# The StorageClass used for PVCs created during the tests -export KUBERNETES_STORAGECLASS="default" --# ============================== -# Launcher specific parameters = -# ============================== --# Log/test result upload from launcher container, via SAS URL - see detailed explanation below [5] -export LOGS_STORAGE_ACCOUNT="<your-storage-account>" -export LOGS_STORAGE_ACCOUNT_SAS="?sv=2021-06-08&ss=bfqt&srt=sco&sp=rwdlacupiytfx&se=...&spr=https&sig=..." -export LOGS_STORAGE_CONTAINER="arc-ci-launcher-1662513182" --# Test behavior parameters -# The test suites to execute - space seperated array, -# Use these default values that run short smoke tests, further elaborate test suites will be added in upcoming releases -export SQL_HA_TEST_REPLICA_COUNT="3" -export TESTS_DIRECT="direct-crud direct-hydration controldb" -export TESTS_INDIRECT="billing controldb kube-rbac" -export TEST_REPEAT_COUNT="1" -export TEST_TYPE="ci" --# Control launcher behavior by setting to '1': -# -# - SKIP_PRECLEAN: Skips initial cleanup -# - SKIP_SETUP: Skips Arc Data deployment -# - SKIP_TEST: Skips sonobuoy tests -# - SKIP_POSTCLEAN: Skips final cleanup -# - SKIP_UPLOAD: Skips log upload -# -# See detailed explanation below [6] -export SKIP_PRECLEAN="0" -export SKIP_SETUP="0" -export SKIP_TEST="0" -export SKIP_POSTCLEAN="0" -export SKIP_UPLOAD="0" -``` --> [!IMPORTANT] -> If performing the configuration file generation in a Windows machine, you will need to convert the End-of-Line sequence from `CRLF` (Windows) to `LF` (Linux), as `arc-ci-launcher` runs as a Linux container. Leaving the line ending as `CRLF` may cause an error upon `arc-ci-launcher` container start - such as: `/launcher/config/.test.env: $'\r': command not found` -> For example, perform the change using VSCode (bottom-right of window): <br> -> ![Screenshot that shows where to change the end of line sequence (CRLF).](media/automated-integration-testing/crlf-to-lf.png) --#### Detailed explanation for certain variables --##### 1. `ARC_DATASERVICES_EXTENSION_*` - Extension version and train --> Mandatory: this is required for `direct` mode deployments. --The launcher can deploy both GA and pre-GA releases. --The extension version to release-train (`ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`) mapping are obtained from here: -* **GA**: `stable` - [Version log](version-log.md) -* **Pre-GA**: `preview` - [Pre-release testing](preview-testing.md) --##### 2. `ARC_DATASERVICES_WHL_OVERRIDE` - Azure CLI previous version download URL --> Optional: leave this empty in `.test.env` to use the pre-packaged default. --The launcher image is pre-packaged with the latest arcdata CLI version at the time of each container image release. 
However, to work with older releases and upgrade testing, it may be necessary to provide the launcher with Azure CLI Blob URL download link, to override the pre-packaged version; e.g to instruct the launcher to install version **1.4.3**, fill in: --```bash -export ARC_DATASERVICES_WHL_OVERRIDE="https://azurearcdatacli.blob.core.windows.net/cli-extensions/arcdata-1.4.3-py2.py3-none-any.whl" -``` -The CLI version to Blob URL mapping can be found [here](https://azcliextensionsync.blob.core.windows.net/index1/index.json). --<a name='3-custom_location_oidcustom-locations-object-id-from-your-specific-azure-ad-tenant'></a> --##### 3. `CUSTOM_LOCATION_OID` - Custom Locations Object ID from your specific Microsoft Entra tenant --> Mandatory: this is required for Connected Cluster Custom Location creation. --The following steps are sourced from [Enable custom locations on your cluster](../kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster) to retrieve the unique Custom Location Object ID for your Microsoft Entra tenant. --There are two approaches to obtaining the `CUSTOM_LOCATION_OID` for your Microsoft Entra tenant. --1. Via Azure CLI: -- ```bash - az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv - # 51dfe1e8-70c6-4de... < This is for Microsoft's own tenant - do not use, the value for your tenant will be different, use that instead to align with the Service Principal for launcher. - ``` -- ![A screenshot of a PowerShell terminal that shows `az ad sp show --id <>`.](media/automated-integration-testing/custom-location-oid-cli.png) --2. Via Azure portal - navigate to your Microsoft Entra blade, and search for `Custom Locations RP`: -- ![A screenshot of the custom locations RP.](media/automated-integration-testing/custom-location-oid-portal.png) --##### 4. `SPN_CLIENT_*` - Service Principal Credentials --> Mandatory: this is required for Direct Mode deployments. --The launcher logs in to Azure using these credentials. --Validation testing is meant to be performed on **Non-Production/Test Kubernetes cluster & Azure Subscriptions** - focusing on functional validation of the Kubernetes/Infrastructure setup. Therefore, to avoid the number of manual steps required to perform launches, it's recommended to provide a `SPN_CLIENT_ID/SECRET` that has `Owner` at the Resource Group (or Subscription) level, as it will create several resources in this Resource Group, as well as assigning permissions to those resources against several Managed Identities created as part of the deployment (these role assignments in turn require the Service Principal to have `Owner`). --##### 5. `LOGS_STORAGE_ACCOUNT_SAS` - Blob Storage Account SAS URL --> Recommended: leaving this empty means you will not obtain test results and logs. --The launcher needs a persistent location (Azure Blob Storage) to upload results to, as Kubernetes doesn't (yet) allow copying files from stopped/completed pods - [see here](https://github.com/kubernetes/kubectl/issues/454). The launcher achieves connectivity to Azure Blob Storage using an _**account-scoped SAS URL**_ (as opposed to _container_ or _blob_ scoped) - a signed URL with a time-bound access definition - see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md), in order to: -1. Create a new Storage Container in the pre-existing Storage Account (`LOGS_STORAGE_ACCOUNT`), if it doesn't exist (name based on `LOGS_STORAGE_CONTAINER`) -2. 
Create new, uniquely named blobs (test log tar files) --The follow steps are sourced from [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md#grant-limited-access-to-azure-storage-resources-using-shared-access-signatures-sas). --> [!TIP] -> SAS URLs are different from the Storage Account Key, a SAS URL is formatted as follows. -> ```text -> ?sv=2021-06-08&ss=bfqt&srt=sco&sp=rwdlacupiytfx&se=...&spr=https&sig=... -> ``` --There are several approaches to generating a SAS URL. This example shows the portal: --![A screenshot of the shared access signature details on the Azure portal.](media/automated-integration-testing/sas-url-portal.png) --To use the Azure CLI instead, see [`az storage account generate-sas`](/cli/azure/storage/account?view=azure-cli-latest&preserve-view=true#az-storage-account-generate-sas) --##### 6. `SKIP_*` - controlling the launcher behavior by skipping certain stages --> Optional: leave this empty in `.test.env` to run all stages (equivalent to `0` or blank) --The launcher exposes `SKIP_*` variables, to run and skip specific stages - for example, to perform a "cleanup only" run. --Although the launcher is designed to clean up both in the beginning and the end of each run, it's possible for launch and/or test-failures to leave residue resources behind. To run the launcher in "cleanup only" mode, set the following variables in `.test.env`: --```bash -export SKIP_PRECLEAN="0" # Run cleanup -export SKIP_SETUP="1" # Do not setup Arc-enabled Data Services -export SKIP_TEST="1" # Do not run integration tests -export SKIP_POSTCLEAN="1" # POSTCLEAN is identical to PRECLEAN, although idempotent, not needed here -export SKIP_UPLOAD="1" # Do not upload logs from this run -``` --The settings above instructs the launcher to clean up all Arc and Arc Data Services resources, and to not deploy/test/upload logs. --### Config 2: `patch.json` --A filled-out sample of the `patch.json` file, generated based on `patch.json.tmpl` is shared below: --> Note that the `spec.docker.registry, repository, imageTag` should be identical to the values in `.test.env` above --Finished sample of `patch.json`: -```json -{ - "patch": [ - { - "op": "add", - "path": "spec.docker", - "value": { - "registry": "mcr.microsoft.com", - "repository": "arcdata", - "imageTag": "v1.11.0_2022-09-13", - "imagePullPolicy": "Always" - } - }, - { - "op": "add", - "path": "spec.storage.data.className", - "value": "default" - }, - { - "op": "add", - "path": "spec.storage.logs.className", - "value": "default" - } - ] -} -``` --## Launcher deployment --> It is recommended to deploy the launcher in a **Non-Production/Test cluster** - as it performs destructive actions on Arc and other used Kubernetes resources. --### `imageTag` specification -The launcher is defined within the Kubernetes Manifest as a [`Job`](https://kubernetes.io/docs/concepts/workloads/controllers/job/), which requires instructing Kubernetes where to find the launcher's image. This is set in `base/kustomization.yaml`: --```YAML -images: -- name: arc-ci-launcher- newName: mcr.microsoft.com/arcdata/arc-ci-launcher - newTag: v1.11.0_2022-09-13 -``` --> [!TIP] -> To recap, at this point - there are **3** places we specified `imageTag`s, for clarity, here's an explanation of the different uses of each. Typically - when testing a given release, all 3 values would be the same (aligning to a given release): -> ->| # | Filename | Variable name | Why? | Used by? 
| ->| | | - | -- | | ->| 1 | **`.test.env`** | `DOCKER_TAG` | Sourcing the [Bootstrapper image](https://mcr.microsoft.com/v2/arcdata/arc-bootstrapper/tags/list) as part of [extension install](https://mcr.microsoft.com/v2/arcdata/arcdataservices-extension/tags/list) | [`az k8s-extension create`](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-create) in the launcher | ->| 2 | **`patch.json`** | `value.imageTag` | Sourcing the [Data Controller image](https://mcr.microsoft.com/v2/arcdata/arc-controller/tags/list) | [`az arcdata dc create`](/cli/azure/arcdata/dc?view=azure-cli-latest&preserve-view=true#az-arcdata-dc-create) in the launcher | ->| 3 | **`kustomization.yaml`** | `images.newTag` | Sourcing the [launcher's image](https://mcr.microsoft.com/v2/arcdata/arc-ci-launcher/tags/list) | `kubectl apply`ing the launcher | --### `kubectl apply` --To validate that the manifest has been properly set up, attempt client-side validation with `--dry-run=client`, which prints out the Kubernetes resources to be created for the launcher: --```bash -kubectl apply -k arc_data_services/test/launcher/overlays/aks --dry-run=client -# namespace/arc-ci-launcher created (dry run) -# serviceaccount/arc-ci-launcher created (dry run) -# clusterrolebinding.rbac.authorization.k8s.io/arc-ci-launcher created (dry run) -# secret/test-env-fdgfm8gtb5 created (dry run) <- Created from Config 1: `.test.env` -# configmap/control-patch-2hhhgk847m created (dry run) <- Created from Config 2: `patch.json` -# job.batch/arc-ci-launcher created (dry run) -``` --To deploy the launcher and tail logs, run the following: -```bash -kubectl apply -k arc_data_services/test/launcher/overlays/aks -kubectl wait --for=condition=Ready --timeout=360s pod -l job-name=arc-ci-launcher -n arc-ci-launcher -kubectl logs job/arc-ci-launcher -n arc-ci-launcher --follow -``` --At this point, the launcher should start - and you should see the following: --![A screenshot of the console terminal after the launcher starts.](media/automated-integration-testing/launcher-start.png) --Although it's best to deploy the launcher in a cluster with no pre-existing Arc resources, the launcher contains pre-flight validation to discover pre-existing Arc and Arc Data Services CRDs and ARM resources, and attempts to clean them up on a best-effort basis (using the provided Service Principal credentials), prior to deploying the new release: --![A screenshot of the console terminal discovering Kubernetes and other resources.](media/automated-integration-testing/launcher-pre-flight.png) --This same metadata-discovery and cleanup process is also run upon launcher exit, to leave the cluster as close as possible to its pre-existing state before the launch. --## Steps performed by launcher --At a high level, the launcher performs the following sequence of steps: --1. Authenticate to Kubernetes API using Pod-mounted Service Account -2. Authenticate to ARM API using Secret-mounted Service Principal -3. Perform CRD metadata scan to discover existing Arc and Arc Data Services Custom Resources -4. Clean up any existing Custom Resources in Kubernetes, and subsequent resources in Azure. If there is any mismatch between the credentials in `.test.env` and the resources existing in the cluster, quit. -5. Generate a unique set of environment variables based on timestamp for Arc Cluster name, Data Controller and Custom Location/Namespace. Prints out the environment variables, obfuscating sensitive values (e.g. Service Principal Password etc.) -6. a.
For Direct Mode - Onboard the Cluster to Azure Arc, then deploys the controller. -- b. For Indirect Mode: deploy the Data Controller -7. Once Data Controller is `Ready`, generate a set of Azure CLI ([`az arcdata dc debug`](/cli/azure/arcdata/dc/debug?view=azure-cli-latest&preserve-view=true)) logs and store locally, labeled as `setup-complete` - as a baseline. -8. Use the `TESTS_DIRECT/INDIRECT` environment variable from `.test.env` to launch a set of parallelized Sonobuoy test runs based on a space-separated array (`TESTS_(IN)DIRECT`). These runs execute in a new `sonobuoy` namespace, using `arc-sb-plugin` pod that contains the Pytest validation tests. -9. [Sonobuoy aggregator](https://sonobuoy.io/docs/v0.56.0/plugins/) accumulate the [`junit` test results](https://sonobuoy.io/docs/v0.56.0/results/) and logs per `arc-sb-plugin` test run, which are exported into the launcher pod. -10. Return the exit code of the tests, and generates another set of debug logs - Azure CLI and `sonobuoy` - stored locally, labeled as `test-complete`. -11. Perform a CRD metadata scan, similar to Step 3, to discover existing Arc and Arc Data Services Custom Resources. Then, proceed to destroy all Arc and Arc Data resources in reverse order from deployment, as well as CRDs, Role/ClusterRoles, PV/PVCs etc. -12. Attempt to use the SAS token `LOGS_STORAGE_ACCOUNT_SAS` provided to create a new Storage Account container named based on `LOGS_STORAGE_CONTAINER`, in the **pre-existing** Storage Account `LOGS_STORAGE_ACCOUNT`. If Storage Account container already exists, use it. Upload all local test results and logs to this storage container as a tarball (see below). -13. Exit. --## Tests performed per test suite --There are approximately **375** unique integration tests available, across **27** test suites - each testing a separate functionality. --| Suite # | Test suite name | Description of test | -| - | | | -| 1 | `ad-connector` | Tests the deployment and update of an Active Directory Connector (AD Connector). | -| 2 | `billing` | Testing various Business Critical license types are reflected in resource table in controller, used for Billing upload. | -| 3 | `ci-billing` | Similar as `billing`, but with more CPU/Memory permutations. | -| 4 | `ci-sqlinstance` | Long running tests for multi-replica creation, updates, GP -> BC Update, Backup validation and SQL Server Agent. | -| 5 | `controldb` | Tests Control database - SA secret check, system login verification, audit creation, and sanity checks for SQL build version. | -| 6 | `dc-export` | Indirect Mode billing and usage upload. | -| 7 | `direct-crud` | Creates a SQL instance using ARM calls, validates in both Kubernetes and ARM. | -| 8 | `direct-fog` | Creates multiple SQL instances and creates a Failover Group between them using ARM calls. | -| 9 | `direct-hydration` | Creates SQL Instance with Kubernetes API, validates presence in ARM. | -| 10 | `direct-upload` | Validates billing upload in Direct Mode | -| 11 | `kube-rbac` | Ensures Kubernetes Service Account permissions for Arc Data Services matches least-privilege expectations. | -| 12 | `nonroot` | Ensures containers run as non-root user | -| 13 | `postgres` | Completes various Postgres creation, scaling, backup/restore tests. | -| 14 | `release-sanitychecks` | Sanity checks for month-to-month releases, such as SQL Server Build versions. | -| 15 | `sqlinstance` | Shorter version of `ci-sqlinstance`, for fast validations. 
| -| 16 | `sqlinstance-ad` | Tests creation of SQL Instances with Active Directory Connector. | -| 17 | `sqlinstance-credentialrotation` | Tests automated Credential Rotation for both General Purpose and Business Critical. | -| 18 | `sqlinstance-ha` | Various High Availability Stress tests, including pod reboots, forced failovers and suspensions. | -| 19 | `sqlinstance-tde` | Various Transparent Data Encryption tests. | -| 20 | `telemetry-elasticsearch` | Validates Log ingestion into Elasticsearch. | -| 21 | `telemetry-grafana` | Validates Grafana is reachable. | -| 22 | `telemetry-influxdb` | Validates Metric ingestion into InfluxDB. | -| 23 | `telemetry-kafka` | Various tests for Kafka using SSL, single/multi-broker setup. | -| 24 | `telemetry-monitorstack` | Tests Monitoring components, such as `Fluentbit` and `Collectd` are functional. | -| 25 | `telemetry-telemetryrouter` | Tests Open Telemetry. | -| 26 | `telemetry-webhook` | Tests Data Services Webhooks with valid and invalid calls. | -| 27 | `upgrade-arcdata` | Upgrades a full suite of SQL Instances (GP, BC 2 replica, BC 3 replica, with Active Directory) and upgrades from last month's release to latest build. | --As an example, for `sqlinstance-ha`, the following tests are performed: --- `test_critical_configmaps_present`: Ensures the ConfigMaps and relevant fields are present for a SQL Instance.-- `test_suspended_system_dbs_auto_heal_by_orchestrator`: Ensures if `master` and `msdb` are suspended by any means (in this case, user). Orchestrator maintenance reconcile auto-heals it.-- `test_suspended_user_db_does_not_auto_heal_by_orchestrator`: Ensures if a User Database is deliberately suspended by user, Orchestrator maintenance reconcile does not auto-heal it.-- `test_delete_active_orchestrator_twice_and_delete_primary_pod`: Deletes orchestrator pod multiple times, followed by the primary replica, and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.-- `test_delete_primary_pod`: Deletes primary replica and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.-- `test_delete_primary_and_orchestrator_pod`: Deletes primary replica and orchestrator pod and verifies all replicas are synchronized.-- `test_delete_primary_and_controller`: Deletes primary replica and data controller pod and verifies primary endpoint is accessible and the new primary replica is synchronized. Failover time expectations for 2 replica are relaxed.-- `test_delete_one_secondary_pod`: Deletes secondary replica and data controller pod and verifies all replicas are synchronized.-- `test_delete_two_secondaries_pods`: Deletes secondary replicas and data controller pod and verifies all replicas are synchronized.-- `test_delete_controller_orchestrator_secondary_replica_pods`:-- `test_failaway`: Forces AG failover away from current primary, ensures the new primary is not the same as the old primary. Verifies all replicas are synchronized.-- `test_update_while_rebooting_all_non_primary_replicas`: Tests Controller-driven updates are resilient with retries despite various turbulent circumstances.--> [!NOTE] -> Certain tests may require specific hardware, such as privileged Access to Domain Controllers for `ad` tests for Account and DNS entry creation - which may not be available in all environments looking to use the `arc-ci-launcher`. 
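For example, to scope a run to just a few of the suites above, list their names in the space-separated test variables in `.test.env`. A short sketch (any suite name from the table is valid):

```bash
# Run only the SQL instance smoke tests and the high-availability suite in direct mode
export TESTS_DIRECT="sqlinstance sqlinstance-ha"

# Execute the selected suites once
export TEST_REPEAT_COUNT="1"
```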
--## Examining Test Results --A sample storage container and file uploaded by the launcher: --![A screenshot of the launcher storage container.](media/automated-integration-testing/launcher-storage-container.png) --![A screenshot of the launcher tarball.](media/automated-integration-testing/launcher-tarball.png) --And the test results generated from the run: --![A screenshot of the launcher test results.](media/automated-integration-testing/launcher-test-results.png) --## Clean up resources --To delete the launcher, run: -```bash -kubectl delete -k arc_data_services/test/launcher/overlays/aks -``` --This cleans up the resource manifests deployed as part of the launcher. --## Related content --> [!div class="nextstepaction"] -> [Pre-release testing](preview-testing.md) |
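If you prefer to pull the uploaded results down locally instead of browsing them in the portal, the same storage account, container, and SAS values from `.test.env` can be reused. A sketch with the Azure CLI follows; the blob name is a placeholder, so substitute the tarball name shown in your container (if the calls fail, try removing the leading `?` from the SAS value):

```bash
# List the tarballs the launcher uploaded to the results container
az storage blob list \
  --account-name "$LOGS_STORAGE_ACCOUNT" \
  --container-name "$LOGS_STORAGE_CONTAINER" \
  --sas-token "$LOGS_STORAGE_ACCOUNT_SAS" \
  --output table

# Download a specific results tarball and unpack it locally
az storage blob download \
  --account-name "$LOGS_STORAGE_ACCOUNT" \
  --container-name "$LOGS_STORAGE_CONTAINER" \
  --sas-token "$LOGS_STORAGE_ACCOUNT_SAS" \
  --name "<results-tarball-name>" \
  --file results.tar.gz
tar -xzf results.tar.gz
```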
azure-arc | Azure Data Studio Dashboards | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/azure-data-studio-dashboards.md | - Title: Azure Data Studio dashboards -description: Azure Data Studio dashboards ------ Previously updated : 11/03/2021----# Azure Data Studio dashboards --[Azure Data Studio](/azure-data-studio/what-is-azure-data-studio) provides an experience similar to the Azure portal for viewing information about your Azure Arc resources. These views are called **dashboards** and have a layout and options similar to what you could see about a given resource in the Azure portal, but give you the flexibility of seeing that information locally in your environment in cases where you don't have a connection available to Azure. --## Connect to a data controller --### Prerequisites --- Download [Azure Data Studio](/azure-data-studio/download-azure-data-studio)-- Azure Arc extension is installed--### Connect --1. Open Azure Data Studio. -2. Select the **Connections** tab on the left. -3. Expand the panel called **Azure Arc Controllers**. -4. Select the **Connect Controller** button. -- Azure Data Studio opens a blade on the right side. --1. Enter the **Namespace** for the data controller. -- Azure Data Studio reads from the `kube.config` file in your default directory and lists the available Kubernetes cluster contexts. It selects the current cluster context. If this is the right cluster to connect to, use that namespace. -- If you need to retrieve the namespace where the Azure Arc data controller is deployed, you can run `kubectl get datacontrollers -A` on your Kubernetes cluster. --6. Optionally add a display name for the Azure Arc data controller in the input for **Name**. -7. Select **Connect**. ---After you connect to a data controller, you can view the dashboards. Azure Data Studio has dashboards for the data controller and any SQL managed instances or PostgreSQL server resources that you have. --## View the data controller dashboard --Right-click on the data controller in the Connections panel in the **Arc Controllers** expandable panel and choose **Manage**. --Here you can see details about the data controller resource such as name, region, connection mode, resource group, subscription, controller endpoint, and namespace. You can see a list of all of the managed database resources managed by the data controller as well. --You'll notice that the layout is similar to what you might see in the Azure portal. --Conveniently, you can launch the creation of a SQL managed instance or PostgreSQL server by clicking the + New Instance button. --You can also open the Azure portal in context to this data controller by clicking the Open in Azure portal button. --## View the SQL Managed Instance dashboards --If you have created some SQL Managed Instances, see them listed under **Connections** in the **Azure Data Controllers** expandable panel underneath the data controller that is managing them. --To view the SQL Managed Instance dashboard for a given instance, right-click on the instance and choose **Manage**. --The **Connection** panel prompts you for the login and password to connect to an instance. If you know the connection information you can enter it and choose **Connect**. If you don't know, choose **Cancel**. Either way, Azure Data Studio returns to the dashboard when the **Connection** panel closes. --On the **Overview** tab, view resource group, data controller, subscription ID, status, region, and other information. 
This location also provides links to the Grafana dashboard for viewing metrics or Kibana dashboard for viewing logs in context to that SQL managed instance. --With a connection to the SQL managed instance, you can see additional information here. --You can delete the SQL managed instance from here or open the Azure portal to view the SQL managed instance in the Azure portal. --If you click on the **Connection Strings** tab, Azure Data Studio presents a list of pre-constructed connection strings for that instance. Copy and paste these strings into various other applications or code. --## View the PostgreSQL server dashboards --If the deployment includes PostgreSQL servers, Azure Data Studio lists them in the **Connections** panel in the **Azure Data Controllers** expandable panel underneath the data controller that is managing them. --To view the PostgreSQL server dashboard for a given server group, right-click on the server group and choose **Manage**. --On the **Overview** tab, review details about the server group such as resource group, data controller, subscription ID, status, region and more. The tab also has links to the Grafana dashboard for viewing metrics or Kibana dashboard for viewing logs in context to that server group. --You can delete the server group from here or open the Azure portal to view the server group in the Azure portal. --If you click on the **Connection Strings** tab on the left, Azure Data Studio provides pre-constructed connection strings for that server group. Copy and paste these strings into various other applications or code. --Select the **Properties** tab on the left to see additional details. --The **Resource health** tab on the left displays the current health of that server group. --The **Diagnose and solve problems** tab on the left launches the PostgreSQL troubleshooting notebook. --For Azure support, select the **New support request** tab. This launches the Azure portal in context to the server group. Create an Azure support request from there. --## Related content --- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
azure-arc | Backup Controller Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-controller-database.md | - Title: Back up controller database -description: Explains how to back up the controller database for Azure Arc-enabled data services ------ Previously updated : 04/26/2023----# Back up and recover controller database --When you deploy Azure Arc data services, the Azure Arc Data Controller is one of the most critical components that is deployed. The functions of the data controller include: --- Provision, de-provision and update resources-- Orchestrate most of the activities for SQL Managed Instance enabled by Azure Arc such as upgrades, scale out etc. -- Capture the billing and usage information of each Arc SQL managed instance. --In order to perform above functions, the Data controller needs to store an inventory of all the current Arc SQL managed instances, billing, usage and the current state of all these SQL managed instances. All this data is stored in a database called `controller` within the SQL Server instance that is deployed into the `controldb-0` pod. --This article explains how to back up the controller database. --## Back up data controller database --As part of built-in capabilities, the Data controller database `controller` is automatically backed up every 5 minutes once backups are enabled. To enable backups: --- Create a `backups-controldb` `PersistentVolumeClaim` with a storage class that supports `ReadWriteMany` access:--```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: backups-controldb - namespace: <namespace> -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 15Gi - storageClassName: <storage-class> -``` --- Edit the `DataController` custom resource spec to include a `backups` storage definition:--```yaml -storage: - backups: - accessMode: ReadWriteMany - className: <storage-class> - size: 15Gi - data: - accessMode: ReadWriteOnce - className: managed-premium - size: 15Gi - logs: - accessMode: ReadWriteOnce - className: managed-premium - size: 10Gi -``` --The `.bak` files for the `controller` database are stored on the `backups` volume of the `controldb` pod at `/var/opt/backups/mssql`. --## Recover controller database --There are two types of recovery possible: --1. `controller` is corrupted and you just need to restore the database -1. the entire storage that contains the `controller` data and log files is corrupted/gone and you need to recover --### Corrupted controller database scenario --In this scenario, all the pods are up and running, you are able to connect to the `controldb` SQL Server, and there may be a corruption with the `controller` database. You just need to restore the database from a backup. --Follow these steps to restore the controller database from a backup, if the SQL Server is still up and running on the `controldb` pod, and you are able to connect to it: --1. Verify connectivity to SQL Server pod hosting the `controller` database. -- - First, retrieve the credentials for the secret. `controller-system-secret` is the secret that holds the credentials for the `system` user account that can be used to connect to the SQL instance. - Run the following command to retrieve the secret contents: - - ```console - kubectl get secret controller-system-secret --namespace [namespace] -o yaml - ``` -- For example: -- ```console - kubectl get secret controller-system-secret --namespace arcdataservices -o yaml - ``` -- - Decode the base64 encoded credentials. 
The contents of the yaml file of the secret `controller-system-secret` contain a `password` and `username`. You can use any base64 decoder tool to decode the contents of the `password` (a command-line sketch is included at the end of this article). - - Verify connectivity: With the decoded credentials, run a command such as `SELECT @@SERVERNAME` to verify connectivity to the SQL Server. -- ```powershell - kubectl exec controldb-0 -n <namespace> -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U system -P "<password>" -Q "SELECT @@SERVERNAME" - ``` - - ```powershell - kubectl exec controldb-0 -n contosons -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U system -P "<password>" -Q "SELECT @@SERVERNAME" - ``` --1. Scale the controller ReplicaSet down to 0 replicas as follows: -- ```console - kubectl scale --replicas=0 rs/control -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=0 rs/control -n arcdataservices - ``` --1. Connect to the `controldb` SQL Server as `system` as described in step 1. --1. Delete the corrupted controller database using T-SQL: -- ```sql - DROP DATABASE controller - ``` --1. Restore the database from backup - after the corrupted `controller` database is dropped. For example: -- ```sql - RESTORE DATABASE controller FROM DISK = '/var/opt/backups/mssql/<controller backup file>.bak' - WITH MOVE 'controller' to '/var/opt/mssql/data/controller.mdf' - ,MOVE 'controller_log' to '/var/opt/mssql/data/controller_log.ldf' - ,RECOVERY; - GO - ``` - -1. Scale the controller ReplicaSet back up to 1 replica. -- ```console - kubectl scale --replicas=1 rs/control -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=1 rs/control -n arcdataservices - ``` --### Corrupted storage scenario --In this scenario, the storage hosting the data controller data and log files is corrupted, new storage was provisioned, and you need to restore the controller database. --Follow these steps to restore the controller database from a backup with new storage for the `controldb` StatefulSet: --1. Ensure that you have a backup of the last known good state of the `controller` database --2. Scale the controller ReplicaSet down to 0 replicas as follows: -- ```console - kubectl scale --replicas=0 rs/control -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=0 rs/control -n arcdataservices - ``` -3. Scale the `controldb` StatefulSet down to 0 replicas, as follows: -- ```console - kubectl scale --replicas=0 sts/controldb -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=0 sts/controldb -n arcdataservices - ``` --4. Create a Kubernetes secret named `controller-sa-secret` with the following YAML: -- ```yml - apiVersion: v1 - kind: Secret - metadata: - name: controller-sa-secret - namespace: <namespace> - type: Opaque - data: - password: <base64 encoded password> - ``` --5. Edit the `controldb` StatefulSet to include a `controller-sa-secret` volume and corresponding volume mount (`/var/run/secrets/mounts/credentials/mssql-sa-password`) in the `mssql-server` container, by using the `kubectl edit sts controldb -n <namespace>` command. --6.
Create new data (`data-controldb`) and logs (`logs-controldb`) persistent volume claims for the `controldb` pod as follows: -- ```yml - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: data-controldb - namespace: <namespace> - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 15Gi - storageClassName: <storage class> - - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: logs-controldb - namespace: <namespace> - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - storageClassName: <storage class> - ``` --7. Scale the `controldb` StatefulSet back to 1 replica using: -- ```console - kubectl scale --replicas=1 sts/controldb -n <namespace> - ``` --8. Connect to the `controldb` SQL server as `sa` using the password in the `controller-sa-secret` secret created earlier. --9. Create a `system` login with sysadmin role using the password in the `controller-system-secret` kubernetes secret as follows: -- ```sql - CREATE LOGIN [system] WITH PASSWORD = '<password-from-secret>' - ALTER SERVER ROLE sysadmin ADD MEMBER [system] - ``` --10. Restore the backup using the `RESTORE` command as follows: -- ```sql - RESTORE DATABASE [controller] FROM DISK = N'/var/opt/backups/mssql/<controller backup file>.bak' WITH FILE = 1 - ``` --11. Create a `controldb-rw-user` login using the password in the `controller-db-rw-secret` secret `CREATE LOGIN [controldb-rw-user] WITH PASSWORD = '<password-from-secret>'` and associate it with the existing `controldb-rw-user` user in the controller DB `ALTER USER [controldb-rw-user] WITH LOGIN = [controldb-rw-user]`. --12. Disable the `sa` login using TSQL - `ALTER LOGIN [sa] DISABLE`. --13. Edit the `controldb` StatefulSet to remove the `controller-sa-secret` volume and corresponding volume mount. --14. Delete the `controller-sa-secret` secret. --16. Scale the controller ReplicaSet back up to 1 replica using the `kubectl scale` command. --## Related content --[Azure Data Studio dashboards](azure-data-studio-dashboards.md) |
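As referenced in the recovery steps above, the `controller-system-secret` holds base64-encoded credentials. A minimal sketch for decoding them on the command line, assuming `kubectl` and a standard `base64` utility are available:

```console
kubectl get secret controller-system-secret -n <namespace> -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret controller-system-secret -n <namespace> -o jsonpath='{.data.password}' | base64 -d; echo
```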
azure-arc | Backup Restore Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-restore-postgresql.md | - Title: Automated backup for Azure Arc-enabled PostgreSQL server -description: Explains how to configure backups for Azure Arc-enabled PostgreSQL server ------ Previously updated : 03/12/2023----# Automated backup Azure Arc-enabled PostgreSQL servers --To enable automated backups, include the `--storage-class-backups` argument when you create an Azure Arc-enabled PostgreSQL server. Specify the retention period for backups with the `--retention-days` parameter. Use this parameter when you create or update an Arc-enabled PostgreSQL server. The retention period can be between 0 and 35 days. If backups are enabled but no retention period is specified, the default is seven days. --Additionally, if you set the retention period to zero, then automated backups are disabled. ---## Create server with automated backup --Create an Azure Arc-enabled PostgreSQL server with automated backups: --```azurecli -az postgres server-arc create -n <name> -k <namespace> --storage-class-backups <storage-class> --retention-days <number of days> --use-k8s -``` --## Update a server to set retention period --Update the backup retention period for an Azure Arc-enabled PostgreSQL server: --```azurecli -az postgres server-arc update -n pg01 -k test --retention-days <number of days> --use-k8s -``` --## Related content --- [Restore Azure Arc-enabled PostgreSQL servers](restore-postgresql.md)-- [Scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server. |
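As noted above, setting the retention period to zero disables automated backups. For example, a sketch that reuses the server and namespace placeholders from the update command above:

```azurecli
az postgres server-arc update -n pg01 -k test --retention-days 0 --use-k8s
```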
azure-arc | Change Postgresql Port | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/change-postgresql-port.md | - Title: Change the PostgreSQL port -description: Change the port on which the Azure Arc-enabled PostgreSQL server is listening. ------ Previously updated : 11/03/2021-----# Change the port on which the server group is listening --To change the port, edit the server group. For example, run the following command: --```azurecli - az postgres server-arc update -n <server name> --port <desired port number> --k8s-namespace <namespace> --use-k8s -``` --If the name of your server group is _postgres01_ and you would like it to listen on port _866_, run the following command: --```azurecli - az postgres server-arc update -n postgres01 --port 866 --k8s-namespace arc --use-k8s -``` --## Verify that the port was changed --To verify that the port was changed, run the following command to show the configuration of your server group: --```azurecli -az postgres server-arc show -n <server name> --k8s-namespace <namespace> --use-k8s -``` --In the output of that command, look at the port number displayed for the item "port" in the "services" section of the specifications of your server group. --Alternatively, you can verify in the item `primaryEndpoint` of the status section of the specifications of your server group that the IP address is followed by the port number you configured. --As an illustration, to continue the example above, run the command: --```azurecli -az postgres server-arc show -n postgres01 --k8s-namespace arc --use-k8s -``` --The command returns port 866: --```output -"services": { - "primary": { - "port": 866, - "type": "LoadBalancer" - } - } -``` --In addition, note the value for `primaryEndpoint`. --```output -"primaryEndpoint": "12.345.67.890:866", -``` --## Related content -- Read about [how to connect to your server group](get-connection-endpoints-and-connection-strings-postgresql-server.md).-- Read about how you can configure other aspects of your server group in the How-to\Manage\Configure & scale section of the documentation. |
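If you prefer to check from the Kubernetes side, listing the services in the namespace also shows the exposed port. This is a hedged alternative to the `az` verification above; the exact service name for your server group varies by deployment.

```console
# The PORT(S) column for the server group's primary service should show the new port (866 in the example).
kubectl get svc -n arc
```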
azure-arc | Clean Up Past Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/clean-up-past-installation.md | - Title: Clean up past installations -description: Describes how to remove Azure Arc-enabled data controller and associated resources from past installations. ------ Previously updated : 07/11/2022----# Clean up from past installations --If you installed the data controller in the past and later deleted the data controller, there may be some cluster level objects that would still need to be deleted. --This article describes how to delete these cluster level objects. --## Replace values in sample script --For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace. Substitute the name of the namespace the data controller was deployed in into `{namespace}`. If unsure, get the name of the `mutatingwebhookconfiguration` using `kubectl get clusterrolebinding`. --## Run script to remove artifacts --Run the following commands to delete the data controller cluster level objects: --> [!NOTE] -> Not all of these objects will exist in your environment. The objects in your environment depend on which version of the Arc data controller was installed --```console -# Clean up azure arc data service artifacts --# Custom resource definitions (CRD) -kubectl delete crd datacontrollers.arcdata.microsoft.com -kubectl delete crd postgresqls.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com -kubectl delete crd dags.sql.arcdata.microsoft.com -kubectl delete crd exporttasks.tasks.arcdata.microsoft.com -kubectl delete crd monitors.arcdata.microsoft.com -kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com -kubectl delete crd failovergroups.sql.arcdata.microsoft.com -kubectl delete crd kafkas.arcdata.microsoft.com -kubectl delete crd postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com -kubectl delete crd telemetrycollectors.arcdata.microsoft.com -kubectl delete crd telemetryrouters.arcdata.microsoft.com --# Substitute the name of the namespace the data controller was deployed in into {namespace}. --# Cluster roles and role bindings -kubectl delete clusterrole arcdataservices-extension -kubectl delete clusterrole arc:cr-arc-metricsdc-reader -kubectl delete clusterrole arc:cr-arc-dc-watch -kubectl delete clusterrole cr-arc-webhook-job -kubectl delete clusterrole {namespace}:cr-upgrade-worker -kubectl delete clusterrole {namespace}:cr-deployer -kubectl delete clusterrolebinding {namespace}:crb-arc-metricsdc-reader -kubectl delete clusterrolebinding {namespace}:crb-arc-dc-watch -kubectl delete clusterrolebinding crb-arc-webhook-job -kubectl delete clusterrolebinding {namespace}:crb-upgrade-worker -kubectl delete clusterrolebinding {namespace}:crb-deployer --# Substitute the name of the namespace the data controller was deployed in into {namespace}. 
If unsure, get the namespace from the cluster role binding names using 'kubectl get clusterrolebinding' --# API services -# Up to May 2021 release -kubectl delete apiservice v1alpha1.arcdata.microsoft.com -kubectl delete apiservice v1alpha1.sql.arcdata.microsoft.com --# June 2021 release -kubectl delete apiservice v1beta1.arcdata.microsoft.com -kubectl delete apiservice v1beta1.sql.arcdata.microsoft.com --# GA/July 2021 release -kubectl delete apiservice v1.arcdata.microsoft.com -kubectl delete apiservice v1.sql.arcdata.microsoft.com --# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get mutatingwebhookconfiguration' -kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{namespace} -``` --## Related content --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
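Before or after running the deletions, you can confirm which Arc data services objects still exist in the cluster. A minimal, read-only sketch:

```console
# Read-only checks for leftover cluster-scoped objects from a previous installation.
kubectl get crd | grep arcdata
kubectl get clusterrole,clusterrolebinding | grep -E 'arc|deployer|upgrade-worker'
kubectl get apiservice | grep arcdata
kubectl get mutatingwebhookconfiguration | grep arcdata
```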
azure-arc | Configure Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md | - Title: Configure SQL Managed Instance enabled by Azure Arc -description: Configure SQL Managed Instance enabled by Azure Arc. --- Previously updated : 12/05/2023----- - devx-track-azurecli --# Configure SQL Managed Instance enabled by Azure Arc --This article explains how to configure SQL Managed Instance enabled by Azure Arc. --## Configure resources such as cores and memory --### Configure using CLI --To update the configuration of an instance with the CLI. Run the following command to see configuration options. --```azurecli -az sql mi-arc update --help -``` --To update the available memory and cores for an instance use: --```azurecli -az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s -``` --The following example sets the cpu core and memory requests and limits. --```azurecli -az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n sqlinstance1 --k8s-namespace arc --use-k8s -``` --To view the changes made to the instance, you can use the following commands to view the configuration yaml file: --```azurecli -az sql mi-arc show -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s -``` --## Configure readable secondaries --When you deploy SQL Managed Instance enabled by Azure Arc in `BusinessCritical` service tier with 2 or more replicas, by default, one secondary replica is automatically configured as `readableSecondary`. This setting can be changed, either to add or to remove the readable secondaries as follows: --```azurecli -az sql mi-arc update --name <sqlmi name> --readable-secondaries <value> --k8s-namespace <namespace> --use-k8s -``` --For example, the following example resets the readable secondaries to 0. --```azurecli -az sql mi-arc update --name sqlmi1 --readable-secondaries 0 --k8s-namespace mynamespace --use-k8s -``` --## Configure replicas --You can also scale up or down the number of replicas deployed in the `BusinessCritical` service tier as follows: --```azurecli -az sql mi-arc update --name <sqlmi name> --replicas <value> --k8s-namespace <namespace> --use-k8s -``` --For example: --The following example scales down the number of replicas from 3 to 2. --```azurecli -az sql mi-arc update --name sqlmi1 --replicas 2 --k8s-namespace mynamespace --use-k8s -``` --> [!NOTE] -> If you scale down from 2 replicas to 1 replica, you might run into a conflict with the pre-configured `--readable--secondaries` setting. You can first edit the `--readable--secondaries` before scaling down the replicas. --## Configure server options --You can configure certain server configuration settings for SQL Managed Instance enabled by Azure Arc either during or after creation time. This article describes how to configure settings like enabling "Ad Hoc Distributed Queries" or "backup compression default" etc. --Currently the following server options can be configured: -- Ad Hoc Distributed Queries-- Default Trace Enabled-- Database Mail XPs-- Backup compression default-- Cost threshold for parallelism-- Optimize for ad hoc workloads--> [!NOTE] -> - Currently these options can only be specified via YAML file, either during SQL Managed Instance creation or post deployment. -> -> - The SQL managed instance image tag has to be at least version v1.19.x or above. 
--Add the following to your YAML file during deployment to configure any of these options. --```yml -spec: - serverConfigurations: - - name: "Ad Hoc Distributed Queries" - value: 1 - - name: "Default Trace Enabled" - value: 0 - - name: "Database Mail XPs" - value: 1 - - name: "backup compression default" - value: 1 - - name: "cost threshold for parallelism" - value: 50 - - name: "optimize for ad hoc workloads" - value: 1 -``` --If you already have an existing SQL managed instance enabled by Azure Arc, you can run `kubectl edit sqlmi <sqlminame> -n <namespace>` and add the above options into the spec. --Example YAML file: --```yml -apiVersion: sql.arcdata.microsoft.com/v13 -kind: SqlManagedInstance -metadata: - name: sql1 - annotations: - exampleannotation1: exampleannotationvalue1 - exampleannotation2: exampleannotationvalue2 - labels: - examplelabel1: examplelabelvalue1 - examplelabel2: examplelabelvalue2 -spec: - dev: true #options: [true, false] - licenseType: LicenseIncluded #options: [LicenseIncluded, BasePrice]. BasePrice is used for Azure Hybrid Benefits. - tier: GeneralPurpose #options: [GeneralPurpose, BusinessCritical] - serverConfigurations: - - name: "Ad Hoc Distributed Queries" - value: 1 - - name: "Default Trace Enabled" - value: 0 - - name: "Database Mail XPs" - value: 1 - - name: "backup compression default" - value: 1 - - name: "cost threshold for parallelism" - value: 50 - - name: "optimize for ad hoc workloads" - value: 1 - security: - adminLoginSecret: sql1-login-secret - scheduling: - default: - resources: - limits: - cpu: "2" - memory: 4Gi - requests: - cpu: "1" - memory: 2Gi - - primary: - type: LoadBalancer - storage: - backups: - volumes: - - className: azurefile # Backup volumes require a ReadWriteMany (RWX) capable storage class - size: 5Gi - data: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi - datalogs: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi - logs: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi -``` --## Enable SQL Server Agent --SQL Server agent is disabled during a default deployment of SQL Managed Instance enabled by Azure Arc. It can be enabled by running the following command: --```azurecli -az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --agent-enabled true -``` --As an example: --```azurecli -az sql mi-arc update -n sqlinstance1 --k8s-namespace arc --use-k8s --agent-enabled true -``` --## Enable trace flags --Trace flags can be enabled as follows: --```azurecli -az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --trace-flags "3614,1234" -``` |
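To confirm that spec changes such as the `serverConfigurations` above were applied, you can read the custom resource back. A hedged example; the instance name (`sql1`) and namespace (`arc`) are placeholders:

```console
# Show the configured server options on the SqlManagedInstance custom resource.
kubectl get sqlmi sql1 -n arc -o jsonpath='{.spec.serverConfigurations}'
```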
azure-arc | Configure Security Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-security-postgresql.md | - Title: Configure security for your Azure Arc-enabled PostgreSQL server -description: Configure security for your Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Configure security for your Azure Arc-enabled PostgreSQL server --This document describes various aspects related to security of your server group: --- Encryption at rest-- Postgres roles and users management- - General perspectives - - Change the password of the _postgres_ administrative user -- Audit---## Encryption at rest --You can implement encryption at rest either by encrypting the disks on which you store your databases and/or by using database functions to encrypt the data you insert or update. --### Hardware: Linux host volume encryption --Implement system data encryption to secure any data that resides on the disks used by your Azure Arc-enabled Data Services setup. You can read more about this topic: --- [Data encryption at rest](https://wiki.archlinux.org/index.php/Data-at-rest_encryption) on Linux in general -- Disk encryption with LUKS `cryptsetup` command (Linux)(https://www.cyberciti.biz/security/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/) specifically. Since Azure Arc-enabled Data Services runs on the physical infrastructure that you provide, you are in charge of securing the infrastructure.--### Software: Use the PostgreSQL `pgcrypto` extension in your server group --In addition of encrypting the disks used to host your Azure Arc setup, you can configure your Azure Arc-enabled PostgreSQL server to expose mechanisms that your applications can use to encrypt data in your database(s). The `pgcrypto` extension is part of the `contrib` extensions of Postgres and is available in your Azure Arc-enabled PostgreSQL server. You find details about the `pgcrypto` extension [here](https://www.postgresql.org/docs/current/pgcrypto.html). -In summary, with the following commands, you enable the extension, you create it and you use it: --#### Create the `pgcrypto` extension --Connect to your server group with the client tool of your choice and run the standard PostgreSQL query: --```console -CREATE EXTENSION pgcrypto; -``` --> Find details [here](get-connection-endpoints-and-connection-strings-postgresql-server.md) about how to connect. --#### Verify the list the extensions ready to use in your server group --You can verify that the `pgcrypto` extension is ready to use by listing the extensions available in your server group. -Connect to your server group with the client tool of your choice and run the standard PostgreSQL query: --```console -select * from pg_extension; -``` -You should see `pgcrypto` if you enabled and created it with the commands indicated above. --#### Use the `pgcrypto` extension --Now you can adjust the code your applications so that they use any of the functions offered by `pgcrypto`: --- General hashing functions-- Password hashing functions-- PGP encryption functions-- Raw encryption functions-- Random-data functions--For example, to generate hash values. 
Run the command: --```console -select crypt('Les sanglots longs des violons de l_automne', gen_salt('md5')); -``` --Returns the following hash: --```console - crypt -- $1$/9ACBYOV$z52PAGjQ5WTU9xvEECBNv/ -``` --Or, for example: --```console -select hmac('Les sanglots longs des violons de l_automne', 'md5', 'sha256'); -``` --Returns the following hash: --```console - hmac - \xd4e4790b69d2cc8dbce3385ee63272bc7760f1603640bb211a7b864e695570c5 -``` --Or, for example, to store encrypted data like a password: --- An application stores secrets in the following table:-- ```console - create table mysecrets(USERid int, USERname char(255), USERpassword char(512)); - ``` --- Encrypt their password when creating a user:-- ```console - insert into mysecrets values (1, 'Me', crypt('MySecretPasswrod', gen_salt('md5'))); - ``` --- Notice that the password is encrypted:-- ```console - select * from mysecrets; - ``` --Output: --```output -- USERid: 1-- USERname: Me-- USERpassword: $1$Uc7jzZOp$NTfcGo7F10zGOkXOwjHy31-``` --When you connect with the application and pass a password, it looks up in the `mysecrets` table and returns the name of the user if there is a match between the password that is provided to the application and the passwords stored in the table. For example: ---- Pass the wrong password:- - ```console - select USERname from mysecrets where (USERpassword = crypt('WrongPassword', USERpassword)); - ``` -- Output -- ```output - USERname - - (0 rows) - ``` --- Pass the correct password:-- ```console - select USERname from mysecrets where (USERpassword = crypt('MySecretPasswrod', USERpassword)); - ``` -- Output: -- ```output - USERname - - Me - (1 row) - ``` --This small example demonstrates that you can encrypt data at rest (store encrypted data) in Azure Arc-enabled PostgreSQL server using the Postgres `pgcrypto` extension and your applications can use functions offered by `pgcrypto` to manipulate this encrypted data. --## Postgres roles and users management --### General perspectives --To configure roles and users in your Azure Arc-enabled PostgreSQL server, use the standard Postgres way to manage roles and users. For more details, read [here](https://www.postgresql.org/docs/12/user-manag.html). --## Audit --For audit scenarios please configure your server group to use the `pgaudit` extensions of Postgres. For more details about `pgaudit` see [`pgAudit` GitHub project](https://github.com/pgaudit/pgaudit/blob/master/README.md). To enable the `pgaudit` extension in your server group read [Use PostgreSQL extensions](using-extensions-in-postgresql-server.md). --## Use SSL connection --SSL is required for client connections. In connection string, the SSL mode parameter should not be disabled. [Form connection strings](get-connection-endpoints-and-connection-strings-postgresql-server.md#form-connection-strings). --## Related content -- See [`pgcrypto` extension](https://www.postgresql.org/docs/current/pgcrypto.html)-- See [Use PostgreSQL extensions](using-extensions-in-postgresql-server.md) |
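Because SSL is required for client connections, a typical `psql` connection keeps the SSL mode enabled. A small sketch; the host, port, and user are placeholders to be taken from your own connection endpoints:

```console
# Connect with SSL explicitly required; replace host, port, and user with your server group's values.
psql "host=<endpoint> port=<port> dbname=postgres user=postgres sslmode=require"
```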
azure-arc | Configure Transparent Data Encryption Manually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md | - Title: Encrypt a database with transparent data encryption manually in SQL Managed Instance enabled by Azure Arc -description: How-to guide to turn on transparent data encryption in an SQL Managed Instance enabled by Azure Arc ------- Previously updated : 05/22/2022----# Encrypt a database with transparent data encryption on SQL Managed Instance enabled by Azure Arc --This article describes how to enable transparent data encryption on a database created in a SQL Managed Instance enabled by Azure Arc. In this article, the term *managed instance* refers to a deployment of SQL Managed Instance enabled by Azure Arc. --## Prerequisites --Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created and connect to it. --- [Create a SQL Managed Instance enabled by Azure Arc](./create-sql-managed-instance.md)-- [Connect to SQL Managed Instance enabled by Azure Arc](./connect-managed-instance.md)--## Turn on transparent data encryption on a database in the managed instance --Turning on transparent data encryption in the managed instance follows the same steps as SQL Server on-premises. Follow the steps described in [SQL Server's transparent data encryption guide](/sql/relational-databases/security/encryption/transparent-data-encryption#enable-tde). --After you create the necessary credentials, back up any newly created credentials. --## Back up a transparent data encryption credential --When you back up credentials from the managed instance, the credentials are stored within the container. To store credentials on a persistent volume, specify the mount path in the container. For example, `var/opt/mssql/data`. The following example backs up a certificate from the managed instance: --> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. --1. Back up the certificate from the container to `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- BACKUP CERTIFICATE <cert-name> TO FILE = '<cert-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` -- Example: -- ```sql - USE master; - GO -- BACKUP CERTIFICATE MyServerCert TO FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` --2. Copy the certificate from the container to your file system. --### [Windows](#tab/windows) -- ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-certificate-path> > <local-certificate-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.crt > $HOME\sqlcerts\servercert.crt - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt $HOME/sqlcerts/servercert.crt - ``` ----3. Copy the private key from the container to your file system. 
--### [Windows](#tab/windows) - ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-private-key-path> > <local-private-key-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.key > $HOME\sqlcerts\servercert.key - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key $HOME/sqlcerts/servercert.key - ``` ----4. Delete the certificate and private key from the container. -- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path> - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Restore a transparent data encryption credential to a managed instance --Similar to above, to restore the credentials, copy them into the container and run the corresponding T-SQL afterwards. --> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. --1. Copy the certificate from your file system to the container. -### [Windows](#tab/windows) - ```console - type <local-certificate-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-certificate-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.crt | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.crt - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt - ``` ----2. Copy the private key from your file system to the container. -### [Windows](#tab/windows) - ```console - type <local-private-key-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-private-key-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.key | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.key - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key - ``` ----3. Create the certificate using file paths from `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- CREATE CERTIFICATE <certicate-name> - FROM FILE = '<certificate-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` -- Example: -- ```sql - USE master; - GO -- CREATE CERTIFICATE MyServerCertRestored - FROM FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` --4. Delete the certificate and private key from the container. 
-- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path>" - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Related content --[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) |
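The enable step itself is only linked above. As a rough sketch of what it involves (standard SQL Server T-SQL issued through `sqlcmd`; the database name, certificate name, and connection values are placeholders, not Arc-specific requirements):

```console
# Illustrative only: create a database encryption key protected by the server certificate,
# then turn encryption on for the user database.
sqlcmd -S <endpoint>,<port> -U <admin-login> -P '<password>' -Q "
USE [MyDatabase];
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
ALTER DATABASE [MyDatabase] SET ENCRYPTION ON;"
```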
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | - Title: Turn on transparent data encryption in SQL Managed Instance enabled by Azure Arc (preview) -description: How-to guide to turn on transparent data encryption in an SQL Managed Instance enabled by Azure Arc (preview) ------- Previously updated : 06/06/2023----# Enable transparent data encryption on SQL Managed Instance enabled by Azure Arc (preview) --This article describes how to enable and disable transparent data encryption (TDE) at-rest on a SQL Managed Instance enabled by Azure Arc. In this article, the term *managed instance* refers to a deployment of SQL Managed Instance enabled by Azure Arc and enabling/disabling TDE will apply to all databases running on a managed instance. --For more info on TDE, please refer to [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption). --Turning on the TDE feature does the following: --- All existing databases will now be automatically encrypted.-- All newly created databases will get automatically encrypted.---## Prerequisites --Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created and connect to it. --- [Create a SQL Managed Instance enabled by Azure Arc](./create-sql-managed-instance.md)-- [Connect to SQL Managed Instance enabled by Azure Arc](./connect-managed-instance.md)--## Limitations --The following limitations apply when you enable automatic TDE: --- Only General Purpose Tier is supported.-- Failover groups aren't supported.---## Create a managed instance with TDE enabled (Azure CLI) --The following example creates a SQL Managed Instance enabled by Azure Arc with one replica, TDE enabled: --```azurecli -az sql mi-arc create --name sqlmi-tde --k8s-namespace arc --tde-mode ServiceManaged --use-k8s -``` --## Turn on TDE on the managed instance --When TDE is enabled on Arc-enabled SQL Managed Instance, the data service automatically does the following tasks: --1. Adds the service-managed database master key in the `master` database. -2. Adds the service-managed certificate protector. -3. Adds the associated Database Encryption Keys (DEK) on all databases on the managed instance. -4. Enables encryption on all databases on the managed instance. --You can set SQL Managed Instance enabled by Azure Arc TDE in one of two modes: --- Service-managed-- Customer-managed--In service-managed mode, TDE requires the managed instance to use a service-managed database master key as well as the service-managed server certificate. These credentials are automatically created when service-managed TDE is enabled. --In customer-managed mode, TDE uses a service-managed database master key and uses keys you provide for the server certificate. To configure customer-managed mode: --1. Create a certificate. -1. Store the certificate as a secret in the same Kubernetes namespace as the instance. --### Enable --# [Service-managed](#tab/service-managed) --The following section explains how to enable TDE in service-managed mode. --# [Customer-managed](#tab/customer-managed) --The following section explains how to enable TDE in customer-managed mode. 
----# [Azure CLI](#tab/azure-cli/service-managed) --To enable TDE in service managed mode, run the following command: --```azurecli -az sql mi-arc update --tde-mode ServiceManaged -``` --# [Kubernetes native tools](#tab/kubernetes-native/service-managed) --To enable TDE in service-managed mode, run kubectl patch to enable service-managed TDE: --```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' -``` --Example: --```console -kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' -``` --# [Azure CLI](#tab/azure-cli/customer-managed) --To enable TDE in customer-managed mode with Azure CLI: --1. Create a certificate. -- ```console - openssl req -x509 -newkey rsa:2048 -nodes -keyout <key-file> -days 365 -out <cert-file> - ``` --1. Create a secret for the certificate. -- > [!IMPORTANT] - > Store the secret in the same namespace as the managed instance -- ```console - kubectl create secret generic <tde-secret-name> --from-literal=privatekey.pem="$(cat <key-file>)" --from-literal=certificate.pem="$(cat <cert-file>) --namespace <namespace>" - ``` --1. Update and run the following example to enable customer-managed TDE: -- ```azurecli - az sql mi-arc update --tde-mode CustomerManaged --tde-protector-private-key-file <key-file> --tde-protector-public-key-file <cert-file> - ``` --# [Kubernetes native tools](#tab/kubernetes-native/customer-managed) --To enable TDE in customer-managed mode: --1. Create a certificate. -- ```console - openssl req -x509 -newkey rsa:2048 -nodes -keyout <key-file> -days 365 -out <cert-file> - ``` --1. Create a secret for the certificate. -- > [!IMPORTANT] - > Store the secret in the same namespace as the managed instance -- ```console - kubectl create secret generic <tde-secret-name> --from-literal=privatekey.pem="$(cat <key-file>)" --from-literal=certificate.pem="$(cat <cert-file>) --namespace <namespace>" - ``` --1. Run `kubectl patch ...` to enable customer-managed TDE -- ```console - kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "CustomerManaged", "protectorSecret": "<tde-secret-name>" } } } }' - ``` -- Example: -- ```console - kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "CustomerManaged", "protectorSecret": "sqlmi-tde-protector-cert-secret" } } } }' - ``` -----## Turn off TDE on the managed instance --When TDE is disabled on Arc-enabled SQL Managed Instance, the data service automatically does the following tasks: --1. Disables encryption on all databases on the managed instance. -2. Drops the associated DEKs on all databases on the managed instance. -3. Drops the service-managed certificate protector. -4. Drops the service-managed database master key in the `master` database. --# [Azure CLI](#tab/azure-cli) --To disable TDE: --```azurecli -az sql mi-arc update --tde-mode Disabled -``` --# [Kubernetes native tools](#tab/kubernetes-native) --Run kubectl patch to disable service-managed TDE. 
--```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" } } } }' -``` --Example: -```console -kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" } } } }' -``` -----## Back up a TDE credential --When you back up credentials from the managed instance, the credentials are stored within the container. To store credentials on a persistent volume, specify the mount path in the container. For example, `var/opt/mssql/data`. The following example backs up a certificate from the managed instance: --> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. --1. Back up the certificate from the container to `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- BACKUP CERTIFICATE <cert-name> TO FILE = '<cert-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` -- Example: -- ```sql - USE master; - GO -- BACKUP CERTIFICATE MyServerCert TO FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` --2. Copy the certificate from the container to your file system. -- ### [Windows](#tab/windows) -- ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-certificate-path> > <local-certificate-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.crt > $HOME\sqlcerts\servercert.crt - ``` -- ### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt $HOME/sqlcerts/servercert.crt - ``` -- --3. Copy the private key from the container to your file system. -- ### [Windows](#tab/windows) -- ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-private-key-path> > <local-private-key-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.key > $HOME\sqlcerts\servercert.key - ``` -- ### [Linux](#tab/linux) -- ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key $HOME/sqlcerts/servercert.key - ``` -- --4. Delete the certificate and private key from the container. -- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path> - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Restore a TDE credential to a managed instance --Similar to above, to restore the credentials, copy them into the container and run the corresponding T-SQL afterwards. ----> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. 
-> To restore database backups that have been taken before enabling TDE, you would need to disable TDE on the SQL Managed Instance, restore the database backup and enable TDE again. --1. Copy the certificate from your file system to the container. -- ### [Windows](#tab/windows) -- ```console - type <local-certificate-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-certificate-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.crt | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.crt - ``` -- ### [Linux](#tab/linux) -- ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt - ``` -- --2. Copy the private key from your file system to the container. -- # [Windows](#tab/windows) - - ```console - type <local-private-key-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-private-key-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.key | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.key - ``` -- ### [Linux](#tab/linux) -- ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key - ``` - --3. Create the certificate using file paths from `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- CREATE CERTIFICATE <certicate-name> - FROM FILE = '<certificate-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` -- Example: -- ```sql - USE master; - GO -- CREATE CERTIFICATE MyServerCertRestored - FROM FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` --4. Delete the certificate and private key from the container. -- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path> - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Related content --[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) |
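To confirm that databases were encrypted (or decrypted) after switching TDE modes, `sys.databases` gives a quick check. A hedged example; the connection values are placeholders:

```console
# is_encrypted = 1 indicates TDE is enabled for that database.
sqlcmd -S <endpoint>,<port> -U <admin-login> -P '<password>' -Q "SELECT name, is_encrypted FROM sys.databases;"
```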
azure-arc | Connect Active Directory Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-active-directory-sql-managed-instance.md | - Title: Connect to AD-integrated SQL Managed Instance enabled by Azure Arc -description: Connect to AD-integrated SQL Managed Instance enabled by Azure Arc ------ Previously updated : 10/11/2022----# Connect to AD-integrated SQL Managed Instance enabled by Azure Arc --This article describes how to connect to SQL Managed Instance endpoint using Active Directory (AD) authentication. Before you proceed, make sure you have an AD-integrated SQL Managed Instance enabled by Azure Arc deployed already. --See [Tutorial ΓÇô Deploy AD-integrated SQL Managed Instance](deploy-active-directory-sql-managed-instance.md) to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication enabled. --> [!NOTE] -> Ensure that a DNS record for the SQL endpoint is created in Active Directory DNS servers before continuing on this page. --## Create Active Directory logins in SQL Managed Instance --Once SQL Managed Instance is successfully deployed, you will need to provision Active Directory logins in SQL Server. --To provision logins, first connect to the SQL Managed Instance using the SQL login with administrative privileges and run the following T-SQL: --```sql -CREATE LOGIN [<NetBIOS domain name>\<AD account name>] FROM WINDOWS; -GO -``` --The following example creates a login for an Active Directory account named `admin`, in the domain named `contoso.local`, with NetBIOS domain name as `CONTOSO`: --```sql -CREATE LOGIN [CONTOSO\admin] FROM WINDOWS; -GO -``` --## Connect to SQL Managed Instance enabled by Azure Arc --From your domain joined Windows-based client machine or a Linux-based domain aware machine, you can use `sqlcmd` utility, or open [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio (ADS)](/azure-data-studio/download-azure-data-studio) to connect to the instance with AD authentication. --A domain-aware Linux-based machine is one where you are able to use Kerberos authentication using kinit. Such machine should have /etc/krb5.conf file set to point to the Active Directory domain (realm) being used. It should also have /etc/resolv.conf file set such that one can run DNS lookups against the Active Directory domain. ---### Connect from Linux/Mac OS --To connect from a Linux/Mac OS client, authenticate to Active Directory using the kinit command and then use sqlcmd tool to connect to the SQL Managed Instance. --```console -kinit <username>@<REALM> -sqlcmd -S <Endpoint DNS name>,<Endpoint port number> -E -``` --For example, to connect with the CONTOSO\admin account to the SQL managed instance with endpoint `sqlmi.contoso.local` at port `31433`, use the following command: --```console -kinit admin@CONTOSO.LOCAL -sqlcmd -S sqlmi.contoso.local,31433 -E -``` --In the example, `-E` specifies Active Directory integrated authentication. --## Connect SQL Managed Instance from Windows --To log in to SQL Managed Instance with your current Windows Active Directory login, run the following command: --```console -sqlcmd -S <DNS name for master instance>,31433 -E -``` --## Connect to SQL Managed Instance from SSMS --![Connect with SSMS](media/active-directory-deployment/connect-with-ssms.png) --## Connect to SQL Managed Instance from ADS --![Connect with ADS](media/active-directory-deployment/connect-with-ads.png) |
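Before running `sqlcmd -E` from a Linux or macOS client, it can help to confirm that a Kerberos ticket was actually obtained. A small sketch using the example domain and endpoint from above:

```console
# Obtain a ticket for the AD account, confirm it with klist, then connect with integrated authentication.
kinit admin@CONTOSO.LOCAL
klist
sqlcmd -S sqlmi.contoso.local,31433 -E
```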
azure-arc | Connect Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-managed-instance.md | - Title: Connect to SQL Managed Instance enabled by Azure Arc -description: Connect to SQL Managed Instance enabled by Azure Arc ------ Previously updated : 07/30/2021---# Connect to SQL Managed Instance enabled by Azure Arc --This article explains how you can connect to your SQL Managed Instance enabled by Azure Arc. ---## View SQL Managed Instance enabled by Azure Arc --To view instance and the external endpoints, use the following command: --```azurecli -az sql mi-arc list --k8s-namespace <namespace> --use-k8s -o table -``` --Output should look like this: --```console -Name PrimaryEndpoint Replicas State - - - - -sqldemo 10.240.0.107,1433 1/1 Ready -``` --If you are using AKS or kubeadm or OpenShift etc., you can copy the external IP and port number from here and connect to it using your favorite tool for connecting to a SQL Sever/Azure SQL instance such as Azure Data Studio or SQL Server Management Studio. However, if you are using the quick start VM, see below for special information about how to connect to that VM from outside of Azure. --> [!NOTE] -> Your corporate policies may block access to the IP and port, especially if this is created in the public cloud. --## Connect --Connect with Azure Data Studio, SQL Server Management Studio, or SQLCMD --Open Azure Data Studio and connect to your instance with the external endpoint IP address and port number above. If you are using an Azure VM you will need the _public_ IP address, which is identifiable using the [Special note about Azure virtual machine deployments](#special-note-about-azure-virtual-machine-deployments). --For example: --- Server: 52.229.9.30,30913-- Username: sa-- Password: your specified SQL password at provisioning time--> [!NOTE] -> You can use Azure Data Studio [view the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards). --> [!NOTE] -> In order to connect to a managed instance that was created using a Kubernetes manifest, the username and password need to be provided to sqlcmd in base64 encoded form. --To connect using SQLCMD or Linux or Windows you can use a command like this. Enter the SQL password when prompted: --```bash -sqlcmd -S 52.229.9.30,30913 -U sa -``` --## Special note about Azure virtual machine deployments --If you are using an Azure virtual machine, then the endpoint IP address will not show the public IP address. To locate the external IP address, use the following command: --```azurecli -az network public-ip list -g azurearcvm-rg --query "[].{PublicIP:ipAddress}" -o table -``` --You can then combine the public IP address with the port to make your connection. --You may also need to expose the port of the sql instance through the network security gateway (NSG). To allow traffic through the (NSG) you will need to add a rule which you can do using the following command. --To set a rule you will need to know the name of your NSG which you can find out using the command below: --```azurecli -az network nsg list -g azurearcvm-rg --query "[].{NSGName:name}" -o table -``` --Once you have the name of the NSG, you can add a firewall rule using the following command. The example values here create an NSG rule for port 30913 and allows connection from **any** source IP address. This is not a security best practice! 
You can lock things down better by specifying a `--source-address-prefixes` value that is specific to your client IP address or an IP address range that covers your team's or organization's IP addresses. --Replace the value of the `--destination-port-ranges` parameter below with the port number you got from the `az sql mi-arc list` command above. --```azurecli -az network nsg rule create -n db_port --destination-port-ranges 30913 --source-address-prefixes '*' --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow port through for db access' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*' -``` --## Related content --- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)-- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
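For the note above about instances created from a Kubernetes manifest, the credentials passed to `sqlcmd` need to be base64 encoded first. A minimal sketch on Linux; the login name is a placeholder:

```console
# Encode the login and password without a trailing newline, then pass the encoded values to sqlcmd -U/-P.
echo -n 'sa' | base64
echo -n '<your-password>' | base64
```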
azure-arc | Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md | - Title: Connectivity modes and requirements -description: Explains Azure Arc-enabled data services connectivity options for from your environment to Azure ------ Previously updated : 07/19/2023----# Connectivity modes and requirements --This article describes the connectivity modes available for Azure Arc-enabled data services, and their respective requirements. --## Connectivity modes --There are multiple options for the degree of connectivity from your Azure Arc-enabled data services environment to Azure. As your requirements vary based on business policy, government regulation, or the availability of network connectivity to Azure, you can choose from the following connectivity modes. --Azure Arc-enabled data services provide you the option to connect to Azure in two different *connectivity modes*: --- Directly connected -- Indirectly connected--The connectivity mode provides you the flexibility to choose how much data is sent to Azure and how users interact with the Arc Data Controller. Depending on the connectivity mode that is chosen, some functionality of Azure Arc-enabled data services might or might not be available. --Importantly, if the Azure Arc-enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on, all in the Azure portal. If the Azure Arc-enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and PostgreSQL servers that you have deployed and the details about them, but you can't take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the appropriate CLI, or Kubernetes native tools like kubectl. --Additionally, Microsoft Entra ID and Azure Role-Based Access Control can be used in the directly connected mode only because there's a dependency on a continuous and direct connection to Azure to provide this functionality. --Some Azure-attached services are only available when they can be directly reached such as Container Insights, and backup to blob storage. --||**Indirectly connected**|**Directly connected**|**Never connected**| -||||| -|**Description**|Indirectly connected mode offers most of the management services locally in your environment with no direct connection to Azure. A minimal amount of data must be sent to Azure for inventory and billing purposes _only_. It's exported to a file and uploaded to Azure at least once per month. No direct or continuous connection to Azure is required. Some features and services that require a connection to Azure won't be available.|Directly connected mode offers all of the available services when a direct connection can be established with Azure. 
Connections are always initiated _from_ your environment to Azure and use standard ports and protocols such as HTTPS/443.|No data can be sent to or from Azure in any way.| -|**Current availability**| Available |Available|Not currently supported.| -|**Typical use cases**|On-premises data centers that donΓÇÖt allow connectivity in or out of the data region of the data center due to business or regulatory compliance policies or out of concerns of external attacks or data exfiltration. Typical examples: Financial institutions, health care, government. <br/><br/>Edge site locations where the edge site doesnΓÇÖt typically have connectivity to the Internet. Typical examples: oil/gas or military field applications. <br/><br/>Edge site locations that have intermittent connectivity with long periods of outages. Typical examples: stadiums, cruise ships. | Organizations who are using public clouds. Typical examples: Azure, AWS or Google Cloud.<br/><br/>Edge site locations where Internet connectivity is typically present and allowed. Typical examples: retail stores, manufacturing.<br/><br/>Corporate data centers with more permissive policies for connectivity to/from their data region of the datacenter to the Internet. Typical examples: Nonregulated businesses, small/medium sized businesses|Truly "air-gapped" environments where no data under any circumstances can come or go from the data environment. Typical examples: top secret government facilities.| -|**How data is sent to Azure**|There are three options for how the billing and inventory data can be sent to Azure:<br><br> 1) Data is exported out of the data region by an automated process that has connectivity to both the secure data region and Azure.<br><br>2) Data is exported out of the data region by an automated process within the data region, automatically copied to a less secure region, and an automated process in the less secure region uploads the data to Azure.<br><br>3) Data is manually exported by a user within the secure region, manually brought out of the secure region, and manually uploaded to Azure. <br><br>The first two options are an automated continuous process that can be scheduled to run frequently so there's minimal delay in the transfer of data to Azure subject only to the available connectivity to Azure.|Data is automatically and continuously sent to Azure.|Data is never sent to Azure.| --## Feature availability by connectivity mode --|**Feature**|**Indirectly connected**|**Directly connected**| -|||| -|**Automatic high availability**|Supported|Supported| -|**Self-service provisioning**|Supported<br/>Use Azure Data Studio, the appropriate CLI, or Kubernetes native tools like Helm, `kubectl`, or `oc`, or use Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates. -|**Elastic scalability**|Supported|Supported<br/>| -|**Billing**|Supported<br/>Billing data is periodically exported out and sent to Azure.|Supported<br/>Billing data is automatically and continuously sent to Azure and reflected in near real time. | -|**Inventory management**|Supported<br/>Inventory data is periodically exported out and sent to Azure.<br/><br/>Use client tools like Azure Data Studio, Azure Data CLI, or `kubectl` to view and manage inventory locally.|Supported<br/>Inventory data is automatically and continuously sent to Azure and reflected in near real time. 
As such, you can manage inventory directly from the Azure portal.| -|**Automatic upgrades and patching**|Supported<br/>The data controller must either have direct access to the Microsoft Container Registry (MCR) or the container images need to be pulled from MCR and pushed to a local, private container registry that the data controller has access to.|Supported| -|**Automatic backup and restore**|Supported<br/>Automatic local backup and restore.|Supported<br/>In addition to automated local backup and restore, you can _optionally_ send backups to Azure blob storage for long-term, off-site retention.| -|**Monitoring**|Supported<br/>Local monitoring using Grafana and Kibana dashboards.|Supported<br/>In addition to local monitoring dashboards, you can _optionally_ send monitoring data and logs to Azure Monitor for at-scale monitoring of multiple sites in one place. | -|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD isn't currently supported) for connectivity to database instances. Use Kubernetes authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Microsoft Entra ID.| -|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can use Microsoft Entra ID and Azure RBAC.| --## Connectivity requirements --**Some functionality requires a connection to Azure.** --**All communication with Azure is always initiated from your environment.** This is true even for operations that are initiated by a user in the Azure portal. In that case, there is effectively a task, which is queued up in Azure. An agent in your environment initiates the communication with Azure to see what tasks are in the queue, runs the tasks, and reports back the status/completion/fail to Azure. --|**Type of Data**|**Direction**|**Required/Optional**|**Additional Costs**|**Mode Required**|**Notes**| -||||||| -|**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment which can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. If the deployment environment doesnΓÇÖt have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This also applies to automated updates.| -|**Resource inventory**|Customer environment -> Azure|Required|No|Indirect or direct|An inventory of data controllers, database instances (PostgreSQL and SQL) is kept in Azure for billing purposes and also for purposes of creating an inventory of all data controllers and database instances in one place which is especially useful if you have more than one environment with Azure Arc data services. As instances are provisioned, deprovisioned, scaled out/in, scaled up/down the inventory is updated in Azure.| -|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. 
| -|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/))|Indirect or direct|You might want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.| -|**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Azure RBAC, then local Kubernetes RBAC can be used.| -|**Microsoft Entra ID (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you might already be paying for Microsoft Entra ID|Direct only|If you want to use Microsoft Entra ID for authentication, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Microsoft Entra ID for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**| -|**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. | -|**Azure backup - long term retention (Future)**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You might want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. | -|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the appropriate CLI. In directly connected mode, you can also provision and make configuration changes from the Azure portal.| --## Details on internet addresses, ports, encryption, and proxy server support ---## Additional network requirements --In addition, resource bridge requires [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md#azure-arc-enabled-kubernetes-endpoints). |
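For environments that can't pull directly from MCR, the container image row above describes mirroring images into a private registry. The following is a minimal, illustrative sketch of that mirroring step; the repository path, image name, tag, and registry host are placeholders rather than actual Azure Arc image names.

```console
# Illustrative only: mirror an image from the Microsoft Container Registry (MCR)
# into a private registry that the data controller can reach.
# <repository>, <image>, <tag>, and myregistry.example.com are placeholders.
docker pull mcr.microsoft.com/<repository>/<image>:<tag>
docker tag mcr.microsoft.com/<repository>/<image>:<tag> myregistry.example.com/<repository>/<image>:<tag>
docker push myregistry.example.com/<repository>/<image>:<tag>
```

At data controller creation time, configure the deployment to pull from the private registry instead of MCR, as noted in the table above.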
azure-arc | Create Complete Managed Instance Directly Connected | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md | - Title: Quickstart - Deploy Azure Arc-enabled data services - directly connected mode - Azure portal -description: Demonstrates how to deploy Azure Arc-enabled data services from beginning, including a Kubernetes cluster. Finishes with an instance of Azure SQL Managed Instance. ------ Previously updated : 12/09/2021----# Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal --This article demonstrates how to deploy Azure Arc-enabled data services in directly connected mode from the Azure portal. --To deploy in indirectly connected mode, see [Quickstart: Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI](create-complete-managed-instance-indirectly-connected.md). --When you complete the steps in this article, you will have: --- An Arc-enabled Azure Kubernetes cluster.-- A data controller in directly connected mode.-- An instance of SQL Managed Instance enabled by Azure Arc.-- A connection to the instance with Azure Data Studio.--Azure Arc allows you to run Azure data services on-premises, at the edge, and in public clouds via Kubernetes. Deploy SQL Managed Instance and PostgreSQL server (preview) data services with Azure Arc. The benefits of using Azure Arc include staying current with constant service patches, elastic scale, self-service provisioning, unified management, and support for disconnected mode. --## Install client tools --First, install the [client tools](install-client-tools.md) needed on your machine. To complete the steps in this article, you will use the following tools: -* Azure Data Studio -* The Azure Arc extension for Azure Data Studio -* Kubernetes CLI -* Azure CLI -* `arcdata` extension for Azure CLI. --In addition, you need the following additional extensions to connect the cluster to Azure: --* connectedk8s -* k8s-extension ---## Access your Kubernetes cluster --After installing the client tools, you need access to a Kubernetes cluster. You can create a Kubernetes cluster with [`az aks create`](/cli/azure/aks#az-aks-create), or you can follow the steps below to create the cluster in the Azure portal. --### Create a cluster --To quickly create a Kubernetes cluster, use Azure Kubernetes Services (AKS). --1. Log in to [Azure portal](https://portal.azure.com). -1. In the search resources field at the top of the portal, type **Kubernetes**, and select **Kubernetes services**. - Azure takes you to Kubernetes services. -1. Select **Create** > **Create Kubernetes cluster**. -1. Under **Basics**, - 1. Specify your **Subscription**. - 1. Create a resource group, or specify an existing resource group. - 2. For **Cluster preset configuration**, review the available options and select for your workload. For a development/test proof of concept, use **Dev/Test**. Select a configuration with at least 4 vCPUs. - 3. Specify a cluster name. - 4. Specify a region. - 5. Under **Availability zones**, remove all selected zones. You should not specify any zones. - 6. Verify the Kubernetes version. For minimum supported version, see [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md). - 7. Under **Node size**, select a node size for your cluster based on the [Sizing guidance](sizing-guidance.md). - 8. For **Scale method**, select **Manual**. -1. Click **Review + create**. -1. 
Click **Create**. --Azure creates your Kubernetes cluster. --When the cluster is completed, the Azure updates the portal to show the completed status: ---### Connect to the cluster --After creating the cluster, connect to the cluster through the Azure CLI. --1. Log in to Azure - if not already. -- ```azurecli - az login - ``` -- Follow the steps to connect. --1. Get the credentials to connect to the cluster. -- The scripts in this article use angle brackets `< ... >` to identify values you will need to replace before you run the scripts. Do not include the angle brackets. -- ```azurecli - az aks get-credentials --resource-group <resource_group_name> --name <cluster_name> - ``` -- Use the resource group and cluster name that you defined when you created the cluster in the portal. -- Azure CLI returns the following output. -- ```output - Merged "<cluster name>" as current context in C:<current path>\.kube\config - ``` --1. Confirm that your cluster is running. Use the following command: -- ```azurecli - kubectl get nodes - ``` -- The command returns a list of the running nodes. -- ```output - NAME STATUS ROLES AGE VERSION - aks-agentpool-37241625-vmss000000 Ready agent 3h10m v1.20.9 - aks-agentpool-37241625-vmss000001 Ready agent 3h10m v1.20.9 - aks-agentpool-37241625-vmss000002 Ready agent 3h9m v1.20.9 - ``` --### Arc enable the Kubernetes cluster --Now that the cluster is running, connect the cluster to Azure. When you connect a cluster to Azure, you enable it for Azure Arc. Connecting the cluster to Azure allows you to view and manage the cluster. In addition, you can deploy and manage additional services such as Arc-enabled data services on the cluster directly from Azure portal. --Use `az connectedk8s connect` to connect the cluster to Azure: --```azurecli -az connectedk8s connect --resource-group <resource group> --name <cluster name> -``` --After the connect command completes successfully, you can view the shadow object in the Azure portal. The shadow object is the representation of the Azure Arc-enabled cluster. --1. In the Azure portal, locate the resource group. One way to find the resource group is to type the resource group name in search on the portal. The portal displays a link to the resource group below the search box. Click the resource group link. -1. In the resource group, under **Overview** you can see the Kubernetes cluster, and the shadow object. See the following image: -- :::image type="content" source="media/create-complete-managed-instance-directly-connected/azure-arc-resources.png" alt-text="The Kubernetes - Azure Arc item type is the shadow resource." lightbox="media/create-complete-managed-instance-directly-connected/azure-arc-resources-expanded.png"::: -- The shadow resource is the resource type **Kubernetes - Azure Arc** in the image above. The other resource is the **Kubernetes service** cluster. Both resources have the same name. --## Create the data controller --The next step is to create the data controller in directly connected mode via the Azure portal. Use the same subscription and resource group that you used to [create a cluster](#create-a-cluster). --1. In the portal, locate the resource group from the previous step. -1. From the search bar in Azure portal, search for *Azure Arc data controllers*, and select **+ Create**. -1. Select **Azure Arc-enabled Kubernetes cluster (Direct connectivity mode)**. Select **Next: Data controller details**. -1. Specify a name for the data controller. -1. Specify a custom location (namespace). 
 -- :::image type="content" source="media/create-complete-managed-instance-directly-connected/custom-location.png" alt-text="Create a new custom location and specify a namespace."::: --1. For **Kubernetes configuration template**, specify *azure-arc-aks-premium-storage* because this example uses an AKS cluster. -2. For **Service type**, select **Load balancer**. -3. Set a user name and password for the metrics and log services. -- The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters. --Follow the instructions in the portal to complete the specification and deploy the data controller. --To view data controllers, run the following command: --```console -kubectl get datacontrollers -A -``` --### Monitor deployment --You can also monitor the creation of the data controller with the following command: --```console -kubectl get datacontroller --namespace <namespace> -``` --The command returns the state of the data controller. For example, the following results indicate that the deployment is in progress: --```output -NAME STATE -<namespace> DeployingMonitoring -``` --Once the state of the data controller is `Ready`, this step is complete. For example: --```output -NAME STATE -<namespace> Ready -``` --## Deploy SQL Managed Instance enabled by Azure Arc --1. In the portal, locate the resource group. -1. In the resource group, select **Create**. -1. Enter *managed instance*. The Azure portal returns resource types with a matching name. -1. Select **Azure SQL Managed Instance - Azure Arc**. -1. Click **Create**. -1. Specify your resource group and custom location. Use the same values that you set when you [created the data controller](#create-the-data-controller). -1. Set the **LoadBalancer** service type. -1. Provide credentials (login and password) for the managed instance administrator account. -1. Click **Review and Create**. -1. Click **Create**. --Azure creates the managed instance on the Azure Arc-enabled Kubernetes cluster. --To know when the instance has been created, run: --```console -kubectl get sqlmi -n <namespace> -``` --Once the state of the managed instance is `Ready`, this step is complete. For example: --```output -NAME STATE -<namespace> Ready -``` ---## Connect with Azure Data Studio --To connect with Azure Data Studio, see [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md). |
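Before you open Azure Data Studio, you may want to look up the instance's endpoint. The following is a minimal sketch, assuming the `arcdata` Azure CLI extension installed earlier; replace the placeholders with the instance name and namespace you used above.

```azurecli
# Show the managed instance, including its status and endpoint information,
# so you can use it in the Azure Data Studio connection dialog.
az sql mi-arc show --name <instance name> --k8s-namespace <namespace> --use-k8s
```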
azure-arc | Create Complete Managed Instance Indirectly Connected | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md | - Title: Quickstart - Deploy Azure Arc-enabled data services -description: Quickstart - deploy Azure Arc-enabled data services in indirectly connected mode. Includes a Kubernetes cluster. Uses Azure CLI. ------ Previously updated : 09/20/2022----# Quickstart: Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI --In this quickstart, you will deploy Azure Arc-enabled data services in indirectly connected mode from with the Azure CLI. --When you complete the steps in this article, you will have: --- A Kubernetes cluster on Azure Kubernetes Services (AKS).-- A data controller in indirectly connected mode.-- SQL Managed Instance enabled by Azure Arc.-- A connection to the instance with Azure Data Studio.--Use these objects to experience Azure Arc-enabled data services. --Azure Arc allows you to run Azure data services on-premises, at the edge, and in public clouds via Kubernetes. Deploy SQL Managed Instance and PostgreSQL server data services (preview) with Azure Arc. The benefits of using Azure Arc include staying current with constant service patches, elastic scale, self-service provisioning, unified management, and support for disconnected mode. --## Prerequisites --If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. --To complete the task in this article, install the required [client tools](install-client-tools.md). Specifically, you will use the following tools: --* Azure Data Studio -* The Azure Arc extension for Azure Data Studio -* Kubernetes CLI -* Azure CLI -* `arcdata` extension for Azure CLI --## Set metrics and logs service credentials --Azure Arc-enabled data services provides: -- Log services and dashboards with Kibana-- Metrics services and dashboards with Grafana--These services require a credential for each service. The credential is a username and a password. For this step, set an environment variable with the values for each credential. --The environment variables include passwords for log and metric services. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters. --Run the following command to set the credential. --### [Linux](#tab/linux) --```console -export AZDATA_LOGSUI_USERNAME=<username for logs> -export AZDATA_LOGSUI_PASSWORD=<password for logs> -export AZDATA_METRICSUI_USERNAME=<username for metrics> -export AZDATA_METRICSUI_PASSWORD=<password for metrics> -``` --### [Windows / PowerShell](#tab/powershell) --```powershell -$ENV:AZDATA_LOGSUI_USERNAME="<username for logs>" -$ENV:AZDATA_LOGSUI_PASSWORD="<password for logs>" -$ENV:AZDATA_METRICSUI_USERNAME="<username for metrics>" -$ENV:AZDATA_METRICSUI_PASSWORD="<password for metrics>" -``` ----## Create and connect to your Kubernetes cluster --After you install the client tools, and configure the environment variables, you need access to a Kubernetes cluster. The steps in this section deploy a cluster on Azure Kubernetes Service (AKS). ---Follow the steps below to deploy the cluster from the Azure CLI. --1. Create the resource group -- Create a resource group for the cluster. For location, specify a supported region. 
For Azure Arc-enabled data services, supported regions are listed in the [Overview](overview.md#supported-regions). -- ```azurecli - az group create --name <resource_group_name> --location <location> - ``` -- To learn more about resource groups, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md). --1. Create Kubernetes cluster -- Create the cluster in the resource group that you created previously. -- Select a node size that meets your requirements. See [Sizing guidance](sizing-guidance.md). -- The following example creates a three-node cluster, with monitoring enabled, and generates public and private key files if missing. -- ```azurecli - az aks create --resource-group <resource_group_name> --name <cluster_name> --node-count 3 --enable-addons monitoring --generate-ssh-keys --node-vm-size <node size> - ``` -- For command details, see [az aks create](/cli/azure/aks#az-aks-create). -- For a complete demonstration, including an application on a single-node Kubernetes cluster, go to [Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli). --1. Get credentials -- You need credentials to connect to your cluster. -- Run the following command to get the credentials: -- ```azurecli - az aks get-credentials --resource-group <resource_group_name> --name <cluster_name> - ``` --1. Verify cluster -- To confirm the cluster is running and that you have the current connection context, run: -- ```console - kubectl get nodes - ``` -- The command returns a list of nodes. For example: -- ```output - NAME STATUS ROLES AGE VERSION - aks-nodepool1-34164736-vmss000000 Ready agent 4h28m v1.20.9 - aks-nodepool1-34164736-vmss000001 Ready agent 4h28m v1.20.9 - aks-nodepool1-34164736-vmss000002 Ready agent 4h28m v1.20.9 - ``` --## Create the data controller --Now that our cluster is up and running, we are ready to create the data controller in indirectly connected mode. --The CLI command to create the data controller is: --```azurecli -az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace <namespace> --name <data controller name> --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --use-k8s -``` --### Monitor deployment --You can also monitor the creation of the data controller with the following command: --```console -kubectl get datacontroller --namespace <namespace> -``` --The command returns the state of the data controller. For example, the following results indicate that the deployment is in progress: --```output -NAME STATE -<namespace> DeployingMonitoring -``` --Once the state of the data controller is `Ready`, this step is complete. For example: --```output -NAME STATE -<namespace> Ready -``` --## Deploy an instance of SQL Managed Instance enabled by Azure Arc --Now, create the managed instance for indirectly connected mode with the following command: --```azurecli -az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s -``` --To know when the instance has been created, run: --```console -kubectl get sqlmi -n <namespace> -``` --Once the state of the managed instance is `Ready`, this step is complete. For example: --```output -NAME STATE -<namespace> Ready -``` --## Connect to managed instance on Azure Data Studio --To connect with Azure Data Studio, see [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md). 
--## Upload usage and metrics to Azure portal --If you wish, you can [Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md). A minimal CLI sketch of this flow appears at the end of this article. --## Clean up resources --When you're done with the resources you created in this article, delete them. --Follow the steps in [Delete data controller in indirectly connected mode](uninstall-azure-arc-data-controller.md#delete-data-controller-in-indirectly-connected-mode). --## Related content --> [!div class="nextstepaction"] -> [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md). |
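As a complement to the upload step above, the following is a minimal sketch of the export-and-upload flow for indirectly connected mode, assuming the `arcdata` extension and the namespace used in this quickstart; the output file name is only an example.

```azurecli
# Export usage data from the data controller to a local file, then upload
# that file to Azure. Run these commands from a machine that has connectivity
# to both the Kubernetes cluster and Azure.
az arcdata dc export --type usage --path usage.json --k8s-namespace <namespace> --use-k8s
az arcdata dc upload --path usage.json
```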
azure-arc | Create Custom Configuration Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-custom-configuration-template.md | - Title: Create custom configuration templates -description: Create custom configuration templates ------- Previously updated : 07/30/2021---# Create custom configuration templates --This article explains how to create a custom configuration template for Azure Arc-enabled data controller. --One of required parameters during deployment of a data controller in indirectly connected mode, is the `az arcdata dc create --profile-name` parameter. Currently, the available list of built-in profiles can be found via running the query: --```azurecli -az arcdata dc config list -``` --These profiles are template JSON files that have various settings for the Azure Arc-enabled data controller such as container registry and repository settings, storage classes for data and logs, storage size for data and logs, security, service type etc. and can be customized to your environment. --However, in some cases, you may want to customize those configuration templates to meet your requirements and pass the customized configuration template using the `--path` parameter to the `az arcdata dc create` command rather than pass a preconfigured configuration template using the `--profile-name` parameter. --## Create control.json file --Run `az arcdata dc config init` to initiate a control.json file with pre-defined settings based on your distribution of Kubernetes cluster. -For instance, a template control.json file for a Kubernetes cluster based on the `azure-arc-kubeadm` template in a subdirectory called `custom` in the current working directory can be created as follows: --```azurecli -az arcdata dc config init --source azure-arc-kubeadm --path custom -``` -The created control.json file can be edited in any editor such as Visual Studio Code to customize the settings appropriate for your environment. --## Use custom control.json file to deploy Azure Arc-enabled data controller using Azure CLI (az) --Once the template file is created, the file can be applied during Azure Arc-enabled data controller create command as follows: --```azurecli -az arcdata dc create --path ./custom --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --k8s-namespace <namespace> --use-k8s --#Example: -#az arcdata dc create --path ./custom --name arc --subscription <subscription ID> --resource-group my-resource-group --location eastus --connectivity-mode indirect --k8s-namespace <namespace> --use-k8s -``` --## Use custom control.json file for deploying Azure Arc data controller using Azure portal --From the Azure Arc data controller create screen, select "Configure custom template" under Custom template. This will invoke a blade to provide custom settings. In this blade, you can either type in the values for the various settings, or upload a pre-configured control.json file directly. --After ensuring the values are correct, click Apply to proceed with the Azure Arc data controller deployment. --## Related content --* For direct connectivity mode: [Deploy data controller - direct connect mode (prerequisites)](create-data-controller-direct-prerequisites.md) --* For indirect connectivity mode: [Create data controller using CLI](create-data-controller-indirect-cli.md) |
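Rather than hand-editing control.json, you can also script individual settings with `az arcdata dc config replace`. The following is a minimal sketch, assuming the `custom` directory created above; `mystorageclass` is a placeholder for a storage class that exists in your cluster.

```azurecli
# Update the data and logs storage classes in the custom control.json.
az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass"
az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass"
```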
azure-arc | Create Data Controller Direct Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-azure-portal.md | - Title: Deploy Azure Arc data controller from Azure portal| Direct connect mode -description: Explains how to deploy the data controller in direct connect mode from Azure portal. ------ Previously updated : 11/03/2021----# Create Azure Arc data controller from Azure portal - Direct connectivity mode --This article describes how to deploy the Azure Arc data controller in direct connect mode from the Azure portal. --## Complete prerequisites --Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md). --## Deploy Azure Arc data controller --Azure Arc data controller create flow can be launched from the Azure portal in one of the following ways: --- From the search bar in Azure portal, search for "Azure Arc data controllers", and select "+ Create"-- From the Overview page of your Azure Arc-enabled Kubernetes cluster,- - Select "Extensions " under Settings. - - Select "Add" from the Extensions overview page and then select "Azure Arc data controller" - - Select Create from the Azure Arc data controller marketplace gallery - -Either of these actions should bring you to the Azure Arc data controller prerequisites page of the create flow. --- Ensure the Azure Arc-enabled Kubernetes cluster (Direct connectivity mode) option is selected. Select "Next : Data controller details"-- In the **Data controller details** page:- - Select the Azure Subscription and Resource group where the Azure Arc data controller will be projected to. - - Enter a **name** for the Data controller - - Select a pre-created **Custom location** or select "Create new" to create a new custom location. If you choose to create a new custom location, enter a name for the new custom location, select the Azure Arc-enabled Kubernetes cluster from the dropdown, and then enter a namespace to be associated with the new custom location, and finally select Create in the Create new custom location window. Learn more about [custom locations](../kubernetes/conceptual-custom-locations.md) - - **Kubernetes configuration** - Select a Kubernetes configuration template that best matches your Kubernetes distribution from the dropdown. If you choose to use your own settings or have a custom profile you want to use, select the Custom template option from the dropdown. In the blade that opens on the right side, enter the details for Docker credentials, repository information, Image tag, Image pull policy, infrastructure type, storage settings for data, logs and their sizes, Service type, and ports for controller and management proxy. Select Apply when all the required information is provided. You can also choose to upload your own template file by selecting the "Upload a template (JSON) from the top of the blade. If you use custom settings and would like to download a copy of those settings, use the "Download this template (JSON)" to do so. Learn more about [custom configuration profiles](create-custom-configuration-template.md). - - Select the appropriate **Service Type** for your environment - - **Metrics and Logs Dashboard Credentials** - Enter the credentials for the Grafana and Kibana dashboards - - Select the "Next: Additional settings" button to proceed forward after all the required information is provided. 
 -- In the **Additional Settings** page:- - **Metrics upload:** Select this option to automatically upload your metrics to Azure Monitor so you can aggregate and analyze metrics, raise alerts, send notifications, or trigger automated actions. The required **Monitoring Metrics Publisher** role will be granted to the Managed Identity of the extension. - - **Logs upload:** Select this option to automatically upload logs to an existing Log Analytics workspace. Enter the Log Analytics workspace ID and the Log Analytics shared access key. - - Select "Next: Tags" to proceed. -- In the **Tags** page, enter the Names and Values for your tags and select "Next: Review + Create".-- In the **Review + Create** page, view the summary of your deployment. Ensure all the settings look correct and select "Create" to start the deployment of Azure Arc data controller.--## Monitor the creation from Azure portal --Selecting the "Create" button in the previous step launches the Azure deployment overview page, which shows the progress of the Azure Arc data controller deployment. --## Monitor the creation from your Kubernetes cluster --The progress of the Azure Arc data controller deployment can be monitored as follows: --- Check if the CRDs are created by running ```kubectl get crd``` from your cluster -- Check if the namespace is created by running ```kubectl get ns``` from your cluster-- Check if the custom location is created by running ```az customlocation list --resource-group <resourcegroup> -o table``` -- Check the status of pod deployment by running ```kubectl get pods -n <namespace>```--## Related information --[Deploy SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) --[Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) |
azure-arc | Create Data Controller Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md | -- Title: Create Azure Arc data controller | Direct connect mode -description: Explains how to create the data controller in direct connect mode. ------- Previously updated : 05/27/2022----# Create Azure Arc data controller in direct connectivity mode using CLI --This article describes how to create the Azure Arc data controller in direct connectivity mode using Azure CLI. --## Complete prerequisites --Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md). ---## Deploy Arc data controller --Creating an Azure Arc data controller in direct connectivity mode involves the following steps: --1. Create an Azure Arc-enabled data services extension. -1. Create a custom location. -1. Create the data controller. --Create the Arc data controller extension, custom location, and Arc data controller all in one command as follows: --##### [Linux](#tab/linux) --```console -## variables for Azure subscription, resource group, cluster name, location, extension, and namespace. -export resourceGroup=<Your resource group> -export clusterName=<name of your connected Kubernetes cluster> -export customLocationName=<name of your custom location> --## variables for logs and metrics dashboard credentials -export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard> -export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard> -export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard> -export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard> -``` --##### [Windows (PowerShell)](#tab/windows) --``` PowerShell -## variables for Azure location, extension and namespace -$ENV:resourceGroup="<Your resource group>" -$ENV:clusterName="<name of your connected Kubernetes cluster>" -$ENV:customLocationName="<name of your custom location>" --## variables for Metrics and Monitoring dashboard credentials -$ENV:AZDATA_LOGSUI_USERNAME="<username for Kibana dashboard>" -$ENV:AZDATA_LOGSUI_PASSWORD="<password for Kibana dashboard>" -$ENV:AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>" -$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>" -``` -- --Deploy the Azure Arc data controller using released profile -##### [Linux](#tab/linux) --```azurecli -az arcdata dc create --name <name> -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass> --# Example -az arcdata dc create --name arc-dc1 --resource-group my-resource-group -custom-location cl-name --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass -``` --##### [Windows (PowerShell)](#tab/windows) --```azurecli -az arcdata dc create --name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass> --# Example -az arcdata dc create --name arc-dc1 --g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name 
azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass --``` ---If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows: -##### [Linux](#tab/linux) --```azurecli -az arcdata dc create --name -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --# Example -az arcdata dc create --name arc-dc1 --resource-group my-resource-group -custom-location cl-name --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true -``` --##### [Windows (PowerShell)](#tab/windows) --```azurecli -az arcdata dc create --name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass> --# Example -az arcdata dc create --name arc-dc1 --resource-group $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass --``` ----## Monitor the status of Azure Arc data controller deployment --The deployment status of the Arc data controller on the cluster can be monitored as follows: --```console -kubectl get datacontrollers --namespace arc -``` --## Related content --[Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) --[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
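To confirm that the one-step create also provisioned the Arc data services extension and the custom location, you can query them from Azure. The following is a minimal sketch that reuses the environment variables defined above (Linux shell shown).

```azurecli
# List cluster extensions on the connected cluster; the Arc data services
# bootstrapper extension appears here after the data controller create completes.
az k8s-extension list --cluster-name ${clusterName} --resource-group ${resourceGroup} --cluster-type connectedClusters -o table

# Confirm that the custom location was created.
az customlocation show --name ${customLocationName} --resource-group ${resourceGroup}
```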
azure-arc | Create Data Controller Direct Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-prerequisites.md | - Title: Prerequisites | Direct connect mode -description: Prerequisites to deploy the data controller in direct connect mode. ------ Previously updated : 11/03/2021----# Prerequisites to deploy the data controller in direct connectivity mode --This article describes how to prepare to deploy a data controller for Azure Arc-enabled data services in direct connect mode. Before you deploy an Azure Arc data controller understand the concepts described in [Plan to deploy Azure Arc-enabled data services](plan-azure-arc-data-services.md). --At a high level, the prerequisites for creating Azure Arc data controller in **direct** connectivity mode include: --1. Have access to your Kubernetes cluster. If you do not have a Kubernetes cluster, you can create a test/demonstration cluster on Azure Kubernetes Service (AKS). -1. Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes. --Follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) --## Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes --To connect your Kubernetes cluster to Azure, use Azure CLI `az` with the following extensions or Helm. --### Install tools --- Helm version 3.3+ ([install](https://helm.sh/docs/intro/install/))-- Install or upgrade to the latest version of [Azure CLI](/cli/azure/install-azure-cli)--### Add extensions for Azure CLI --Install the latest versions of the following az extensions: -- `k8s-extension`-- `connectedk8s`-- `k8s-configuration`-- `customlocation`--Run the following commands to install the az CLI extensions: --```azurecli -az extension add --name k8s-extension -az extension add --name connectedk8s -az extension add --name k8s-configuration -az extension add --name customlocation -``` --If you've previously installed the `k8s-extension`, `connectedk8s`, `k8s-configuration`, `customlocation` extensions, update to the latest version using the following command: --```azurecli -az extension update --name k8s-extension -az extension update --name connectedk8s -az extension update --name k8s-configuration -az extension update --name customlocation -``` --### Connect your cluster to Azure --Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes -- To connect your Kubernetes cluster to Azure, use Azure CLI `az` or PowerShell. 
-- Run the following command: -- # [Azure CLI](#tab/azure-cli) -- ```azurecli - az connectedk8s connect --name <cluster_name> --resource-group <resource_group_name> - ``` -- ```output - <pre> - Helm release deployment succeeded -- { - "aadProfile": { - "clientAppId": "", - "serverAppId": "", - "tenantId": "" - }, - "agentPublicKeyCertificate": "xxxxxxxxxxxxxxxxxxx", - "agentVersion": null, - "connectivityStatus": "Connecting", - "distribution": "gke", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1", - "identity": { - "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "type": "SystemAssigned" - }, - "infrastructure": "gcp", - "kubernetesVersion": null, - "lastConnectivityTime": null, - "location": "eastus", - "managedIdentityCertificateExpirationTime": null, - "name": "AzureArcTest1", - "offering": null, - "provisioningState": "Succeeded", - "resourceGroup": "AzureArcTest", - "tags": {}, - "totalCoreCount": null, - "totalNodeCount": null, - "type": "Microsoft.Kubernetes/connectedClusters" - } - </pre> - ``` -- > [!TIP] - > The above command without the location parameter specified creates the Azure Arc-enabled Kubernetes resource in the same location as the resource group. To create the Azure Arc-enabled Kubernetes resource in a different location, specify either `--location <region>` or `-l <region>` when running the `az connectedk8s connect` command. -- > [!NOTE] - > If you are logged into Azure CLI using a service principal, an [additional parameter](../kubernetes/troubleshooting.md#enable-custom-locations-using-service-principal) needs to be set for enabling the custom location feature on the cluster. -- # [Azure PowerShell](#tab/azure-powershell) -- ```azurepowershell - New-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName AzureArcTest -Location eastus - ``` -- ```output - <pre> - Location Name Type - -- - - - eastus AzureArcTest1 microsoft.kubernetes/connectedclusters - </pre> - ``` -- ---A more thorough walk-through of this task is available at [Connect an existing Kubernetes cluster to Azure arc](../kubernetes/quickstart-connect-cluster.md). --### Verify `azure-arc` namespace pods are created -- Before you proceed to the next step, make sure that all of the `azure-arc-` namespace pods are created. Run the following command. -- ```console - kubectl get pods -n azure-arc - ``` -- :::image type="content" source="media/deploy-data-controller-direct-mode-prerequisites/verify-azure-arc-pods.png" alt-text="All containers return a status of running."::: -- When all containers return a status of running, you can connect the cluster to Azure. --## Optionally, keep the Log Analytics workspace ID and Shared access key ready --When you deploy Azure Arc-enabled data controller, you can enable automatic upload of metrics and logs during setup. Metrics upload uses the system assigned managed identity. However, uploading logs requires a Workspace ID and the access key for the workspace. --You can also enable or disable automatic upload of metrics and logs after you deploy the data controller. --For instructions, see [Create a log analytics workspace](upload-logs.md#create-a-log-analytics-workspace). 
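If you already have a Log Analytics workspace, the following is a minimal sketch of looking up the workspace ID and shared key with the Azure CLI; replace the resource group and workspace name placeholders with your own values.

```azurecli
# Workspace ID (customer ID) of the Log Analytics workspace.
az monitor log-analytics workspace show --resource-group <resource group> --workspace-name <workspace name> --query customerId -o tsv

# Primary shared access key for the same workspace.
az monitor log-analytics workspace get-shared-keys --resource-group <resource group> --workspace-name <workspace name> --query primarySharedKey -o tsv
```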
--## Create Azure Arc data services --After you have completed these prerequisites, you can [Deploy Azure Arc data controller | Direct connect mode - Azure Portal](create-data-controller-direct-azure-portal.md) or [using the Azure CLI](create-data-controller-direct-cli.md). |
azure-arc | Create Data Controller Indirect Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-data-studio.md | - Title: Create data controller in Azure Data Studio -description: Create data controller in Azure Data Studio ------ Previously updated : 11/03/2021----# Create data controller in Azure Data Studio --You can create a data controller using Azure Data Studio through the deployment wizard and notebooks. ---## Prerequisites --- You need access to a Kubernetes cluster and have your kubeconfig file configured to point to the Kubernetes cluster you want to deploy to.-- You need to [install the client tools](install-client-tools.md) including **Azure Data Studio**, the Azure Data Studio extensions called **Azure Arc** and Azure CLI with the `arcdata` extension.-- You need to log in to Azure in Azure Data Studio. To do this: type CTRL/Command + SHIFT + P to open the command text window and type **Azure**. Choose **Azure: Sign in**. In the panel, that comes up click the + icon in the top right to add an Azure account.-- You need to run `az login` in your local Command Prompt to login to Azure CLI.--## Use the Deployment Wizard to create Azure Arc data controller --Follow these steps to create an Azure Arc data controller using the Deployment wizard. --1. In Azure Data Studio, click on the Connections tab on the left navigation. -1. Click on the **...** button at the top of the Connections panel and choose **New Deployment...** -1. In the new Deployment wizard, choose **Azure Arc Data Controller**, and then click the **Select** button at the bottom. -1. Ensure the prerequisite tools are available and meet the required versions. **Click Next**. -1. Use the default kubeconfig file or select another one. Click **Next**. -1. Choose a Kubernetes cluster context. Click **Next**. -1. Choose a deployment configuration profile depending on your target Kubernetes cluster. **Click Next**. -1. Choose the desired subscription and resource group. -1. Select an Azure location. - - The Azure location selected here is the location in Azure where the *metadata* about the data controller and the database instances that it manages will be stored. The data controller and database instances will be actually created in your Kubernetes cluster wherever that may be. - - Once done, click **Next**. --1. Enter a name for the data controller and for the namespace that the data controller will be created in. -- The data controller and namespace name will be used to create a custom resource in the Kubernetes cluster so they must conform to [Kubernetes naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). - - If the namespace already exists it will be used if the namespace does not already contain other Kubernetes objects - pods, etc. If the namespace does not exist, an attempt to create the namespace will be made. Creating a namespace in a Kubernetes cluster requires Kubernetes cluster administrator privileges. If you don't have Kubernetes cluster administrator privileges, ask your Kubernetes cluster administrator to perform the first few steps in the [Create a data controller using Kubernetes-native tools](./create-data-controller-using-kubernetes-native-tools.md) article which are required to be performed by a Kubernetes administrator before you complete this wizard. ---1. Select the storage class where the data controller will be deployed. -1. 
Enter a username and password and confirm the password for the data controller administrator user account. Click **Next**. --1. Review the deployment configuration. -1. Click the **Deploy** to deploy the desired configuration or the **Script to Notebook** to review the deployment instructions or make any changes necessary such as storage class names or service types. Click **Run All** at the top of the notebook. --## Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a data controller and Kubernetes namespace with the name 'arc'. If you used a different namespace/data controller name, you can replace 'arc' with your name. --```console -kubectl get datacontroller --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. --```console -kubectl describe pod/<pod name> --namespace arc --#Example: -#kubectl describe pod/control-2g7bl --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md). |
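Besides `kubectl`, you can also check the overall state of the data controller with the `arcdata` CLI extension. A minimal sketch, assuming the `arc` namespace used in the monitoring examples above:

```azurecli
az arcdata dc status show --k8s-namespace arc --use-k8s
```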
azure-arc | Create Data Controller Indirect Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-portal.md | - Title: Create an Azure Arc data controller in indirect mode from Azure portal -description: Create an Azure Arc data controller in indirect mode from Azure portal ------ Previously updated : 07/30/2021----# Create Azure Arc data controller from Azure portal - Indirect connectivity mode ---## Introduction --You can use the Azure portal to create an Azure Arc data controller, in indirect connectivity mode. --Many of the creation experiences for Azure Arc start in the Azure portal even though the resource to be created or managed is outside of Azure infrastructure. The user experience pattern in these cases, especially when there is no direct connectivity between Azure and your environment, is to use the Azure portal to generate a script which can then be downloaded and executed in your environment to establish a secure connection back to Azure. For example, Azure Arc-enabled servers follows this pattern to [create Azure Arc-enabled servers](../servers/onboard-portal.md). --When you use the indirect connect mode of Azure Arc-enabled data services, you can use the Azure portal to generate a notebook for you that can then be downloaded and run in Azure Data Studio against your Kubernetes cluster. -- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --When you use direct connect mode, you can provision the data controller directly from the Azure portal. You can read more about [connectivity modes](connectivity.md). --## Use the Azure portal to create an Azure Arc data controller --Follow the steps below to create an Azure Arc data controller using the Azure portal and Azure Data Studio. --1. First, log in to the [Azure portal marketplace](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home/searchQuery/azure%20arc%20data%20controller). The marketplace search results will be filtered to show you the 'Azure Arc data controller'. -1. If the first step has not entered the search criteria. Please enter in to the search results, click on 'Azure Arc data controller'. -1. Select the Azure Data Controller tile from the marketplace. -1. Click on the **Create** button. -1. Select the indirect connectivity mode. Learn more about [Connectivity modes and requirements](./connectivity.md). -1. Review the requirements to create an Azure Arc data controller and install any missing prerequisite software such as Azure Data Studio and kubectl. -1. Click on the **Next: Data controller details** button. -1. Choose a subscription, resource group and Azure location just like you would for any other resource that you would create in the Azure portal. In this case the Azure location that you select will be where the metadata about the resource will be stored. The resource itself will be created on whatever infrastructure you choose. It doesn't need to be on Azure infrastructure. -1. Enter a name for your data controller. --1. Click the **Open in Azure Studio** button. -1. On the next screen, you will see a summary of your selections and a notebook that is generated. You can click the **Open link in Azure Data Studio** button to open the generated notebook in Azure Data Studio. -1. Open the notebook in Azure Data Studio and click the **Run All** button at the top. -1. 
Follow the prompts and instructions in the notebook to complete the data controller creation. --## Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values update the script accordingly. --```console -kubectl get datacontroller/arc-dc --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. --```console -kubectl describe po/<pod name> --namespace arc --#Example: -#kubectl describe po/control-2g7bl --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md). |
azure-arc | Create Data Controller Indirect Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md | - Title: Create data controller using CLI -description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster that you already have created, using the CLI. ------- Previously updated : 11/03/2021----# Create Azure Arc data controller using the CLI ---## Prerequisites --Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information. --### Install tools --Before you begin, install the `arcdata` extension for Azure (az) CLI. --[Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md) --Regardless of which target platform you choose, you need to set the following environment variables prior to the creation for the data controller. These environment variables become the credentials used for accessing the metrics and logs dashboards after data controller creation. --### Set environment variables --Following are two sets of environment variables needed to access the metrics and logs dashboards. --The environment variables include passwords for log and metric services. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters. ---# [Linux](#tab/linux) --```console -## variables for Metrics and Monitoring dashboard credentials -export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard> -export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard> -export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard> -export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard> -``` --# [Windows (PowerShell)](#tab/windows) --```PowerShell -## variables for Metrics and Monitoring dashboard credentials -$ENV:AZDATA_LOGSUI_USERNAME="<username for Kibana dashboard>" -$ENV:AZDATA_LOGSUI_PASSWORD="<password for Kibana dashboard>" -$ENV:AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>" -$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>" -``` -- --### Connect to Kubernetes cluster --Connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server. --You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands. --```console -kubectl cluster-info -kubectl config current-context -``` --## Create the Azure Arc data controller --The following sections provide instructions for specific types of Kubernetes platforms. Follow the instructions for your platform. 
--- [Azure Kubernetes Service (AKS)](#create-on-azure-kubernetes-service-aks)-- [AKS on Azure Stack HCI](#create-on-aks-on-azure-stack-hci)-- [Azure Red Hat OpenShift (ARO)](#create-on-azure-red-hat-openshift-aro)-- [Red Hat OpenShift Container Platform (OCP)](#create-on-red-hat-openshift-container-platform-ocp)-- [Open source, upstream Kubernetes (kubeadm)](#create-on-open-source-upstream-kubernetes-kubeadm)-- [AWS Elastic Kubernetes Service (EKS)](#create-on-aws-elastic-kubernetes-service-eks)-- [Google Cloud Kubernetes Engine Service (GKE)](#create-on-google-cloud-kubernetes-engine-service-gke)--> [!TIP] -> If you have no Kubernetes cluster, you can create one on Azure. Follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process. -> -> Then follow the instructions under [Create on Azure Kubernetes Service (AKS)](#create-on-azure-kubernetes-service-aks). --## Create on Azure Kubernetes Service (AKS) --By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class only works if you have VMs that were deployed using VM images that have premium disks. --If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location. --```azurecli -az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace <namespace> --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --use-k8s --#Example: -#az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --use-k8s -``` --If you are not sure what storage class to use, you should use the `default` storage class which is supported regardless of which VM type you are using. It just won't provide the fastest performance. --If you want to use the `default` storage class, then you can run this command: --```azurecli -az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on AKS on Azure Stack HCI --### Configure storage (Azure Stack HCI with AKS-HCI) --If you are using Azure Stack HCI with AKS-HCI, create a custom storage class with `fsType`. -- ```json - fsType: ext4 - ``` --Use this type to deploy the data controller. See the complete instructions at [Create a custom storage class for an AKS on Azure Stack HCI disk](/azure-stack/aks-hci/container-storage-interface-disks#create-a-custom-storage-class-for-an-aks-on-azure-stack-hci-disk). --By default, the deployment profile uses a storage class named `default` and the service type `LoadBalancer`. 
--You can run the following command to create the data controller using the `default` storage class and service type `LoadBalancer`. --```azurecli -az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on Azure Red Hat OpenShift (ARO) --### Create custom deployment profile --Use the profile `azure-arc-azure-openshift` for Azure RedHat Open Shift. --```azurecli -az arcdata dc config init --source azure-arc-azure-openshift --path ./custom -``` --### Create data controller --You can run the following command to create the data controller: --```azurecli -az arcdata dc create --profile-name azure-arc-azure-openshift --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example -#az arcdata dc create --profile-name azure-arc-azure-openshift --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on Red Hat OpenShift Container Platform (OCP) --### Determine storage class --To determine which storage class to use, run the following command. --```console -kubectl get storageclass -``` --### Create custom deployment profile --Create a new custom deployment profile file based on the `azure-arc-openshift` deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory. --Use the profile `azure-arc-openshift` for OpenShift Container Platform. --```azurecli -az arcdata dc config init --source azure-arc-openshift --path ./custom -``` --### Set storage class --Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use that was determined by running the `kubectl get storageclass` command above. --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>" -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>" --#Example: -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass" -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass" -``` --### Set LoadBalancer (optional) --By default, the `azure-arc-openshift` deployment profile uses `NodePort` as the service type. 
If you are using an OpenShift cluster that is integrated with a load balancer, you can change the configuration to use the `LoadBalancer` service type using the following command: --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" -``` --### Create data controller --Now you are ready to create the data controller using the following command. --> [!NOTE] -> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself. --> [!NOTE] -> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`. --```azurecli -az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure> --#Example: -#az arcdata dc create --path ./custom --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --infrastructure onpremises -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on open source, upstream Kubernetes (kubeadm) --By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below. --If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory. --```azurecli -az arcdata dc config init --source azure-arc-kubeadm --path ./custom -``` --You can look up the available storage classes by running the following command. --```console -kubectl get storageclass -``` --Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use that was determined by running the `kubectl get storageclass` command above. --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>" -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>" --#Example: -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass" -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass" -``` --By default, the kubeadm deployment profile uses `NodePort` as the service type. If you are using a Kubernetes cluster that is integrated with a load balancer, you can change the configuration using the following command. --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" -``` --Now you are ready to create the data controller using the following command. 
--> [!NOTE] -> When deploying with the kubeadm deployment profile, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`. --```azurecli -az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure> --#Example: -#az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --infrastructure onpremises -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on AWS Elastic Kubernetes Service (EKS) --By default, the EKS storage class is `gp2` and the service type is `LoadBalancer`. --Run the following command to create the data controller using the provided EKS deployment profile. --```azurecli -az arcdata dc create --profile-name azure-arc-eks --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-eks --k8s-namespace <namespace> --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on Google Cloud Kubernetes Engine Service (GKE) --By default, the GKE storage class is `standard` and the service type is `LoadBalancer`. --Run the following command to create the data controller using the provided GKE deployment profile. --```azurecli -az arcdata dc create --profile-name azure-arc-gke --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-gke --k8s-namespace <namespace> --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Monitor the creation status --It takes a few minutes to create the controller completely. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values, update the script accordingly. --```console -kubectl get datacontroller/arc-dc --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running a command like the one below. This is especially useful for troubleshooting any issues. --```console -kubectl describe po/<pod name> --namespace arc --#Example: -#kubectl describe po/control-2g7bl --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, see the [troubleshooting guide](troubleshoot-guide.md). |
azure-arc | Create Data Controller Using Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md | - Title: Create a data controller using Kubernetes tools -description: Create a data controller using Kubernetes tools ------ Previously updated : 11/03/2021----# Create Azure Arc-enabled data controller using Kubernetes tools --A data controller manages Azure Arc-enabled data services for a Kubernetes cluster. This article describes how to use Kubernetes tools to create a data controller. --Creating the data controller has the following high level steps: --1. Create the namespace and bootstrapper service -1. Create the data controller --> [!NOTE] -> For simplicity, the steps below assume that you are a Kubernetes cluster administrator. For production deployments or more secure environments, it is recommended to follow the security best practices of "least privilege" when deploying the data controller by granting only specific permissions to users and service accounts involved in the deployment process. -> -> See the topic [Operate Arc-enabled data services with least privileges](least-privilege.md) for detailed instructions. ---## Prerequisites --Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information. --To create the data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) --## Create the namespace and bootstrapper service --The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller. --Save a copy of [bootstrapper-unified.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper-unified.yaml), and replace the placeholder `{{NAMESPACE}}` in *all the places* in the file with the desired namespace name, for example: `arc`. --> [!IMPORTANT] -> The bootstrapper-unified.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following: -> - Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md). -> - [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry. -> - Change the image URL for the bootstrapper image in the bootstrap.yaml file. -> - Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret. --Run the following command to create the namespace and bootstrapper service with the edited file. --```console -kubectl apply --namespace arc -f bootstrapper-unified.yaml -``` --Verify that the bootstrapper pod is running using the following command. 
--```console -kubectl get pod --namespace arc -l app=bootstrapper -``` --If the status is not _Running_, run the command a few times until the status is _Running_. --## Create the data controller --Now you are ready to create the data controller itself. --First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings. --### Create the metrics and logs dashboards user names and passwords --At the top of the file, you can specify a user name and password that is used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it with only those that need to have these privileges. --A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. --You can use an online tool to base64 encode your desired username and password or you can use built in CLI tools depending on your platform. --PowerShell --```console -[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) --#Example -#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) --``` --Linux/macOS --```console -echo -n '<your string to encode here>' | base64 --#Example -# echo -n 'example' | base64 -``` --### Create certificates for logs and metrics dashboards --Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify SSL/TLS certificates during Kubernetes native tools deployment](monitor-certificates.md). --### Edit the data controller configuration --Edit the data controller configuration as needed: --**REQUIRED** -- **location**: Change this to be the Azure location where the _metadata_ about the data controller will be stored. Review the [list of available regions](overview.md#supported-regions).-- **resourceGroup**: the Azure resource group where you want to create the data controller Azure resource in Azure Resource Manager. Typically this resource group should already exist, but it is not required until the time that you upload the data to Azure.-- **subscription**: the Azure subscription GUID for the subscription that you want to create the Azure resources in.--**RECOMMENDED TO REVIEW AND POSSIBLY CHANGE DEFAULTS** -- **storage..className**: the storage class to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, you can run the following command: `kubectl get storageclass`. The default is `default` which assumes there is a storage class that exists and is named `default` not that there is a storage class that _is_ the default. 
Note: There are two className settings to be set to the desired storage class - one for data and one for logs.-- **serviceType**: Change the service type to `NodePort` if you are not using a LoadBalancer.-- **Security** For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.--```yml - security: - allowDumps: false - allowNodeMetricsCollection: false - allowPodMetricsCollection: false -``` --**OPTIONAL** -- **name**: The default name of the data controller is `arc`, but you can change it if you want.-- **displayName**: Set this to the same value as the name attribute at the top of the file.-- **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate.-- **metricsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.--The following example shows a completed data controller yaml. ---Save the edited file on your local computer and run the following command to create the data controller: --```console -kubectl create --namespace arc -f <path to your data controller file> --#Example -kubectl create --namespace arc -f data-controller.yaml -``` --## Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --```console -kubectl get datacontroller --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status or logs of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. --```console -kubectl describe pod/<pod name> --namespace arc -kubectl logs <pod name> --namespace arc --#Example: -#kubectl describe pod/control-2g7bl --namespace arc -#kubectl logs control-2g7b1 --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md). --## Related content --- [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md)-- [Create a PostgreSQL server using Kubernetes-native tools](./create-postgresql-server-kubernetes-native-tools.md)- |
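--If you prefer to wait on the deployment from a single command instead of rerunning the commands above, you can watch the resources. This is a small sketch that assumes the `arc` namespace used in the examples: --```console -# Watch the data controller custom resource until its state reports Ready -kubectl get datacontroller --namespace arc --watch -```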
azure-arc | Create Postgresql Server Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-azure-data-studio.md | - Title: Create Azure Arc-enabled PostgreSQL server using Azure Data Studio -description: Create Azure Arc-enabled PostgreSQL server using Azure Data Studio ------ Previously updated : 07/30/2021----# Create Azure Arc-enabled PostgreSQL server using Azure Data Studio --This document walks you through the steps for using Azure Data Studio to provision Azure Arc-enabled PostgreSQL servers. ----## Preliminary and temporary step for OpenShift users only --Implement this step before moving to the next step. To deploy PostgreSQL server onto Red Hat OpenShift in a project other than the default, you need to execute the following commands against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL server. The security context constraint (SCC) **_arc-data-scc_** is the one you added when you deployed the Azure Arc data controller. --```console -oc adm policy add-scc-to-user arc-data-scc -z <server-name> -n <namespace name> -``` --_**Server-name** is the name of the server you will deploy during the next step._ - -For more details on SCCs in OpenShift, please refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html). -You may now implement the next step. --## Create an Azure Arc-enabled PostgreSQL server --1. Launch Azure Data Studio -1. On the Connections tab, Click on the three dots on the top left and choose "New Deployment" -1. From the deployment options, select **PostgreSQL server - Azure Arc** - >[!NOTE] - > You may be prompted to install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] here if it is not currently installed. -1. Accept the Privacy and license terms and click **Select** at the bottom -1. In the Deploy PostgreSQL server - Azure Arc blade, enter the following information: - - Enter a name for the server - - Enter and confirm a password for the _postgres_ administrator user of the server - - Select the storage class as appropriate for data - - Select the storage class as appropriate for logs - - Select the storage class as appropriate for backups -1. Click the **Deploy** button --This starts the creation of the Azure Arc-enabled PostgreSQL server on the data controller. --In a few minutes, your creation should successfully complete. --### Storage class considerations - -It is important you set the storage class right at the time you deploy a server as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server, create a new server, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - - - to set the storage class for the data, indicate the parameter `--storage-class-data` followed by the name of the storage class. - - to set the storage class for the logs, indicate the parameter `--storage-class-logs` followed by the name of the storage class. - - setting the storage class for the backups has been temporarily removed as we temporarily removed the backup/restore functionalities as we finalize designs and experiences. 
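--For reference, the equivalent CLI deployment makes these storage class choices explicit. The following is only a sketch; the server name `pg01`, namespace `arc`, and storage class `mystorageclass` are placeholder values, not requirements: --```azurecli -az postgres server-arc create -n pg01 --k8s-namespace arc --use-k8s --storage-class-data mystorageclass --storage-class-logs mystorageclass -```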
---## Related content -- [Manage your server using Azure Data Studio](manage-postgresql-server-with-azure-data-studio.md)-- [Monitor your server](monitor-grafana-kibana.md)-- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL server offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL server. --- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities) |
azure-arc | Create Postgresql Server Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-kubernetes-native-tools.md | - Title: Create a PostgreSQL server using Kubernetes tools -description: Create a PostgreSQL server using Kubernetes tools ------ Previously updated : 11/03/2021----# Create a PostgreSQL server using Kubernetes tools ---## Prerequisites --You should have already created a [data controller](plan-azure-arc-data-services.md). --To create a PostgreSQL server using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) --## Overview --To create a PostgreSQL server, you need to create a Kubernetes secret to store your postgres administrator login and password securely and a PostgreSQL server custom resource based on the `postgresqls` custom resource definitions. --## Create a yaml file --You can use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/postgresql.yaml) file as a starting point to create your own custom PostgreSQL server yaml file. Download this file to your local computer and open it in a text editor. It is useful to use a text editor such as [VS Code](https://code.visualstudio.com/download) that support syntax highlighting and linting for yaml files. --**Example yaml file**: --```yaml -apiVersion: v1 -data: - username: <your base64 encoded username> - password: <your base64 encoded password> -kind: Secret -metadata: - name: pg1-login-secret -type: Opaque --apiVersion: arcdata.microsoft.com/v1beta3 -kind: postgresql -metadata: - name: pg1 -spec: - scheduling: - default: - resources: - limits: - cpu: "4" - memory: 4Gi - requests: - cpu: "1" - memory: 2Gi - - primary: - type: LoadBalancer # Modify service type based on your Kubernetes environment - storage: - data: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi - logs: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi -``` --### Customizing the login and password. -A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. You will need to base64 encode an administrator login and password and place them in the placeholder location at `data.password` and `data.username`. Do not include the `<` and `>` symbols provided in the template. --You can use an online tool to base64 encode your desired username and password or you can use built in CLI tools depending on your platform. --PowerShell --```console -[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) --#Example -#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) --``` --Linux/macOS --```console -echo -n '<your string to encode here>' | base64 --#Example -# echo -n 'example' | base64 -``` --### Customizing the name --The template has a value of `pg1` for the name attribute. You can change this value but it must be characters that follow the DNS naming standards. 
If you change the name, change the name of the secret to match. For example, if you change the name of the PostgreSQL server to `pg2`, you must change the name of the secret from `pg1-login-secret` to `pg2-login-secret` ---### Customizing the resource requirements --You can change the resource requirements - the RAM and core limits and requests - as needed. --> [!NOTE] -> You can learn more about [Kubernetes resource governance](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes). --Requirements for resource limits and requests: -- The cores limit value is **required** for billing purposes.-- The rest of the resource requests and limits are optional.-- The cores limit and request must be a positive integer value, if specified.-- The minimum of one core is required for the cores request, if specified.-- The memory value format follows the Kubernetes notation. --### Customizing service type --The service type can be changed to NodePort if desired. A random port number will be assigned. --### Customizing storage --You can customize the storage classes for storage to match your environment. If you are not sure which storage classes are available, run the command `kubectl get storageclass` to view them. The template has a default value of `default`. This value means that there is a storage class _named_ `default` not that there is a storage class that _is_ the default. You can also optionally change the size of your storage. You can read more about [storage configuration](./storage-configuration.md). --## Creating the PostgreSQL server --Now that you have customized the PostgreSQL server yaml file, you can create the PostgreSQL server by running the following command: --```console -kubectl create -n <your target namespace> -f <path to your yaml file> --#Example -#kubectl create -n arc -f C:\arc-data-services\postgres.yaml -``` ---## Monitoring the creation status --Creating the PostgreSQL server will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a PostgreSQL server named `pg1` and Kubernetes namespace with the name `arc`. If you used a different namespace/PostgreSQL server name, you can replace `arc` and `pg1` with your names. --```console -kubectl get postgresqls/pg1 --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running `kubectl describe` command. The `describe` command is especially useful for troubleshooting any issues. For example: --```console -kubectl describe pod/<pod name> --namespace arc --#Example: -#kubectl describe pod/pg1-0 --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, see the [troubleshooting guide](troubleshoot-guide.md). |
azure-arc | Create Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server.md | - Title: Create an Azure Arc-enabled PostgreSQL server from CLI -description: Create an Azure Arc-enabled PostgreSQL server from CLI ------- Previously updated : 11/03/2021----# Create an Azure Arc-enabled PostgreSQL server from CLI --This document describes the steps to create a PostgreSQL server on Azure Arc and to connect to it. ----## Getting started -If you are already familiar with the topics below, you may skip this paragraph. -There are important topics you may want to read before you proceed with creation: -- [Overview of Azure Arc-enabled data services](overview.md)-- [Connectivity modes and requirements](connectivity.md)-- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)--If you prefer to try out things without provisioning a full environment yourself, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. ---## Preliminary and temporary step for OpenShift users only -Implement this step before moving to the next step. To deploy PostgreSQL server onto Red Hat OpenShift in a project other than the default, you need to execute the following command against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL server. The security context constraint (SCC) arc-data-scc is the one you added when you deployed the Azure Arc data controller. --```console -oc adm policy add-scc-to-user arc-data-scc -z <server-name> -n <namespace-name> -``` --**Server-name is the name of the server you will create during the next step.** --For more details on SCCs in OpenShift, refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html). Proceed to the next step. ---## Create an Azure Arc-enabled PostgreSQL server --To create an Azure Arc-enabled PostgreSQL server on your Arc data controller, you will use the command `az postgres server-arc create` to which you will pass several parameters. --For details about all the parameters you can set at creation time, review the output of the command: -```azurecli -az postgres server-arc create --help -``` --The main parameters you should consider are: -- **the name of the server** you want to deploy. Indicate either `--name` or `-n` followed by a name whose length must not exceed 11 characters.--- **The storage classes** you want your server to use. It is important you set the storage class right at the time you deploy a server as this setting cannot be changed after you deploy. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.- - To set the storage class for the backups, indicate the parameter `--storage-class-backups` followed by the name of the storage class. Excluding this parameter disables automated backups. - - To set the storage class for the data, indicate the parameter `--storage-class-data` followed by the name of the storage class. 
- - To set the storage class for the logs, indicate the parameter `--storage-class-logs` followed by the name of the storage class. -- > [!IMPORTANT] - > If you need to change the storage class after deployment, extract the data, delete your server, create a new server, and import the data. --When you execute the create command, you will be prompted to enter the username and password for the administrative user. You may skip the interactive prompt by setting the `AZDATA_USERNAME` and `AZDATA_PASSWORD` session environment variables before you run the create command. --### Examples --**To deploy a PostgreSQL server named postgres01 that uses the same storage classes as the data controller, run the following command**: --```azurecli -az postgres server-arc create -n postgres01 --k8s-namespace <namespace> --use-k8s -``` --> [!NOTE] -> - If you deployed the data controller using `AZDATA_USERNAME` and `AZDATA_PASSWORD` session environment variables in the same terminal session, then the values for `AZDATA_PASSWORD` will be used to deploy the PostgreSQL server too. If you prefer to use another password, either (1) update the values for `AZDATA_USERNAME` and `AZDATA_PASSWORD` or (2) delete the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables or (3) delete their values to be prompted to enter a username and password interactively when you create a server. -> - Creating a PostgreSQL server will not immediately register resources in Azure. As part of the process of uploading [resource inventory](upload-metrics-and-logs-to-azure-monitor.md) or [usage data](view-billing-data-in-azure.md) to Azure, the resources will be created in Azure and you will be able to see your resources in the Azure portal. ---## List the PostgreSQL servers deployed in your Arc data controller --To list the PostgreSQL servers deployed in your Arc data controller, run the following command: --```azurecli -az postgres server-arc list --k8s-namespace <namespace> --use-k8s -``` ---```output - { - "name": "postgres01", - "state": "Ready" - } -``` --## Get the endpoints to connect to your Azure Arc-enabled PostgreSQL servers --To view the endpoints for a PostgreSQL server, run the following command: --```azurecli -az postgres server-arc endpoint list -n <server name> --k8s-namespace <namespace> --use-k8s -``` -For example: -```console -{ - "instances": [ - { - "endpoints": [ - { - "description": "PostgreSQL Instance", - "endpoint": "postgresql://postgres:<replace with password>@123.456.78.912:5432" - }, - { - "description": "Log Search Dashboard", - }, - { - "description": "Metrics Dashboard", - "endpoint": "https://98.765.432.11:3000/d/postgres-metrics?var-Namespace=arc&var-Name=postgres01" - } - ], - "engine": "PostgreSql", - "name": "postgres01" - } - ], - "namespace": "arc" -} -``` --You can use the PostgreSQL Instance endpoint to connect to the PostgreSQL server from your favorite tool: [Azure Data Studio](/azure-data-studio/download-azure-data-studio), [pgcli](https://www.pgcli.com/) psql, pgAdmin, etc. -- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --## Special note about Azure virtual machine deployments --When you are using an Azure virtual machine, then the endpoint IP address will not show the _public_ IP address. To locate the public IP address, use the following command: -```azurecli -az network public-ip list -g azurearcvm-rg --query "[].{PublicIP:ipAddress}" -o table -``` -You can then combine the public IP address with the port to make your connection. 
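--For example, one way to do that from a shell is shown below; the port `30655` is a placeholder for the port reported for your own PostgreSQL Instance endpoint: --```console -# Capture the VM public IP, then connect with psql using that IP and the instance port -PUBLIC_IP=$(az network public-ip list -g azurearcvm-rg --query "[0].ipAddress" -o tsv) -psql "postgresql://postgres:<replace with password>@${PUBLIC_IP}:30655" -```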
--You may also need to expose the port of the PostgreSQL server through the network security group (NSG). To allow traffic through the NSG, set a rule. To set a rule, you need to know the name of your NSG, which you can determine using the command below: --```azurecli -az network nsg list -g azurearcvm-rg --query "[].{NSGName:name}" -o table -``` --Once you have the name of the NSG, you can add a firewall rule using the following command. The example values here create an NSG rule for port 30655 and allow connections from **any** source IP address. --> [!WARNING] -> We do not recommend setting a rule to allow connection from any source IP address. You can lock things down better by specifying a `--source-address-prefixes` value that is specific to your client IP address or an IP address range that covers your team's or organization's IP addresses. --Replace the value of the `--destination-port-ranges` parameter below with the port number you got from the `az postgres server-arc endpoint list` command above. --```azurecli -az network nsg rule create -n db_port --destination-port-ranges 30655 --source-address-prefixes '*' --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow port through for db access' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*' -``` --## Connect with Azure Data Studio --Open Azure Data Studio and connect to your instance with the external endpoint IP address and port number above, and the password you specified at the time you created the instance. If PostgreSQL isn't available in the *Connection type* dropdown, you can install the PostgreSQL extension by searching for PostgreSQL in the extensions tab. --> [!NOTE] -> You will need to click the [Advanced] button in the connection panel to enter the port number. --Remember, if you are using an Azure VM you will need the _public_ IP address, which is accessible via the following command: --```azurecli -az network public-ip list -g azurearcvm-rg --query "[].{PublicIP:ipAddress}" -o table -``` --## Connect with psql --To access your PostgreSQL server, pass the external endpoint of the PostgreSQL server that you retrieved above. --You can now connect with psql: --```console -psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655 -``` --## Related content --- Connect to your Azure Arc-enabled PostgreSQL server: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgresql-server.md)-- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL server offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL server. --- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)-- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities) |
azure-arc | Create Sql Managed Instance Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md | - Title: Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio -description: Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio ------ Previously updated : 06/16/2021----# Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio --This document demonstrates how to install Azure SQL Managed Instance - Azure Arc using Azure Data Studio. ---## Steps --1. Launch Azure Data Studio -2. On the Connections tab, select on the three dots on the top left and choose **New Deployment...**. -3. From the deployment options, select **Azure SQL managed instance**. - > [!NOTE] - > You may be prompted to install the appropriate CLI here if it is not currently installed. - -4. Select **Select**. -- Azure Data Studio opens **Azure SQL managed instance**. --5. For **Resource Type**, choose **Azure SQL managed instance - Azure Arc**. -6. Accept the privacy statement and license terms -1. Review the required tools. Follow instructions to update tools before you proceed. -1. Select **Next**. -- Azure Data Studio allows you to set your specifications for the managed instance. The following table describes the fields: -- |Setting | Description | Required or optional - |-|-|-| - |**Target Azure Controller** | Name of the Azure Arc data controller. | Required | - |**Instance name** | Managed instance name. | Required | - |**Username** | System administrator user name. | Required | - |**System administrator password** | SQL authentication password for the managed instance. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters.<br/></br> Confirm the password. | Required | - |**Service tier** | Specify the appropriate service tier: Business Critical or General Purpose. | Required | - |**I already have a SQL Server License** | Select if this managed instance will use a license from your organization. | Optional | - |**Storage Class (Data)** | Select from the list. | Required | - |**Volume Size in Gi (Data)** | The amount of space in gibibytes to allocate for data. | Required | - |**Storage Class (Database logs)** | Select from the list. | Required | - |**Volume Size in Gi (Database logs)** | The amount of space in gibibytes to allocate for database transaction logs. | Required | - |**Storage Class (Logs)** | Select from the list. | Required | - |**Volume Size in Gi (Logs)** | The amount of space in gibibytes to allocate for logs. | Required | - |**Storage Class (Backups)** | Select from the list. Specify a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If this storage class isn't RWX capable, the deployment may not succeed. | Required | - |**Volume Size in Gi (Backups)** | The size of the storage volume to be used for database backups in gibibytes. | Required | - |**Cores Request** | The number of cores to request for the managed instance. Integer. | Optional | - |**Cores Limit** | The request for the capacity for the managed instance in gigabytes. Integer. | Optional | - |**Memory Request** | Select from the list. 
| Required | - |**Point in time retention (days)** | The number of days to keep your point in time backups. | Optional | -- After you've set all of the required values, Azure Data Studio enables the **Deploy** button. If this control is disabled, verify that you have all required settings configured. --1. Select the **Deploy** button to create the managed instance. --After you select the deploy button, the Azure Arc data controller initiates the deployment. The deployment creates the managed instance. The deployment process takes a few minutes to create the data controller. --## Connect from Azure Data Studio --View all the SQL Managed Instances provisioned to this data controller. Use the following command: -- ```azurecli - az sql mi-arc list --k8s-namespace <namespace> --use-k8s - ``` -- Output should look like this, copy the ServerEndpoint (including the port number) from here. -- ```console - Name Replicas ServerEndpoint State - - -- - - sqlinstance1 1/1 25.51.65.109:1433 Ready - ``` --1. In Azure Data Studio, under **Connections** tab, select the **New Connection** on the **Servers** view -1. Under **Connection**>**Server**, paste the ServerEndpoint -1. Select **SQL Login** as the Authentication type -1. Enter *sa* as the user name -1. Enter the password for the `sa` account -1. Optionally, enter the specific database name to connect to -1. Optionally, select/Add New Server Group as appropriate -1. Select **Connect** to connect to the Azure SQL Managed Instance - Azure Arc --## Related information --Now try to [monitor your SQL instance](monitor-grafana-kibana.md) |
azure-arc | Create Sql Managed Instance Using Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md | - Title: Deploy a new SQL Managed Instance enabled by Azure Arc using Kubernetes tools -description: Describes how to use Kubernetes tools to deploy SQL Managed Instance enabled by Azure Arc. ------ Previously updated : 02/28/2022----# Deploy SQL Managed Instance enabled by Azure Arc using Kubernetes tools --This article demonstrates how to deploy Azure SQL Managed Instance for Azure Arc with Kubernetes tools. --## Prerequisites --You should have already created a [data controller](plan-azure-arc-data-services.md). --To create a SQL managed instance using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) --## Overview --To create a SQL Managed Instance, you need to: -1. Create a Kubernetes secret to store your system administrator login and password securely -1. Create a SQL Managed Instance custom resource based on the `SqlManagedInstance` custom resource definition --Define both of these items in a yaml file. --## Create a yaml file --Use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/sqlmi.yaml) file as a starting point to create your own custom SQL managed instance yaml file. Download this file to your local computer and open it in a text editor. Use a text editor such as [VS Code](https://code.visualstudio.com/download) that support syntax highlighting and linting for yaml files. --> [!NOTE] -> Beginning with the February, 2022 release, `ReadWriteMany` (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). -> If no storage class is specified for backups, the default storage class in Kubernetes is used. If the default is not RWX capable, the SQL Managed Instance installation may not succeed. --### Example yaml file --See the following example of a yaml file: ---### Customizing the login and password --A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. You will need to base64 encode a system administrator login and password and place them in the placeholder location at `data.password` and `data.username`. Do not include the `<` and `>` symbols provided in the template. --> [!NOTE] -> For optimum security, using the value `sa` is not allowed for the login . -> Follow the [password complexity policy](/sql/relational-databases/security/password-policy#password-complexity). --You can use an online tool to base64 encode your desired username and password or you can use CLI tools depending on your platform. 
--PowerShell --```console -[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) --#Example -#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) -``` --Linux/macOS --```console -echo -n '<your string to encode here>' | base64 --#Example -# echo -n 'example' | base64 -``` --### Customizing the name --The template has a value of `sql1` for the name attribute. You can change this value, but it must include characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the SQL managed instance to `sql2`, you must change the name of the secret from `sql1-login-secret` to `sql2-login-secret` --### Customizing the resource requirements --You can change the resource requirements - the RAM and core limits and requests - as needed. --> [!NOTE] -> You can learn more about [Kubernetes resource governance](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes). --Requirements for resource limits and requests: -- The cores limit value is **required** for billing purposes.-- The rest of the resource requests and limits are optional.-- The cores limit and request must be a positive integer value, if specified.-- The minimum of 1 core is required for the cores request, if specified.-- The memory value format follows the Kubernetes notation. -- A minimum of 2 GB is required for memory request, if specified.-- As a general guideline, you should have 4 GB of RAM for each 1 core for production use cases.--### Customizing service type --The service type can be changed to NodePort if desired. A random port number will be assigned. --### Customizing storage --You can customize the storage classes for storage to match your environment. If you are not sure which storage classes are available, run the command `kubectl get storageclass` to view them. --The template has a default value of `default`. --For example --```yml -storage: - data: - volumes: - - className: default -``` --This example means that there is a storage class named `default` - not that there is a storage class that is the default. You can also optionally change the size of your storage. For more information, see [storage configuration](./storage-configuration.md). --## Creating the SQL managed instance --Now that you have customized the SQL managed instance yaml file, you can create the SQL managed instance by running the following command: --```console -kubectl create -n <your target namespace> -f <path to your yaml file> --#Example -#kubectl create -n arc -f C:\arc-data-services\sqlmi.yaml -``` --## Monitoring the creation status --Creating the SQL managed instance will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a SQL managed instance named `sql1` and Kubernetes namespace with the name `arc`. If you used a different namespace/SQL managed instance name, you can replace `arc` and `sqlmi` with your names. --```console -kubectl get sqlmi/sql1 --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod. Run `kubectl describe pod ...`. Use this command to troubleshoot any issues. 
For example: --```console -kubectl describe pod/<pod name> --namespace arc --#Example: -#kubectl describe pod/sql1-0 --namespace arc -``` --## Troubleshoot deployment problems --If you encounter any troubles with the deployment, please see the [troubleshooting guide](troubleshoot-guide.md). --## Related content --[Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md) |
azure-arc | Create Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md | - Title: Create a SQL Managed Instance enabled by Azure Arc -description: Deploy SQL Managed Instance enabled by Azure Arc ------- Previously updated : 07/30/2021----# Deploy a SQL Managed Instance enabled by Azure Arc ---To view available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command: --```azurecli -az sql mi-arc create --help -``` --To create a SQL Managed Instance enabled by Azure Arc, use `az sql mi-arc create`. See the following examples for different connectivity modes: --> [!NOTE] -> A ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) --If no storage class is specified for backups, the default storage class in Kubernetes is used and if this is not RWX capable, the SQL Managed Instance enabled by Azure Arc installation may not succeed. --### [Directly connected mode](#tab/directly-connected-mode) --```azurecli -az sql mi-arc create --name <name> --resource-group <group> --subscription <subscription> --custom-location <custom-location> --storage-class-backups <RWX capable storageclass> -``` --Example: --```azurecli -az sql mi-arc create --name sqldemo --resource-group rg --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --storage-class-backups mybackups -``` ---### [Indirectly connected mode](#tab/indirectly-connected-mode) --```azurecli -az sql mi-arc create -n <instanceName> --storage-class-backups <RWX capable storageclass> --k8s-namespace <namespace> --use-k8s -``` --Example: --```azurecli -az sql mi-arc create -n sqldemo --storage-class-backups mybackups --k8s-namespace my-namespace --use-k8s -``` -----> [!NOTE] -> Names must be less than 60 characters in length and conform to [DNS naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#rfc-1035-label-names). -> When specifying memory allocation and vCore allocation use this formula to ensure your performance is acceptable: for each 1 vCore you should have at least 4GB of RAM of capacity available on the Kubernetes node where the SQL Managed Instance enabled by Azure Arc pod will run. -> If you want to automate the creation of SQL Managed Instance enabled by Azure Arc and avoid the interactive prompt for the admin password, you can set the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables to the desired username and password prior to running the `az sql mi-arc create` command. -> If you created the data controller using AZDATA_USERNAME and AZDATA_PASSWORD in the same terminal session, then the values for AZDATA_USERNAME and AZDATA_PASSWORD will be used to create the SQL Managed Instance enabled by Azure Arc too. -> [!NOTE] -> If you are using the indirect connectivity mode, creating SQL Managed Instance enabled by Azure Arc in Kubernetes will not automatically register the resources in Azure. 
Steps to register the resource are in the following articles: -> - [Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md) -> ---## View instance on Azure Arc --To view the instance, use the following command: --```azurecli -az sql mi-arc list --k8s-namespace <namespace> --use-k8s -``` --You can copy the external IP and port number from here and connect to SQL Managed Instance enabled by Azure Arc using your favorite tool for connecting to SQL Server or Azure SQL Managed Instance, such as Azure Data Studio or SQL Server Management Studio. ---## Related content -- [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md)-- [Register your instance with Azure and upload metrics and logs about your instance](upload-metrics-and-logs-to-azure-monitor.md)-- [Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio](create-sql-managed-instance-azure-data-studio.md)- |
azure-arc | Delete Azure Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-azure-resources.md | - Title: Delete resources from Azure Arc-enabled data services -description: Describes how to delete resources from Azure Arc-enabled data services ------- Previously updated : 07/19/2023----# Delete resources from Azure Arc-enabled data services --This article describes how to delete Azure Arc-enabled data service resources from Azure. --> [!WARNING] -> When you delete resources as described in this article, these actions are irreversible. --The information in this article applies to resources in Azure Arc-enabled data services. To delete resources in Azure, review the information at [Azure Resource Manager resource group and resource deletion](../../azure-resource-manager/management/delete-resource-group.md). --## Before --Before you delete a resource such as Azure Arc SQL managed instance or Azure Arc data controller, you need to export and upload the usage information to Azure for accurate billing calculation by following the instructions described in [Upload billing data to Azure - Indirectly connected mode](view-billing-data-in-azure.md#upload-billing-data-to-azureindirectly-connected-mode). --## Direct connectivity mode --When a cluster is connected to Azure with direct connectivity mode, use the Azure portal to manage the resources. Use the portal for all create, read, update, & delete (CRUD) operations for data controller, managed instances, and PostgreSQL servers. --From Azure portal: -1. Browse to the resource group and delete the Azure Arc data controller -2. Select the Azure Arc-enabled Kubernetes cluster, go to the Overview page - - Select **Extensions** under Settings - - In the Extensions page, select the Azure Arc data services extension (of type microsoft.arcdataservices) and click on **Uninstall** -3. Optionally delete the Custom Location that the Azure Arc data controller is deployed to. -4. Optionally, you can also delete the namespace on your Kubernetes cluster if there are no other resources created in the namespace. --See [Manage Azure resources by using the Azure portal](../../azure-resource-manager/management/manage-resources-portal.md). --## Indirect connectivity mode --In indirect connect mode, deleting an instance from Kubernetes will not remove it from Azure and deleting an instance from Azure will not remove it from Kubernetes. For indirect connect mode, deleting a resource is a two step process and this will be improved in the future. Kubernetes will be the source of truth and the portal will be updated to reflect it. --In some cases, you may need to manually delete Azure Arc-enabled data services resources in Azure. You can delete these resources using any of the following options. 
--- [Delete an entire resource group](#delete-an-entire-resource-group)-- [Delete specific resources in the resource group](#delete-specific-resources-in-the-resource-group)-- [Delete resources using the Azure CLI](#delete-resources-using-the-azure-cli)- - [Delete SQL managed instance resources using the Azure CLI](#delete-sql-managed-instance-resources-using-the-azure-cli) - - [Delete PostgreSQL server resources using the Azure CLI](#delete-postgresql-server-resources-using-the-azure-cli) - - [Delete Azure Arc data controller resources using the Azure CLI](#delete-azure-arc-data-controller-resources-using-the-azure-cli) - - [Delete a resource group using the Azure CLI](#delete-a-resource-group-using-the-azure-cli) ---## Delete an entire resource group --If you have been using a specific and dedicated resource group for Azure Arc-enabled data services and you want to delete *everything* inside of the resource group you can delete the resource group which will delete everything inside of it. --You can delete a resource group in the Azure portal by doing the following: --- Browse to the resource group in the Azure portal where the Azure Arc-enabled data services resources have been created.-- Click the **Delete resource group** button.-- Confirm the deletion by entering the resource group name and click **Delete**.--## Delete specific resources in the resource group --You can delete specific Azure Arc-enabled data services resources in a resource group in the Azure portal by doing the following: --- Browse to the resource group in the Azure portal where the Azure Arc-enabled data services resources have been created.-- Select all the resources to be deleted.-- Click on the Delete button.-- Confirm the deletion by typing 'yes' and click **Delete**.--## Delete resources using the Azure CLI --You can delete specific Azure Arc-enabled data services resources using the Azure CLI. --### Delete SQL managed instance resources using the Azure CLI --To delete SQL managed instance resources from Azure using the Azure CLI replace the placeholder values in the command below and run it. --```azurecli -az resource delete --name <sql instance name> --resource-type Microsoft.AzureArcData/sqlManagedInstances --resource-group <resource group name> --#Example -#az resource delete --name sql1 --resource-type Microsoft.AzureArcData/sqlManagedInstances --resource-group rg1 -``` --### Delete PostgreSQL server resources using the Azure CLI --To delete a PostgreSQL server resource from Azure using the Azure CLI replace the placeholder values in the command below and run it. --```azurecli -az resource delete --name <postgresql instance name> --resource-type Microsoft.AzureArcData/postgresInstances --resource-group <resource group name> --#Example -#az resource delete --name pg1 --resource-type Microsoft.AzureArcData/postgresInstances --resource-group rg1 -``` --### Delete Azure Arc data controller resources using the Azure CLI --> [!NOTE] -> Before deleting an Azure Arc data controller, you should delete all of the database instance resources that it is managing. --To delete an Azure Arc data controller from Azure using the Azure CLI replace the placeholder values in the command below and run it. 
--```azurecli -az resource delete --name <data controller name> --resource-type Microsoft.AzureArcData/dataControllers --resource-group <resource group name> --#Example -#az resource delete --name dc1 --resource-type Microsoft.AzureArcData/dataControllers --resource-group rg1 -``` --### Delete a resource group using the Azure CLI --You can also use the Azure CLI to [delete a resource group](../../azure-resource-manager/management/delete-resource-group.md). |
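For environments with several instances, the individual `az resource delete` commands above can be combined into a short cleanup script. The following is a minimal, illustrative sketch only; it assumes a resource group named `rg1` that contains nothing except Azure Arc-enabled data services resources, and it reuses the resource types shown above (instances are removed before the data controller, as recommended).

```azurecli
# Hypothetical cleanup sketch: delete all Arc-enabled data services resources in rg1.
# Database instances are deleted first, then the data controller.
for type in "Microsoft.AzureArcData/sqlManagedInstances" "Microsoft.AzureArcData/postgresInstances"; do
  for name in $(az resource list --resource-group rg1 --resource-type "$type" --query "[].name" -o tsv); do
    az resource delete --name "$name" --resource-type "$type" --resource-group rg1
  done
done

# Delete the data controller last.
for name in $(az resource list --resource-group rg1 --resource-type "Microsoft.AzureArcData/dataControllers" --query "[].name" -o tsv); do
  az resource delete --name "$name" --resource-type "Microsoft.AzureArcData/dataControllers" --resource-group rg1
done
```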
azure-arc | Delete Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md | - Title: Delete a SQL Managed Instance enabled by Azure Arc -description: Learn how to delete a SQL Managed Instance enabled by Azure Arc and optionally, reclaim associated Kubernetes persistent volume claims (PVCs). ------- Previously updated : 07/30/2021----# Delete a SQL Managed Instance enabled by Azure Arc --In this how-to guide, you'll find and then delete a SQL Managed Instance enabled by Azure Arc. Optionally, after deleting managed instances, you can reclaim associated Kubernetes persistent volume claims (PVCs). --1. Find existing instances: -- ```azurecli - az sql mi-arc list --k8s-namespace <namespace> --use-k8s - ``` -- Example output: -- ```console - Name Replicas ServerEndpoint State - - - - - demo-mi 1/1 10.240.0.4:32023 Ready - ``` --1. Delete the SQL Managed Instance, run one of the commands appropriate for your deployment type: -- 1. **Indirectly connected mode**: -- ```azurecli - az sql mi-arc delete --name <instance_name> --k8s-namespace <namespace> --use-k8s - ``` -- Example output: -- ```azurecli - # az sql mi-arc delete --name demo-mi --k8s-namespace <namespace> --use-k8s - Deleted demo-mi from namespace arc - ``` -- 1. **Directly connected mode**: -- ```azurecli - az sql mi-arc delete --name <instance_name> --resource-group <resource_group> - ``` -- Example output: -- ```azurecli - # az sql mi-arc delete --name demo-mi --resource-group my-rg - Deleted demo-mi from namespace arc - ``` --## Optional - Reclaim Kubernetes PVCs --A Persistent Volume Claim (PVC) is a request for storage by a user from a Kubernetes cluster while creating and adding storage to a SQL Managed Instance. Deleting PVCs is recommended but it isn't mandatory. However, if you don't reclaim these PVCs, you'll eventually end up with errors in your Kubernetes cluster. For example, you might be unable to create, read, update, or delete resources from the Kubernetes API. You might not be able to run commands like `az arcdata dc export` because the controller pods were evicted from the Kubernetes nodes due to storage issues (normal Kubernetes behavior). You can see messages in the logs similar to: --- Annotations: microsoft.com/ignore-pod-health: true -- Status: Failed -- Reason: Evicted -- Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0.--By design, deleting a SQL Managed Instance doesn't remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). The intention is to ensure that you can access the database files in case the deletion was accidental. --1. To reclaim the PVCs, take the following steps: - 1. Find the PVCs for the server group you deleted. -- ```console - kubectl get pvc - ``` -- In the example below, notice the PVCs for the SQL Managed Instances you deleted. -- ```console - # kubectl get pvc -n arc -- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - data-demo-mi-0 Bound pvc-1030df34-4b0d-4148-8986-4e4c20660cc4 5Gi RWO managed-premium 13h - logs-demo-mi-0 Bound pvc-11836e5e-63e5-4620-a6ba-d74f7a916db4 5Gi RWO managed-premium 13h - ``` -- 1. Delete the data and log PVCs for each of the SQL Managed Instances you deleted. 
- The general format of this command is: -- ```console - kubectl delete pvc <name of pvc> - ``` -- For example: -- ```console - kubectl delete pvc data-demo-mi-0 -n arc - kubectl delete pvc logs-demo-mi-0 -n arc - ``` -- Each of these kubectl commands will confirm the successful deleting of the PVC. For example: -- ```console - persistentvolumeclaim "data-demo-mi-0" deleted - persistentvolumeclaim "logs-demo-mi-0" deleted - ``` - -## Related content --Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
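If you routinely tear down test instances, the instance deletion and the PVC reclamation can be scripted together. This is a minimal sketch under stated assumptions: an indirectly connected instance named `demo-mi` in the `arc` namespace, and PVC names that follow the `data-<instance>-N` / `logs-<instance>-N` pattern shown above.

```console
# Hypothetical cleanup sketch for an instance named demo-mi in namespace arc.
az sql mi-arc delete --name demo-mi --k8s-namespace arc --use-k8s

# Reclaim the PVCs left behind by design; the grep pattern assumes the naming shown above.
kubectl get pvc -n arc -o name | grep -E '(data|logs)-demo-mi-' | xargs -r kubectl delete -n arc
```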
azure-arc | Delete Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-postgresql-server.md | - Title: Delete an Azure Arc-enabled PostgreSQL server -description: Delete an Azure Arc-enabled Postgres Hyperscale server group ------- Previously updated : 07/30/2021----# Delete an Azure Arc-enabled PostgreSQL server --This document describes the steps to delete a server from your Azure Arc setup. ---## Delete the server --As an example, let's consider we want to delete the _postgres01_ instance from the below setup: --```azurecli -az postgres server-arc list --k8s-namespace <namespace> --use-k8s -Name State -- --postgres01 Ready -``` --The general format of the delete command is: -```azurecli -az postgres server-arc delete -n <server name> --k8s-namespace <namespace> --use-k8s -``` -When you execute this command, you will be requested to confirm the deletion of the server. If you are using scripts to automate deletions you will need to use the --force parameter to bypass the confirmation request. For example, you would run a command like: -```azurecli -az postgres server-arc delete -n <server name> --force --k8s-namespace <namespace> --use-k8s -``` --For more details about the delete command, run: -```azurecli -az postgres server-arc delete --help -``` --### Delete the server used in this example --```azurecli -az postgres server-arc delete -n postgres01 --k8s-namespace <namespace> --use-k8s -``` --## Reclaim the Kubernetes Persistent Volume Claims (PVCs) --A PersistentVolumeClaim (PVC) is a request for storage by a user from Kubernetes cluster while creating and adding storage to a PostgreSQL server. Deleting a server group does not remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). This is by design. The intention is to help the user to access the database files in case the deletion of instance was accidental. Deleting PVCs is not mandatory. However it is recommended. If you don't reclaim these PVCs, you'll eventually end up with errors as your Kubernetes cluster will think it's running out of disk space or usage of the same PostgreSQL server name while creating new one might cause inconsistencies. -To reclaim the PVCs, take the following steps: --### 1. List the PVCs for the server group you deleted --To list the PVCs, run this command: --```console -kubectl get pvc [-n <namespace name>] -``` --It returns the list of PVCs, in particular the PVCs for the server group you deleted. 
For example: --```output -kubectl get pvc -NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -data-few7hh0k4npx9phsiobdc3hq-postgres01-0 Bound pvc-72ccc225-dad0-4dee-8eae-ed352be847aa 5Gi RWO default 2d18h -data-few7hh0k4npx9phsiobdc3hq-postgres01-1 Bound pvc-ce6f0c51-faed-45ae-9472-8cdf390deb0d 5Gi RWO default 2d18h -data-few7hh0k4npx9phsiobdc3hq-postgres01-2 Bound pvc-5a863ab9-522a-45f3-889b-8084c48c32f8 5Gi RWO default 2d18h -data-few7hh0k4npx9phsiobdc3hq-postgres01-3 Bound pvc-00e1ace3-1452-434f-8445-767ec39c23f2 5Gi RWO default 2d15h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-0 Bound pvc-8b810f4c-d72a-474a-a5d7-64ec26fa32de 5Gi RWO default 2d18h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-1 Bound pvc-51d1e91b-08a9-4b6b-858d-38e8e06e60f9 5Gi RWO default 2d18h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-2 Bound pvc-8e5ad55e-300d-4353-92d8-2e383b3fe96e 5Gi RWO default 2d18h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-3 Bound pvc-f9e4cb98-c943-45b0-aa07-dd5cff7ea585 5Gi RWO default 2d15h -``` -There are 8 PVCs for this server group. --### 2. Delete each of the PVCs --Delete the data and log PVCs for the PostgreSQL server you deleted. --The general format of this command is: --```console -kubectl delete pvc <name of pvc> [-n <namespace name>] -``` --For example: --```console -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-0 -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-1 -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-2 -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-3 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-0 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-1 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-2 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-3 -``` --Each of these kubectl commands will confirm the successful deleting of the PVC. For example: --```output -persistentvolumeclaim "data-postgres01-0" deleted -``` - -->[!NOTE] -> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to create, read, update, delete resources from the Kubernetes API, or being able to run commands like `az arcdata dc export` as the controller pods may be evicted from the Kubernetes nodes because of this storage issue (normal Kubernetes behavior). -> -> For example, you may see messages in the logs similar to: -> ```output -> Annotations: microsoft.com/ignore-pod-health: true -> Status: Failed -> Reason: Evicted -> Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0. -> ``` - -## Next step -Create [Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) |
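For scripted teardown, the `--force` flag described above avoids the interactive confirmation, and the PVC cleanup can be driven by the server name. A minimal sketch, assuming the `postgres01` server and the PVC naming shown in the example output above:

```console
# Hypothetical teardown sketch for the postgres01 server.
az postgres server-arc delete -n postgres01 --force --k8s-namespace <namespace> --use-k8s

# Delete every data/logs PVC whose name ends with -postgres01-<ordinal>.
kubectl get pvc -n <namespace> -o name | grep -E -- '-postgres01-[0-9]+$' | xargs -r kubectl delete -n <namespace>
```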
azure-arc | Deploy Active Directory Connector Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-cli.md | - Title: Tutorial – Deploy Active Directory connector using Azure CLI -description: Tutorial to deploy an Active Directory connector using Azure CLI ------- Previously updated : 10/11/2022-----# Tutorial – Deploy Active Directory connector using Azure CLI --This article explains how to deploy an Active Directory (AD) connector using Azure CLI. The AD connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --## Prerequisites --### Install tools --Before you can proceed with the tasks in this article, install the following tools: --- The [Azure CLI (az)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--For more details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md) ---## Deploy Active Directory connector in customer-managed keytab mode --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --#### Create an AD connector instance --> [!NOTE] -> Make sure to wrap the password for the domain service AD account in single quotes `'` to avoid the expansion of special characters such as `!`. -> --To view the available options for the AD connector create command, use the following command: --```azurecli -az arcdata ad-connector create --help -``` --To create an AD connector instance, use `az arcdata ad-connector create`. See the following examples for different connectivity modes: ---##### Indirectly connected mode --```azurecli -az arcdata ad-connector create --name < name > --k8s-namespace < Kubernetes namespace > --realm < AD domain name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode: manual or automatic > --prefer-k8s-dns < whether to prefer Kubernetes DNS or AD DNS servers for IP address lookup > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --nameserver-addresses 10.10.10.11 --account-provisioning manual --prefer-k8s-dns false --use-k8s -``` --```azurecli -# Setting environment variables needed for automatic account provisioning -DOMAIN_SERVICE_ACCOUNT_USERNAME='sqlmi' -DOMAIN_SERVICE_ACCOUNT_PASSWORD='arc@123!!' 
--# Deploying Active Directory connector with automatic account provisioning -az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --nameserver-addresses 10.10.10.11 --account-provisioning automatic --prefer-k8s-dns false --use-k8s -``` --##### Directly connected mode --```azurecli -az arcdata ad-connector create --name < name > --dns-domain-name < DNS name of the AD domain > --realm < AD domain name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode: manual or automatic > --prefer-k8s-dns < whether to prefer Kubernetes DNS or AD DNS servers for IP address lookup > --data-controller-name < Arc data controller name > --resource-group < resource group > -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --realm CONTOSO.LOCAL --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning manual --prefer-k8s-dns false --data-controller-name arcdc --resource-group arc-rg -``` --```azurecli -# Setting environment variables needed for automatic account provisioning -DOMAIN_SERVICE_ACCOUNT_USERNAME='sqlmi' -DOMAIN_SERVICE_ACCOUNT_PASSWORD='arc@123!!' --# Deploying Active Directory connector with automatic account provisioning -az arcdata ad-connector create --name arcadc --realm CONTOSO.LOCAL --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning automatic --prefer-k8s-dns false --data-controller-name arcdc --resource-group arc-rg -``` --### Update an AD connector instance --To view the available options for the AD connector update command, use the following command: --```azurecli -az arcdata ad-connector update --help -``` --To update an AD connector instance, use `az arcdata ad-connector update`. See the following examples for different connectivity modes: --#### Indirectly connected mode --```azurecli -az arcdata ad-connector update --name < name > --k8s-namespace < Kubernetes namespace > --nameserver-addresses < DNS server IP addresses > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --k8s-namespace arc --nameserver-addresses 10.10.10.11 --use-k8s -``` --#### Directly connected mode --```azurecli -az arcdata ad-connector update --name < name > --nameserver-addresses < DNS server IP addresses > --data-controller-name < Arc data controller name > --resource-group < resource group > -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --nameserver-addresses 10.10.10.11 --data-controller-name arcdc --resource-group arc-rg -``` ---### [System-managed keytab mode](#tab/system-managed-keytab-mode) -To create an AD connector instance, use `az arcdata ad-connector create`. 
See the following examples for different connectivity modes: ---#### Indirectly connected mode --```azurecli -az arcdata ad-connector create --name < name > --k8s-namespace < Kubernetes namespace > --dns-domain-name < DNS name of the AD domain > --realm < AD domain name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode > --ou-distinguished-name < AD organizational unit distinguished name > --prefer-k8s-dns < whether to prefer Kubernetes DNS or AD DNS servers for IP address lookup > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --netbios-domain-name CONTOSO --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning automatic --ou-distinguished-name "OU=arcou,DC=contoso,DC=local" --prefer-k8s-dns false --use-k8s -``` --#### Directly connected mode --```azurecli -az arcdata ad-connector create --name < name > --dns-domain-name < DNS name of the AD domain > --realm < AD domain name > --netbios-domain-name < AD domain NETBIOS name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode > --ou-distinguished-name < AD organizational unit distinguished name > --prefer-k8s-dns < whether to prefer Kubernetes DNS or AD DNS servers for IP address lookup > --data-controller-name < Arc data controller name > --resource-group < resource group > -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --realm CONTOSO.LOCAL --netbios-domain-name CONTOSO --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning automatic --ou-distinguished-name "OU=arcou,DC=contoso,DC=local" --prefer-k8s-dns false --data-controller-name arcdc --resource-group arc-rg -``` --### Update an AD connector instance --To view the available options for the AD connector update command, use the following command: --```azurecli -az arcdata ad-connector update --help -``` -To update an AD connector instance, use `az arcdata ad-connector update`. See the following examples for different connectivity modes: --#### Indirectly connected mode --```azurecli -az arcdata ad-connector update --name < name > --k8s-namespace < Kubernetes namespace > --nameserver-addresses < DNS server IP addresses > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --k8s-namespace arc --nameserver-addresses 10.10.10.11 --use-k8s -``` --#### Directly connected mode --```azurecli -az arcdata ad-connector update --name < name > --nameserver-addresses < DNS server IP addresses > --data-controller-name < Arc data controller name > --resource-group < resource group > -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --nameserver-addresses 10.10.10.11 --data-controller-name arcdc --resource-group arc-rg -``` ----## Delete an AD connector instance --To delete an AD connector instance, use `az arcdata ad-connector delete`. 
See the following examples for both connectivity modes: --### [Indirectly connected mode](#tab/indirectly-connected-mode) --```azurecli -az arcdata ad-connector delete --name < AD Connector name > --k8s-namespace < namespace > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector delete --name arcadc --k8s-namespace arc --use-k8s -``` --### [Directly connected mode](#tab/directly-connected-mode) -```azurecli -az arcdata ad-connector delete --name < AD Connector name > --data-controller-name < data controller name > --resource-group < resource group > -``` --Example: --```azurecli -az arcdata ad-connector delete --name arcadc --data-controller-name arcdc --resource-group arc-rg -``` ----## Related content -* [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md) -* [Tutorial – Deploy AD connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). |
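Put together, a first deployment in indirectly connected mode typically looks like the following sketch. The values (`arcadc`, namespace `arc`, realm `CONTOSO.LOCAL`, DNS server `10.10.10.11`) mirror the examples above; the `kubectl get adc` check uses the short name for the ActiveDirectoryConnector custom resource that the related connector articles reference.

```azurecli
# Hypothetical end-to-end sketch: indirectly connected mode with automatic account provisioning.
DOMAIN_SERVICE_ACCOUNT_USERNAME='sqlmi'
DOMAIN_SERVICE_ACCOUNT_PASSWORD='<domain service account password>'

az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --nameserver-addresses 10.10.10.11 --account-provisioning automatic --prefer-k8s-dns false --use-k8s

# Verify that the connector custom resource was created in the namespace.
kubectl get adc -n arc
```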
azure-arc | Deploy Active Directory Connector Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-portal.md | - Title: Tutorial ΓÇô Deploy Active Directory connector using Azure portal -description: Tutorial to deploy an Active Directory connector using Azure portal ------ Previously updated : 10/11/2022----# Tutorial ΓÇô Deploy Active Directory connector using Azure portal --Active Directory (AD) connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --This article explains how to deploy, manage, and delete an Active Directory (AD) connector in directly connected mode from the Azure portal. --## Prerequisites --For details about how to set up OU and AD account, go to [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md). --Make sure you have the following deployed before proceed with the steps in this article: --- An Arc-enabled Azure Kubernetes cluster.-- A data controller in directly connected mode.--## Create a new AD connector --1. Log in to [Azure portal](https://portal.azure.com). -1. In the search resources field at the top of the portal, type **data controllers**, and select **Azure Arc data controllers**. --Azure takes you to where you can find all available data controllers deployed in your selected Azure subscription. --1. Select the data controller where you wish to add an AD connector. -1. Under **Settings** select **Active Directory**. The portal shows the Active Directory connectors for this data controller. -1. Select **+ Add Connector**, the portal presents an **Add Connector** interface. -1. Under **Active Directory connector** - 1. Specify your **Connector name**. - 2. Choose the account provisioning type - either **Automatic** or **Manual**. --The account provisioning type determines whether you deploy a customer-managed keytab AD connector or a system-managed keytab AD connector. --### Create a new customer-managed keytab AD connector --1. Click **Add Connector**. - -1. Choose the account provisioning type **Manual**. - -1. Set the editable fields for your connector: - - **Realm**: The name of the Active Directory (AD) domain in uppercase. For example *CONTOSO.COM*. - - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*. - - **Netbios domain name**: Optional. The NETBIOS name of the Active Directory domain. For example *CONTOSO*. Defaults to the first label of realm. - - **DNS domain name**: Optional. The DNS domain name associated with the Active Directory domain. For example, *contoso.com*. - - **DNS replicas**: Optional. The number of replicas to deploy for the DNS proxy service. Defaults to `1`. - - **Prefer Kubernetes DNS for PTR lookups**: Optional. Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS. -- ![Screenshot of the portal interface to add customer managed keytab.](media/active-directory-deployment/add-ad-customer-managed-keytab-connector-portal.png) --1. Click **Add Connector** to create a new customer-managed keytab AD connector. --### Create a new system-managed keytab AD connector -1. Click **Add Connector**. -1. Choose the account provisioning type **Automatic**. -1. Set the editable fields for your connector: - - **Realm**: The name of the Active Directory (AD) domain in uppercase. For example *CONTOSO.COM*. 
- - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*. - - **OU distinguished name** The distinguished name of the Organizational Unit (OU) pre-created in the Active Directory (AD) domain. For example, `OU=arcou,DC=contoso,DC=com`. - - **Domain Service Account username** The username of the Domain Service Account in Active Directory. - - **Domain Service Account password** The password of the Domain Service Account in Active Directory. - - **Primary domain controller hostname (Optional)** The hostname of the primary Active Directory domain controller. For example, `azdc01.contoso.com`. - - **Secondary domain controller hostname (Optional)** The secondary domain controller hostname. - - **Netbios domain name**: Optional. The NETBIOS name of the Active Directory domain. For example *CONTOSO*. Defaults to the first label of realm. - - **DNS domain name**: Optional. The DNS domain name associated with the Active Directory domain. For example, *contoso.com*. - - **DNS replicas (Optional)** The number of replicas to deploy for the DNS proxy service. Defaults to `1`. - - **Prefer Kubernetes DNS for PTR lookups**: Optional. Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS. -- ![Screenshot of the portal interface to add system managed keytab.](media/active-directory-deployment/add-ad-system-managed-keytab-connector-portal.png) --1. Click **Add Connector** to create a new system-managed keytab AD connector. --## Edit an existing AD connector --1. Select the AD connect that you want to edit. Select the ellipses (**...**), and then **Edit**. The portal presents an **Edit Connector** interface. --1. You may update any editable fields. For example: - - **Primary domain controller hostname** The hostname of the primary Active Directory domain controller. For example, `azdc01.contoso.com`. - - **Secondary domain controller hostname** The secondary domain controller hostname. - - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*. - - **DNS replicas** The number of replicas to deploy for the DNS proxy service. Defaults to `1`. - - **Prefer Kubernetes DNS for PTR lookups**: Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS. --1. Click on **Apply** for changes to take effect. ---## Delete an AD connector --1. Select the ellipses (**...**) on the right of the Active Directory connector you would like to delete. -1. Select **Delete**. --To delete multiple AD connectors at one time: --1. Select the checkbox in the beginning row of each AD connector you want to delete. -- Alternatively, select the checkbox in the top row to select all the AD connectors in the table. --1. Click **Delete** in the management bar to delete the AD connectors that you selected. --## Related content -* [Tutorial ΓÇô Deploy Active Directory connector using Azure CLI](deploy-active-directory-connector-cli.md) -* [Tutorial ΓÇô Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md) -* [Tutorial ΓÇô Deploy Active Directory connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). |
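If you later want to automate what the portal form above captures, the same fields map onto the CLI create command from the companion tutorial. The following is only a rough, hedged equivalent of the **Automatic** (system-managed keytab) form, with assumed example values; adjust them to your own domain.

```azurecli
# Hypothetical CLI equivalent of the portal's Automatic (system-managed keytab) form.
DOMAIN_SERVICE_ACCOUNT_USERNAME='<domain service account username>'
DOMAIN_SERVICE_ACCOUNT_PASSWORD='<domain service account password>'

az arcdata ad-connector create --name arcadc --realm CONTOSO.COM --netbios-domain-name CONTOSO --dns-domain-name contoso.com --nameserver-addresses 10.10.10.11 --account-provisioning automatic --ou-distinguished-name "OU=arcou,DC=contoso,DC=com" --prefer-k8s-dns false --data-controller-name arcdc --resource-group arc-rg
```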
azure-arc | Deploy Active Directory Postgresql Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-postgresql-server-cli.md | - Title: Deploy Active Directory integrated Azure Arc-enabled PostgreSQL server using Azure CLI -description: Explains how to deploy Active Directory integrated Azure Arc-enabled PostgreSQL server using Azure CLI ------- Previously updated : 02/10/2023----# Deploy Active Directory integrated Azure Arc-enabled PostgreSQL using Azure CLI --This article explains how to deploy Azure Arc-enabled PostgreSQL server with Active Directory (AD) authentication using Azure CLI. --See these articles for specific instructions: --- [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)--### Prerequisites --Before you proceed, install the following tools: --- The [Azure CLI (az)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--For more details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md) --> [!IMPORTANT] -> When using Active Directory, the default account must be named "postgres" in order for connections to succeed. --## Deploy and update Active Directory integrated Azure Arc-enabled PostgreSQL server --### Customer-managed keytab mode --#### Create an Azure Arc-enabled PostgreSQL server --To view the available options for the create command for Azure Arc-enabled PostgreSQL server, use the following command: --```azurecli -az postgres server-arc create --help -``` --To create an Azure Arc-enabled PostgreSQL server, use `az postgres server-arc create`. See the following example: --```azurecli -az postgres server-arc create --name < PostgreSQL server name > --k8s-namespace < namespace > --ad-connector-name < your AD connector name > --keytab-secret < PostgreSQL server keytab secret name > --ad-account-name < PostgreSQL server AD user account > --dns-name < PostgreSQL server primary endpoint DNS name > --port < PostgreSQL server primary endpoint port number > --use-k8s -``` --Example: --```azurecli -az postgres server-arc create --name contosopg --k8s-namespace arc --ad-connector-name adarc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --dns-name arcpg.contoso.local --port 31432 --use-k8s -``` --#### Update an Azure Arc-enabled PostgreSQL server --To update an Arc-enabled PostgreSQL server, use `az postgres server-arc update`. See the following example: --```azurecli -az postgres server-arc update --name < PostgreSQL server name > --k8s-namespace < namespace > --keytab-secret < PostgreSQL server keytab secret name > --use-k8s -``` --Example: --```azurecli -az postgres server-arc update --name contosopg --k8s-namespace arc --keytab-secret arcuser-keytab-secret --use-k8s -``` --## Related content -- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. |
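The `--keytab-secret` parameter above refers to a Kubernetes secret that must already exist in the same namespace as the server. A minimal sketch of publishing an existing keytab file as that secret, assuming the file `arcuser.keytab` was generated for the AD account and that the secret stores it under a key named `keytab` (the convention used in the related SQL Managed Instance deployment article):

```console
# Hypothetical sketch: create the secret referenced by --keytab-secret from a keytab file.
kubectl create secret generic arcuser-keytab-secret --from-file=keytab=arcuser.keytab -n arc

# Confirm the secret exists before running az postgres server-arc create.
kubectl get secret arcuser-keytab-secret -n arc
```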
azure-arc | Deploy Active Directory Sql Managed Instance Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance-cli.md | - Title: Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI -description: Explains how to deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI ------- Previously updated : 10/11/2022----# Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI --This article explains how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory (AD) authentication using Azure CLI. --See these articles for specific instructions: --- [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)-- [Tutorial – Deploy AD connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md)--### Prerequisites --Before you proceed, install the following tools: --- The [Azure CLI (az)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--For more details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md) ---## Deploy and update Active Directory integrated SQL Managed Instance --### [Customer-managed keytab mode](#tab/Customer-managed-keytab-mode) ---#### Create an instance --To view the available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command: --```azurecli -az sql mi-arc create --help -``` --To create a SQL Managed Instance, use `az sql mi-arc create`. 
See the following examples for different connectivity modes: --#### Create - indirectly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --k8s-namespace < namespace > --ad-connector-name < your AD connector name > --keytab-secret < SQL MI keytab secret name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --use-k8s -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --k8s-namespace arc --ad-connector-name adarc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --use-k8s -``` --#### Create - directly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --ad-connector-name < your AD connector name > --keytab-secret < SQL MI keytab secret name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --custom-location < your custom location > --resource-group < resource-group > -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --ad-connector-name adarc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --custom-location private-location --resource-group arc-rg -``` --#### Update an instance --To update a SQL Managed Instance, use `az sql mi-arc update`. See the following examples for different connectivity modes: --#### Update - indirectly connected mode --```azurecli -az sql mi-arc update --name < SQL MI name > --k8s-namespace < namespace > --keytab-secret < SQL MI keytab secret name > --use-k8s -``` --Example: --```azurecli -az sql mi-arc update --name contososqlmi --k8s-namespace arc --keytab-secret arcuser-keytab-secret --use-k8s -``` --#### Update - directly connected mode --> [!NOTE] -> The **resource group** is a mandatory parameter, but its value can't be changed. --```azurecli -az sql mi-arc update --name < SQL MI name > --keytab-secret < SQL MI keytab secret name > --resource-group < resource-group > -``` --Example: --```azurecli -az sql mi-arc update --name contososqlmi --keytab-secret arcuser-keytab-secret --resource-group arc-rg -``` --### [System-managed keytab mode](#tab/system-managed-keytab-mode) ---#### Create an instance --To view the available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command: --```azurecli -az sql mi-arc create --help -``` --To create a SQL Managed Instance, use `az sql mi-arc create`. 
See the following examples for different connectivity modes: ---##### Create - indirectly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --k8s-namespace < namespace > --ad-connector-name < your AD connector name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --use-k8s -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --k8s-namespace arc --ad-connector-name adarc --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --use-k8s -``` --##### Create - directly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --ad-connector-name < your AD connector name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --custom-location < your custom location > --resource-group <resource-group> -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --ad-connector-name adarc --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --custom-location private-location --resource-group arc-rg -``` ------## Delete an instance --To delete a SQL Managed Instance, use `az sql mi-arc delete`. See the following examples for both connectivity modes: ---### [Indirectly connected mode](#tab/indirectly-connected-mode) --```azurecli -az sql mi-arc delete --name < SQL MI name > --k8s-namespace < namespace > --use-k8s -``` --Example: --```azurecli -az sql mi-arc delete --name contososqlmi --k8s-namespace arc --use-k8s -``` --### [Directly connected mode](#tab/directly-connected-mode) --```azurecli -az sql mi-arc delete --name < SQL MI name > --resource-group < resource group > -``` --Example: --```azurecli -az sql mi-arc delete --name contososqlmi --resource-group arc-rg -``` --## Related content --* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). -* [Connect to Active Directory integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). |
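After the instance is created, one quick way to confirm that AD authentication works end to end is to obtain a Kerberos ticket and connect with integrated authentication from a domain-joined Linux client. This is a hedged smoke-test sketch, not the documented connection procedure; the host name, port, and `arcuser` account come from the examples above, and it assumes `sqlcmd` and the Kerberos client tools are installed.

```console
# Hypothetical smoke test from a domain-joined Linux machine.
kinit arcuser@CONTOSO.LOCAL

# -E uses integrated (Kerberos) authentication against the primary AD endpoint.
sqlcmd -S arcsqlmi.contoso.local,31433 -E -Q "SELECT SUSER_NAME();"
```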
azure-arc | Deploy Active Directory Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md | - Title: Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc -description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication. ------ Previously updated : 10/11/2022----# Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc --In this article, learn how to deploy Azure Arc-enabled Azure SQL Managed Instance with Active Directory authentication. --## Prerequisites --Before you begin your SQL Managed Instance deployment, make sure you have these prerequisites: --- An Active Directory domain-- A deployed Azure Arc data controller-- A deployed Active Directory connector with a [customer-managed keytab](deploy-customer-managed-keytab-active-directory-connector.md) or [system-managed keytab](deploy-system-managed-keytab-active-directory-connector.md)--## Connector requirements --The customer-managed keytab Active Directory connector and the system-managed keytab Active Directory connector are different deployment modes that have different requirements and steps. Each mode has specific requirements during deployment. Select the tab for the connector you use. --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --For an Active Directory customer-managed keytab deployment, you must provide: --- An Active Directory user account for SQL-- Service principal names (SPNs) under the user account-- DNS A (forward) record for the primary endpoint of SQL (and optionally, a secondary endpoint)--### [System-managed keytab mode](#tab/system-managed-keytab-mode) --For an Active Directory system-managed keytab deployment, you must provide: --- A unique name of an Active Directory user account for SQL-- DNS A (forward) record for the primary endpoint of SQL (and optionally, a secondary endpoint)----## Prepare for deployment --Depending on your deployment mode, complete the following steps to prepare to deploy SQL Managed Instance. --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --To prepare for deployment in customer-managed keytab mode: --1. **Identify a DNS name for the SQL endpoints**: Choose unique DNS names for the SQL endpoints that clients will connect to from outside the Kubernetes cluster. -- - The DNS names should be in the Active Directory domain or in its descendant domains. - - The examples in this article use `sqlmi-primary.contoso.local` for the primary DNS name and `sqlmi-secondary.contoso.local` for the secondary DNS name. --1. **Identify the port numbers for the SQL endpoints**: Enter a port number for each of the SQL endpoints. -- - The port numbers must be in the acceptable range of port numbers for your Kubernetes cluster. - - The examples in this article use `31433` for the primary port number and `31434` for the secondary port number. --1. **Create an Active Directory account for the managed instance**: Choose a name for the Active Directory account to represent your managed instance. -- - The name must be unique in the Active Directory domain. - - The examples in this article use `sqlmi-account` for the Active Directory account name. -- To create the account: -- 1. On the domain controller, open the Active Directory Users and Computers tool. Create an account to represent the managed instance. - 1. 
Enter an account password that complies with the Active Directory domain password policy. You'll use this password in some of the steps in the next sections. - 1. Ensure that the account is enabled. The account doesn't need any special permissions. --1. **Create DNS records for the SQL endpoints in the Active Directory DNS servers**: In one of the Active Directory DNS servers, create A records (forward lookup records) for the DNS name you chose in step 1. -- - The DNS records should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster. - - You don't need to create reverse-lookup Pointer (PTR) records in association with the A records. --1. **Create SPNs**: For SQL to be able to accept Active Directory authentication against the SQL endpoints, you must register two SPNs in the account you created in the preceding step. Two SPNs must be registered for the primary endpoint. If you want Active Directory authentication for the secondary endpoint, the SPNs must also be registered for the secondary endpoint. -- To create and register SPNs: -- 1. Use the following format to create the SPNs: -- ```output - MSSQLSvc/<DNS name> - MSSQLSvc/<DNS name>:<port> - ``` -- 1. On one of the domain controllers, run the following commands to register the SPNs: -- ```console - setspn -S MSSQLSvc/<DNS name> <account> - setspn -S MSSQLSvc/<DNS name>:<port> <account> - ``` -- Your commands might look like the following example: -- ```console - setspn -S MSSQLSvc/sqlmi-primary.contoso.local sqlmi-account - setspn -S MSSQLSvc/sqlmi-primary.contoso.local:31433 sqlmi-account - ``` -- 1. If you want Active Directory authentication on the secondary endpoint, run the same commands to add SPNs for the secondary endpoint: -- ```console - setspn -S MSSQLSvc/<DNS name> <account> - setspn -S MSSQLSvc/<DNS name>:<port> <account> - ``` - - Your commands might look like the following example: -- ```console - setspn -S MSSQLSvc/sqlmi-secondary.contoso.local sqlmi-account - setspn -S MSSQLSvc/sqlmi-secondary.contoso.local:31434 sqlmi-account - ``` --1. **Generate a keytab file that has entries for the account and SPNs**: For SQL to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file by using a Kubernetes secret. -- - The keytab file contains encrypted entries for the Active Directory account that's generated for the managed instance and the SPNs. - - SQL Server uses this file as its credential against Active Directory. - - You can choose from multiple tools to generate a keytab file: -- - `adutil`: Available for Linux (see [Introduction to adutil](/sql/linux/sql-server-linux-ad-auth-adutil-introduction)) - - `ktutil`: Available on Linux - - `ktpass`: Available on Windows - - Custom scripts - - To generate the keytab file specifically for the managed instance: -- 1. Use one of these custom scripts: -- - Linux: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh) - - Windows Server: [create-sql-keytab.ps1](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.ps1) -- The scripts accept several parameters and generate a keytab file and a YAML specification file for the Kubernetes secret that contains the keytab. -- 1. In your script, replace the parameter values with values for your managed instance deployment. 
-- For the input parameters, use the following values: -- - `--realm`: The Active Directory domain in uppercase. Example: `CONTOSO.LOCAL` - - `--account`: The Active Directory account where the SPNs are registered. Example: `sqlmi-account` - - `--port`: The primary SQL endpoint port number. Example: `31433` - - `--dns-name`: The DNS name for the primary SQL endpoint. - - `--keytab-file`: The path to the keytab file. - - `--secret-name`: The name of the keytab secret to generate a specification for. - - `--secret-namespace`: The Kubernetes namespace that contains the keytab secret. - - `--secondary-port`: The secondary SQL endpoint port number (optional). Example: `31434` - - `--secondary-dns-name`: The DNS name for the secondary SQL endpoint (optional). -- Choose a name for the Kubernetes secret that hosts the keytab. Use the namespace where the managed instance is deployed. -- 1. Run the following command to create a keytab: -- ```console - AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <Active Directory domain in uppercase> --account <Active Directory account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace> - ``` -- Your command might look like the following example: -- ```console - AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns - ``` -- 1. Run the following command to verify that the keytab is correct: -- ```console - klist -kte <keytab file> - ``` --1. **Deploy the Kubernetes secret for the keytab**: Use the Kubernetes secret specification file you create in the preceding step to deploy the secret. -- The specification file looks similar to this example: -- ```yaml - apiVersion: v1 - kind: Secret - type: Opaque - metadata: - name: <secret name> - namespace: <secret namespace> - data: - keytab: <keytab content in Base64> - ``` - - To deploy the Kubernetes secret, run this command: - - ```console - kubectl apply -f <file> - ``` - - Your command might look like this example: - - ```console - kubectl apply -f sqlmi-keytab-secret.yaml - ``` --### [System-managed keytab mode](#tab/system-managed-keytab-mode) --To prepare for deployment in system-managed keytab mode: --1. **Identify a DNS name for the SQL endpoints**: Choose unique DNS names for the SQL endpoints that clients will connect to from outside the Kubernetes cluster. -- - The DNS names should be in the Active Directory domain or its descendant domains. - - The examples in this article use `sqlmi-primary.contoso.local` for the primary DNS name and `sqlmi-secondary.contoso.local` for the secondary DNS name. --1. **Identify the port numbers for the SQL endpoints**: Enter a port number for each of the SQL endpoints. -- - The port numbers must be in the acceptable range of port numbers for your Kubernetes cluster. - - The examples in this article use `31433` for the primary port number and `31434` for the secondary port number. --1. **Choose an Active Directory account name for SQL**: Choose a name for the Active Directory account that will represent your managed instance. -- - This name should be unique in the Active Directory domain, and the account must *not* already exist in the domain. This account is automatically generated in the domain. 
- - The examples in this article use `sqlmi-account` for the Active Directory account name. --1. **Create DNS records for the SQL endpoints in the Active Directory DNS servers**: In one of the Active Directory DNS servers, create A records (forward lookup records) for the DNS names chosen in step 1. -- - The DNS records should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster. - - You don't need to create reverse-lookup Pointer (PTR) records in association with the A records. ----## Set properties for Active Directory authentication --To deploy SQL Managed Instance enabled by Azure Arc for Azure Arc Active Directory authentication, update your deployment specification file to reference the Active Directory connector instance to use. Referencing the Active Directory connector in the SQL specification file automatically sets up SQL for Active Directory authentication. --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --To support Active Directory authentication on SQL in customer-managed keytab mode, set the following properties in your deployment specification file. Some properties are required and some are optional. --#### Required --- `spec.security.activeDirectory.connector.name`: The name of the preexisting Active Directory connector custom resource to join for Active Directory authentication. If you enter a value for this property, Active Directory authentication is implemented.-- `spec.security.activeDirectory.accountName`: The name of the Active Directory account for the managed instance.-- `spec.security.activeDirectory.keytabSecret`: The name of the Kubernetes secret that hosts the pre-created keytab file for users. This secret must be in the same namespace as the managed instance. This parameter is required only for the Active Directory deployment in customer-managed keytab mode.-- `spec.services.primary.dnsName`: Enter a DNS name for the primary SQL endpoint.-- `spec.services.primary.port`: Enter a port number for the primary SQL endpoint.--#### Optional --- `spec.security.activeDirectory.connector.namespace`: The Kubernetes namespace of the preexisting Active Directory connector to join for Active Directory authentication. If you don't enter a value, the SQL namespace is used.-- `spec.services.readableSecondaries.dnsName`: Enter a DNS name for the secondary SQL endpoint.-- `spec.services.readableSecondaries.port`: Enter a port number for the secondary SQL endpoint.--### [System-managed keytab mode](#tab/system-managed-keytab-mode) --To support Active Directory authentication on SQL in system-managed keytab mode, set the following properties in your deployment specification file. Some properties are required and some are optional. --#### Required --- `spec.security.activeDirectory.connector.name`: The name of the preexisting Active Directory connector custom resource to join for Active Directory authentication. If you enter a value for this property, Active Directory authentication is implemented.-- `spec.security.activeDirectory.accountName`: The name of the Active Directory account for the managed instance. 
This account is automatically generated for this managed instance and must not exist in the domain before you deploy SQL.-- `spec.services.primary.dnsName`: Enter a DNS name for the primary SQL endpoint.-- `spec.services.primary.port`: Enter a port number for the primary SQL endpoint.--#### Optional --- `spec.security.activeDirectory.connector.namespace`: The Kubernetes namespace of the preexisting Active Directory connector to join for Active Directory authentication. If you don't enter a value, the SQL namespace is used.-- `spec.security.activeDirectory.encryptionTypes`: A list of Kerberos encryption types to allow for the automatically generated Active Directory account provided in `spec.security.activeDirectory.accountName`. Accepted values are `RC4`, `AES128`, and `AES256`. If you don't enter an encryption type, all encryption types are allowed. You can disable RC4 by entering only `AES128` and `AES256` as encryption types.-- `spec.services.readableSecondaries.dnsName`: Enter a DNS name for the secondary SQL endpoint.-- `spec.services.readableSecondaries.port`: Enter a port number for the secondary SQL endpoint.----## Prepare your deployment specification file --Next, prepare a YAML specification file to deploy SQL Managed Instance. For the mode you use, enter your deployment values in the specification file. --> [!NOTE] -> In the specification file for both modes, the `admin-login-secret` value in the YAML example provides basic authentication. You can use the parameter value to log in to the managed instance, and then create logins for Active Directory users and groups. For more information, see [Connect to Active Directory-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --The following example shows a specification file for customer-managed keytab mode: --```yaml -apiVersion: v1 -data: - password: <your Base64-encoded password> - username: <your Base64-encoded username> -kind: Secret -metadata: - name: admin-login-secret -type: Opaque --apiVersion: sql.arcdata.microsoft.com/v3 -kind: SqlManagedInstance -metadata: - name: <name> - namespace: <namespace> -spec: - backup: - retentionPeriodInDays: 7 - dev: false - tier: GeneralPurpose - forceHA: "true" - licenseType: LicenseIncluded - replicas: 1 - security: - adminLoginSecret: admin-login-secret - activeDirectory: - connector: - name: <Active Directory connector name> - namespace: <Active Directory connector namespace> - accountName: <Active Directory account name> - keytabSecret: <keytab secret name> - - primary: - type: LoadBalancer - dnsName: <primary endpoint DNS name> - port: <primary endpoint port number> - readableSecondaries: - type: LoadBalancer - dnsName: <secondary endpoint DNS name> - port: <secondary endpoint port number> - storage: - data: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi - logs: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi -``` --### [System-managed keytab mode](#tab/system-managed-keytab-mode) --The following example shows a specification file for system-managed keytab mode: --```yaml -apiVersion: v1 -data: - password: <your Base64-encoded password> - username: <your Base64-encoded username> -kind: Secret -metadata: - name: admin-login-secret -type: Opaque --apiVersion: sql.arcdata.microsoft.com/v3 -kind: SqlManagedInstance -metadata: - name: <name> - namespace: <namespace> -spec: - backup: - 
retentionPeriodInDays: 7 - dev: false - tier: GeneralPurpose - forceHA: "true" - licenseType: LicenseIncluded - replicas: 1 - security: - adminLoginSecret: admin-login-secret - activeDirectory: - connector: - name: <Active Directory connector name> - namespace: <Active Directory connector namespace> - accountName: <Active Directory account name> - - primary: - type: LoadBalancer - dnsName: <primary endpoint DNS name> - port: <primary endpoint port number> - readableSecondaries: - type: LoadBalancer - dnsName: <secondary endpoint DNS name> - port: <secondary endpoint port number> - storage: - data: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi - logs: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi -``` ----## Deploy the managed instance --For both customer-managed keytab mode and system-managed keytab mode, deploy the managed instance by using the prepared specification YAML file: --1. Save the file. The example in the next step uses *sqlmi.yaml* for the specification file name, but you can choose any file name. --1. Run the following command to deploy the instance by using the specification: -- ```console - kubectl apply -f <specification file name> - ``` -- Your command might look like the following example: -- ```console - kubectl apply -f sqlmi.yaml - ``` --## Related content --- [Connect to Active Directory-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md)-- [Upgrade your Active Directory connector](upgrade-active-directory-connector.md) |
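The `admin-login-secret` in both specification files expects Base64-encoded values. The following is a small helper sketch for producing them and then watching the deployment come up; the namespace is an assumed placeholder, and `kubectl get pods` simply shows the instance pods starting.

```console
# Hypothetical helper: Base64-encode the basic-auth credentials for admin-login-secret.
echo -n '<admin username>' | base64
echo -n '<admin password>' | base64

# Apply the specification and watch the pods in the target namespace come up.
kubectl apply -f sqlmi.yaml
kubectl get pods -n <namespace> -w
```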
azure-arc | Deploy Customer Managed Keytab Active Directory Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-customer-managed-keytab-active-directory-connector.md | - Title: Tutorial ΓÇô Deploy Active Directory (AD) Connector in customer-managed keytab mode -description: Tutorial to deploy a customer-managed keytab Active Directory (AD) connector ------ Previously updated : 10/11/2022----# Tutorial ΓÇô Deploy Active Directory (AD) connector in customer-managed keytab mode --This article explains how to deploy Active Directory (AD) connector in customer-managed keytab mode. The connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --## Active Directory connector in customer-managed keytab mode --In customer-managed keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS -* Active Directory DNS Servers -* Kubernetes DNS Servers --The AD Connector facilitates the environment needed by SQL to authenticate AD logins. --The following diagram shows AD Connector and DNS Proxy service functionality in customer-managed keytab mode: --![Active Directory connector](media/active-directory-deployment/active-directory-connector-customer-managed.png) --## Prerequisites --Before you proceed, you must have: --* An instance of Data Controller deployed on a supported version of Kubernetes -* An Active Directory (AD) domain --## Input for deploying Active Directory (AD) Connector --To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment. --These inputs are provided in a YAML specification of AD Connector instance. --Following metadata about the AD domain must be available before deploying an instance of AD Connector: -* Name of the Active Directory domain -* List of the domain controllers (fully qualified domain names) -* List of the DNS server IP addresses --Following input fields are exposed to the users in the Active Directory connector spec: --- **Required**-- - `spec.activeDirectory.realm` - Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with. - - - `spec.activeDirectory.dns.nameserverIpAddresses` - List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers. --- **Optional**-- - `spec.activeDirectory.netbiosDomainName` NetBIOS name of the Active Directory domain. This is the short domain name (pre-Windows 2000 name) of your Active Directory domain. This is often used to qualify accounts in the AD domain. e.g. if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name. - - This field is optional. When not provided, its value defaults to the first label of the `spec.activeDirectory.realm` field. - - In most domain environments, this is set to the default value but some domain environments may have a non-default value. You will need to use this field only when your domain's NetBIOS name does not match the first label of its fully qualified name. -- - `spec.activeDirectory.dns.domainName` - DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers. -- A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory. 
-- This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase. -- - `spec.activeDirectory.dns.replicas` - Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided. -- - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups` - Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups. -- DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups. -- This field is optional. When not provided, it defaults to `true`, that is, DNS lookups of IP addresses are first forwarded to Kubernetes DNS servers. If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers. When set to `false`, these DNS lookups will be forwarded to AD DNS servers first and upon failure, fall back to Kubernetes. ---## Deploy a customer-managed keytab Active Directory (AD) connector --To deploy an AD connector, create a .yaml specification file called `active-directory-connector.yaml`. --The following example shows a customer-managed keytab AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain. --```yaml -apiVersion: arcdata.microsoft.com/v1beta1 -kind: ActiveDirectoryConnector -metadata: - name: adarc - namespace: <namespace> -spec: - activeDirectory: - realm: CONTOSO.LOCAL - dns: - preferK8sDnsForPtrLookups: false - nameserverIPAddresses: - - <DNS Server 1 IP address> - - <DNS Server 2 IP address> -``` --The following command deploys the AD connector instance. Currently, only the Kubernetes-native approach to deployment is supported. --```console -kubectl apply -f active-directory-connector.yaml -``` --After you submit the deployment of the AD connector instance, you can check the status of the deployment by using the following command. --```console -kubectl get adc -n <namespace> -``` --## Related content -* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). -* [Connect to AD-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). - |
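If the connector doesn't reach a `Ready` state, the custom resource carries more detail, including recent events. A minimal sketch, using the same `adc` short name and the example connector name `adarc` from the specification above:

```console
kubectl describe adc adarc -n <namespace>
```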
azure-arc | Deploy System Managed Keytab Active Directory Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-system-managed-keytab-active-directory-connector.md | - Title: Tutorial ΓÇô Deploy Active Directory connector in system-managed keytab mode -description: Tutorial to deploy a system-managed keytab Active Directory connector ------ Previously updated : 10/11/2022-----# Tutorial ΓÇô Deploy Active Directory connector in system-managed keytab mode --This article explains how to deploy Active Directory connector in system-managed keytab mode. It is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --## Active Directory connector in system-managed keytab mode --In System-Managed Keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS -* Active Directory DNS Servers -* Kubernetes DNS Servers --In addition to the DNS proxy service, AD Connector also deploys a Security Support Service that facilitates communication to the AD domain for automatic creation and management of AD accounts, Service Principal Names (SPNs) and keytabs. --The following diagram shows AD Connector and DNS Proxy service functionality in system-managed keytab mode: --![Active Directory connector](media/active-directory-deployment/active-directory-connector-smk.png) --## Prerequisites --Before you proceed, you must have: --* An instance of Data Controller deployed on a supported version of Kubernetes -* An Active Directory domain -* A pre-created organizational unit (OU) in the Active Directory domain -* An Active Directory domain service account --The AD domain service account should have sufficient permissions to automatically create and delete users accounts inside the provided organizational unit (OU) in the active directory. --Grant the following permissions - scoped to the Organizational Unit (OU) - to the domain service account: - -- Read all properties-- Write all properties-- Create User objects-- Delete User objects-- Reset Password for Descendant User objects--For details about how to set up OU and AD account, go to [Deploy Azure Arc-enabled data services in Active Directory authentication with system-managed keytab - prerequisites](active-directory-prerequisites.md) --## Input for deploying Active Directory connector in system-managed keytab mode --To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment. --These inputs are provided in a yaml specification for the AD connector instance. --The following metadata about the AD domain must be available before deploying an instance of AD connector: --* Name of the Active Directory domain -* List of the domain controllers (fully qualified domain names) -* List of the DNS server IP addresses --The following input fields are exposed to the users in the Active Directory connector specification: --- **Required**- - `spec.activeDirectory.realm` - Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with. -- - `spec.activeDirectory.domainControllers.primaryDomainController.hostname` - Fully qualified domain name of the Primary Domain Controller (PDC) in the AD domain. 
-- If you do not know which domain controller in the domain is primary, you can find out by running this command on any Windows machine joined to the AD domain: `netdom query fsmo`. - - - `spec.activeDirectory.dns.nameserverIpAddresses` - List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers. --- **Optional**- - `spec.activeDirectory.serviceAccountProvisioning` This is an optional field which defines your AD connector deployment mode with possible values as `manual` for customer-managed keytab or `automatic` for system-managed keytab. When this field is not set, the value defaults to `manual`. When set to `automatic` (system-managed keytab), the system will automatically generate AD accounts and Service Principal Names (SPNs) for the SQL Managed Instances associated with this AD Connector and create keytab files for them. When set to `manual` (customer-managed keytab), the system will not provide automatic generation of the AD account and keytab generation. The user will be expected to provide a keytab file. -- - `spec.activeDirectory.ouDistinguishedName` This is an optional field. Though it becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to `automatic`. This field accepts the Distinguished Name (DN) of the Organizational Unit (OU) that the users must create in Active Directory domain before deploying AD Connector. It is used to store the system-generated AD accounts for SQL Managed Instances in Active Directory domain. The example of the value looks like: `OU=arcou,DC=contoso,DC=local`. -- - `spec.activeDirectory.domainServiceAccountSecret` This is an optional field. It becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to `automatic`. This field accepts the name of the Kubernetes secret that contains the username and password of the Domain Service Account that was created prior to the AD Connector deployment. The system will use this account to generate other AD accounts in the OU and perform actions on those AD accounts. -- - `spec.activeDirectory.netbiosDomainName` NetBIOS name of the Active Directory domain. This is the short domain name (pre-Windows 2000 name) of your Active Directory domain. This is often used to qualify accounts in the AD domain. e.g. if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name. - - This field is optional. When not provided, its value defaults to the first label of the `spec.activeDirectory.realm` field. - - In most domain environments, this is set to the default value but some domain environments may have a non-default value. You will need to use this field only when your domain's NetBIOS name does not match the first label of its fully qualified name. -- - `spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname` - List of the fully qualified domain names of the secondary domain controllers in the AD domain. -- If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully qualified domain names in this list. This allows high-availability for Kerberos operations. -- This field is optional and not needed. The system will automatically detect the secondary domain controllers when a value is not provided. -- - `spec.activeDirectory.dns.domainName` - DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers. 
-- A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory. -- This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase. -- - `spec.activeDirectory.dns.replicas` - Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided. -- - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups` - Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups. -- DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups. -- This field is optional. When not provided, it defaults to `true`, that is, DNS lookups of IP addresses are first forwarded to Kubernetes DNS servers. If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers. When set to `false`, these DNS lookups will be forwarded to AD DNS servers first and upon failure, fall back to Kubernetes. --## Deploy Active Directory connector in system-managed keytab mode --To deploy an AD connector, create a YAML specification file called `active-directory-connector.yaml`. --The following is an example of a system-managed keytab AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain. The `adarc-dsa-secret` contains the AD domain service account that was created before the AD connector deployment. --> [!NOTE] -> Make sure the password of the provided domain service AD account doesn't contain the `!` special character. -> --```yaml -apiVersion: v1 -kind: Secret -type: Opaque -metadata: - name: adarc-dsa-secret - namespace: <namespace> -data: - password: <your base64 encoded password> - username: <your base64 encoded username> ---- -apiVersion: arcdata.microsoft.com/v1beta2 -kind: ActiveDirectoryConnector -metadata: - name: adarc - namespace: <namespace> -spec: - activeDirectory: - realm: CONTOSO.LOCAL - serviceAccountProvisioning: automatic - ouDistinguishedName: "OU=arcou,DC=contoso,DC=local" - domainServiceAccountSecret: adarc-dsa-secret - domainControllers: - primaryDomainController: - hostname: dc1.contoso.local - secondaryDomainControllers: - - hostname: dc2.contoso.local - - hostname: dc3.contoso.local - dns: - preferK8sDnsForPtrLookups: false - nameserverIPAddresses: - - <DNS Server 1 IP address> - - <DNS Server 2 IP address> -``` ---The following command deploys the AD connector instance. Currently, only the Kubernetes-native approach to deployment is supported. --```console -kubectl apply -f active-directory-connector.yaml -``` --After you submit the deployment for the AD connector instance, you can check the status of the deployment by using the following command. --```console -kubectl get adc -n <namespace> -``` --## Related content -* [Deploy a customer-managed keytab Active Directory connector](deploy-customer-managed-keytab-active-directory-connector.md) -* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). -* [Connect to AD-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). |
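Rather than hand-encoding the base64 values in the `Secret` YAML shown above, the domain service account secret can also be created directly with `kubectl`, which encodes the literals for you. A sketch, assuming the same secret name, key names, and namespace as the example:

```console
kubectl create secret generic adarc-dsa-secret \
  --namespace <namespace> \
  --from-literal=username=<domain service account username> \
  --from-literal=password=<domain service account password>
```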
azure-arc | Deploy Telemetry Router | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-telemetry-router.md | - Title: Deploy telemetry router | Azure Arc-enabled data services -description: Learn how to deploy the Azure Arc Telemetry Router ---- Previously updated : 09/07/2022----# Deploy the Azure Arc telemetry Router --> [!NOTE] -> -> - The telemetry router is in Public Preview and should be deployed for **testing purposes only**. -> - While the telemetry router is in Public Preview, be advised that future preview releases could include changes to CRD specs, CLI commands, and/or telemetry router messages. -> - The current preview does not support in-place upgrades of a data controller deployed with the Arc telemetry router enabled. In order to install or upgrade a data controller in a future release, you will need to uninstall the data controller and then re-install. --## What is the Azure Arc Telemetry Router? --The Azure Arc telemetry router enables exporting telemetry data to other monitoring solutions. For this Public Preview, we only support exporting log data to either Kafka or Elasticsearch and metric data to Kafka. --This document specifies how to deploy the telemetry router and configure it to work with the supported exporters. --## Deployment --> [!NOTE] -> -> The telemetry router currently supports indirectly connected mode only. --### Create a Custom Configuration Profile --After setting up your Kubernetes cluster, you'll need to [create a custom configuration profile](create-custom-configuration-template.md). Next, enable a temporary feature flag that deploys the telemetry router during data controller creation. --### Turn on the Feature Flag --After creating the custom configuration profile, you'll need to edit the profile to add the `monitoring` property with the `enableOpenTelemetry` flag set to `true`. You can set the feature flag by running the following az CLI commands (edit the --path parameter, as necessary): --```bash -az arcdata dc config add --path ./control.json --json-values ".spec.monitoring={}" -az arcdata dc config add --path ./control.json --json-values ".spec.monitoring.enableOpenTelemetry=true" -``` --To confirm the flag was set correctly, open the control.json file and confirm the `monitoring` object was added to the `spec` object and `enableOpenTelemetry` is set to `true`. --```yaml -spec: - monitoring: - enableOpenTelemetry: true -``` --This feature flag requirement will be removed in a future release. --### Create the Data Controller --After creating the custom configuration profile and setting the feature flag, you're ready to [create the data controller using indirect connectivity mode](create-data-controller-indirect-cli.md?tabs=linux). Be sure to replace the `--profile-name` parameter with a `--path` parameter that points to your custom control.json file (see [use custom control.json file to deploy Azure Arc-enabled data controller](create-custom-configuration-template.md)) --### Verify Telemetry Router Deployment --When the data controller is created, a TelemetryRouter custom resource is also created. Data controller deployment is marked ready when both custom resources have finished deploying. 
After the data controller finishes deployment, you can use the following command to verify that the TelemetryRouter exists: --```bash -kubectl describe telemetryrouter arc-telemetry-router -n <namespace> -``` --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 - kind: TelemetryRouter - metadata: - name: arc-telemetry-router - namespace: <namespace> - spec: - credentials: - exporters: - pipelines: -``` --At the time of creation, no pipeline or exporters are set up. You can [setup your own pipelines and exporters](adding-exporters-and-pipelines.md) to route metrics and logs data to your own instances of Kafka and Elasticsearch. --After the TelemetryRouter is deployed, an instance of Kafka (arc-router-kafka) and a single instance of TelemetryCollector (collector-inbound) should be deployed and in a ready state. These resources are system managed and editing them isn't supported. The following pods will be deployed as a result: --- An inbound collector pod - `arctc-collector-inbound-0`-- A kakfa broker pod - `arck-arc-router-kafka-broker-0`-- A kakfa controller pod - `arck-arc-router-kafka-controller-0`---> [!NOTE] -> An outbound collector pod isn't created until at least one pipeline has been added to the telemetry router. -> -> After you create the first pipeline, an additional TelemetryCollector resource (collector-outbound) and pod `arctc-collector-outbound-0` are deployed. --```bash -kubectl get pods -n <namespace> --NAME READY STATUS RESTARTS AGE -arc-bootstrapper-job-4z2vr 0/1 Completed 0 15h -arc-webhook-job-facc4-z7dd7 0/1 Completed 0 15h -arck-arc-router-kafka-broker-0 2/2 Running 0 15h -arck-arc-router-kafka-controller-0 2/2 Running 0 15h -arctc-collector-inbound-0 2/2 Running 0 15h -bootstrapper-8d5bff6f7-7w88j 1/1 Running 0 15h -control-vpfr9 2/2 Running 0 15h -controldb-0 2/2 Running 0 15h -logsdb-0 3/3 Running 0 15h -logsui-fwrh9 3/3 Running 0 15h -metricsdb-0 2/2 Running 0 15h -metricsdc-bc4df 2/2 Running 0 15h -metricsdc-fm7jh 2/2 Running 0 15h -metricsui-qqgbv 2/2 Running 0 15h -``` --## Related content --- [Add exporters and pipelines to your telemetry router](adding-exporters-and-pipelines.md) |
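To watch the system-managed pieces converge after the data controller deployment, you can combine the resource and pod views already referenced in this article. A minimal sketch, reusing the router name and namespace placeholder from above:

```console
# Confirm the telemetry router resource exists
kubectl get telemetryrouter arc-telemetry-router -n <namespace>

# Watch the inbound collector and Kafka pods reach Running
kubectl get pods -n <namespace> -w
```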
azure-arc | Get Connection Endpoints And Connection Strings Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/get-connection-endpoints-and-connection-strings-postgresql-server.md | - Title: Get connection endpoints and create connection strings for your Azure Arc-enabled PostgreSQL server- -description: Get connection endpoints & create connection strings for your Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Get connection endpoints & create the connection strings for your Azure Arc-enabled PostgreSQL server --This article explains how you can retrieve the connection endpoints for your server group and how you can form the connection strings, which can be used with your applications and/or tools. ----## Get connection end points: --Run the following command: -```azurecli -az postgres server-arc endpoint list -n <server name> --k8s-namespace <namespace> --use-k8s -``` -For example: -```azurecli -az postgres server-arc endpoint list -n postgres01 --k8s-namespace arc --use-k8s -``` --It returns the list of endpoints: the PostgreSQL endpoint, the log search dashboard (Kibana), and the metrics dashboard (Grafana). For example: --```output -{ - "instances": [ - { - "endpoints": [ - { - "description": "PostgreSQL Instance", - "endpoint": "postgresql://postgres:<replace with password>@12.345.567.89:5432" - }, - { - "description": "Log Search Dashboard", - "endpoint": "https://23.456.78.99:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:postgres01'))" - }, - { - "description": "Metrics Dashboard", - "endpoint": "https://34.567.890.12:3000/d/postgres-metrics?var-Namespace=arc&var-Name=postgres01" - } - ], - "engine": "PostgreSql", - "name": "postgres01" - } - ], - "namespace": "arc" -} -``` --Use these end points to: --- Form your connection strings and connect with your client tools or applications-- Access the Grafana and Kibana dashboards from your browser--For example, you can use the end point named _PostgreSQL Instance_ to connect with psql to your server group: --```console -psql postgresql://postgres:MyPassworkd@12.345.567.89:5432 -psql (10.14 (Ubuntu 10.14-0ubuntu0.18.04.1), server 12.4 (Ubuntu 12.4-1.pgdg16.04+1)) -WARNING: psql major version 10, server major version 12. - Some psql features might not work. -SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) -Type "help" for help. --postgres=# -``` -> [!NOTE] -> -> - The password of the _postgres_ user indicated in the end point named "_PostgreSQL Instance_" is the password you chose when deploying the server group. ---## From CLI with kubectl --```console -kubectl get postgresqls/<server name> -n <namespace name> -``` --For example: -```azurecli -kubectl get postgresqls/postgres01 -n arc -``` --Those commands will produce output like the one below. You can use that information to form your connection strings: --```console -NAME STATE READY-PODS PRIMARY-ENDPOINT AGE -postgres01 Ready 3/3 12.345.567.89:5432 9d -``` --## Form connection strings --Use the connections string examples below for your server group. Copy, paste, and customize them as needed: --> [!IMPORTANT] -> SSL is required for client connections. In connection string, the SSL mode parameter should not be disabled. For more information, review [https://www.postgresql.org/docs/14/runtime-config-connection.html](https://www.postgresql.org/docs/14/runtime-config-connection.html). 
--### ADO.NET --```ado.net -Server=192.168.1.121;Database=postgres;Port=24276;User Id=postgres;Password={your_password_here};Ssl Mode=Require; -``` --### C++ (libpq) --```cpp -host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require -``` --### JDBC --```jdbc -jdbc:postgresql://192.168.1.121:24276/postgres?user=postgres&password={your_password_here}&sslmode=require -``` --### Node.js --```node.js -host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require -``` --### PHP --```php -host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require -``` --### psql --```psql -psql "host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require" -``` --### Python --```python -dbname='postgres' user='postgres' host='192.168.1.121' password='{your_password_here}' port='24276' sslmode='require' -``` --### Ruby --```ruby -host=192.168.1.121 dbname=postgres user=postgres password={your_password_here} port=24276 sslmode=require -``` --## Related content -- Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server group |
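When scripting, the PostgreSQL endpoint can be extracted from the `az postgres server-arc endpoint list` output with a JMESPath `--query` rather than copied by hand. A sketch, assuming the JSON shape shown earlier in this article (server name `postgres01`, namespace `arc`):

```azurecli
az postgres server-arc endpoint list -n postgres01 --k8s-namespace arc --use-k8s \
  --query "instances[0].endpoints[?description=='PostgreSQL Instance'].endpoint | [0]" -o tsv
```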
azure-arc | Install Arcdata Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-arcdata-extension.md | - Title: Install `arcdata` extension -description: Install the `arcdata` extension for Azure (`az`) CLI ------- Previously updated : 07/30/2021----# Install `arcdata` Azure CLI extension --> [!IMPORTANT] -> If you are updating to a new release, please be sure to also update to the latest version of Azure CLI and the `arcdata` extension. ---## Install latest Azure CLI --To get the latest Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli). ---## Add `arcdata` extension --To add the extension, run the following command: --```azurecli -az extension add --name arcdata -``` --[Learn more about Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). --## Update `arcdata` extension --If you already have the extension, you can update it with the following command: --```azurecli -az extension update --name arcdata -``` --## Related content --[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) |
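To confirm which version of the extension is installed, for example after an update, you can ask the Azure CLI directly (the `version` property is part of the standard `az extension show` output):

```azurecli
az extension show --name arcdata --query version -o tsv
```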
azure-arc | Install Client Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-client-tools.md | - Title: Install client tools -description: Install azdata, kubectl, Azure CLI, psql, Azure Data Studio (Insiders), and the Arc extension for Azure Data Studio ------- Previously updated : 07/30/2021----# Install client tools for deploying and managing Azure Arc-enabled data services --This article points you to resources to install the tools to manage Azure Arc-enabled data services. --> [!IMPORTANT] -> If you are updating to a new release, update to the latest version of Azure Data Studio, the Azure Arc extension for Azure Data Studio, Azure (`az`) command line interface (CLI), and the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]. -> -> [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --The [`arcdata` extension for Azure CLI (`az`)](about-arcdata-extension.md) replaces `azdata` for Azure Arc-enabled data services. --## Tools for creating and managing Azure Arc-enabled data services --The following table lists common tools required for creating and managing Azure Arc-enabled data services, and how to install those tools: --| Tool | Required | Description | Installation | -||||| -| Azure CLI (`az`)<sup>1</sup> | Yes | Modern command-line interface for managing Azure services. Used to manage Azure services in general and also specifically Azure Arc-enabled data services using the CLI or in scripts for both indirectly connected mode (available now) and directly connected mode (available soon). ([More info](/cli/azure/)). | [Install](/cli/azure/install-azure-cli) | -| `arcdata` extension for Azure (`az`) CLI | Yes | Command-line tool for managing Azure Arc-enabled data services as an extension to the Azure CLI (`az`) | [Install](install-arcdata-extension.md) | -| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostrgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc-enabled data services. | [Install](/azure-data-studio/download-azure-data-studio) | -| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services.| Install from the extensions gallery in Azure Data Studio.| -| PostgreSQL extension in Azure Data Studio | No | PostgreSQL extension for Azure Data Studio that provides management capabilities for PostgreSQL. | <!--{need link} [Install](../azure-data-studio/data-virtualization-extension.md) --> Install from extensions gallery in Azure Data Studio.| -| Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) \| [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/) | -| `curl` <sup>3</sup> | Required for some sample scripts. | Command-line tool for transferring data with URLs. | [Windows](https://curl.haxx.se/windows/) \| Linux: install curl package | -| `oc` | Required for Red Hat OpenShift and Azure Redhat OpenShift deployments. |`oc` is the Open Shift command line interface (CLI). 
| [Installing the CLI](https://docs.openshift.com/container-platform/4.6/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli) ----<sup>1</sup> You must be using Azure CLI version 2.26.0 or later. Run `az --version` to find the version if needed. --<sup>2</sup> You must use `kubectl` version 1.19 or later. Also, the version of `kubectl` should be within one minor version of your Kubernetes cluster version. If you want to install a specific version of the `kubectl` client, see [Install `kubectl` binary via curl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl) (on Windows 10, use cmd.exe and not Windows PowerShell to run curl). --<sup>3</sup> For PowerShell, `curl` is an alias for the `Invoke-WebRequest` cmdlet. --## Related content --[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) |
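A quick way to confirm that the installed tools meet the minimum versions called out above:

```console
# Azure CLI version (must be 2.26.0 or later)
az --version

# kubectl client version (must be 1.19 or later and within one minor version of the cluster)
kubectl version --client
```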
azure-arc | Least Privilege | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/least-privilege.md | - Title: Operate Azure Arc-enabled data services with least privileges -description: Explains how to operate Azure Arc-enabled data services with least privileges ------ Previously updated : 11/07/2021----# Operate Azure Arc-enabled data services with least privileges --Operating Arc-enabled data services with least privileges is a security best practice. Only grant users and service accounts the specific permissions required to perform the required tasks. Both Azure and Kubernetes provide a role-based access control model which can be used to grant these specific permissions. This article describes certain common scenarios in which the security of least privilege should be applied. --> [!NOTE] -> In this article, a namespace name of `arc` will be used. If you choose to use a different name, then use the same name throughout. -> In this article, the `kubectl` CLI utility is used as the example. Any tool or system that uses the Kubernetes API can be used though. --## Deploy the Azure Arc data controller --Deploying the Azure Arc data controller requires some permissions which can be considered high privilege such as creating a Kubernetes namespace or creating cluster role. The following steps can be followed to separate the deployment of the data controller into multiple steps, each of which can be performed by a user or a service account which has the required permissions. This separation of duties ensures that each user or service account in the process has just the permissions required and nothing more. --### Deploy a namespace in which the data controller will be created --This step will create a new, dedicated Kubernetes namespace into which the Arc data controller will be deployed. It is essential to perform this step first, because the following steps will use this new namespace as a scope for the permissions that are being granted. --Permissions required to perform this action: --- Namespace- - Create - - Edit (if required for OpenShift clusters) --Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created. --```console -kubectl create namespace arc -``` --If you are using OpenShift, you will need to edit the `openshift.io/sa.scc.supplemental-groups` and `openshift.io/sa.scc.uid-range` annotations on the namespace using `kubectl edit namespace <name of namespace>`. Change these existing annotations to match these _specific_ UID and fsGroup IDs/ranges. --```console -openshift.io/sa.scc.supplemental-groups: 1000700001/10000 -openshift.io/sa.scc.uid-range: 1000700001/10000 -``` --## Assign permissions to the deploying service account and users/groups --This step will create a service account and assign roles and cluster roles to the service account so that the service account can be used in a job to deploy the Arc data controller with the least privileges required. 
--Permissions required to perform this action: --- Service account- - Create -- Role- - Create -- Role binding- - Create -- Cluster role- - Create -- Cluster role binding- - Create -- All the permissions being granted to the service account (see the arcdata-deployer.yaml below for details)--Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file. --```console -kubectl apply --namespace arc -f arcdata-deployer.yaml -``` --## Grant permissions to users to create the bootstrapper job and data controller --Permissions required to perform this action: --- Role- - Create -- Role binding- - Create --Save a copy of [arcdata-installer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arcdata-installer.yaml), and replace the placeholder `{{INSTALLER_USERNAME}}` in the file with the name of the user to grant the permissions to, for example: `john@contoso.com`. Add additional role binding subjects such as other users or groups as needed. Run the following command to create the installer permissions with the edited file. --```console -kubectl apply --namespace arc -f arcdata-installer.yaml -``` --## Deploy the bootstrapper job --Permissions required to perform this action: --- User that is assigned to the arcdata-installer-role role in the previous step--Run the following command to create the bootstrapper job that will run preparatory steps to deploy the data controller. --```console -kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml -``` --## Create the Arc data controller --Now you are ready to create the data controller itself. --First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings. --### Create the metrics and logs dashboards user names and passwords --At the top of the file, you can specify a user name and password that is used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it with only those that need to have these privileges. --A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. --```consoole -echo -n '<your string to encode here>' | base64 -# echo -n 'example' | base64 -``` --Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify SSL/TLS certificates during Kubernetes native tools deployment](monitor-certificates.md). --### Edit the data controller configuration --Edit the data controller configuration as needed: --#### REQUIRED --- `location`: Change this to be the Azure location where the _metadata_ about the data controller will be stored. 
Review the [list of available regions](overview.md#supported-regions).-- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.-- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.--#### Recommended: review and possibly change defaults --Review these values, and update for your deployment: --- `storage..className`: the storage class to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, you can run the following command: `kubectl get storageclass`. The default value is `default`, which assumes a storage class named `default` exists; it does not mean the cluster's default storage class is used. Note: There are two className settings to be set to the desired storage class - one for data and one for logs.-- `serviceType`: Change the service type to NodePort if you are not using a LoadBalancer.-- `security`: For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.-- ```yml - security: - allowDumps: false - allowNodeMetricsCollection: false - allowPodMetricsCollection: false - ``` --#### Optional --The following settings are optional. --- `name`: The default name of the data controller is arc, but you can change it if you want.-- `displayName`: Set this to the same value as the name attribute at the top of the file.-- `registry`: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and pushing them to a private container registry, enter the IP address or DNS name of your registry here.-- `dockerRegistry`: The secret to use to pull the images from a private container registry if required.-- `repository`: The default repository on the Microsoft Container Registry is arcdata. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images.-- `imageTag`: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version.-- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.-- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.--The following example shows a completed data controller yaml. ---Save the edited file on your local computer and run the following command to create the data controller: --```console -kubectl create --namespace arc -f <path to your data controller file> --#Example -kubectl create --namespace arc -f data-controller.yaml -``` --### Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --```console -kubectl get datacontroller --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status or logs of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. 
--```console -kubectl describe pod/<pod name> --namespace arc -kubectl logs <pod name> --namespace arc --#Example: -#kubectl describe pod/control-2g7bl --namespace arc -#kubectl logs control-2g7b1 --namespace arc -``` --## Related content --You have several additional options for creating the Azure Arc data controller: --> **Just want to try things out?** -> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on AKS, Amazon EKS, or GKE, or in an Azure VM. -> --- [Create a data controller in direct connectivity mode with the Azure portal](create-data-controller-direct-prerequisites.md)-- [Create a data controller in indirect connectivity mode with CLI](create-data-controller-indirect-cli.md)-- [Create a data controller in indirect connectivity mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)-- [Create a data controller in indirect connectivity mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)-- [Create a data controller in indirect connectivity mode with Kubernetes tools such as `kubectl` or `oc`](create-data-controller-using-kubernetes-native-tools.md) |
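Once the pods are running, you can also confirm that the data controller itself has finished deploying by reading its state field. A sketch, assuming the state is surfaced under `status.state` (the value behind the STATE column of `kubectl get datacontroller`):

```console
kubectl get datacontroller --namespace arc -o jsonpath='{.items[0].status.state}'
```

The controller is fully deployed when this returns `Ready`.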
azure-arc | Limitations Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md | - Title: Limitations of SQL Managed Instance enabled by Azure Arc -description: Limitations of SQL Managed Instance enabled by Azure Arc ------ Previously updated : 09/07/2021----# Limitations of SQL Managed Instance enabled by Azure Arc --This article describes limitations of SQL Managed Instance enabled by Azure Arc. --## Back up and restore --### Automated backups --- User databases with SIMPLE recovery model are not backed up.-- System database `model` is not backed up in order to prevent interference with creation/deletion of database. The database gets locked when admin operations are performed.--### Point-in-time restore (PITR) --- Doesn't support restore from one SQL Managed Instance enabled by Azure Arc to another SQL Managed Instance enabled by Azure Arc. The database can only be restored to the same Arc-enabled SQL Managed Instance where the backups were created.-- Renaming databases is currently not supported, during point in time restore.-- No support for restoring a TDE enabled database currently.-- A deleted database cannot be restored currently.--## Other limitations --- Transactional replication is currently not supported.-- Log shipping is currently blocked.-- All user databases need to be in a full recovery model because they participate in an always-on-availability group--## Roles and responsibilities --The roles and responsibilities between Microsoft and its customers differ between Azure PaaS services (Platform As A Service) and Azure hybrid (like SQL Managed Instance enabled by Azure Arc). --### Frequently asked questions --This table summarizes answers to frequently asked questions regarding support roles and responsibilities. --| Question | Azure Platform As A Service (PaaS) | Azure Arc hybrid services | -|:-|::|::| -| Who provides the infrastructure? | Microsoft | Customer | -| Who provides the software?* | Microsoft | Microsoft | -| Who does the operations? | Microsoft | Customer | -| Does Microsoft provide SLAs? | Yes | No | -| WhoΓÇÖs in charge of SLAs? | Microsoft | Customer | --\* Azure services --__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Customers and their partners own and operate the infrastructure that Azure Arc hybrid services run on so Microsoft can't provide the SLA. --## Related content --- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. --- **Create your own.** Follow these steps to create on your own Kubernetes cluster: - 1. [Install the client tools](install-client-tools.md) - 2. [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) - 3. [Deploy SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) --- **Learn**- - [Read more about Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services) - - [Read about Azure Arc](https://aka.ms/azurearc) |
azure-arc | Limitations Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-postgresql.md | - Title: Limitations of Azure Arc-enabled PostgreSQL -description: Limitations of Azure Arc-enabled PostgreSQL ------ Previously updated : 11/03/2021----# Limitations of Azure Arc-enabled PostgreSQL --This article describes limitations of Azure Arc-enabled PostgreSQL. ---## High availability --Configuring high availability to recover from infrastructure failures isn't yet available. --## Monitoring --Currently, local monitoring with Grafana is only available for the default `postgres` database. Metrics dashboards for user created databases will be empty. --## Configuration --System configurations that are stored in `postgresql.auto.conf` are backed up when a base backup is created. This means that changes made after the last base backup, will not be present in a restored server until a new base backup is taken to capture those changes. --## Roles and responsibilities --The roles and responsibilities between Microsoft and its customers differ between Azure managed services (Platform As A Service or PaaS) and Azure hybrid (like Azure Arc-enabled PostgreSQL). --### Frequently asked questions -The table below summarizes answers to frequently asked questions regarding support roles and responsibilities. --| Question | Azure Platform As A Service (PaaS) | Azure Arc hybrid services | -|:-|::|::| -| Who provides the infrastructure? | Microsoft | Customer | -| Who provides the software?* | Microsoft | Microsoft | -| Who does the operations? | Microsoft | Customer | -| Does Microsoft provide SLAs? | Yes | No | -| WhoΓÇÖs in charge of SLAs? | Microsoft | Customer | --\* Azure services --__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Because with a hybrid service, you or your provider owns the infrastructure. --## Related content --- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. --- **Create your own.** Follow these steps to create on your own Kubernetes cluster: - 1. [Install the client tools](install-client-tools.md) - 2. [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) - 3. [Create an Azure Database for PostgreSQL server on Azure Arc](create-postgresql-server.md) --- **Learn**- - [Read more about Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services) - - [Read about Azure Arc](https://aka.ms/azurearc) |
azure-arc | List Servers Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/list-servers-postgresql.md | - Title: List the Azure Arc-enabled PostgreSQL servers created in an Azure Arc Data Controller -description: List the Azure Arc-enabled PostgreSQL servers created in an Azure Arc Data Controller ------- Previously updated : 11/03/2021----# List the Azure Arc-enabled PostgreSQL servers created in an Azure Arc Data Controller --This article explains how you can retrieve the list of servers created in your Arc Data Controller. --To retrieve this list, use either of the following methods once you are connected to the Arc Data Controller: ---## From CLI with Azure CLI extension (az) --The general format of the command is: -```azurecli -az postgres server-arc list --k8s-namespace <namespace> --use-k8s -``` --It will return an output like: -```console -[ - { - "name": "postgres01", - "state": "Ready" - } -] -``` -For more details about the parameters available for this command, run: -```azurecli -az postgres server-arc list --help -``` --## From CLI with kubectl -Run either of the following commands. --**To list the server groups irrespective of the version of Postgres, run:** -```console -kubectl get postgresqls -n <namespace> -``` -It will return an output like: -```console -NAME STATE READY-PODS PRIMARY-ENDPOINT AGE -postgres01 Ready 5/5 12.345.67.890:5432 12d -``` --## Related content: --* [Read the article about how to get the connection end points and form the connection strings to connect to your server group](get-connection-endpoints-and-connection-strings-postgresql-server.md) -* [Read the article about showing the configuration of an Azure Arc-enabled PostgreSQL server](show-configuration-postgresql-server.md) |
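For more detail on any one server in the list, such as its status conditions and recent events, you can describe the same custom resource, for example:

```console
kubectl describe postgresqls/postgres01 -n arc
```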
azure-arc | Maintenance Window | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/maintenance-window.md | - Title: Maintenance window - Azure Arc-enabled data services -description: Article describes how to set a maintenance window ------- Previously updated : 03/31/2022----# Maintenance window - Azure Arc-enabled data services --Configure a maintenance window on a data controller to define a time period for upgrades. In this time period, the Arc-enabled SQL Managed Instances on that data controller which have the `desiredVersion` property set to `auto` will be upgraded. --During setup, specify a duration, recurrence, and start date and time. After the maintenance window starts, it will run for the period of time set in the duration. The instances attached to the data controller will begin upgrades (in parallel). At the end of the set duration, any upgrades that are in progress will continue to completion. Any instances that did not begin upgrading in the window will begin upgrading in the following recurrence. --## Prerequisites --a SQL Managed Instance enabled by Azure Arc with the [`desiredVersion` property set to `auto`](upgrade-sql-managed-instance-auto.md). --## Limitations --The maintenance window duration can be from 2 hours to 8 hours. --Only one maintenance window can be set per data controller. --## Configure a maintenance window --The maintenance window has these settings: --- Duration - The length of time the window will run, expressed in hours and minutes (HH:mm).-- Recurrence - how often the window will occur. All words are case sensitive and must be capitalized. You can set weekly or monthly windows.- - Weekly - - [Week | Weekly][day of week] - - Examples: - - `--recurrence "Week Thursday"` - - `--recurrence "Weekly Saturday"` - - Monthly - - [Month | Monthly] [First | Second | Third | Fourth | Last] [day of week] - - Examples: - - `--recurrence "Month Fourth Saturday"` - - `--recurrence "Monthly Last Monday"` - - If recurrence isn't specified, it will be a one-time maintenance window. -- Start - the date and time the first window will occur, in the format `YYYY-MM-DDThh:mm` (24-hour format).- - Example: - - `--start "2022-02-01T23:00"` -- Time Zone - the [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) associated with the maintenance window.--#### CLI --To create a maintenance window, use the following command: --```cli -az arcdata dc update --maintenance-start <date and time> --maintenance-duration <time> --maintenance-recurrence <interval> --maintenance-time-zone <time zone> --k8s-namespace <namespace> --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-start "2022-01-01T23:00" --maintenance-duration 3:00 --maintenance-recurrence "Monthly First Saturday" --maintenance-time-zone US/Pacific --k8s-namespace arc --use-k8s -``` --## Monitor the upgrades --During the maintenance window, you can view the status of upgrades. --```kubectl -kubectl -n <namespace> get sqlmi -o yaml -``` --The `status.runningVersion` and `status.lastUpdateTime` fields will show the latest version and when the status changed. --## View existing maintenance window --You can view the maintenance window in the `datacontroller` spec. 
--```kubectl -kubectl describe datacontroller -n <namespace> -``` --Output: --```text -Spec: - Settings: - Maintenance: - Duration: 3:00 - Recurrence: Monthly First Saturday - Start: 2022-01-01T23:00 - Time Zone: US/Pacific -``` --## Failed upgrades --There is no automatic rollback for failed upgrades. If an instance failed to upgrade automatically, manual intervention will be needed to pin the instance to its current running version, using `az sql mi-arc update`. After the issue is resolved, the version can be set back to "auto". --```cli -az sql mi-arc upgrade --name <instance name> --desired-version <version> -``` --Example: -```cli -az sql mi-arc upgrade --name sql01 --desired-version v1.2.0_2021-12-15 -``` --## Disable maintenance window --When the maintenance window is disabled, automatic upgrades will not run. --```cli -az arcdata dc update --maintenance-enabled false --k8s-namespace <namespace> --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-enabled false --k8s-namespace arc --use-k8s -``` --## Enable maintenance window --When the maintenance window is enabled, automatic upgrades will resume. --```cli -az arcdata dc update --maintenance-enabled true --k8s-namespace <namespace> --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-enabled true --k8s-namespace arc --use-k8s -``` --## Change maintenance window options --The update command can be used to change any of the options. In this example, I will update the start time. --```cli -az arcdata dc update --maintenance-start <date and time> --k8s-namespace arc --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-start "2022-04-15T23:00" --k8s-namespace arc --use-k8s -``` --## Related content --[Enable automatic upgrades of a SQL Managed Instance](upgrade-sql-managed-instance-auto.md) |
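If you only want the maintenance settings rather than the full `describe` output, a `jsonpath` query against the data controller works as well. A sketch, assuming the settings are stored under `spec.settings.maintenance`, as the `Spec: Settings: Maintenance:` output above suggests:

```console
kubectl get datacontroller -n <namespace> -o jsonpath='{.items[0].spec.settings.maintenance}'
```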
azure-arc | Manage Postgresql Server With Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/manage-postgresql-server-with-azure-data-studio.md | - Title: Use Azure Data Studio to manage your PostgreSQL instance -description: Use Azure Data Studio to manage your PostgreSQL instance ------ Previously updated : 07/30/2021----# Use Azure Data Studio to manage your Azure Arc-enabled PostgreSQL server ---This article describes how to: -- manage your PostgreSQL instances with dashboard views like Overview, Connection Strings, Properties, Resource Health...-- work with your data and schema---## Prerequisites --- [Install azdata, Azure Data Studio, and Azure CLI](install-client-tools.md)-- Install in Azure Data Studio the **[!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]** and **Azure Arc** and **PostgreSQL** extensions-- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --- Create the [Azure Arc Data Controller](./create-data-controller-indirect-cli.md)-- Launch Azure Data Studio--## Connect to the Azure Arc Data Controller --In Azure Data Studio, expand the node **Azure Arc Controllers** and select the **Connect Controller** button: --Enter the connection information to your Azure Data Controller: --- **Controller URL:**-- The URL to connect to your controller in Kubernetes. Entered in the form of `https://<IP_address_of_the_controller>:<Kubernetes_port.` - For example: -- ```console - https://12.345.67.890:30080 - ``` -- **Username:**-- Name of the user account you use to connect to the Controller. Use the name you typically use when you run `az login`. It is not the name of the PostgreSQL user you use to connect to the PostgreSQL database engine typically from psql. -- **Password:**- The password of the user account you use to connect to the Controller ---Azure data studio shows your Arc Data Controller. Expand it and it shows the list of PostgreSQL instances that it manages. --## Manage your Azure Arc-enabled PostgreSQL servers --Right-click on the PostgreSQL instance you want to manage and select [Manage] --The PostgreSQL Dashboard view: --That features several dashboards listed on the left side of that pane: --- **Overview:** - Displays summary information about your instance like name, PostgreSQL admin user name, Azure subscription ID, configuration, version of the database engine, endpoints for Grafana and Kibana... -- **Connection Strings:** - Displays various connection strings you may need to connect to your PostgreSQL instance like psql, Node.js, PHP, Ruby... -- **Diagnose and solve problems:** - Displays various resources that will help you troubleshoot your instance as we expand the troubleshooting notebooks -- **New support request:** - Request assistance from our support services starting preview announcement. --## Work with your data and schema --On the left side of the Azure Data Studio window, expand the node **Servers**: --And select [Add Connection] and fill in the connection details to your PostgreSQL instance: -- **Connection Type:** PostgreSQL-- **Server name:** enter the name of your PostgreSQL instance. For example: postgres01-- **Authentication type:** Password-- **User name:** for example, you can use the standard/default PostgreSQL admin user name. 
Note that this field is case-sensitive.-- **Password:** you'll find the password of the PostgreSQL username in the psql connection string in the output of the `az postgres server-arc endpoint list -n postgres01` command-- **Database name:** set the name of the database you want to connect to. You can leave it set to __Default__-- **Server group:** you can leave it set to __Default__-- **Name (optional):** you can leave this blank-- **Advanced:**- - **Host IP Address:** the public IP address of the Kubernetes cluster - - **Port:** the port on which your PostgreSQL instance is listening. You can find this port at the end of the psql connection string in the output of the `az postgres server-arc endpoint list -n postgres01` command. It is not port 30080, on which Kubernetes is listening and which you entered when connecting to the Azure Data Controller in Azure Data Studio. - - **Other parameters:** These should be self-explanatory; you can keep the default or blank values they appear with. --Select **[OK] and [Connect]** to connect to your server. --Once connected, several experiences are available: -- **New query**-- **New Notebook**-- **Expand the display of your server and browse/work on the objects inside your database**-- **...**--## Next step -[Monitor your server group](monitor-grafana-kibana.md) |
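The endpoint command referenced in the connection steps needs the Kubernetes namespace arguments when you run it yourself. A sketch, assuming the server name `postgres01` and the namespace `arc` used elsewhere in these docs; the psql connection string in its output contains both the password source and the port needed in the dialog above:

```azurecli
az postgres server-arc endpoint list -n postgres01 --k8s-namespace arc --use-k8s
```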
azure-arc | Managed Instance Business Continuity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-business-continuity-overview.md | - Title: Business continuity overview - SQL Managed Instance enabled by Azure Arc -description: Overview business continuity for SQL Managed Instance enabled by Azure Arc ------ Previously updated : 01/27/2022----# Overview: SQL Managed Instance enabled by Azure Arc business continuity --Business continuity is a combination of people, processes, and technology that enables businesses to recover and continue operating in the event of disruptions. In hybrid scenarios there is a joint responsibility between Microsoft and customer, such that customer owns and manages the on-premises infrastructure while the software is provided by Microsoft. --## Features --This overview describes the set of capabilities that come built-in with SQL Managed Instance enabled by Azure Arc and how you can leverage them to recover from disruptions. --| Feature | Use case | Service Tier | -|--|--|| -| Point in time restore | Use the built-in point in time restore (PITR) feature to recover from situations such as data corruptions caused by human errors. Learn more about [Point in time restore](.\point-in-time-restore.md) | Available in both General Purpose and Business Critical service tiers| -| High availability | Deploy the Azure Arc enabled SQL Managed Instance in high availability mode to achieve local high availability. This mode automatically recovers from scenarios such as hardware failures, pod/node failures, and etc. The built-in listener service automatically redirects new connections to another replica while Kubernetes attempts to rebuild the failed replica. Learn more about [high-availability in SQL Managed Instance enabled by Azure Arc](.\managed-instance-high-availability.md) |This feature is only available in the Business Critical service tier. <br> For General Purpose service tier, Kubernetes provides basic recoverability from scenarios such as node/pod crashes. | -|Disaster recovery| Configure disaster recovery by setting up another SQL Managed Instance enabled by Azure Arc in a geographically separate data center to synchronize data from the primary data center. This scenario is useful for recovering from events when an entire data center is down due to disruptions such as power outages or other events. | Available in both General Purpose and Business Critical service tiers| -| --## Related content --[Learn more about configuring point in time restore](.\point-in-time-restore.md) --[Learn more about configuring high availability in SQL Managed Instance enabled by Azure Arc](.\managed-instance-high-availability.md) --[Learn more about setting up and configuring disaster recovery in SQL Managed Instance enabled by Azure Arc](.\managed-instance-disaster-recovery.md) |
azure-arc | Managed Instance Disaster Recovery Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-cli.md | - Title: Configure failover group - CLI -description: Describes how to configure disaster recovery with a failover group for SQL Managed Instance enabled by Azure Arc with the CLI ------- Previously updated : 08/02/2023----# Configure failover group - CLI --This article explains how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc with the CLI. Before you proceed, review the information and prerequisites in [SQL Managed Instance enabled by Azure Arc - disaster recovery](managed-instance-disaster-recovery.md). ---## Configure Azure failover group - direct mode --Follow the steps below if Azure Arc data services are deployed in `directly` connected mode. --Once the prerequisites are met, run the following command to set up an Azure failover group between the two instances: --```azurecli -az sql instance-failover-group-arc create --name <name of failover group> --mi <primary SQL MI> --partner-mi <Partner MI> --resource-group <name of RG> --partner-resource-group <name of partner MI RG> -``` --Example: --```azurecli -az sql instance-failover-group-arc create --name sql-fog --mi sql1 --partner-mi sql2 --resource-group rg-name --partner-resource-group rg-name -``` --The above command: --- Creates the required custom resources on both primary and secondary sites-- Copies the mirroring certificates and configures the failover group between the instances --## Configure Azure failover group - indirect mode --Follow the steps below if Azure Arc data services are deployed in `indirectly` connected mode. --1. Provision the managed instance in the primary site. -- ```azurecli - az sql mi-arc create --name <primaryinstance> --tier bc --replicas 3 --k8s-namespace <namespace> --use-k8s - ``` --2. Switch context to the secondary cluster by running ```kubectl config use-context <secondarycluster>``` and provision the managed instance in the secondary site that will be the disaster recovery instance. At this point, the system databases are not part of the contained availability group. -- > [!NOTE] - > It is important to specify `--license-type DisasterRecovery` **during** managed instance creation. This allows the DR instance to be seeded from the primary instance in the primary data center. Updating this property after deployment will not have the same effect. -- ```azurecli - az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s - ``` --3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the managed instance is needed for the Instance Failover Group CR (Custom Resource) creation. -- This can be achieved in a few ways: -- (a) If using `az` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed after failover group creation. -- (b) If using `kubectl`, directly copy and paste the binary data from the managed instance CR into the yaml file that will be used to create the Instance Failover Group.
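For option (b), the mirroring certificate can be read directly from the SQL managed instance custom resource. The field path below is based on the `az sql mi-arc show` status output shown later in the high availability article (`status.highAvailability.mirroringCertificate`); treat it as a sketch, since the path may vary by release.

```console
# Hedged sketch for option (b): read the mirroring certificate from the sqlmi custom resource,
# then paste the value into the yaml used to create the Instance Failover Group.
# Assumes the certificate is exposed at status.highAvailability.mirroringCertificate.
kubectl get sqlmi <primaryinstance> -n <namespace> -o jsonpath='{.status.highAvailability.mirroringCertificate}'
```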
--- Using (a) above: -- Create the mirroring certificate file for primary instance: - ```azurecli - az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s - ``` -- Example: - ```azurecli - az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s - ``` -- Connect to the secondary cluster and create the mirroring certificate file for secondary instance: -- ```azurecli - az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s - ``` -- Example: -- ```azurecli - az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s - ``` -- Once the mirroring certificate files are created, copy the certificate from the secondary instance to a shared/local path on the primary instance cluster and vice-versa. --4. Create the failover group resource on both sites. --- > [!NOTE] - > Ensure the SQL instances have different names for both primary and secondary sites, and the `shared-name` value should be identical on both sites. - - ```azurecli - az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary failover group resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s - ``` -- Example: - ```azurecli - az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s - ``` -- On the secondary instance, run the following command to set up the failover group custom resource. The `--partner-mirroring-cert-file` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above. -- ```azurecli - az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary failover group resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s - ``` -- Example: - ```azurecli - az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s - ``` --## Retrieve Azure failover group health state --Information about the failover group such as primary role, secondary role, and the current health status can be viewed on the custom resource on either primary or secondary site.
--Run the following command on the primary and/or the secondary site to list the failover group custom resources: --```console -kubectl get fog -n <namespace> -``` --Describe the custom resource to retrieve the failover group status, as follows: --```console -kubectl describe fog <failover group cr name> -n <namespace> -``` --## Failover group operations --Once the failover group is set up between the managed instances, different failover operations can be performed depending on the circumstances. --Possible failover scenarios are: --- The instances at both sites are in a healthy state and a failover needs to be performed: - + perform a manual failover from primary to secondary without data loss by setting `role=secondary` on the primary SQL MI. - -- Primary site is unhealthy/unreachable and a failover needs to be performed:- - + the primary SQL Managed Instance enabled by Azure Arc is down/unhealthy/unreachable - + the secondary SQL Managed Instance enabled by Azure Arc needs to be force-promoted to primary with potential data loss - + when the original primary SQL Managed Instance enabled by Azure Arc comes back online, it will report as `Primary` role and unhealthy state and needs to be forced into a `secondary` role so it can join the failover group and data can be synchronized. - --## Manual failover (without data loss) --Use the `az sql instance-failover-group-arc update ...` command group to initiate a failover from primary to secondary. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover. --### Directly connected mode -Run the following command to initiate a manual failover, in `direct` connected mode using ARM APIs: --```azurecli -az sql instance-failover-group-arc update --name <shared name of failover group> --mi <primary instance> --role secondary --resource-group <resource group> -``` -Example: --```azurecli -az sql instance-failover-group-arc update --name myfog --mi sqlmi1 --role secondary --resource-group myresourcegroup -``` -### Indirectly connected mode -Run the following command to initiate a manual failover, in `indirect` connected mode using Kubernetes APIs: --```azurecli -az sql instance-failover-group-arc update --name <name of failover group resource> --role secondary --k8s-namespace <namespace> --use-k8s -``` --Example: --```azurecli -az sql instance-failover-group-arc update --name myfog --role secondary --k8s-namespace my-namespace --use-k8s -``` --## Forced failover with data loss --If the geo-primary instance becomes unavailable, the following commands can be run on the geo-secondary DR instance to promote it to primary with a forced failover, incurring potential data loss. --On the geo-secondary DR instance, run the following command to promote it to primary role, with data loss. --> [!NOTE] -> If the `--partner-sync-mode` was configured as `sync`, it needs to be reset to `async` when the secondary is promoted to primary.
--### Directly connected mode -```azurecli -az sql instance-failover-group-arc update --name <shared name of failover group> --mi <instance> --role force-primary-allow-data-loss --resource-group <resource group> --partner-sync-mode async -``` -Example: --```azurecli -az sql instance-failover-group-arc update --name myfog --mi sqlmi2 --role force-primary-allow-data-loss --resource-group myresourcegroup --partner-sync-mode async -``` --### Indirectly connected mode -```azurecli -az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-primary-allow-data-loss --partner-sync-mode async -``` --When the geo-primary instance becomes available, run the below command to bring it into the failover group and synchronize the data: --### Directly connected mode -```azurecli -az sql instance-failover-group-arc update --name <shared name of failover group> --mi <old primary instance> --role force-secondary --resource-group <resource group> -``` --### Indirectly connected mode -```azurecli -az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-secondary -``` -Optionally, the `--partner-sync-mode` can be configured back to `sync` mode if desired. --## Post failover operations -Once you perform a failover from primary site to secondary site, either with or without data loss, you may need to do the following: -- Update the connection string for your applications to connect to the newly promoted primary Arc SQL managed instance-- If you plan to continue running the production workload off of the secondary site, update the `--license-type` to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed.--## Related content --- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md)-- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md) |
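Putting the *Forced failover with data loss* steps above together, a rough end-to-end sketch for directly connected mode might look like the following. The instance, failover group, and resource group names reuse the examples from the article; verify the exact flag combinations against your CLI version.

```azurecli
# Hedged sketch: forced failover to the DR site and later failback, directly connected mode.

# 1. Geo-primary is unavailable: force-promote the DR instance (potential data loss).
az sql instance-failover-group-arc update --name myfog --mi sqlmi2 --role force-primary-allow-data-loss --resource-group myresourcegroup --partner-sync-mode async

# 2. When the old geo-primary is reachable again, force it into the secondary role so it rejoins the group and resynchronizes.
az sql instance-failover-group-arc update --name myfog --mi sqlmi1 --role force-secondary --resource-group myresourcegroup

# 3. Optionally, set --partner-sync-mode back to sync once both replicas are healthy (see the note above).
```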
azure-arc | Managed Instance Disaster Recovery Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-portal.md | - Title: Disaster recovery - SQL Managed Instance enabled by Azure Arc - portal -description: Describes how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc in the portal ------ Previously updated : 08/02/2023----# Configure failover group - portal --This article explains how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc with the Azure portal. Before you proceed, review the information and prerequisites in [SQL Managed Instance enabled by Azure Arc - disaster recovery](managed-instance-disaster-recovery.md). ---To configure disaster recovery through the Azure portal, the Azure Arc-enabled data service requires direct connectivity to Azure. --## Configure Azure failover group --1. In the portal, go to your primary availability group. -1. Under **Data Management**, select **Failover Groups**. -- Azure portal presents **Create instance failover group**. -- :::image type="content" source="media/managed-instance-disaster-recovery-portal/create-failover-group.png" alt-text="Screenshot of the Azure portal create instance failover group control."::: --1. Provide the information to define the failover group. -- * **Primary mirroring URL**: The mirroring endpoint for the failover group instance. - * **Resource group**: The resource group for the failover group instance. - * **Secondary managed instance**: The Azure SQL Managed Instance at the DR location. - * **Synchronization mode**: Select either *Sync* for synchronous mode, or *Async* for asynchronous mode. - * **Instance failover group name**: The name of the failover group. - -1. Select **Create**. --Azure portal begins to provision the instance failover group. --## View failover group --After the failover group is provisioned, you can view it in Azure portal. ---## Failover --In the disaster recovery configuration, only one of the instances in the failover group is primary. You can fail over from the portal to migrate the primary role to the other instance in your failover group. To fail over: --1. In the portal, locate your managed instance. -1. Under **Data Management**, select **Failover Groups**. -1. Select **Failover**. --Monitor failover progress in Azure portal. --## Set synchronization mode --To set the synchronization mode: --1. From **Failover Groups**, select **Edit configuration**. -- Azure portal shows an **Edit Configuration** control. -- :::image type="content" source="media/managed-instance-disaster-recovery-portal/edit-synchronization.png" alt-text="Screenshot of the Edit Configuration control."::: --1. Under **Edit configuration**, select your desired mode, and select **Apply**. --## Monitor failover group status in the portal --After you use the portal to change a failover group, the portal automatically reports the status as the change is applied. Changes that the portal reports include: --- Add failover group-- Edit failover group configuration-- Start failover-- Delete failover group--After you initiate the change, the portal automatically refreshes the status every two minutes. --## Delete failover group --1. From **Failover Groups**, select **Delete Failover Group**. -- Azure portal asks you to confirm your choice to delete the failover group. --1. Select **Delete failover group** to proceed. Otherwise, select **Cancel** to keep the group.
---## Related content --- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md)-- [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md) |
azure-arc | Managed Instance Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md | - Title: Disaster recovery - SQL Managed Instance enabled by Azure Arc -description: Describes disaster recovery for SQL Managed Instance enabled by Azure Arc ------ Previously updated : 08/02/2023----# SQL Managed Instance enabled by Azure Arc - disaster recovery --To configure disaster recovery in SQL Managed Instance enabled by Azure Arc, set up Azure failover groups. This article explains failover groups. --## Background --Azure failover groups use the same distributed availability groups technology that is in SQL Server. Because SQL Managed Instance enabled by Azure Arc runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups). --> [!NOTE] -> - The instances in both geo-primary and geo-secondary sites need to be identical in terms of their compute & capacity, as well as the service tiers they are deployed in. -> - Distributed availability groups can be set up for either General Purpose or Business Critical service tiers. --You can configure failover groups with the CLI or in the portal. For prerequisites and instructions, see the respective content below: --- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md)-- [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md)--## Related content --- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md) |
azure-arc | Managed Instance Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md | - Title: Features and Capabilities of SQL Managed Instance enabled by Azure Arc -description: Features and Capabilities of SQL Managed Instance enabled by Azure Arc ------ Previously updated : 07/30/2021----# Features and Capabilities of SQL Managed Instance enabled by Azure Arc --SQL Managed Instance enabled by Azure Arc share a common code base with the latest stable version of SQL Server. Most of the standard SQL language, query processing, and database management features are identical. The features that are common between SQL Server and SQL Database or SQL Managed Instance are: --- Language features - [Control of flow language keywords](/sql/t-sql/language-elements/control-of-flow), [Cursors](/sql/t-sql/language-elements/cursors-transact-sql), [Data types](/sql/t-sql/data-types/data-types-transact-sql), [DML statements](/sql/t-sql/queries/queries), [Predicates](/sql/t-sql/queries/predicates), [Sequence numbers](/sql/relational-databases/sequence-numbers/sequence-numbers), [Stored procedures](/sql/relational-databases/stored-procedures/stored-procedures-database-engine), and [Variables](/sql/t-sql/language-elements/variables-transact-sql).-- Database features - [Automatic tuning (plan forcing)](/sql/relational-databases/automatic-tuning/automatic-tuning), [Change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server), [Database collation](/sql/relational-databases/collations/set-or-change-the-database-collation), [Contained databases](/sql/relational-databases/databases/contained-databases), [Contained users](/sql/relational-databases/security/contained-database-users-making-your-database-portable), [Data compression](/sql/relational-databases/data-compression/data-compression), [Database configuration settings](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql), [Online index operations](/sql/relational-databases/indexes/perform-index-operations-online), [Partitioning](/sql/relational-databases/partitions/partitioned-tables-and-indexes), and [Temporal tables](/sql/relational-databases/tables/temporal-tables) ([see getting started guide](/sql/relational-databases/tables/getting-started-with-system-versioned-temporal-tables)).-- Security features - [Application roles](/sql/relational-databases/security/authentication-access/application-roles), [Dynamic data masking](/sql/relational-databases/security/dynamic-data-masking) ([Get started with SQL Database dynamic data masking with the Azure portal](/azure/azure-sql/database/dynamic-data-masking-configure-portal)), [Row Level Security](/sql/relational-databases/security/row-level-security)-- Multi-model capabilities - [Graph processing](/sql/relational-databases/graphs/sql-graph-overview), [JSON data](/sql/relational-databases/json/json-data-sql-server), [OPENXML](/sql/t-sql/functions/openxml-transact-sql), [Spatial](/sql/relational-databases/spatial/spatial-data-sql-server), [OPENJSON](/sql/t-sql/functions/openjson-transact-sql), and [XML indexes](/sql/t-sql/statements/create-xml-index-transact-sql).---## <a name="RDBMSHA"></a> RDBMS High Availability - -|Feature|SQL Managed Instance enabled by Azure Arc| -|-|-| -|Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available.| -|Always On availability groups |Business Critical service tier.| -|Basic availability groups |Not Applicable. 
Similar capabilities available.| -|Minimum replica commit availability group |Business Critical service tier.| -|Clusterless availability group|Yes| -|Backup database | Yes - `COPY_ONLY` See [BACKUP - (Transact-SQL)](/sql/t-sql/statements/backup-transact-sql?view=azuresqldb-mi-current&preserve-view=true)| -|Backup compression|Yes| -|Backup mirror |Yes| -|Backup encryption|Yes| -|Back up to Azure to (back up to URL)|Yes| -|Database snapshot|Yes| -|Fast recovery|Yes| -|Hot add memory and CPU|Yes| -|Log shipping|Not currently available.| -|Online page and file restore|Yes| -|Online indexing|Yes| -|Online schema change|Yes| -|Resumable online index rebuilds|Yes| --<sup>1</sup> In the scenario where there is a pod failure, a new SQL Managed Instance will start up and re-attach to the persistent volume containing your data. [Learn more about Kubernetes persistent volumes here](https://kubernetes.io/docs/concepts/storage/persistent-volumes). --## <a name="RDBMSSP"></a> RDBMS Scalability and Performance --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| Columnstore | Yes | -| Large object binaries in clustered columnstore indexes | Yes | -| Online nonclustered columnstore index rebuild | Yes | -| In-Memory OLTP | Yes | -| Persistent Main Memory | Yes | -| Table and index partitioning | Yes | -| Data compression | Yes | -| Resource Governor | Yes | -| Partitioned Table Parallelism | Yes | -| NUMA Aware and Large Page Memory and Buffer Array Allocation | Yes | -| IO Resource Governance | Yes | -| Delayed Durability | Yes | -| Automatic Tuning | Yes | -| Batch Mode Adaptive Joins | Yes | -| Batch Mode Memory Grant Feedback | Yes | -| Interleaved Execution for Multi-Statement Table Valued Functions | Yes | -| Bulk insert improvements | Yes | --## <a name="RDBMSS"></a> RDBMS Security --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| Row-level security | Yes | -| Always Encrypted | Yes | -| Always Encrypted with Secure Enclaves | No | -| Dynamic data masking | Yes | -| Basic auditing | Yes | -| Fine grained auditing | Yes | -| Transparent database encryption | Yes | -| User-defined roles | Yes | -| Contained databases | Yes | -| Encryption for backups | Yes | -| SQL Server Authentication | Yes | -| Microsoft Entra authentication | No | -| Windows Authentication | Yes | --## <a name="RDBMSM"></a> RDBMS Manageability --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| Dedicated administrator connection | Yes | -| PowerShell scripting support | Yes | -| Support for data-tier application component operations - extract, deploy, upgrade, delete | Yes | -| Policy automation (check on schedule and change) | Yes | -| Performance data collector | Yes | -| Standard performance reports | Yes | -| Plan guides and plan freezing for plan guides | Yes | -| Direct query of indexed views (using NOEXPAND hint) | Yes | -| Automatically maintain indexed views | Yes | -| Distributed partitioned views | Yes | -| Parallel indexed operations | Yes | -| Automatic use of indexed view by query optimizer | Yes | -| Parallel consistency check | Yes | --### <a name="Programmability"></a> Programmability --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| JSON | Yes | -| Query Store | Yes | -| Temporal | Yes | -| Native XML support | Yes | -| XML indexing | Yes | -| MERGE & UPSERT capabilities | Yes | -| Date and Time datatypes | Yes | -| Internationalization support | Yes | -| Full-text and semantic search | No | -| Specification of language in query | Yes | 
-| Service Broker (messaging) | Yes | -| Transact-SQL endpoints | Yes | -| Graph | Yes | -| Machine Learning Services | No | -| PolyBase | No | ---### Tools --SQL Managed Instance enabled by Azure Arc supports various data tools that can help you manage your data. --| **Tool** | SQL Managed Instance enabled by Azure Arc| -| | | | -| Azure portal | Yes | -| Azure CLI | Yes | -| [Azure Data Studio](/azure-data-studio/what-is-azure-data-studio) | Yes | -| Azure PowerShell | No | -| [BACPAC file (export)](/sql/relational-databases/data-tier-applications/export-a-data-tier-application) | Yes | -| [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database) | Yes | -| [SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt) | Yes | -| [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) | Yes | -| [SQL Server PowerShell](/sql/relational-databases/scripting/sql-server-powershell) | Yes | -| [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) | Yes | -- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --### <a name="Unsupported"></a> Unsupported Features & Services --The following features and services are not available for SQL Managed Instance enabled by Azure Arc. --| Area | Unsupported feature or service | -|--|--| -| **Database engine** | Merge replication | -| | Stretch DB | -| | Distributed query with 3rd-party connections | -| | Linked Servers to data sources other than SQL Server and Azure SQL products | -| | System extended stored procedures (XP_CMDSHELL, etc.) | -| | FileTable, FILESTREAM | -| | CLR assemblies with the EXTERNAL_ACCESS or UNSAFE permission set | -| | Buffer Pool Extension | -| **SQL Server Agent** | SQL Server agent is supported but the following specific capabilities are not supported: Subsystems (CmdExec, PowerShell, Queue Reader, SSIS, SSAS, SSRS), Alerts, Managed Backup -| **High Availability** | Database mirroring | -| **Security** | Extensible Key Management | -| | AD Authentication for Linked Servers | -| | AD Authentication for Availability Groups (AGs) | |
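The *RDBMS High Availability* table above notes that `BACKUP DATABASE` is supported with `COPY_ONLY`, and that backup to URL is available. A minimal T-SQL sketch combining the two follows; the storage account, container, and database name are placeholders, and the backup credential is assumed to already exist.

```sql
-- Hedged sketch: COPY_ONLY backup to URL, per the feature table above.
-- Assumes a credential for the container URL has already been created (see BACKUP - Transact-SQL).
BACKUP DATABASE [WideWorldImporters]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/WideWorldImporters.bak'
WITH COPY_ONLY, COMPRESSION, STATS = 5;
```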
azure-arc | Managed Instance High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md | - Title: SQL Managed Instance enabled by Azure Arc high availability- -description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with high availability. --- Previously updated : 07/30/2021--------# High Availability with SQL Managed Instance enabled by Azure Arc --SQL Managed Instance enabled by Azure Arc is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in: --- Health monitoring-- Failure detection-- Automatic failover to maintain service health. --For increased reliability, you can also configure SQL Managed Instance enabled by Azure Arc to deploy with extra replicas in a high availability configuration. The Arc data services data controller manages: --- Monitoring-- Failure detection-- Automatic failover--Arc-enabled data service provides this service without user intervention. The service: --- Sets up the availability group-- Configures database mirroring endpoints-- Adds databases to the availability group-- Coordinates failover and upgrade. --This document explores both types of high availability. --SQL Managed Instance enabled by Azure Arc provides different levels of high availability depending on whether the SQL managed instance was deployed in the *General Purpose* service tier or the *Business Critical* service tier. --## High availability in General Purpose service tier --In the General Purpose service tier, there is only one replica available, and high availability is achieved via Kubernetes orchestration. For instance, if a pod or node containing the managed instance container image crashes, Kubernetes attempts to stand up another pod or node, and attach to the same persistent storage. During this time, the SQL managed instance is unavailable to the applications. Applications need to reconnect and retry the transaction when the new pod is up. If `load balancer` is the service type used, then applications can reconnect to the same primary endpoint and Kubernetes will redirect the connection to the new primary. If the service type is `nodeport`, then the applications will need to reconnect to the new IP address. --### Verify built-in high availability --To verify the built-in high availability provided by Kubernetes, you can: --1. Delete the pod of an existing managed instance -1. Verify that Kubernetes recovers from this action --During recovery, Kubernetes bootstraps another pod and attaches the persistent storage. --### Prerequisites --- Kubernetes cluster requires [shared, remote storage](storage-configuration.md#factors-to-consider-when-choosing-your-storage-configuration) -- A SQL Managed Instance enabled by Azure Arc deployed with one replica (default)---1. View the pods. -- ```console - kubectl get pods -n <namespace of data controller> - ``` --2. Delete the managed instance pod. -- ```console - kubectl delete pod <name of managed instance>-0 -n <namespace of data controller> - ``` -- For example: -- ```output - user@pc:/# kubectl delete pod sql1-0 -n arc - pod "sql1-0" deleted - ``` --3. View the pods to verify that the managed instance is recovering.
-- ```console - kubectl get pods -n <namespace of data controller> - ``` -- For example: -- ```output - user@pc:/# kubectl get pods -n arc - NAME READY STATUS RESTARTS AGE - sql1-0 2/3 Running 0 22s - ``` --After all containers within the pod recover, you can connect to the managed instance. ---## High availability in Business Critical service tier --In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, SQL Managed Instance for Azure Arc provides a contained availability group. The contained availability group is built on SQL Server Always On technology. It provides higher levels of availability. SQL Managed Instance enabled by Azure Arc deployed with *Business Critical* service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other. --With contained availability groups, any pod crashes or node failures are transparent to the application. The contained availability group provides at least one other pod that has all the data from the primary and is ready to take on connections. --## Contained availability groups --An availability group binds one or more user databases into a logical group so that when there is a failover, the entire group of databases fails over to the secondary replica as a single unit. An availability group only replicates data in the user databases but not the data in system databases such as logins, permissions, or agent jobs. A contained availability group includes metadata from system databases such as `msdb` and `master` databases. When logins are created or modified in the primary replica, they're automatically also created in the secondary replicas. Similarly, when an agent job is created or modified in the primary replica, the secondary replicas also receive those changes. --SQL Managed Instance enabled by Azure Arc takes this concept of contained availability group and adds Kubernetes operator so these can be deployed and managed at scale. --Capabilities that contained availability groups enable: --- When deployed with multiple replicas, a single availability group named with the same name as the Arc enabled SQL managed instance is created. By default, contained AG has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. You can't create more availability groups in an instance.--- All databases are automatically added to the availability group, including all user and system databases like `master` and `msdb`. This capability provides a single-system view across the availability group replicas. Notice both `containedag_master` and `containedag_msdb` databases if you connect directly to the instance. The `containedag_*` databases represent the `master` and `msdb` inside the availability group.--- An external endpoint is automatically provisioned for connecting to databases within the availability group. This endpoint `<managed_instance_name>-external-svc` plays the role of the availability group listener.--### Deploy SQL Managed Instance enabled by Azure Arc with multiple replicas using Azure portal --From Azure portal, on the create SQL Managed Instance enabled by Azure Arc page: -1. Select **Configure Compute + Storage** under Compute + Storage. The portal shows advanced settings. -2. Under Service tier, select **Business Critical**. -3. Check the "For development use only", if using for development purposes. -4. 
Under High availability, select either **2 replicas** or **3 replicas**. --![High availability settings](.\media\business-continuity\service-tier-replicas.png) ----### Deploy with multiple replicas using Azure CLI ---When a SQL Managed Instance enabled by Azure Arc is deployed in the Business Critical service tier, the deployment creates multiple replicas. The setup and configuration of contained availability groups among those instances is automatically done during provisioning. --For instance, the following command creates a managed instance with 3 replicas. --Indirectly connected mode: --```azurecli -az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s --tier <tier> --replicas <number of replicas> -``` -Example: --```azurecli -az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s --tier BusinessCritical --replicas 3 -``` --Directly connected mode: --```azurecli -az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location> --tier <tier> --replicas <number of replicas> -``` -Example: -```azurecli -az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier BusinessCritical --replicas 3 -``` --By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance are synchronously replicated to each of the secondary instances. --## View and monitor high availability status --Once the deployment is complete, connect to the primary endpoint from SQL Server Management Studio. --Verify and retrieve the endpoint of the primary replica, and connect to it from SQL Server Management Studio. -For instance, if the SQL instance was deployed using `service-type=loadbalancer`, run the following command to retrieve the endpoint to connect to: --```azurecli -az sql mi-arc list --k8s-namespace my-namespace --use-k8s -``` --or -```console -kubectl get sqlmi -A -``` --### Get the primary and secondary endpoints and AG status --Use the `kubectl describe sqlmi` or `az sql mi-arc show` commands to view the primary and secondary endpoints, and high availability status.
--Example: --```console -kubectl describe sqlmi sqldemo -n my-namespace -``` -or --```azurecli -az sql mi-arc show --name sqldemo --k8s-namespace my-namespace --use-k8s -``` --Example output: --```console - "status": { - "endpoints": { - "logSearchDashboard": "https://10.120.230.404:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqldemo'))", - "metricsDashboard": "https://10.120.230.46:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqldemo-0", - "mirroring": "10.15.100.150:5022", - "primary": "10.15.100.150,1433", - "secondary": "10.15.100.156,1433" - }, - "highAvailability": { - "healthState": "OK", - "mirroringCertificate": "--BEGIN CERTIFICATE--\n...\n--END CERTIFICATE--" - }, - "observedGeneration": 1, - "readyReplicas": "2/2", - "state": "Ready" - } -``` --You can connect to the primary endpoint with SQL Server Management Studio and verify DMVs as: --```sql -SELECT * FROM sys.dm_hadr_availability_replica_states -``` ----![Availability Group](.\media\business-continuity\availability-group.png) --And the Contained Availability Dashboard: --![Container Availability Group dashboard](.\media\business-continuity\ag-dashboard.png) ---## Failover scenarios --Unlike SQL Server Always On availability groups, the contained availability group is a managed high availability solution. Hence, the failover modes are limited compared to the typical modes available with SQL Server Always On availability groups. --Deploy Business Critical service tier SQL managed instances in either two-replica configuration or three replica configuration. The effects of failures and the subsequent recoverability are different with each configuration. A three replica instance provides a higher level of availability and recovery, than a two replica instance. --In a two replica configuration, when both the node states are `SYNCHRONIZED`, if the primary replica becomes unavailable, the secondary replica is automatically promoted to primary. When the failed replica becomes available, it is updated with all the pending changes. If there are connectivity issues between the replicas, then the primary replica may not commit any transactions as every transaction needs to be committed on both replicas before a success is returned back on the primary. --In a three replica configuration, a transaction needs to commit in at least 2 of the 3 replicas before returning a success message back to the application. In the event of a failure, one of the secondaries is automatically promoted to primary while Kubernetes attempts to recover the failed replica. When the replica becomes available, it is automatically joined back with the contained availability group and pending changes are synchronized. If there are connectivity issues between the replicas, and more than 2 replicas are out of sync, primary replica won't commit any transactions. --> [!NOTE] -> It is recommended to deploy a Business Critical SQL Managed Instance in a three replica configuration than a two replica configuration to achieve near-zero data loss. ---To fail over from the primary replica to one of the secondaries, for a planned event, run the following command: --If you connect to primary, you can use following T-SQL to fail over the SQL instance to one of the secondaries: -```code -ALTER AVAILABILITY GROUP current SET (ROLE = SECONDARY); -``` ---If you connect to the secondary, you can use following T-SQL to promote the desired secondary to primary replica. 
-```code -ALTER AVAILABILITY GROUP current SET (ROLE = PRIMARY); -``` -### Preferred primary replica --You can also set a specific replica to be the primary replica using AZ CLI as follows: -```azurecli -az sql mi-arc update --name <sqlinstance name> --k8s-namespace <namespace> --use-k8s --preferred-primary-replica <replica> -``` --Example: -```azurecli -az sql mi-arc update --name sqldemo --k8s-namespace my-namespace --use-k8s --preferred-primary-replica sqldemo-3 -``` --> [!NOTE] -> Kubernetes will attempt to set the preferred replica, however it is not guaranteed. --- ## Restoring a database onto a multi-replica instance --Additional steps are required to restore a database into an availability group. The following steps demonstrate how to restore a database into a managed instance and add it to an availability group. --1. Expose the primary instance external endpoint by creating a new Kubernetes service. -- Determine the pod that hosts the primary replica. Connect to the managed instance and run: -- ```sql - SELECT @@SERVERNAME - ``` -- The query returns the pod that hosts the primary replica. -- Create the Kubernetes service to the primary instance by running the following command if your Kubernetes cluster uses `NodePort` services. Replace `<podName>` with the name of the server returned at previous step, `<serviceName>` with the preferred name for the Kubernetes service created. -- ```console - kubectl -n <namespaceName> expose pod <podName> --port=1533 --name=<serviceName> --type=NodePort - ``` -- For a LoadBalancer service, run the same command, except that the type of the service created is `LoadBalancer`. For example: -- ```console - kubectl -n <namespaceName> expose pod <podName> --port=1533 --name=<serviceName> --type=LoadBalancer - ``` -- Here is an example of this command run against Azure Kubernetes Service, where the pod hosting the primary is `sql2-0`: -- ```console - kubectl -n arc-cluster expose pod sql2-0 --port=1533 --name=sql2-0-p --type=LoadBalancer - ``` -- Get the IP of the Kubernetes service created: -- ```console - kubectl get services -n <namespaceName> - ``` --2. Restore the database to the primary instance endpoint. -- Add the database backup file into the primary instance container. -- ```console - kubectl cp <source file location> <pod name>:var/opt/mssql/data/<file name> -c <serviceName> -n <namespaceName> - ``` -- Example -- ```console - kubectl cp /home/WideWorldImporters-Full.bak sql2-1:var/opt/mssql/data/WideWorldImporters-Full.bak -c arc-sqlmi -n arc - ``` -- Restore the database backup file by running the command below. -- ```sql - RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/<file name>.bak' - WITH MOVE '<database name>' to '/var/opt/mssql/datf' - ,MOVE '<database name>' to '/var/opt/mssql/data/<file name>_log.ldf' - ,RECOVERY, REPLACE, STATS = 5; - GO - ``` - - Example -- ```sql - RESTORE Database WideWorldImporters - FROM DISK = '/var/opt/mssql/data/WideWorldImporters-Full.BAK' - WITH - MOVE 'WWI_Primary' TO '/var/opt/mssql/datf', - MOVE 'WWI_UserData' TO '/var/opt/mssql/data/WideWorldImporters_UserData.ndf', - MOVE 'WWI_Log' TO '/var/opt/mssql/data/WideWorldImporters.ldf', - MOVE 'WWI_InMemory_Data_1' TO '/var/opt/mssql/data/WideWorldImporters_InMemory_Data_1', - RECOVERY, REPLACE, STATS = 5; - GO - ``` --3. Add the database to the availability group. -- For the database to be added to the AG, it must run in full recovery mode and a log backup has to be taken. 
Run the TSQL statements below to add the restored database into the availability group. -- ```sql - ALTER DATABASE <databaseName> SET RECOVERY FULL; - BACKUP DATABASE <databaseName> TO DISK='<filePath>' - ALTER AVAILABILITY GROUP containedag ADD DATABASE <databaseName> - ``` -- The following example adds a database named `WideWorldImporters` that was restored on the instance: -- ```sql - ALTER DATABASE WideWorldImporters SET RECOVERY FULL; - BACKUP DATABASE WideWorldImporters TO DISK='/var/opt/mssql/data/WideWorldImporters.bak' - ALTER AVAILABILITY GROUP containedag ADD DATABASE WideWorldImporters - ``` --> [!IMPORTANT] -> As a best practice, you should delete the Kubernetes service created above by running this command: -> ->```console ->kubectl delete svc sql2-0-p -n arc ->``` --### Limitations --SQL Managed Instance enabled by Azure Arc availability groups has the same limitations as Big Data Cluster availability groups. For more information, see [Deploy SQL Server Big Data Cluster with high availability](/sql/big-data-cluster/deployment-high-availability#known-limitations). --## Related content --Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) |
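The `SELECT * FROM sys.dm_hadr_availability_replica_states` query shown above returns every column in the DMV. If you only want to confirm that the replicas of the contained availability group are connected and synchronized, a narrower query such as the following sketch may be easier to read; these are standard Always On DMVs, and the column selection is illustrative.

```sql
-- Hedged sketch: summarize contained availability group replica health.
-- Run while connected to the primary endpoint of a Business Critical instance.
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.connected_state_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
    ON ars.replica_id = ar.replica_id;
```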
azure-arc | Managed Instance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-overview.md | - Title: SQL Managed Instance enabled by Azure Arc Overview -description: SQL Managed Instance enabled by Azure Arc Overview ------ Previously updated : 07/19/2023----# SQL Managed Instance enabled by Azure Arc Overview --SQL Managed Instance enabled by Azure Arc is an Azure SQL data service that can be created on the infrastructure of your choice. ---## Description --SQL Managed Instance enabled by Azure Arc has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead. --To learn more about these capabilities, watch these introductory videos. --### SQL Managed Instance enabled by Azure Arc - indirect connected mode --> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny] --### SQL Managed Instance enabled by Azure Arc - direct connected mode --> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny] --## Related content --Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) --[Azure Arc-enabled Managed Instance high availability](managed-instance-high-availability.md) --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
azure-arc | Migrate Postgresql Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-postgresql-data.md | - Title: Migrate data from a PostgreSQL database into an Azure Arc-enabled PostgreSQL server- -description: Migrate data from a PostgreSQL database into an Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Migrate PostgreSQL database to Azure Arc-enabled PostgreSQL server --This document describes the steps to get your existing PostgreSQL database (one that is not hosted in Azure Arc-enabled Data Services) into your Azure Arc-enabled PostgreSQL server. ---## Considerations --Azure Arc-enabled PostgreSQL server is the community version of PostgreSQL. So any tool that works on PostgreSQL outside of Azure Arc should work with Azure Arc-enabled PostgreSQL server. ---As such, with the set of tools you use today for Postgres, you should be able to: -1. Back up your Postgres database from your instance hosted outside of Azure Arc -2. Restore it in your Azure Arc-enabled PostgreSQL server --What will be left for you to do is: -- reset the server parameters-- reset the security contexts: recreate users, roles, and reset permissions...--To do this backup/restore operation, you can use any tool that is capable of doing backup/restore for Postgres. For example: -- Azure Data Studio and its Postgres extension-- `pgcli`-- `pgAdmin`-- `pg_dump`-- `pg_restore`-- `psql`-- ...--## Example --Let's illustrate those steps using the `pgAdmin` tool. -Consider the following setup: -- **Source:** - A Postgres server running on premises on a bare metal server and named JEANYDSRV. It is of version 14 and hosts a database named MyOnPremPostgresDB that has one table T1, which has 1 row. - :::image type="content" source="media/postgres-hyperscale/migrate-pg-source.jpg" alt-text="Migrate-source"::: --- **Destination:** - A Postgres server running in an Azure Arc environment and named postgres01. It is of version 14. It does not have any database except the standard Postgres database. - :::image type="content" source="media/postgres-hyperscale/migrate-pg-destination.jpg" alt-text="Migrate-destination"::: ---### Take a backup of the source database on premises ---Configure it: -1. Give it a file name: **MySourceBackup** -2. Set the format to **Custom** --The backup completes successfully: --### Create an empty database on the destination system in your Azure Arc-enabled PostgreSQL server --> [!NOTE] -> To register a Postgres instance in the `pgAdmin` tool, you need to use the public IP of your instance in your Kubernetes cluster and set the port and security context appropriately.
You will find these details on the `psql` endpoint line after running the following command: --```azurecli -az postgres server-arc endpoint list -n postgres01 --k8s-namespace <namespace> --use-k8s -``` -That returns an output like: -```console -{ - "instances": [ - { - "endpoints": [ - "Description": "PostgreSQL Instance", - "Endpoint": "postgresql://postgres:<replace with password>@12.345.123.456:1234" - }, - { - "Description": "Log Search Dashboard", - "Endpoint": "https://12.345.123.456:12345/kibana/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:\"postgres01\"'))" - }, - { - "Description": "Metrics Dashboard", - "Endpoint": "https://12.345.123.456:12345/grafana/d/postgres-metrics?var-Namespace=arc3&var-Name=postgres01" - } -], -"engine": "PostgreSql", -"name": "postgres01" -} - ], - "namespace": "arc" -} -``` --Let's name the destination database **RESTORED_MyOnPremPostgresDB**. ---### Restore the database in your Arc setup ---Configure the restore: -1. Point to the file that contains the backup to restore: **MySourceBackup** -2. Keep the format set to **Custom or tar** - :::image type="content" source="media/postgres-hyperscale/migrate-pg-destination-dbrestore2.jpg" alt-text="Migrate-db-restore-configure"::: --3. Click **Restore**. -- The restore is successful. - :::image type="content" source="media/postgres-hyperscale/migrate-pg-destination-dbrestore3.jpg" alt-text="Migrate-db-restore-completed"::: --### Verify that the database was successfully restored in your Azure Arc-enabled PostgreSQL server --Use either of the following methods: --**From `pgAdmin`:** --Expand the Postgres instance hosted in your Azure Arc setup. You will see the table in the database that you have restored and when you select the data it shows the same row as that it has in the on-premises instance: -- :::image type="content" source="media/postgres-hyperscale/migrate-pg-destination-dbrestoreverif.jpg" alt-text="Migrate-db-restore-verification"::: --**From `psql` inside your Azure Arc setup:** --Within your Arc setup you can use `psql` to connect to your Postgres instance, set the database context to `RESTORED_MyOnPremPostgresDB` and query the data: --1. List the end points to help form your `psql` connection string: -- ```Az CLI - az postgres server-arc endpoint list -n postgres01 --k8s-namespace <namespace> --use-k8s - ``` -- ```Az CLI - { - "instances": [ - { - "endpoints": [ - "Description": "PostgreSQL Instance", - "Endpoint": "postgresql://postgres:<replace with password>@12.345.123.456:1234" - }, - { - "Description": "Log Search Dashboard", - "Endpoint": "https://12.345.123.456:12345/kibana/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:\"postgres01\"'))" - }, - { - "Description": "Metrics Dashboard", - "Endpoint": "https://12.345.123.456:12345/grafana/d/postgres-metrics?var-Namespace=arc3&var-Name=postgres01" - } - ], - "engine": "PostgreSql", - "name": "postgres01" - } - ], - "namespace": "arc" - } - ``` --1. From your `psql` connection string use the `-d` parameter to indicate the database name. With the below command, you will be prompted for the password: -- ```console - psql -d RESTORED_MyOnPremPostgresDB -U postgres -h 10.0.0.4 -p 32639 - ``` -- `psql` connects. -- ```output - Password for user postgres: - psql (10.12 (Ubuntu 10.12-0ubuntu0.18.04.1), server 12.3 (Debian 12.3-1.pgdg100+1)) - WARNING: psql major version 10, server major version 12. - Some psql features might not work. 
- SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) - Type "help" for help. -- RESTORED_MyOnPremPostgresDB=# - ``` --1. Select the table and you'll see the data that you restored from the on-premises Postgres instance: -- ```console - RESTORED_MyOnPremPostgresDB=# select * from t1; - ``` -- ```output - col1 | col2 - +- - 1 | BobbyIsADog - (1 row) - ``` --> [!NOTE] -> - It is not possible today to "onboard into Azure Arc" an existing Postgres instance that is running on premises or in any other cloud. In other words, it is not possible to install some sort of "Azure Arc agent" on your existing Postgres instance to make it a Postgres setup enabled by Azure Arc. Instead, you need to create a new Postgres instance and transfer data into it. You may use the technique shown above to do this or you may use any ETL tool of your choice. ---> In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for PostgreSQL**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL server offered as a PaaS service in the Azure cloud, but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL server. |
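The `pgAdmin` walk-through above can also be scripted with `pg_dump` and `pg_restore`, which the tools list earlier in the article calls out as equally valid. Here is a rough sketch reusing the names from the example; host names, ports, and credentials are placeholders, and the custom (`-Fc`) format matches the **Custom** format chosen in pgAdmin.

```console
# Hedged sketch: the same backup/restore flow with pg_dump and pg_restore.
# 1. Back up the source database on the on-premises server.
pg_dump -h JEANYDSRV -U postgres -Fc -f MySourceBackup MyOnPremPostgresDB

# 2. Create the empty destination database on the Azure Arc-enabled PostgreSQL server.
createdb -h <public IP of the Arc instance> -p <port> -U postgres RESTORED_MyOnPremPostgresDB

# 3. Restore the backup into the destination database.
pg_restore -h <public IP of the Arc instance> -p <port> -U postgres -d RESTORED_MyOnPremPostgresDB MySourceBackup
```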
azure-arc | Migrate To Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-to-managed-instance.md | - Title: Migrate a database from SQL Server to SQL Managed Instance enabled by Azure Arc -description: Migrate database from SQL Server to SQL Managed Instance enabled by Azure Arc ------ Previously updated : 07/30/2021----# Migrate: SQL Server to SQL Managed Instance enabled by Azure Arc --This scenario walks you through the steps for migrating a database from a SQL Server instance to Azure SQL managed instance in Azure Arc via two different backup and restore methods. ---## Use Azure blob storage --Use Azure blob storage for migrating to SQL Managed Instance enabled by Azure Arc. --This method uses Azure Blob Storage as a temporary storage location that you can back up to and then restore from. --### Prerequisites --- [Install Azure Data Studio](install-client-tools.md)-- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --- [Install Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)-- Azure subscription----### Step 1: Provision Azure blob storage --1. Follow the steps described in [Create an Azure Blob Storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) -1. Launch Azure Storage Explorer -1. [Sign in to Azure](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) to access the blob storage created in previous step -1. Right-click on the blob storage account and select **Create Blob Container** to create a new container where the backup file will be stored --### Step 2: Get storage blob credentials --1. In Azure Storage Explorer, right-click on the blob container that was just created and select **Get Shared Access Signature** --1. Select the **Read**, **Write** and **List** --1. Select **Create** -- Take note of the URI and the Query String from this screen. These will be needed in later steps. Click on the **Copy** button to save to a Notepad/OneNote etc. --1. Close the **Shared Access Signature** window. --### Step 3: Backup database file to Azure Blob Storage --In this step, we will connect to the source SQL Server and create the backup file of the database that we want to migrate to SQL Managed Instance - Azure Arc. --1. Launch Azure Data Studio -1. Connect to the SQL Server instance that has the database you want to migrate to SQL Managed Instance - Azure Arc -1. Right-click on the database and select **New Query** -1. Prepare your query in the following format replacing the placeholders indicated by the `<...>` using the information from the shared access signature in earlier steps. Once you have substituted the values, run the query. -- ```sql - IF NOT EXISTS - (SELECT * FROM sys.credentials - WHERE name = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>') - CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>] - WITH IDENTITY = 'SHARED ACCESS SIGNATURE', - SECRET = '<SAS_TOKEN>'; - ``` --1. Similarly, prepare the **BACKUP DATABASE** command as follows to create a backup file to the blob container. Once you have substituted the values, run the query. -- ```sql - BACKUP DATABASE <database name> TO URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak' - ``` --1. 
Open Azure Storage Explorer and validate that the backup file created in previous step is visible in the Blob container --Learn more about backup to URL here: --- [SQL Server Backup and Restore with Azure Blob Storage](/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service)--- [Back up to URL docs](/sql/relational-databases/backup-restore/sql-server-backup-to-url)--- [Back up to URL using SQL Server Management Studio (SSMS)](/sql/relational-databases/tutorial-sql-server-backup-and-restore-to-azure-blob-storage-service)---### Step 4: Restore the database from Azure blob storage to SQL Managed Instance - Azure Arc --1. From Azure Data Studio, login and connect to the SQL Managed Instance - Azure Arc. -1. Expand the **System Databases**, right-click on **master** database and select **New Query**. -1. In the query editor window, prepare and run the same query from previous step to create the credentials. -- ```sql - IF NOT EXISTS - (SELECT * FROM sys.credentials - WHERE name = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>') - CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>] - WITH IDENTITY = 'SHARED ACCESS SIGNATURE', - SECRET = '<SAS_TOKEN>'; - ``` --1. Prepare and run the below command to verify the backup file is readable, and intact. -- ```console - RESTORE FILELISTONLY FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak' - ``` --1. Prepare and run the **RESTORE DATABASE** command as follows to restore the backup file to a database on SQL Managed Instance - Azure Arc -- ```sql - RESTORE DATABASE <database name> FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak' - WITH MOVE 'Test' to '/var/opt/mssql/datf' - ,MOVE 'Test_log' to '/var/opt/mssql/data/<file name>.ldf' - ,RECOVERY; - GO - ``` -----## Method 2: Copy the backup file into an Azure SQL Managed Instance - Azure Arc pod using kubectl --This method shows you how to take a backup file that you create via any method and then copy it into local storage in the Azure SQL managed instance pod so you can restore from there much like you would on a typical file system on Windows or Linux. In this scenario, you will be using the command `kubectl cp` to copy the file from one place into the pod's file system. --### Prerequisites --- Install and configure kubectl to point to your Kubernetes cluster where Azure Arc data services is deployed-- Have a tool like Azure Data Studio or SQL Server Management Server installed and connected to the SQL Server where you want to create the backup file OR have an existing .bak file already created on your local file system.--### Step 1: Backup the database if you haven't already --Backup the SQL Server database to your local file path like any typical SQL Server backup to disk: --```sql -BACKUP DATABASE Test -TO DISK = 'C:\Backupfiles\test.bak' -WITH FORMAT, MEDIANAME = 'Test' ; -GO -``` --### Step 2: Copy the backup file into the pod's file system --Find the name of the pod where the sql instance is deployed. Typically it should look like `pod/<sqlinstancename>-0` --Get the list of all pods by running: --```console -kubectl get pods -n <namespace of data controller> -``` --Example: --Copy the backup file from the local storage to the sql pod in the cluster. 
--```console -kubectl cp <source file location> <pod name>:/var/opt/mssql/data/<file name> -n <namespace name> -c arc-sqlmi --#Example: -kubectl cp C:\Backupfiles\test.bak sqlinstance1-0:/var/opt/mssql/data/test.bak -n arc -c arc-sqlmi -``` --### Step 3: Restore the database --Prepare and run the RESTORE command to restore the backup file to the Azure SQL managed instance - Azure Arc. --```sql -RESTORE DATABASE <database name> FROM DISK = '/var/opt/mssql/data/<file name>.bak' -WITH MOVE '<database name>' to '/var/opt/mssql/data/<file name>.mdf' -,MOVE '<database name>_log' to '/var/opt/mssql/data/<file name>_log.ldf' -,RECOVERY; -GO -``` --Example: --```sql -RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/test.bak' -WITH MOVE 'test' to '/var/opt/mssql/data/test.mdf' -,MOVE 'test_log' to '/var/opt/mssql/data/test_log.ldf' -,RECOVERY; -GO -``` --## Related content --[Learn more about Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
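As a quick post-restore check for either method, you can query the instance from inside the pod. The following is a minimal sketch, assuming the `arc-sqlmi` container includes the SQL command-line tools at `/opt/mssql-tools/bin/sqlcmd` and that a SQL login for the instance is available; adjust the pod name, namespace, and credentials to match your deployment.

```console
# List databases on the instance to confirm the restored database is ONLINE.
kubectl exec sqlinstance1-0 -n arc -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U <login> -P <password> \
  -Q "SELECT name, state_desc FROM sys.databases"
```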
azure-arc | Monitor Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitor-certificates.md | - Title: Provide certificates for monitoring -description: Explains how to provide certificates for monitoring ------ Previously updated : 12/15/2021----# Provide SSL certificates for monitoring --Beginning with the December 2021 release, Azure Arc-enabled data services allow you to provide SSL/TLS certificates for the monitoring dashboards. You can use these certificates for the logs (Kibana) and metrics (Grafana) dashboards. --You can specify the certificate when you create a data controller with: -- Azure `az` CLI `arcdata` extension-- Kubernetes native deployment--Microsoft provides sample files to create the certificates in the [/microsoft/azure_arc/](https://github.com/microsoft/azure_arc) GitHub repository. --You can clone the repository locally to access the sample files. --```console -git clone https://github.com/microsoft/azure_arc -``` --The files that are referenced in this article are in the repository under `/arc_data_services/deploy/scripts/monitoring`. --## Create or acquire appropriate certificates --You need an appropriate certificate for each UI: one for logs and one for metrics. The following table describes the requirements for each certificate and key. --|Requirement|Logs certificate|Metrics certificate| -|--|--|--| -|CN|`logsui-svc`|`metricsui-svc`| -|SANs| None required | `metricsui-svc.${NAMESPACE}.${K8S_DNS_DOMAIN_NAME}`| -|keyUsage|`digitalsignature`<br/><br>`keyEncipherment`|`digitalsignature`<br/><br>`keyEncipherment`| -|extendedKeyUsage|`serverAuth`|`serverAuth`| --> [!NOTE] -> Default K8S_DNS_DOMAIN_NAME is `svc.cluster.local`, though it may differ depending on environment and configuration. --The GitHub repository directory includes example template files that identify the certificate specifications. --- [/arc_data_services/deploy/scripts/monitoring/logsui-ssl.conf.tmpl](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/monitoring/logsui-ssl.conf.tmpl)-- [/arc_data_services/deploy/scripts/monitoring/metricsui-ssl.conf.tmpl](https://github.com/microsoft/azure_arc/blob/main/arc_data_services/deploy/scripts/monitoring/metricsui-ssl.conf.tmpl) --The Azure Arc samples GitHub repository provides an example you can use to generate a compliant certificate and private key for an endpoint. --See the code from [/arc_data_services/deploy/scripts/monitoring/create-monitoring-tls-files.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/monitoring). --To use the example to create certificates, update the following command with your `namespace` and the directory for the certificates (`output_directory`). Then run the command. --```console -./create-monitor-tls-files.sh <namespace> <output_directory> -``` --This creates compliant certificates in the directory. --## Deploy with CLI --After you have the certificate/private key for each endpoint, create the data controller with the `az arcdata dc create` command. --To apply your own certificate/private key, use the following arguments. 
-- - `--logs-ui-public-key-file <path\file to logs public key file>` - - `--logs-ui-private-key-file <path\file to logs private key file>` - - `--metrics-ui-public-key-file <path\file to metrics public key file>` - - `--metrics-ui-private-key-file <path\file to metrics private key file>` --For example, the following command creates a data controller with designated certificates for the logs and metrics UI dashboards: --```azurecli -az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --logs-ui-public-key-file <path\file to logs public key file> --logs-ui-private-key-file <path\file to logs private key file> --metrics-ui-public-key-file <path\file to metrics public key file> --metrics-ui-private-key-file <path\file to metrics private key file> --#Example: -#az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --logs-ui-public-key-file /path/to/logsuipublickeyfile.pem --logs-ui-private-key-file /path/to/logsuiprivatekey.pem --metrics-ui-public-key-file /path/to/metricsuipublickeyfile.pem --metrics-ui-private-key-file /path/to/metricsuiprivatekey.pem -``` --You can only specify certificates when you include `--use-k8s` in the `az arcdata dc create ...` statement. --## Deploy with Kubernetes native tools --If you are using Kubernetes native tools to deploy, create Kubernetes secrets that hold the certificates and private keys, as sketched after the related links below. Create the following secrets: --- `logsui-certificate-secret` -- `metricsui-certificate-secret`--Make sure the services are listed as subject alternative names (SANs) and the certificate usage parameters are correct. --1. Verify each secret has the following fields: - - `certificate.pem` containing the base64 encoded certificate - - `privatekey.pem` containing the private key --## Related content -- Try [Upload metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md)-- Read about Grafana:- - [Getting started](https://grafana.com/docs/grafana/latest/getting-started/getting-started) - - [Grafana fundamentals](https://grafana.com/tutorials/grafana-fundamentals/#1) - - [Grafana tutorials](https://grafana.com/tutorials/grafana-fundamentals/#1) -- Read about Kibana- - [Introduction](https://www.elastic.co/webinars/getting-started-kibana?baymax=default&elektra=docs&storm=top-video) - - [Kibana guide](https://www.elastic.co/guide/en/kibana/current/index.html) - - [Introduction to dashboard drilldowns with data visualizations in Kibana](https://www.elastic.co/webinars/dashboard-drilldowns-with-data-visualizations-in-kibana/) - - [How to build Kibana dashboards](https://www.elastic.co/webinars/how-to-build-kibana-dashboards/) |
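For the Kubernetes-native path, the secrets can be created directly from the generated files with `kubectl`. This is a minimal sketch, assuming the certificate and key produced earlier are on disk as `logsui-certificate.pem`/`logsui-privatekey.pem` (plus the metrics equivalents) and that the data controller namespace is `arc`; confirm the exact secret format your data controller version expects.

```console
# Create the logs UI secret; kubectl base64-encodes the file contents automatically.
kubectl create secret generic logsui-certificate-secret \
  --from-file=certificate.pem=./logsui-certificate.pem \
  --from-file=privatekey.pem=./logsui-privatekey.pem \
  -n arc

# Repeat for the metrics UI secret.
kubectl create secret generic metricsui-certificate-secret \
  --from-file=certificate.pem=./metricsui-certificate.pem \
  --from-file=privatekey.pem=./metricsui-privatekey.pem \
  -n arc
```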
azure-arc | Monitor Grafana Kibana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitor-grafana-kibana.md | - Title: View logs and metrics using Kibana and Grafana -description: View logs and metrics using Kibana and Grafana ------- Previously updated : 11/03/2021----# View logs and metrics using Kibana and Grafana --Kibana and Grafana web dashboards are provided to bring insight and clarity to the Kubernetes namespaces being used by Azure Arc-enabled data services. To find the service endpoints for the Kibana and Grafana web dashboards, see the [Azure Data Studio dashboards](./azure-data-studio-dashboards.md) documentation. ---## Monitor Azure SQL managed instances on Azure Arc --To access the logs and monitoring dashboards for SQL Managed Instance enabled by Azure Arc, run the following `az` CLI command: --```azurecli -az sql mi-arc endpoint list -n <name of SQL instance> --use-k8s -``` --The relevant Grafana dashboards are: --* "Azure SQL managed instance Metrics" -* "Host Node Metrics" -* "Host Pods Metrics" ---> [!NOTE] -> When prompted to enter a username and password, enter the username and password that you provided at the time that you created the Azure Arc data controller. --> [!NOTE] -> You will be prompted with a certificate warning because the certificates are self-signed certificates. ---## Monitor Azure Arc-enabled PostgreSQL server --To access the logs and monitoring dashboards for an Azure Arc-enabled PostgreSQL server, run the following `az` CLI command: --```azurecli -az postgres server-arc endpoint list -n <name of postgreSQL instance> --k8s-namespace <namespace> --use-k8s -``` --The relevant PostgreSQL dashboards are: --* "Postgres Metrics" -* "Postgres Table Metrics" -* "Host Node Metrics" -* "Host Pods Metrics" ---## Additional firewall configuration --Depending on where the data controller is deployed, you may find that you need to open ports on your firewall to access the Kibana and Grafana endpoints. --Below is an example of how to do this for an Azure VM. You will need to do this if you have deployed Kubernetes using the script. 
--The steps below highlight how to create an NSG rule for the Kibana and Grafana endpoints: --### Find the name of the NSG --```azurecli -az network nsg list -g azurearcvm-rg --query "[].{NSGName:name}" -o table -``` --### Add the NSG rule --Once you have the name of the NSG, you can add a rule using the following command: --```azurecli -az network nsg rule create -n ports_30777 --nsg-name azurearcvmNSG --priority 600 -g azurearcvm-rg --access Allow --description 'Allow Kibana and Grafana ports' --destination-address-prefixes '*' --destination-port-ranges 30777 --direction Inbound --protocol Tcp --source-address-prefixes '*' --source-port-ranges '*' -``` ---## Related content -- Try [Upload metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md)-- Read about Grafana:- - [Getting started](https://grafana.com/docs/grafana/latest/getting-started/getting-started) - - [Grafana fundamentals](https://grafana.com/tutorials/grafana-fundamentals/#1) - - [Grafana tutorials](https://grafana.com/tutorials/grafana-fundamentals/#1) -- Read about Kibana- - [Introduction](https://www.elastic.co/webinars/getting-started-kibana?baymax=default&elektra=docs&storm=top-video) - - [Kibana guide](https://www.elastic.co/guide/en/kibana/current/index.html) - - [Introduction to dashboard drilldowns with data visualizations in Kibana](https://www.elastic.co/webinars/dashboard-drilldowns-with-data-visualizations-in-kibana/) - - [How to build Kibana dashboards](https://www.elastic.co/webinars/how-to-build-kibana-dashboards/) |
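After the NSG rule is in place, a quick reachability check can confirm the port is open from your workstation. This is a minimal sketch; the VM name placeholder and port 30777 are illustrative, so substitute the node port reported by `az sql mi-arc endpoint list` for your deployment.

```console
# Retrieve the VM's public IP and test that the Grafana/Kibana node port responds.
# -k skips certificate validation because the endpoints use self-signed certificates.
PUBLIC_IP=$(az vm show -d -g azurearcvm-rg -n <vm name> --query publicIps -o tsv)
curl -k -I "https://${PUBLIC_IP}:30777"
```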
azure-arc | Monitoring Log Analytics Azure Portal Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitoring-log-analytics-azure-portal-managed-instance.md | - Title: Monitoring, log analytics, Azure portal (SQL Managed Instance) -description: Monitor Azure Arc-enabled data services for SQL Managed Instance. ------ Previously updated : 07/30/2021----# Monitoring, log analytics, billing information, Azure portal (SQL Managed Instance) --This article lists additional experiences you can have with Azure Arc-enabled data services. ---## Experiences ---## Related content -- [Read about the overview of Azure Arc-enabled data services](overview.md)-- [Read about connectivity modes and requirements for Azure Arc-enabled data services](connectivity.md) |
azure-arc | Monitoring Log Analytics Azure Portal Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitoring-log-analytics-azure-portal-postgresql.md | - Title: Monitoring, log analytics, Azure portal (PostgreSQL server) -description: Monitor Azure Arc-enabled PostgreSQL services ------ Previously updated : 09/22/2020----# Monitoring, log analytics, billing information, Azure portal (PostgreSQL server) --This article lists additional experiences you can have with Azure Arc-enabled data services. ---## Experiences ---## Related content -- [Read about the overview of Azure Arc-enabled data services](overview.md)-- [Read about connectivity modes and requirements for Azure Arc-enabled data services](connectivity.md) |
azure-arc | Offline Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/offline-deployment.md | - Title: Offline deployment -description: Offline deployment enables you to pull container images from a private container registry instead of pulling from the Microsoft Container Registry. ------ Previously updated : 09/22/2020----# Offline Deployment Overview --Typically the container images used in the creation of the Azure Arc data controller, SQL managed instances and PostgreSQL servers are directly pulled from the Microsoft Container Registry (MCR). In some cases, the environment that you're deploying to won't have connectivity to the Microsoft Container Registry. For situations like this, you can pull the container images using a computer, which _does_ have access to the Microsoft Container Registry and then tag and push them to a private container registry that _is_ connectable from the environment in which you want to deploy Azure Arc-enabled data services. --Because monthly updates are provided for Azure Arc-enabled data services and there are a large number of container images, it's best to perform this process of pulling, tagging, and pushing the container images to a private container registry using a script. The script can either be automated or run manually. --A [sample script](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/scripts/pull-and-push-arc-data-services-images-to-private-registry.py) can be found in the Azure Arc GitHub repository. --> [!NOTE] -> This script requires the installation of Python and the [Docker CLI](https://docs.docker.com/install/). --The script will interactively prompt for the following information. Alternatively, if you want to have the script run without interactive prompts, you can set the corresponding environment variables before running the script. --|Prompt|Environment Variable|Notes| -|||| -|Provide source container registry - press ENTER for using `mcr.microsoft.com`|SOURCE_DOCKER_REGISTRY|Typically, you would pull the images from the Microsoft Container Registry, but if you're participating in a preview with a different registry, you can use the information provided to you as part of the preview program.| -|Provide source container registry repository - press ENTER for using `arcdata`:|SOURCE_DOCKER_REPOSITORY|If you're pulling from the Microsoft Container Registry, the repository will be `arcdata`.| -|Provide username for the source container registry - press ENTER for using none:|SOURCE_DOCKER_USERNAME|Only provide a value if you're pulling container images from a source that requires login. The Microsoft Container Registry doesn't require a login.| -|Provide password for the source container registry - press ENTER for using none:|SOURCE_DOCKER_PASSWORD|Only provide a value if you're pulling container images from a source that requires login. The Microsoft Container Registry doesn't require a login. The prompt uses a masked password prompt. You won't see the password if you type or paste it in.| -|Provide container image tag for the images at the source - press ENTER for using '`<current monthly release tag>`':|SOURCE_DOCKER_TAG|The default tag name will be updated monthly to reflect the month and year of the current release on the Microsoft Container Registry.| -|Provide target container registry DNS name or IP address:|TARGET_DOCKER_REGISTRY|The target registry DNS name or IP address. 
This prompt is the registry that the images will be pushed _to_.| -|Provide target container registry repository:|TARGET_DOCKER_REPOSITORY|The repository on the target registry to push the images to.| -|Provide username for the target container registry - press enter for using none:|TARGET_DOCKER_USERNAME|The username, if any, that is used to log in to the target container registry.| -|Provide password for the target container registry - press enter for using none:|TARGET_DOCKER_PASSWORD|The password, if any, that is used to log in to the target container registry. This prompt is a masked password prompt. You won't see the password if you type or paste it in.| -|Provide container image tag for the images at the target:|TARGET_DOCKER_TAG|Typically, you would use the same tag as the source to avoid confusion.| |
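If you want to see what the offline deployment script does for a single image before running it end to end, the per-image flow is an ordinary pull, tag, and push. The following is a minimal sketch with placeholder names; the image name and tags shown are illustrative, so use the values the script prompts for (or the environment variables in the table above).

```console
# Pull one image from the source registry (the Microsoft Container Registry by default).
docker pull mcr.microsoft.com/arcdata/<image name>:<source tag>

# Re-tag it for the private registry and repository.
docker tag mcr.microsoft.com/arcdata/<image name>:<source tag> \
  <target registry>/<target repository>/<image name>:<target tag>

# Log in to the private registry and push the re-tagged image.
docker login <target registry>
docker push <target registry>/<target repository>/<image name>:<target tag>
```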
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md | - Title: Introducing Azure Arc-enabled data services -description: Describes Azure Arc-enabled data services ------- Previously updated : 07/19/2023--# Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature. ---# What are Azure Arc-enabled data services? --Azure Arc makes it possible to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. --Currently, the following Azure Arc-enabled data services are available: --- SQL Managed Instance-- Azure Arc-enabled PostgreSQL (preview)--For an introduction to how Azure Arc-enabled data services supports your hybrid work environment, see this introductory video: --> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny] --## Always current --Azure Arc-enabled data services such as SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL server receive updates on a frequent basis including servicing patches and new features similar to the experience in Azure. Updates from the Microsoft Container Registry are provided to you and deployment cadences are set by you in accordance with your policies. This way, on-premises databases can stay up to date while ensuring you maintain control. Because Azure Arc-enabled data services are a subscription service, you will no longer face end-of-support situations for your databases. --## Elastic scale --Cloud-like elasticity on-premises enables you to scale databases up or down dynamically in much the same way as they do in Azure, based on the available capacity of your infrastructure. This capability can satisfy burst scenarios that have volatile needs, including scenarios that require ingesting and querying data in real time, at any scale, with sub-second response time. --## Self-service provisioning --Azure Arc also provides other cloud benefits such as fast deployment and automation at scale. Thanks to Kubernetes-based orchestration, you can deploy a database in seconds using either GUI or CLI tools. --## Unified management --Using familiar tools such as the Azure portal, Azure Data Studio, and the Azure CLI (`az`) with the `arcdata` extension, you can now gain a unified view of all your data assets deployed with Azure Arc. You are able to not only view and manage a variety of relational databases across your environment and Azure, but also get logs and telemetry from Kubernetes APIs to analyze the underlying infrastructure capacity and health. Besides having localized log analytics and performance monitoring, you can now leverage Azure Monitor for comprehensive operational insights across your entire estate. ---## Disconnected scenario support --Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure. Connecting directly to Azure opens up additional options for integration with other Azure services such as Azure Monitor and the ability to use the Azure portal and Azure Resource Manager APIs from anywhere in the world to manage your Azure Arc-enabled data services. 
--## Supported regions --To see the regions that currently support Azure Arc-enabled data services, go to [Azure Products by Region - Azure Arc](https://azure.microsoft.com/global-infrastructure/services/?cdn=disable&products=azure-arc). ---## Related content --> **Just want to try things out?** -> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), or in an Azure VM. -> ->In addition, deploy [Jumpstart ArcBox for DataOps](https://azurearcjumpstart.com/azure_jumpstart_arcbox/DataOps), an easy-to-deploy sandbox for all things SQL Managed Instance enabled by Azure Arc. ArcBox is designed to be completely self-contained within a single Azure subscription and resource group, which makes it easy for you to get hands-on with all available Azure Arc-enabled technology with nothing more than an Azure subscription. --[Install the client tools](install-client-tools.md) --[Plan your Azure Arc data services deployment](plan-azure-arc-data-services.md) (requires installing the client tools first) --[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first) --[Create an Azure Database for PostgreSQL server on Azure Arc](create-postgresql-server.md) (requires creation of an Azure Arc data controller first) |
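If you want to check your tooling before following the links above, the `arcdata` CLI extension mentioned under unified management can be added and verified in two commands. This is a minimal sketch; see [Install the client tools](install-client-tools.md) for the full prerequisites.

```azurecli
# Add the arcdata extension to the Azure CLI and confirm the installed version.
az extension add --name arcdata
az extension show --name arcdata --query version -o tsv
```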
azure-arc | Plan Azure Arc Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md | - Title: Plan an Azure Arc-enabled data services deployment -description: This article explains the considerations for planning an Azure Arc-enabled data services deployment. ------ Previously updated : 07/19/2023---# Plan an Azure Arc-enabled data services deployment --This article describes how to plan to deploy Azure Arc-enabled data services. --> [!TIP] -> Review all of the information in this article before you start your deployment. --## Deployment steps --In order to experience Azure Arc-enabled data services, you'll need to complete the following tasks. --1. Plan your deployment -- The details in this article will guide your plan. --1. [Install client tools](install-client-tools.md). --1. Register the Microsoft.AzureArcData provider for the subscription where the Azure Arc-enabled data services will be deployed, as follows: -- ```azurecli - az provider register --namespace Microsoft.AzureArcData - ``` --1. Access a Kubernetes cluster. -- For demonstration, testing, and validation purposes, you can use an Azure Kubernetes Service cluster. To create a cluster, follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process. --1. [Create Azure Arc data controller in direct connectivity mode (prerequisites)](create-data-controller-direct-prerequisites.md). -- For other ways to create a data controller see the links under [Related content](#related-content). --1. Create data services. -- For example, [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md). --1. Connect with Azure Data Studio. --As you begin planning to deploy Azure Arc-enabled data services, it's important to properly understand your database workloads and your business requirements for those workloads. For example, you need to consider availability, business continuity, and capacity requirements for memory, CPU, and storage for the workloads. And you need to carefully prepare the infrastructure to support the database workloads, based on your business requirements. --## Prerequisites --Before you start, be sure that you've met certain prerequisites and have the necessary background and information ready. To ensure a successful deployment, your infrastructure environment must be properly configured with the right level of access and the appropriate capacity for storage, CPU, and memory. --Review the following articles: --- [Sizing guidance](sizing-guidance.md)-- [Storage configuration](storage-configuration.md)-- [Connectivity modes and their requirements](connectivity.md)--Verify that: --- The [`arcdata` CLI extension](install-arcdata-extension.md) is installed.-- The other [client tools](install-client-tools.md) are installed.-- You have access to the Kubernetes cluster.-- Your *kubeconfig* file is configured. It should point to the Kubernetes cluster that you want to deploy to. 
To verify the current context of the cluster, run the following command:-- ```console - kubectl cluster-info - ``` -- You have an Azure subscription that resources such as an Azure Arc data controller, SQL Managed Instance enabled by Azure Arc, or Azure Arc-enabled PostgreSQL server will be projected and billed to.-- The Microsoft.AzureArcData provider is registered for the subscription where the Azure Arc-enabled data services will be deployed.--After you're prepared the infrastructure, deploy Azure Arc-enabled data services in the following way: -1. Create an Azure Arc-enabled data controller on one of the validated distributions of a Kubernetes cluster. -1. Create a SQL Managed Instance enabled by Azure Arc and/or an Azure Arc-enabled PostgreSQL server. --> [!CAUTION] -> Some of the data services tiers and modes are in [general availability (GA)](release-notes.md), and some are in preview. We recommend that you don't mix GA and preview services on the same data controller. If you mix GA and preview services on the same data controller, you can't upgrade in place. In that scenario, when you want to upgrade, you must remove and re-create the data controller and data services. --## Deployment requirements --You can deploy Azure Arc-enabled data services on various types of Kubernetes clusters. Currently, the validated list of Kubernetes services and distributions includes: --- Amazon Elastic Kubernetes Service (Amazon EKS)-- Azure Kubernetes Service (AKS)-- Azure Kubernetes Service on Azure Stack HCI-- Azure Red Hat OpenShift-- Google Kubernetes Engine (GKE)-- Open source, upstream Kubernetes (typically deployed by using kubeadm)-- OpenShift Container Platform (OCP)-- K3s-- Additional [partner-validated Kubernetes distributions](./validation-program.md)--> [!IMPORTANT] -> * The minimum supported version of Kubernetes is v1.21. -> * The minimum supported version of OCP is 4.8. -> * If you're using Azure Kubernetes Service, your cluster's worker node virtual machine (VM) size should be at least Standard_D8s_v3 and use Premium Disks. -> * The cluster should not span multiple availability zones. -> * For more information, review [Release notes](./release-notes.md). --## Deployment information --When you're creating Azure Arc-enabled data services, regardless of the service or distribution option you choose, you'll need to provide the following information: --- **Data controller name**: A descriptive name for your data controller (for example, *production-dc* or *seattle-dc*). The name must meet [Kubernetes naming standards](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).--- **Username**: The username for the Kibana/Grafana administrator user.-- **Password**: The password for the Kibana/Grafana administrator user.-- **Name of your Kubernetes namespace**: The name of the Kubernetes namespace where you want to create the data controller.-- **Connectivity mode**: Determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Your choice of connectivity mode determines the options for deployment methods. For more information, see [Connectivity modes and requirements](./connectivity.md).-- **Azure subscription ID**: The Azure subscription GUID for where you want to create the data controller resource in Azure. 
All deployments of SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL are also created in and billed to this subscription.-- **Azure resource group name**: The name of the resource group where you want to create the data controller resource in Azure. All deployments of SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL are also created in this resource group.-- **Azure location**: The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc) page for Azure global infrastructure. The metadata and billing information about the Azure resources that are managed by your deployed data controller is stored only in the location in Azure that you specify as the location parameter. If you're deploying in direct connectivity mode, the location parameter for the data controller is the same as the location of your targeted custom location resource.-- **Service principal information**: - - If you're deploying in **indirect** connectivity mode, you'll need service principal information to upload usage and metrics data. For more information, see the "Assign roles to the service principal" section of [Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md). --- **Infrastructure**: For billing purposes, you must indicate the infrastructure on which you're running Azure Arc-enabled data services. The options are:-- `alibaba`-- `aws`-- `azure`-- `gcp`-- `onpremises`-- `other`--- **Container runtime**: Use `containerd` runtime for the container runtime. Azure Arc-enabled data services don't support Docker runtime.--## Additional concepts for direct connectivity mode --As outlined in [Connectivity modes and requirements](./connectivity.md), you can deploy the Azure Arc data controller either in **direct** or **indirect** connectivity mode. Deploying Azure Arc data services in direct connectivity mode requires additional concepts and considerations: --* First, the Kubernetes cluster where the Azure Arc-enabled data services will be deployed needs to be an [Azure Arc-enabled Kubernetes cluster](../kubernetes/overview.md). By connecting your Kubernetes cluster to Azure, you can deploy and manage Azure Arc data services to your cluster directly from the Azure portal, upload your usage, logs and metrics to Azure automatically and get several other Azure benefits. To learn how, see [Connect your cluster to Azure](../kubernetes/quickstart-connect-cluster.md). --* After the Kubernetes cluster is Azure Arc-enabled, deploy Azure Arc-enabled data services by doing the following: - 1. Create the Azure Arc data services extension. To learn how, see [Cluster extensions on Azure Arc-enabled Kubernetes](../kubernetes/conceptual-extensions.md). - 1. Create a custom location. To learn how, see [Custom locations on top of Azure Arc-enabled Kubernetes](../kubernetes/conceptual-custom-locations.md). - 1. Create the Azure Arc data controller. -- You can perform all three of these steps in a single step by using the Azure Arc data controller creation wizard in the Azure portal. --After you've installed the Azure Arc data controller, you can create and access data services such as SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL server. --## Known limitations -Currently, only one Azure Arc data controller per Kubernetes cluster is supported. 
However, you can create multiple Arc data services, such as Arc-enabled SQL managed instances and Arc-enabled PostgreSQL servers, that are managed by the same Azure Arc data controller. --## Related content --You have several additional options for creating the Azure Arc data controller: --> **Just want to try things out?** -> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on AKS, Amazon EKS, or GKE, or in an Azure VM. -> --- [Create a data controller in direct connectivity mode with the Azure portal](create-data-controller-direct-prerequisites.md)-- [Create a data controller in indirect connectivity mode with CLI](create-data-controller-indirect-cli.md)-- [Create a data controller in indirect connectivity mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)-- [Create a data controller in indirect connectivity mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)-- [Create a data controller in indirect connectivity mode with Kubernetes tools](create-data-controller-using-kubernetes-native-tools.md) |
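Before moving on to one of the data controller creation options above, it can help to run a quick pre-flight check that the prerequisites described earlier are in place. This is a minimal sketch, assuming the Azure CLI, the `arcdata` extension, and `kubectl` are already installed.

```azurecli
# Confirm the resource provider registration state (should report "Registered").
az provider show --namespace Microsoft.AzureArcData --query registrationState -o tsv

# Confirm the arcdata CLI extension is present.
az extension show --name arcdata --query version -o tsv

# Confirm kubectl points at the cluster you intend to deploy to.
kubectl config current-context
kubectl cluster-info
```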
azure-arc | Pod Scheduling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/pod-scheduling.md | - Title: Arc SQL Managed Instance pod scheduling -description: Describes how pods are scheduled for Azure Arc-enabled data services, and how you may configure them. ------ Previously updated : 07/07/2023----# Arc SQL Managed Instance pod scheduling --By default, SQL pods are scheduled with a preferred pod anti affinity between each other. This setting prefers that the pods are scheduled on different nodes, but does not require it. In a scenario where there are not enough nodes to place each pod on a distinct node, multiple pods are scheduled on a single node. Kubernetes does not reevaluate this decision until a pod is rescheduled. --This default behavior can be overridden using the scheduling options. Arc SQL Managed Instance has three controls for scheduling, which are located at `$.spec.scheduling` --## NodeSelector --The simplest control is node selector. The node selector simply specifies a label that the target nodes for an instance must have. The path of nodeSelector is `$.spec.scheduling.nodeSelector` and functions the same as any other Kubernetes nodeSelector property. (see: [Assign Pods to Nodes | Kubernetes](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#create-a-pod-that-gets-scheduled-to-your-chosen-node)) --## Affinity --Affinity is a feature in Kubernetes that allows fine-grained control over how pods are scheduled onto nodes within a cluster. There are many ways to leverage affinity in Kubernetes (see: [Assigning Pods to Nodes | Kubernetes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)). The same rules for applying affinities to traditional StatefulSets in Kubernetes apply in SQL MI. The exact same object model is used. ----The path of affinity in a deployment is `$.spec.template.spec.affinity`, whereas the path of affinity in SQL MI is `$.spec.scheduling.affinity`. --Here is a sample spec for a required pod anti affinity between replicas of a single SQL MI instance. The labels chosen in the labelSelector of the affinity term are automatically applied by the dataController based on the resource type and name, but the labelSelector could be changed to use any labels provided. 
---```yaml -apiVersion: sql.arcdata.microsoft.com/v13 -kind: SqlManagedInstance -metadata: - labels: - management.azure.com/resourceProvider: Microsoft.AzureArcData - name: sql1 - namespace: test -spec: - backup: - retentionPeriodInDays: 7 - dev: false - licenseType: LicenseIncluded - orchestratorReplicas: 1 - preferredPrimaryReplicaSpec: - preferredPrimaryReplica: any - primaryReplicaFailoverInterval: 600 - readableSecondaries: 1 - replicas: 3 - scheduling: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchLabels: - arc-resource: sqlmanagedinstance - controller: sql1 - topologyKey: kubernetes.io/hostname - default: - resources: - limits: - cpu: "4" - requests: - cpu: "4" - memory: 4Gi - - primary: - type: NodePort - readableSecondaries: - type: NodePort - storage: - data: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi - logs: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi - syncSecondaryToCommit: -1 - tier: BusinessCritical -``` --## TopologySpreadConstraints --Pod topology spread constraints control rules around how pods are spread across different groupings of nodes in a Kubernetes cluster. A cluster may have different node topology domains defined such as regions, zones, node pools, etc. A standard Kubernetes topology spread constraint can be applied at `$.spec.scheduling.topologySpreadConstraints` (see: [Pod Topology Spread Constraints | Kubernetes](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/)). --For instance: ---```yaml -apiVersion: sql.arcdata.microsoft.com/v13 -kind: SqlManagedInstance -metadata: - labels: - management.azure.com/resourceProvider: Microsoft.AzureArcData - name: sql1 - namespace: test -spec: - backup: - retentionPeriodInDays: 7 - dev: false - licenseType: LicenseIncluded - orchestratorReplicas: 1 - preferredPrimaryReplicaSpec: - preferredPrimaryReplica: any - primaryReplicaFailoverInterval: 600 - readableSecondaries: 1 - replicas: 3 - scheduling: - topologySpreadConstraints: - - maxSkew: 1 - topologyKey: kubernetes.io/hostname - whenUnsatisfiable: DoNotSchedule - labelSelector: - matchLabels: - name: sql1 -``` --> [!NOTE] -> These label selectors and constraints should be added or edited as part of the `SqlManagedInstance` custom resource spec, either during deployment or by editing it after deployment. -> It is not recommended to modify or edit the StatefulSet or pod spec for a SqlManagedInstance. These modifications could be lost after the next update/upgrade. |
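After applying scheduling settings like the examples above, you can confirm where the replicas actually landed. This is a minimal sketch, assuming the instance `sql1` in namespace `test` from the examples; the `controller=sql1` label is the one the data controller applies automatically, as described in the affinity section.

```console
# Show each replica pod and the node it was scheduled onto.
kubectl get pods -n test -l controller=sql1 -o wide

# Describe a pod to see scheduling events if a replica is stuck in Pending.
kubectl describe pod sql1-0 -n test
```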
azure-arc | Point In Time Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md | - Title: Restore a database in SQL Managed Instance enabled by Azure Arc to a previous point-in-time -description: Explains how to restore a database to a specific point-in-time on SQL Managed Instance enabled by Azure Arc. ------- Previously updated : 06/17/2022----# Perform a point-in-time Restore --Use the point-in-time restore (PITR) to create a database as a copy of another database from some time in the past that is within the retention period. This article describes how to do a point-in-time restore of a database in SQL Managed Instance enabled by Azure Arc. --Point-in-time restore can restore a database: --- From an existing database-- To a new database on the same SQL Managed Instance enabled by Azure Arc--You can restore a database to a point-in-time within a pre-configured retention setting. -You can check the retention setting for a SQL Managed Instance enabled by Azure Arc as follows: --For **Direct** connected mode: --```azurecli -az sql mi-arc show --name <SQL instance name> --resource-group <resource-group> -#Example -az sql mi-arc show --name sqlmi --resource-group myresourcegroup -``` --For **Indirect** connected mode: --```azurecli -az sql mi-arc show --name <SQL instance name> --k8s-namespace <SQL MI namespace> --use-k8s -#Example -az sql mi-arc show --name sqlmi --k8s-namespace arc --use-k8s -``` --Currently, point-in-time restore can restore a database: --- From an existing database on an instance-- To a new database on the same instance--## Automatic Backups --SQL Managed Instance enabled by Azure Arc has built-in automatic backups feature enabled. Whenever you create or restore a new database, SQL Managed Instance enabled by Azure Arc initiates a full backup immediately and schedules differential and transaction log backups automatically. SQL managed instance stores these backups in the storage class specified during the deployment. --Point-in-time restore enables a database to be restored to a specific point-in-time, within the retention period. To restore a database to a specific point-in-time, Azure Arc-enabled data services applies the backup files in a specific order. For example: --1. Full backup -2. Differential backup -3. One or more transaction log backups ---Currently, full backups are taken once a week, differential backups are taken every 12 hours and transaction log backups every 5 minutes. --## Retention Period --The default retention period for a new SQL Managed Instance enabled by Azure Arc is seven days, and can be adjusted with values of 0, or 1-35 days. The retention period can be set during deployment of the SQL managed instance by specifying the `--retention-days` property. Backup files older than the configured retention period are automatically deleted. ---## Create a database from a point-in-time using az CLI --```azurecli -az sql midb-arc restore --managed-instance <SQL managed instance> --name <source DB name> --dest-name <Name for new db> --k8s-namespace <namespace of managed instance> --time "YYYY-MM-DDTHH:MM:SSZ" --use-k8s -#Example -az sql midb-arc restore --managed-instance sqlmi1 --name Testdb1 --dest-name mynewdb --k8s-namespace arc --time "2021-10-29T01:42:14.00Z" --use-k8s -``` --You can also use the `--dry-run` option to validate your restore operation without actually restoring the database. 
--```azurecli -az sql midb-arc restore --managed-instance <SQL managed instance> --name <source DB name> --dest-name <Name for new db> --k8s-namespace <namespace of managed instance> --time "YYYY-MM-DDTHH:MM:SSZ" --use-k8s --dry-run -#Example -az sql midb-arc restore --managed-instance sqlmi1 --name Testdb1 --dest-name mynewdb --k8s-namespace arc --time "2021-10-29T01:42:14.00Z" --use-k8s --dry-run -``` --## Create a database from a point-in-time using kubectl --1. To perform a point-in-time restore with Kubernetes native tools, you can use `kubectl`. Create a task spec yaml file. For example: -- ```yaml - apiVersion: tasks.sql.arcdata.microsoft.com/v1 - kind: SqlManagedInstanceRestoreTask - metadata: - name: myrestoretask20220304 - namespace: test - spec: - source: - name: miarc1 - database: testdb - restorePoint: "2021-10-12T18:35:33Z" - destination: - name: miarc1 - database: testdb-pitr - dryRun: false - ``` --1. Edit the properties as follows: -- 1. `name:` Unique string for each custom resource (CR). Required by Kubernetes. - 1. `namespace:` Kubernetes namespace where instance is. - 1. `source: ... name:` Name of the source instance. - 1. `source: ... database:` Name of source database where the restore would be applied from. - 1. `restorePoint:` Point-in-time for the restore operation in UTC datetime. - 1. `destination: ... name:` Name of the destination Arc-enabled SQL managed instance. Currently, point-in-time restore is only supported within the Arc SQL managed instance. This should be same as the source SQL managed instance. - 1. `destination: ... database:` Name of the new database where the restore would be applied to. --1. Create a task to start the point-in-time restore. The following example initiates the task defined in `myrestoretask20220304.yaml`. --- ```console - kubectl apply -f myrestoretask20220304.yaml - ``` --1. Check restore task status as follows: -- ```console - kubectl get sqlmirestoretask -n <namespace> - ``` --Restore task status will be updated about every 10 seconds based on the PITR progress. The status progresses from `Waiting` to `Restoring` to `Completed` or `Failed`. --## Create a database from a point-in-time using Azure Data Studio --You can also restore a database to a point-in-time from Azure Data Studio as follows: -1. Launch Azure Data studio -2. Ensure you have the required Arc extensions as described in [Tools](install-client-tools.md). -3. Connect to the Azure Arc data controller -4. Expand the data controller node, right-click on the instance and select **Manage**. Azure Data Studio launches the SQL managed instance dashboard. -5. Click on the **Backups** tab in the dashboard -6. You should see a list of databases on the SQL managed instance and their Earliest and Latest restore time windows, and an icon to initiate the **Restore** -7. Click on the icon for the database you want to restore from. Azure Data Studio launches a blade towards the right side -8. Provide the required input in the blade and click on **Restore** --### Monitor progress --When a restore is initiated, a task is created in the Kubernetes cluster that executes the actual restore operations of full, differential, and log backups. The progress of this activity can be monitored from your Kubernetes cluster as follows: --```console -kubectl get sqlmirestoretask -n <namespace> -#Example -kubectl get sqlmirestoretask -n arc -``` --You can get more details of the task by running `kubectl describe` on the task. 
For example: --```console -kubectl describe sqlmirestoretask <nameoftask> -n <namespace> -``` --## Configure Retention period --The Retention period for a SQL Managed Instance enabled by Azure Arc can be reconfigured from their original setting as follows: --> [!WARNING] -> If you reduce the current retention period, you lose the ability to restore to points in time older than the new retention period. Backups that are no longer needed to provide PITR within the new retention period are deleted. If you increase the current retention period, you do not immediately gain the ability to restore to older points in time within the new retention period. You gain that ability over time, as the system starts to retain backups for longer. ----The `--retention-period` can be changed for a SQL Managed Instance-Azure Arc as follows. The below command applies to both `direct` and `indirect` connected modes. ---```azurecli -az sql mi-arc update --name <SQLMI name> --k8s-namespace <namespace> --use-k8s --retention-days <retentiondays> -``` --For example: --```azurecli -az sql mi-arc update --name sqlmi --k8s-namespace arc --use-k8s --retention-days 10 -``` --## Disable Automatic backups --You can disable the built-in automated backups for a specific instance of SQL Managed Instance enabled by Azure Arc by setting the `--retention-days` property to 0, as follows. The below command applies to both ```direct``` and ```indirect``` modes. --> [!WARNING] -> If you disable Automatic Backups for a SQL Managed Instance enabled by Azure Arc, then any Automatic Backups configured will be deleted and you lose the ability to do a point-in-time restore. You can change the `retention-days` property to re-initiate automatic backups if needed. ---```azurecli -az sql mi-arc update --name <SQLMI name> --k8s-namespace <namespace> --use-k8s --retention-days 0 -``` --For example: -```azurecli -az sql mi-arc update --name sqlmi --k8s-namespace arc --use-k8s --retention-days 0 -``` --## Monitor backups --The backups are stored under `/var/opt/mssql/backups/archived/<dbname>/<datetime>` folder, where `<dbname>` is the name of the database and `<datetime>` would be a timestamp in UTC format, for the beginning of each full backup. Each time a full backup is initiated, a new folder would be created with the full back and all subsequent differential and transaction log backups inside that folder. The most current full backup and its subsequent differential and transaction log backups are stored under `/var/opt/mssql/backups/current/<dbname><datetime>` folder. --## Limitations --Point-in-time restore to SQL Managed Instance enabled by Azure Arc has the following limitations: --- Point-in-time restore is database level feature, not an instance level feature. You cannot restore the entire instance with Point-in-time restore.-- You can only restore to the same SQL Managed Instance enabled by Azure Arc from where the backup was taken.--## Related content --[Learn more about Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
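When scripting a point-in-time restore, the `--time` value must be a UTC timestamp in the format shown above. The following is a minimal sketch, assuming GNU `date` (Linux) and the example instance and database names used earlier in this article.

```console
# Compute a restore point 15 minutes in the past, in the expected UTC format.
RESTORE_POINT=$(date -u -d '15 minutes ago' +"%Y-%m-%dT%H:%M:%SZ")

# Validate the operation first with --dry-run, then run the actual restore.
az sql midb-arc restore --managed-instance sqlmi1 --name Testdb1 --dest-name mynewdb \
  --k8s-namespace arc --time "$RESTORE_POINT" --use-k8s --dry-run
az sql midb-arc restore --managed-instance sqlmi1 --name Testdb1 --dest-name mynewdb \
  --k8s-namespace arc --time "$RESTORE_POINT" --use-k8s
```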
azure-arc | Preview Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md | - Title: Azure Arc-enabled data services - Pre-release testing -description: Experience pre-release versions of Azure Arc-enabled data services ------ Previously updated : 12/06/2023---#Customer intent: As a data professional, I want to validate upcoming releases. ---# Pre-release testing --To provide an opportunity for customers and partners to provide pre-release feedback, pre-release versions of Azure Arc-enabled data services are made available on a predictable schedule. This article describes how to install pre-release versions of Azure Arc-enabled data services and provide feedback to Microsoft. --## Pre-release testing schedule --Each month, Azure Arc-enabled data services is released on the second Tuesday of the month, commonly known as "Patch Tuesday". The pre-release versions are made available on a predictable schedule in alignment with that release date. --- 14 days before the release date, the *test* pre-release version is made available.-- 7 days before the release date, the *preview* pre-release version is made available.--Normally, the main difference between the test and preview pre-release versions is quality and stability, but in some exceptional cases there may be new features introduced in between the test and preview releases. --Normally, pre-release version binaries are available around 10:00 AM Pacific Time. Documentation follows later in the day. --## Artifacts for a pre-release version --Pre-release versions simultaneously release with artifacts, which are designed to work together: --- Container images hosted on the Microsoft Container Registry (MCR)- - `mcr.microsoft.com/arcdata/test` is the repository that hosts the **test** pre-release builds - - `mcr.microsoft.com/arcdata/preview` is the repository that hosts the **preview** pre-release builds - - > [!NOTE] - > `mcr.microsoft.com/arcdata/` will continue to be the repository that hosts the final release builds. - - - Azure CLI extension hosted on Azure Blob Storage - - Azure Data Studio extension hosted on Azure Blob Storage --In addition to the above installable artifacts, the following are updated in Azure as needed: --- New version of ARM API (occasionally)-- New Azure portal accessible via a special URL query string parameter (see below for details)-- New Arc-enabled Kubernetes extension version for Arc-enabled data services (applies to direct connectivity mode only)-- Documentation updates on this page describing the location and details of the above artifacts and the new features available and any pre-release "read me" documentation--## Installing pre-release versions --### Install prerequisite tools --To install a pre-release version, follow these pre-requisite instructions: --If you use the Azure CLI extension: --1. Uninstall the Azure CLI extension (`az extension remove -n arcdata`). -1. Download the latest pre-release Azure CLI extension `.whl` file from the link in the [Current preview release information](#current-preview-release-information). -1. Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`). --If you use the Azure Data Studio extension to install: --1. Uninstall the Azure Data Studio extension. Select the Extensions panel and select on the **Azure Arc** extension, select **Uninstall**. -1. 
Download the latest pre-release Azure Data Studio extension .vsix files from the links in the [Current preview release information](#current-preview-release-information). -1. Install the extensions. Choose **File** > **Install Extension from VSIX package**. Locate the download location of the .vsix files. Install the `azcli` extension first and then `arc`. --### Install using Azure CLI --To install with the Azure CLI, follow the steps for your connectivity mode: --- [Indirect connectivity mode](#indirect-connectivity-mode)-- [Direct connectivity mode](#direct-connectivity-mode)--#### Indirect connectivity mode --1. Set environment variables. Set variables for: - - Docker registry - - Docker repository - - Docker image tag - - Docker image policy -- Use the example script below to set environment variables for your respective platform. -- # [Linux](#tab/linux) -- ```console - ## variables for the docker registry, repository, and image - export DOCKER_REGISTRY=<Docker registry> - export DOCKER_REPOSITORY=<Docker repository> - export DOCKER_IMAGE_TAG=<Docker image tag> - export DOCKER_IMAGE_POLICY=<Docker image policy> - ``` -- # [Windows (PowerShell)](#tab/windows) -- ```PowerShell - ## variables for Metrics and Monitoring dashboard credentials - $ENV:DOCKER_REGISTRY="<Docker registry>" - $ENV:DOCKER_REPOSITORY="<Docker repository>" - $ENV:DOCKER_IMAGE_TAG="<Docker image tag>" - $ENV:DOCKER_IMAGE_POLICY="<Docker image policy>" - ``` - --1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md). -1. Use the command `az arcdata dc create` as explained in [create a custom configuration profile](create-custom-configuration-template.md). --#### Direct connectivity mode --If you install using the Azure CLI: --1. Set environment variables. Set variables for: - - Docker registry - - Docker repository - - Docker image tag - - Docker image policy - - Arc data services extension version tag (`ARC_DATASERVICES_EXTENSION_VERSION_TAG`): Use the version of the **Arc enabled Kubernetes helm chart extension version** from the release details under [Current preview release information](#current-preview-release-information). - - Arc data services release train: `ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`: `{ test | preview }`. -- Use the example script below to set environment variables for your respective platform. -- # [Linux](#tab/linux) -- ```console - ## variables for the docker registry, repository, and image - export DOCKER_REGISTRY=<Docker registry> - export DOCKER_REPOSITORY=<Docker repository> - export DOCKER_IMAGE_TAG=<Docker image tag> - export DOCKER_IMAGE_POLICY=<Docker image policy> - export ARC_DATASERVICES_EXTENSION_VERSION_TAG=<Version tag> - export ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN='preview' - ``` -- # [Windows (PowerShell)](#tab/windows) -- ```PowerShell - ## variables for Metrics and Monitoring dashboard credentials - $ENV:DOCKER_REGISTRY="<Docker registry>" - $ENV:DOCKER_REPOSITORY="<Docker repository>" - $ENV:DOCKER_IMAGE_TAG="<Docker image tag>" - $ENV:DOCKER_IMAGE_POLICY="<Docker image policy>" - $ENV:ARC_DATASERVICES_EXTENSION_VERSION_TAG="<Version tag>" - $ENV:ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN="preview" - ``` - --1. Run `az arcdata dc create` as normal for the direct mode to: -- - Create the extension, if it doesn't already exist - - Create the custom location, if it doesn't already exist - - Create data controller -- For details see, [create a custom configuration profile](create-custom-configuration-template.md). 
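To confirm that a direct-mode deployment picked up the intended pre-release build, you can inspect the Arc data services extension on the connected cluster. This is a minimal sketch; the extension name, cluster name, and resource group are placeholders, and the queried property names are assumptions to verify against your CLI version.

```azurecli
# Show the release train and version of the Arc data services cluster extension.
az k8s-extension show --name <extension name> --cluster-name <cluster name> \
  --resource-group <resource group> --cluster-type connectedClusters \
  --query "{releaseTrain:releaseTrain, version:version}" -o table
```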
--### Install using Azure Data Studio --> [!NOTE] -> Deploying pre-release builds using direct connectivity mode from Azure Data Studio is not supported. --You can install with Azure Data Studio (ADS) in indirect connectivity mode. To use Azure Data Studio to install: --1. Complete the data controller deployment wizard as normal except click on **Script to notebook** at the end instead of **Deploy**. -1. Update the following script. Replace `{ test | preview }` with the appropriate label. -1. In the generated notebook, edit the `Set variables` cell to *add* the following lines: -- ```python - # choose between arcdata/test or arcdata/preview as appropriate - os.environ["AZDATA_DOCKER_REPOSITORY"] = "{ test | preview }" - os.environ["AZDATA_DOCKER_TAG"] = "{ Current preview tag } - ``` --1. Run the notebook, click **Run All**. --### Install using Azure portal --1. Follow the instructions to [Arc-enabled the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal. -1. Open the Azure portal for the appropriate preview version: -- - **Test**: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=test#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=test#home) - - **Preview**: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home). --1. Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. -1. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate. Enter the desired tag in the **Image tag** field. -1. Fill out the rest of the custom cluster configuration template fields as normal. --Complete the rest of the wizard as normal. --When you deploy with this method, the most recent pre-release version will always be used. --## Current preview release information ---## Provide feedback --At this time, pre-release testing is supported for certain customers and partners that have established agreements with Microsoft. Participants have points of contact on the product engineering team. Email your points of contact with any issues that are found during pre-release testing. --## Related content --[Release notes - Azure Arc-enabled data services](release-notes.md) |
azure-arc | Privacy Data Collection And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md | - Title: Data collection and reporting | Azure Arc-enabled data services -description: Explains the type of data that is transmitted by Azure Arc-enabled data services to Microsoft. ------ Previously updated : 07/30/2021----# Azure Arc-enabled data services data collection and reporting --This article describes the data that Azure Arc-enabled data services transmit to Microsoft. --Neither Azure Arc-enabled data services nor any of the applicable data services store any customer data. This applies to: --- SQL Managed Instance enabled by Azure Arc-- Azure Arc-enabled PostgreSQL--## Azure Arc-enabled data services --Azure Arc-enabled data services may use some or all of the following products: --- SQL Managed Instance enabled by Azure Arc -- Azure Arc-enabled PostgreSQL-- Azure Data Studio-- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --- Azure CLI (az)--### Directly connected --When a cluster is configured to be directly connected to Azure, some data is automatically transmitted to Microsoft. --The following table describes the type of data, how it's sent, and whether it's required. --|Data category|What data is sent?|How is it sent?|Is it required?| -|:-|:-|:-|:-| -|Operational Data|Metrics and logs|Automatically, when configured to do so|No| -|Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Automatically|Yes| -|Diagnostics|Diagnostic information for troubleshooting purposes|Manually exported and provided to Microsoft Support|Only for the scope of troubleshooting and follows the standard [privacy policies](https://privacy.microsoft.com/privacystatement)| --### Indirectly connected --When a cluster is not configured to be directly connected to Azure, it does not automatically transmit operational data or billing and inventory data to Microsoft. To transmit data to Microsoft, you need to configure the export. --The following table describes the type of data, how it's sent, and whether it's required. --|Data category|What data is sent?|How is it sent?|Is it required?| -|:-|:-|:-|:-| -|Operational Data|Metrics and logs|Manually|No| -|Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Manually|Yes| -|Diagnostics|Diagnostic information for troubleshooting purposes|Manually exported and provided to Microsoft Support|Only for the scope of troubleshooting and follows the standard [privacy policies](https://privacy.microsoft.com/privacystatement)| ---## Operational data --Operational data is collected for all database instances and for the Azure Arc-enabled data services platform itself. There are two types of operational data: --- Metrics – Performance and capacity related metrics, which are collected in an InfluxDB database provided as part of Azure Arc-enabled data services. You can view these metrics in the provided Grafana dashboard. --- Logs – Records emitted by all components, including failure, warning, and informational events, which are collected in an OpenSearch database provided as part of Azure Arc-enabled data services. You can view the logs in the provided Kibana dashboard. Prior to the May 2023 release, the log database used Elasticsearch. Thereafter, it uses OpenSearch. --Viewing the operational data stored locally in Grafana/Kibana requires built-in administrative privileges.
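To see where that locally stored operational data is exposed, you can list the data controller endpoints, which include the Grafana (metrics) and Kibana/OpenSearch Dashboards (logs) URLs. A minimal sketch, assuming the `arcdata` Azure CLI extension is installed and the data controller namespace is `arc`; adjust for your environment:

```console
## lists the service endpoints for the data controller, including the metrics and logs dashboards
az arcdata dc endpoint list --k8s-namespace arc --use-k8s --output table
```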
--The operational data does not leave your environment unless you choose to export/upload it (indirectly connected mode) or automatically send it (directly connected mode) to Azure Monitor/Log Analytics. The data goes into a Log Analytics workspace, which you control. --If the data is sent to Azure Monitor or Log Analytics, you can choose which Azure region or datacenter the Log Analytics workspace resides in. After that, you control access to view or copy the data from other locations. --## Inventory data --The collected inventory data is represented by several Azure resource types. Every database instance and the data controller itself are reflected in Azure as resources in Azure Resource Manager. --There are three resource types: --- SQL Managed Instance enabled by Azure Arc -- Azure Arc-enabled PostgreSQL server -- Data controller--The following sections show the properties, types, and descriptions that are collected and stored about each type of resource: --### SQL Server - Azure Arc --| Description | Property name | Property type| -|:--|:--|:--| -| Computer name | name | string | -| SQL Server instance name | instanceName | string | -| SQL Server version | version | string | -| SQL Server edition | edition | string | -| Containing server resource ID | containerResourceId | string | -| Virtual cores | vCore | string | -| Connectivity status | status | string | -| SQL Server patch level | patchLevel | string | -| Collation | collation | string | -| Current version | currentVersion | string | -| TCP dynamic ports | tcpDynamicPorts | string | -| TCP static ports | tcpStaticPorts | string | -| Product ID | productId | string | -| License type | licenseType | string | -| Microsoft Defender status | azureDefenderStatus | string | -| Microsoft Defender status last updated | azureDefenderStatusLastUpdated | string | -| Provisioning state | provisioningState | string | --The following JSON document is an example of the SQL Server - Azure Arc resource.
--```json -{ - - "name": "SQL22-EE_PAYGTEST", - "version": "SQL Server 2022", - "edition": "Enterprise", - "containerResourceId": "/subscriptions/a5082b19-8a6e-4bc5-8fdd-8ef39dfebc39/resourcegroups/sashan-arc-eastasia/providers/Microsoft.HybridCompute/machines/SQL22-EE", - "vCore": "8", - "status": "Connected", - "patchLevel": "16.0.1000.6", - "collation": "SQL_Latin1_General_CP1_CI_AS", - "currentVersion": "16.0.1000.6", - "instanceName": "PAYGTEST", - "tcpDynamicPorts": "61394", - "tcpStaticPorts": "", - "productId": "00488-00010-05000-AB944", - "licenseType": "PAYG", - "azureDefenderStatusLastUpdated": "2023-02-08T07:57:37.5597421Z", - "azureDefenderStatus": "Protected", - "provisioningState": "Succeeded" -} -``` --### SQL Server database - Azure Arc --| Description | Property name | Property type| -|:--|:--|:--| -| Database name | name | string | -| Collation | collationName | string | -| Database creation date | databaseCreationDate | System.DateTime | -| Compatibility level | compatibilityLevel | string | -| Database state | state | string | -| Read-only mode | isReadOnly | boolean | -| Recovery mode | recoveryMode | boolean | -| Auto close enabled | isAutoCloseOn | boolean | -| Auto shrink enabled | isAutoShrinkOn | boolean | -| Auto create stats enabled | isAutoCreateStatsOn | boolean | -| Auto update stats enabled | isAutoUpdateStatsOn | boolean | -| Remote data archive enabled | isRemoteDataArchiveEnabled | boolean | -| Memory optimization enabled | isMemoryOptimizationEnabled | boolean | -| Encryption enabled | isEncrypted | boolean | -| Trustworthy mode enabled | isTrustworthyOn | boolean | -| Backup information | backupInformation | | -| Provisioning state | provisioningState | string | --The following JSON document is an example of the SQL Server database - Azure Arc resource. --```json -{ - "name": "newDb80", - "collationName": "SQL_Latin1_General_CP1_CI_AS", - "databaseCreationDate": "2023-01-09T03:40:45Z", - "compatibilityLevel": 150, - "state": "Online", - "isReadOnly": false, - "recoveryMode": "Full", - "databaseOptions": { - "isAutoCloseOn": false, - "isAutoShrinkOn": false, - "isAutoCreateStatsOn": true, - "isAutoUpdateStatsOn": true, - "isRemoteDataArchiveEnabled": false, - "isMemoryOptimizationEnabled": true, - "isEncrypted": false, - "isTrustworthyOn": false - }, - "backupInformation": {}, - "provisioningState": "Succeeded" -} -``` --### Azure Arc data controller --| Description | Property name | Property type| -|:--|:--|:--| -| Location information | OnPremiseProperty | public: OnPremiseProperty | -| The raw Kubernetes information (`kubectl get datacontroller`) | K8sRaw | object | -| Last uploaded date from on-premises cluster | LastUploadedDate | System.DateTime | -| Data controller state | ProvisioningState | string | --The following JSON document is an example of the Azure Arc Data Controller resource.
------```json -{ - "id": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/dataControllers/contosodc", - "name": "contosodc", - "type": "microsoft.azurearcdata/datacontrollers", - "location": "eastus", - "extendedLocation": { - "name": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso", - "type": "CustomLocation" - }, - "tags": {}, - "systemData": { - "createdBy": "contosouser@contoso.com", - "createdByType": "User", - "createdAt": "2023-01-03T21:35:36.8412132Z", - "lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05", - "lastModifiedByType": "Application", - "lastModifiedAt": "2023-02-15T17:13:26.6429039Z" - }, - "properties": { - "infrastructure": "azure", - "onPremiseProperty": { - "id": "4eb0a7a5-5ed6-4463-af71-12590b2fad5d", - "publicSigningKey": "MIIDWzCCAkOgAwIBAgIIA8OmTJKpD8AwDQYJKoZIhvcNAQELBQAwKDEmMCQGA1UEAxMdQ2x1c3RlciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMjMwMTAzMjEzNzUxWhcNMjgwMTAyMjEzNzUxWjAaMRgwFgYDVQQDEw9iaWxsaW5nLXNpZ25pbmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3rAuXaXIeaipFiqGW5rtkdq/1+S58CRMEkANHvwFnimXEWIt8VnbG9foIm20r0RK+6XeRpn5r92jrOl/3R4Q9AAiF3Tgzy3NF9Dg9OsKo1bnrfWHMxmyX2w8TxyZSvWKEUVpVhjhqyhy/cqSJA5ASjEtthMx4Q1HTVcEDSTfnPHPz9EhfZqZ6ES3Yqun2D9MIatkSUpjHJbqYwRTzzrsPG84hJX7EGAWntvEzzCjmTUsouShEwUhi8c05CLBwzF5bxDNLhTdy+tj2ZyUzL7R+BmifwPR9jvOziYPlrbgIIs77sPbNlZjZvMeeBaJHktWZ0s8/UpUpV1W69m7hT2gbAgMBAAGjgZYwgZMwIAYDVR0lAQH/BBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMA4GA1UdDwEB/wQEAwIFoDBfBgNVHREEWDBWgg5jb250cm9sbGVyLXN2Y4IoY29udHJvbGxlci1zdmMuY29udG9zby5zdmMuY2x1c3Rlci5sb2NhbIIaY29udHJvbGxlci1zdmMuY29udG9zby5zdmMwDQYJKoZIhvcNAQELBQADggEBADcZNIZcDDUC79ElbRrXdbHo9bUUv/NJfY7Dx226jc8j0AdDq8MbHAnt+JiMH6+GDb88avleA448yZ9ujBP9zC8v8IyaWu4vQpPT7MagzlsAhb6VEWU0FQfM6R14WwbATWSOIwDlMn4I33mZULyJdZhk4TqzqTQ8F0I3TavHh8TWBbjnwg1IhR/8TQ9HfgceoI80SBE3BDI5at/CzYgoWcWS2pzfd3QYwD8DIPVLCdcx1LNSDjdlQCQTKal0yKMauGIzMuYpCF1M6Z0LunPU/Ns96T9mqLXJHu+wmAoJ2CwdXa4FruwTSgrQlY3pokjTMwGaP3uzpnCSI7ykvi5kp4Q=", - "signingCertificateThumbprint": "8FB48D0DD44DCFB25ECC13B9CB5F493F5438D38C" - }, - "k8sRaw": { - "kind": "DataController", - "spec": { - "credentials": { - "dockerRegistry": "arc-private-registry", - "domainServiceAccount": "domain-service-account-secret", - "serviceAccount": "sa-arc-controller" - }, - "security": { - "allowDumps": true, - "allowNodeMetricsCollection": true, - "allowPodMetricsCollection": true - }, - "services": [ - { - "name": "controller", - "port": 30080, - "serviceType": "LoadBalancer" - } - ], - "settings": { - "ElasticSearch": { - "vm.max_map_count": "-1" - }, - "azure": { - "autoUploadMetrics": "true", - "autoUploadLogs": "false", - "subscription": "7894901a-dfga-rf4d-85r4-cc1234459df2", - "resourceGroup": "contoso-rg", - "location": "eastus", - "connectionMode": "direct" - }, - "controller": { - "logs.rotation.days": "7", - "logs.rotation.size": "5000", - "displayName": "contosodc" - } - }, - "storage": { - "data": { - "accessMode": "ReadWriteOnce", - "className": "managed-premium", - "size": "15Gi" - }, - "logs": { - "accessMode": "ReadWriteOnce", - "className": "managed-premium", - "size": "10Gi" - } - }, - "infrastructure": "azure", - "docker": { - "registry": "mcr.microsoft.com", - "imageTag": "v1.14.0_2022-12-13", - "repository": "arcdata", - "imagePullPolicy": "Always" - } - }, - "metadata": { - "namespace": "contoso", - "name": "contosodc", - "annotations": { - "management.azure.com/apiVersion": "2022-03-01-preview", - 
"management.azure.com/cloudEnvironment": "AzureCloud", - "management.azure.com/correlationId": "aa531c88-6dfb-46c3-af5b-d93f7eaaf0f6", - "management.azure.com/customLocation": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso", - "management.azure.com/location": "eastus", - "management.azure.com/operationId": "265b98a7-0fc2-4dce-9cef-26f9b6dd000c*705EDFCA81D01028EFA1C3E9CB3CEC2BF472F25894ACB2FFDF955711236F486D", - "management.azure.com/resourceId": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/dataControllers/contosodc", - "management.azure.com/systemData": "{\"createdBy\":\"9c1a17be-338f-4b3c-90e9-55eb526c5aef\",\"createdByType\":\"User\",\"createdAt\":\"2023-01-03T21:35:36.8412132Z\",\"resourceUID\":\"74087467-4f98-4a23-bacf-a1e40404457f\"}", - "management.azure.com/tenantId": "123488bf-8asd-41wf-91ab-211kl345db47", - "traceparent": "00-197d885376f938d6138babf8ed4d809c-1a584b84b3c8f5df-01" - }, - "creationTimestamp": "2023-01-03T21:35:42Z", - "generation": 2, - "resourceVersion": "15446366", - "uid": "4eb0a7a5-5ed6-4463-af71-12590b2fad5d" - }, - "apiVersion": "arcdata.microsoft.com/v5", - "status": { - "observedGeneration": 2, - "state": "Ready", - "azure": { - "uploadStatus": { - "logs": { - "lastUploadTime": "0001-01-01T00:00:00Z", - "message": "Automatic upload of logs is disabled. Execution time: 02/15/2023 17:07:57" - }, - "metrics": { - "lastUploadTime": "2023-02-15T17:00:57.047934Z", - "message": "Success" - }, - "usage": { - "lastUploadTime": "2023-02-15T17:07:53.843439Z", - "message": "Success. Records uploaded: 1." - } - } - }, - "lastUpdateTime": "2023-02-15T17:07:57.587925Z", - "runningVersion": "v1.14.0_2022-12-13", - "arcDataServicesK8sExtensionLatestVersion": "v1.16.0", - "registryVersions": { - "available": [ - "v1.16.0_2023-02-14", - "v1.15.0_2023-01-10" - ], - "behind": 2, - "current": "v1.14.0_2022-12-13", - "latest": "v1.16.0_2023-02-14", - "next": "v1.15.0_2023-01-10", - "previous": "v1.13.0_2022-11-08" - } - } - }, - "provisioningState": "Succeeded" - } -} -``` ----### PostgreSQL server - Azure Arc --| Description | Property name | Property type| -|:--|:--|:--| -| The data controller ID | DataControllerId | string | -| The instance admin name | Admin | string | -| Username and password for basic authentication | BasicLoginInformation | public: BasicLoginInformation | -| The raw Kubernetes information (`kubectl get postgres12`) | K8sRaw | object | -| Last uploaded date from on premises cluster | LastUploadedDate | System.DateTime | -| Group provisioning state | ProvisioningState | string | --### SQL managed instance - Azure Arc --| Description | Property name | Property type| -|:--|:--|:--| -| The managed instance ID | DataControllerId | string | -| The instance admin username | Admin | string | -| The instance start time | StartTime | string | -| The instance end time | EndTime | string | -| The raw kubernetes information (`kubectl get sqlmi`) | K8sRaw | object | -| Username and password for basic authentication | BasicLoginInformation | BasicLoginInformation | -| Last uploaded date from on-premises cluster | LastUploadedDate | System.DateTime | -| SQL managed instance provisioning state | ProvisioningState | string | ---The following JSON document is an example of the SQL Managed Instance - Azure Arc resource. 
------```json --{ - "id": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/sqlManagedInstances/sqlmi1", - "name": "sqlmi1", - "type": "microsoft.azurearcdata/sqlmanagedinstances", - "sku": { - "name": "vCore", - "tier": "BusinessCritical" - }, - "location": "eastus", - "extendedLocation": { - "name": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourcegroups/contoso-rg/providers/microsoft.extendedlocation/customlocations/contoso", - "type": "CustomLocation" - }, - "tags": {}, - "systemData": { - "createdBy": "contosouser@contoso.com", - "createdByType": "User", - "createdAt": "2023-01-04T01:33:57.5232885Z", - "lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05", - "lastModifiedByType": "Application", - "lastModifiedAt": "2023-02-15T01:39:11.6582399Z" - }, - "properties": { - "dataControllerId": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/dataControllers/contosodc", - "admin": "sqladmin", - "k8sRaw": { - "spec": { - "scheduling": { - "default": { - "resources": { - "requests": { - "cpu": "2", - "memory": "4Gi" - }, - "limits": { - "cpu": "2", - "memory": "4Gi" - } - } - } - }, - "replicas": 2, - "dev": true, - "services": { - "primary": { - "type": "LoadBalancer" - }, - "readableSecondaries": {} - }, - "readableSecondaries": 1, - "syncSecondaryToCommit": 0, - "storage": { - "data": { - "volumes": [ - { - "size": "5Gi" - } - ] - }, - "logs": { - "volumes": [ - { - "size": "5Gi" - } - ] - }, - "datalogs": { - "volumes": [ - { - "size": "5Gi" - } - ] - }, - "backups": { - "volumes": [ - { - "className": "azurefile", - "size": "5Gi" - } - ] - } - }, - "security": { - "adminLoginSecret": "sqlmi1-login-secret" - }, - "tier": "BusinessCritical", - "update": {}, - "backup": { - "retentionPeriodInDays": 7 - }, - "licenseType": "LicenseIncluded", - "orchestratorReplicas": 1, - "parentResource": { - "apiGroup": "arcdata.microsoft.com", - "kind": "DataController", - "name": "contosodc", - "namespace": "contoso" - }, - "settings": { - "collation": "SQL_Latin1_General_CP1_CI_AS", - "language": { - "lcid": 1033 - }, - "network": { - "forceencryption": 0, - "tlsciphers": "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384", - "tlsprotocols": "1.2" - }, - "sqlagent": { - "enabled": false - }, - "timezone": "UTC" - } - }, - "metadata": { - "annotations": { - "management.azure.com/apiVersion": "2022-03-01-preview", - "management.azure.com/cloudEnvironment": "AzureCloud", - "management.azure.com/correlationId": "3a49178d-a09f-48d3-9292-3133f6591743", - "management.azure.com/customLocation": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/microsoft.extendedlocation/customlocations/contoso", - "management.azure.com/location": "eastus", - "management.azure.com/operationId": "dbf2e708-78da-4762-8fd5-75ba43721b24*4C234309E6735F28E751F5734D64E8F98A910A88E54A1AD35C6469BCD0E6EA84", - "management.azure.com/resourceId": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/sqlManagedInstances/sqlmi1", - "management.azure.com/systemData": "{\"createdBy\":\"9c1a17be-338f-4b3c-90e9-55eb526c5aef\",\"createdByType\":\"User\",\"createdAt\":\"2023-01-04T01:33:57.5232885Z\",\"resourceUID\":\"40fa8b55-4b7d-4d6a-b783-043169d7fd03\"}", - 
"management.azure.com/tenantId": "123488bf-8asd-41wf-91ab-211kl345db47", - "traceparent": "00-3c07cf4caa8b4778591b02b1bf3979ef-f2ee2c890c21ea8a-01" - }, - "creationTimestamp": "2023-01-04T01:34:03Z", - "generation": 1, - "labels": { - "management.azure.com/resourceProvider": "Microsoft.AzureArcData" - }, - "name": "sqlmi1", - "namespace": "contoso", - "resourceVersion": "15215035", - "uid": "6d653cd8-f17e-437a-b0dc-48154164c1ad" - }, - "status": { - "lastUpdateTime": "2023-02-15T01:39:07.691211Z", - "observedGeneration": 1, - "readyReplicas": "2/2", - "roles": { - "sql": { - "replicas": 2, - "lastUpdateTime": "2023-02-14T11:37:14.875705Z", - "readyReplicas": 2 - } - }, - "state": "Ready", - "endpoints": { - "logSearchDashboard": "https://230.41.13.18:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi1'))", - "metricsDashboard": "https://230.41.13.18:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi1-0", - "mirroring": "230.41.13.18:5022", - "primary": "230.41.13.18,1433", - "secondary": "230.41.13.18,1433" - }, - "highAvailability": { - "lastUpdateTime": "2023-02-14T11:47:42.208708Z", - "mirroringCertificate": "--BEGIN CERTIFICATE--\nMIIDQzCCAiugAwIBAgIISqqmfCPaolkwDQYJKoZIhvcNAQELBQAwKDEmMCQGA1UEAxMdQ2x1c3Rl\r\nciBDZXJ0aWZpDEzNDA2WhcNMjgwMTAzMDEzNDA2WjAO\r\nMQwwCgYDVQQDEwNkYm0wggEiMA0GCSqgEKAoIBAQDEXj2nm2cGkyfu\r\npXWQ4s6G//AI1rbH4JStZOAHwJNYmBuESSHz0i6znjnQQloFe+g2KM+1m4TN1T39Lz+/ufEYQQX9\r\nx9WuGP2IALgH1LXc/0DGuOB16QXqN7ZWULQ4ovW4Aaz5NxTSDXWYPK+zpb1c8adsQyamLHwmSPs4\r\nMpsgfOR9EUCqdnuKjSHbWCtkJTYogpAFyZb5HOgY1TMICrTkXG6VYoCPS/EDNmtPOyVuykdjjsxx\r\nIC5KkVgHWTaYIDjim7L44FPh4HUIVM/OFScRijCZTJogN/Fe94+kGDWfgWIG36Jlz127BbWV3HNJ\r\nkH2oLchIABvgTXsdKnjK3i2TAgMBAAGjgYowgYcwIAYDVR0lAQH/BBYwFAYIKwYBBQUHAwIGCCsG\r\nAQUFBwMBMA4GA1UdDwEB/wQEAwIFoDBTBgNVHREETDBKggpzcWxtaTEtc3ZjgiRzcWxtaTEtc3Zj\r\nLmNvbnRvc28uc3ZjLmNsdXN0ZXIubG9jYWyCFnNxbG1pMS1zdmMuY29udG9zby5zdmMwDQYJKoZI\r\nhvcNAQELBQADggEBAA+Wj6WK9NgX4szxT7zQxPVIn+0iviO/2dFxHmjmvj+lrAffsgNdfeX5095f\r\natxIO+no6VW2eoHze2f6AECh4/KefyAzd+GL9MIksJcMLqSqAemXju3pUfGBS1SAW8Rh361D8tmA\r\nEFpPMwZG3uMidYMso0GqO0tpejz2+5Q4NpweHBGoq6jk+9ApTLD+s5qetZHrxGD6tS1Z/Lvt24lE\r\nKtSKEDw5O2qnqbsOe6xxtPAuIfTmpwIzIv2WiGC3aGuXSr0bNyPHzh5RL1MCIpwLMrnruFwVzB25\r\nA0xRalcXVZRZ1H0zbznGsecyBRJiA+7uxNB7/V6i+SjB/qxj2xKh4s8=\n--END CERTIFICATE--\n", - "healthState": "Error", - "replicas": [] - }, - "logSearchDashboard": "https://230.41.13.18:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi1'))", - "metricsDashboard": "https://230.41.13.18:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi1-0", - "primaryEndpoint": "230.41.13.18,1433", - "runningVersion": "v1.14.0_2022-12-13", - "registryVersions": { - "available": [], - "behind": 0, - "current": "v1.14.0_2022-12-13", - "latest": "v1.14.0_2022-12-13", - "previous": "v1.13.0_2022-11-08" - } - } - }, - "provisioningState": "Succeeded", - "licenseType": "LicenseIncluded" - } -} -``` --## Examples --Example of resource inventory data JSON document that is sent to Azure to create Azure resources in your subscription. 
--```json -{ -- "customObjectName": "<resource type>-2020-29-5-23-13-17-164711", - "uid": "4bc3dc6b-9148-4c7a-b7dc-01afc1ef5373", - "instanceName": "sqlInstance001", - "instanceNamespace": "arc", - "instanceType": "<resource>", - "location": "eastus", - "resourceGroupName": "production-resources", - "subscriptionId": "<subscription_id>", - "isDeleted": false, - "externalEndpoint": "32.191.39.83:1433", - "vCores": "2", - "createTimestamp": "05/29/2020 23:13:17", - "updateTimestamp": "05/29/2020 23:13:17" - } -``` --## Billing data --Billing data is used to track billable usage. This data is essential to the operation of the service and needs to be transmitted manually or automatically in all modes. --### Arc-enabled data services --Billing data captures the start time (“created”) and end time (“deleted”) of a given instance, as well as the start and end times whenever the number of cores available to a given instance (the “core limit”) changes. --```json -{ - "requestType": "usageUpload", - "clusterId": "4b0917dd-e003-480e-ae74-1a8bb5e36b5d", - "name": "DataControllerTestName", - "subscriptionId": "<subscription_id>", - "resourceGroup": "production-resources", - "location": "eastus", - "uploadRequest": { - "exportType": "usages", - "dataTimestamp": "2020-06-17T22:32:24Z", - "data": - "[{\"name\":\"sqlInstance001\", - \"namespace\":\"arc\", - \"type\":\"<resource type>\", - \"eventSequence\":1, - \"eventId\":\"50DF90E8-FC2C-4BBF-B245-CB20DC97FF24\", - \"startTime\":\"2020-06-17T19:11:47.7533333\", - \"endTime\":\"2020-06-17T19:59:00\", - \"quantity\":1, - \"id\":\"<subscription_id>\"}]", - "signature":"MIIE7gYJKoZIhvcNAQ...2xXqkK" - } -} -``` --### Arc-enabled SQL Server --Billing data captures a snapshot of the SQL Server instance properties as well as the machine properties every hour and composes the usage upload payload to report usage. There is a snapshot time in the payload for each SQL Server instance. --```json -{ - "hostType": "Unknown", - "osType": "Windows", - "manufacturer": "Microsoft", - "model": "Hyper-V", - "isVirtualMachine": true, - "serverName": "TestArcServer", - "serverId": "<server id>", - "location": "eastus", - "timestamp": "2021-07-08T01:42:15.0388467Z", - "uploadRequest": { - "exportType": "usages", - "dataTimestamp": "2020-06-17T22:32:24Z", - "data": - "[{\"hostType\":\"VirtualMachine\", - \"numberOfCores\":4, - \"numberOfProcessors\":1, - \"numberOfLogicalProcessors\":4, - \"subscriptionId\":\"<subscription id>\",\"resourceGroup\":\"ArceeBillingPipelineStorage_Test\", - \"location\":\"eastus2euap\", - \"version\":\"Sql2019\", - \"edition\":\"Enterprise\", - \"editionOriginalString\":\"Enterprise Edition: Core based licensing\", - \"coreInfoOriginalString\":\"using 16 logical processors based on SQL Server licensing\", - \"vCore\":4, - \"instanceName\":\"INSTANCE01\", - \"licenseType\":\"LicenseOnly\", - \"hostLicenseType\":\"Paid\", - \"instanceLicenseType\":\"Paid\", - \"serverName\":\"TestArcServer\", - \"isRunning\":false, - \"eventId\":\"00000000-0000-0000-0000-000000000000\", - \"snapshotTime\":\"2020-06-17T19:59:00\", - \"isAzureBilled\":\"Enabled\", - \"hasSoftwareAssurance\":\"Undefined\"}]" - } -} -``` --## Diagnostic data --In support situations, you may be asked to provide database instance logs, Kubernetes logs, and other diagnostic logs. The support team will provide a secure location for you to upload to. Dynamic management views (DMVs) may also provide diagnostic data.
The DMVs or queries used could contain database schema metadata details but typically not customer data. Diagnostic data does not contain any passwords, cluster IPs, or individually identifiable data. Where possible, this data is cleaned and the logs are anonymized before storage. Diagnostic data is not transmitted automatically; an administrator has to upload it manually. --|Field name |Notes | -|:--|:--| -|Error logs |Log files capturing errors may contain customer or personal data (see below); they are restricted and shared by the user | -|DMVs |Dynamic management views can contain queries and query plans, but are restricted and shared by the user | -|Views |Views can contain customer data, but are restricted and shared only by the user | -|Crash dumps – customer data | Maximum 30-day retention of crash dumps – may contain access control data <br/><br/> Statistics objects, data values within rows, query texts could be in customer crash dumps | -|Crash dumps – personal data | Machine, logins/user names, emails, location information, customer identification – require user consent to be included | --## Related content -[Upload usage data to Azure Monitor](upload-usage-data.md) ---- |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | - Title: Azure Arc-enabled data services - Release notes -description: This article provides highlights for the latest release, and a history of features introduced in previous releases. ------ Previously updated : 09/09/2024---#Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature. ---# Release notes - Azure Arc-enabled data services --This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services. --## September 9, 2024 --**Image tag**: `v1.33.0_2024-09-10` --For complete release version information, review [Version log](version-log.md#september-9-2024). --## August 13, 2024 --**Image tag**: `v1.32.0_2024-08-13` --For complete release version information, review [Version log](version-log.md#august-13-2024). --## July 9, 2024 --**Image tag**: `v1.31.0_2024-07-09` --For complete release version information, review [Version log](version-log.md#july-9-2024). --## June 11, 2024 --**Image tag**: `v1.30.0_2024-06-11` --For complete release version information, review [Version log](version-log.md#june-11-2024). --## April 9, 2024 --**Image tag**:`v1.29.0_2024-04-09` --For complete release version information, review [Version log](version-log.md#april-9-2024). --## March 12, 2024 --**Image tag**:`v1.28.0_2024-03-12` --For complete release version information, review [Version log](version-log.md#march-12-2024). --### SQL Managed Instance enabled by Azure Arc --Database version for this release (964) has been upgraded beyond the database version for SQL Server 2022 (957). As a result, you can't restore databases from SQL Managed Instance enabled by Azure Arc to SQL Server 2022. --### Streamlined network endpoints --Prior to this release, Azure Arc data processing endpoint was at `san-af-<region>-prod.azurewebsites.net`. --Beginning with this release both Azure Arc data processing, and Azure Arc data telemetry use `*.<region>.arcdataservices.com`. --## February 13, 2024 --**Image tag**:`v1.27.0_2024-02-13` --For complete release version information, review [Version log](version-log.md#february-13-2024). --## December 12, 2023 --**Image tag**: `v1.26.0_2023-12-12` --For complete release version information, review [Version log](version-log.md#december-12-2023). --## November 14, 2023 --**Image tag**: `v1.25.0_2023-11-14` --For complete release version information, review [Version log](version-log.md#november-14-2023). --## October 10, 2023 --**Image tag**: `v1.24.0_2023-10-10` --For complete release version information, review [Version log](version-log.md#october-10-2023). --## September 12, 2023 --**Image tag**: `v1.23.0_2023-09-12` --For complete release version information, review [Version log](version-log.md#september-12-2023). --### Release notes --- Portal automatically refreshes status of failover group every 2 seconds. [Monitor failover group status in the portal](managed-instance-disaster-recovery-portal.md#monitor-failover-group-status-in-the-portal).--## August 8, 2023 --**Image tag**: `v1.22.0_2023-08-08` --For complete release version information, review [Version log](version-log.md#august-8-2023). --### Release notes --- Support for configuring and managing Azure Failover groups between two instances using Azure portal. 
For details, review [Configure failover group - portal](managed-instance-disaster-recovery-portal.md).-- Upgraded OpenSearch and OpenSearch Dashboards from 2.7.0 to 2.8.0-- Improvements and examples to [Back up and recover controller database](backup-controller-database.md).--## July 11, 2023 --**Image tag**: `v1.21.0_2023-07-11` --For complete release version information, review [Version log](version-log.md#july-11-2023). --### Release notes --- Proxy bypass is now supported for Arc SQL Server Extension. Starting this release, you can also specify services which shouldn't use the specified proxy server.--## June 13, 2023 --**Image tag**: `v1.20.0_2023-06-13` --For complete release version information, review [Version log](version-log.md#june-13-2023). --### Release notes --- SQL Managed Instance enabled by Azure Arc- - [Added Azure CLI support to manage transparent data encryption (TDE)](configure-transparent-data-encryption-sql-managed-instance.md). --## May 9, 2023 --**Image tag**: `v1.19.0_2023-05-09` --For complete release version information, review [Version log](version-log.md#may-9-2023). --New for this release: --### Release notes --- Arc data services- - OpenSearch replaces Elasticsearch for log database - - OpenSearch Dashboards replaces Kibana for logs interface - - There's a known issue with user settings migration to OpenSearch Dashboards for some versions of Elasticsearch, including the version used in Arc data services. - - > [!IMPORTANT] - > Before upgrade, save any Kibana configuration externally so that it can be re-created in OpenSearch Dashboards. -- - Automatic upgrade is disabled for the Arc data services extension - - Error-handling in the `az` CLI is improved during data controller upgrade - - Fixed a bug to preserve the resource limits for Azure Arc Data Controller where the resource limits could get reset during an upgrade. --- SQL Managed Instance enabled by Azure Arc- - General Purpose: Customer-managed TDE encryption keys (preview). For information, review [Enable transparent data encryption on SQL Managed Instance enabled by Azure Arc](configure-transparent-data-encryption-sql-managed-instance.md). - - Support for customer-managed keytab rotation. For information, review [Rotate SQL Managed Instance enabled by Azure Arc customer-managed keytab](rotate-customer-managed-keytab.md). - - Support for `sp_configure` to manage configuration. For information, review [Configure SQL Managed Instance enabled by Azure Arc](configure-managed-instance.md). - - Service-managed credential rotation. For information, review [How to rotate service-managed credentials in a managed instance](rotate-sql-managed-instance-credentials.md#how-to-rotate-service-managed-credentials-in-a-managed-instance). --## April 12, 2023 --**Image tag**: `v1.18.0_2023-04-11` --For complete release version information, see [Version log](version-log.md#april-11-2023). --New for this release: --- SQL Managed Instance enabled by Azure Arc- - Direct mode for failover groups is generally available az CLI - - Schedule the HA orchestrator replicas on different nodes when available --- Arc PostgreSQL- - Ensure postgres extensions work per database/role - - Arc PostgreSQL | Upload metrics/logs to Azure Monitor --- Bug fixes and optimizations in the following areas:- - Deploying Arc data controller using the individual create experience has been removed as it sets the auto upgrade parameter incorrectly. Use the all-in-one create experience. 
This experience creates the extension, custom location, and data controller. It also sets all the parameters correctly. For specific information, see [Create Azure Arc data controller in direct connectivity mode using CLI](create-data-controller-direct-cli.md). --## March 14, 2023 --**Image tag**: `v1.17.0_2023-03-14` --For complete release version information, see [Version log](version-log.md#march-14-2023). --New for this release: --- SQL Managed Instance enabled by Azure Arc - - [Rotate SQL Managed Instance enabled by Azure Arc service-managed credentials (preview)](rotate-sql-managed-instance-credentials.md) -- Azure Arc-enabled PostgreSQL - - Require client connections to use SSL - - Extended SQL Managed Instance enabled by Azure Arc authentication control plane to PostgreSQL --## February 14, 2023 --**Image tag**: `v1.16.0_2023-02-14` --For complete release version information, see [Version log](version-log.md#february-14-2023). --New for this release: --- Arc data - - Initial Extended Events Functionality | (preview) --- Arc-SQL MI- - [Enabled service managed Transparent Data Encryption (TDE) (preview)](configure-transparent-data-encryption-sql-managed-instance.md). - - Backups | Produce automated backups from readable secondary - - The built-in automatic backups are performed on secondary replicas when available. --- Arc PostgreSQL - - Automated Backups - - Settings via configuration framework - - Point-in-Time Restore - - Turn backups on/off - - Require client connections to use SSL - - Active Directory | Customer-managed bring your own keytab - - Active Directory | Configure in Azure command line client - - Enable Extensions via Kubernetes Custom Resource Definition --- Azure CLI Extension - - Optional `imageTag` for controller creation by defaulting to the image tag of the bootstrapper ---## January 13, 2023 --**Image tag**: `v1.15.0_2023-01-10` --For complete release version information, see [Version log](version-log.md#january-13-2023). --New for this release: --- Arc data - - Kafka separate mode --- Arc-SQL MI- - Time series functions are available. --## December 13, 2022 --**Image tag**: `v1.14.0_2022-12-13` --For complete release version information, see [Version log](version-log.md#december-13-2022). --New for this release: --- Platform support- - Add support for K3s --- Arc data controller.- - Added defaults on HA supervisor pod to support resource quotas. - - Update Grafana to version 9. --- Arc-enabled PostgreSQL server- - Switch to Ubuntu based images. --- Bug fixes and optimizations in the following areas:- - Arc enabling SQL Server onboarding. - - Fixed confusing error messages when DBMail is configured. --## November 8, 2022 --**Image tag**: `v1.13.0_2022-11-08` --For complete release version information, see [Version log](version-log.md#november-8-2022). --New for this release: --- Arc-enabled PostgreSQL server- - Add support for automated backups --- `arcdata` Azure CLI extension- - CLI support for automated backups: Setting the `--storage-class-backups` parameter for the create command will enable automated backups --## October 11, 2022 --**Image tag**: `v1.12.0_2022-10-11` --For complete release version information, see [Version log](version-log.md#october-11-2022). 
--New for this release: -- Arc data controller- - Updates to TelemetryRouter implementation to include inbound and outbound TelemetryCollector layers alongside Kafka as a persistent buffer - - AD connector will now be upgraded when data controller is upgraded --- Arc-enabled SQL managed instance- - New reprovision replica task lets you rebuild a broken sql instance replica. For more information, see [Reprovision replica](reprovision-replica.md). - - Edit Active Directory settings from the Azure portal --- `arcdata` Azure CLI extension- - Columns for release information added to the following commands: `az sql mi-arc list` this makes it easy to see what instance may need to be updated. - - Alternately you can run `az arcdata dc list-upgrades' - - New command to list AD Connectors `az arcdata ad-connector list --k8s-namespace <namespace> --use-k8s` - - Az CLI Polling for AD Connector create/update/delete: This feature changes the default behavior of `az arcdata ad-connector create/update/delete` to hang and wait until the operation finishes. To override this behavior, the user has to use the `--no-wait` flag when invoking the command. --Deprecation and breaking changes notices: --The following properties in the Arc SQL Managed Instance status will be deprecated/moved in the _next_ release: -- `status.logSearchDashboard`: use `status.endpoints.logSearchDashboard` instead.-- `status.metricsDashboard`: use `status.endpoints.metricsDashboard` instead.-- `status.primaryEndpoint`: use `status.endpoints.primary` instead.-- `status.readyReplicas`: uses `status.roles.sql.readyReplicas` instead.--## September 13, 2022 --**Image tag**: `v1.11.0_2022-09-13` --For complete release version information, see [Version log](version-log.md#september-13-2022). --New for this release: --- Arc data controller- - New extensions to monitoring stack to enable Kafka as a data cache and expose an OpenTelemetry endpoint for integration. See documentation for more details. - - Deleting an AD connector that is in use is now blocked. First remove all database instances that are using it and then remove the AD connector. - - New OpenTelemetry Router preview to make collected logs available for export to other SEIM systems. See documentation for details. - - AD connectors can now be created in Kubernetes via the Kubernetes API and synchronized to Azure via Resource Sync. - - Added short name `arcdc` to the data controllers custom resource definition. You can now use `kubectl get arcdc` as short form for `kubectl get datacontrollers`. - - The controller-external-svc is now only created when deploying using the indirect connectivity mode since it's only used for exporting logs/metrics/usage data in the indirect mode. - - "Downgrades" - i.e. going from a higher major or minor version to a lower - is now blocked. Examples of a blocked downgrade: v1.10 -> v1.9 or v2.0 -> v1.20. --- Arc-enabled SQL managed instance- - Added support for specifying multiple encryption types for AD connectors using the Azure CLI extension or Azure portal. --- Arc-enabled PostgreSQL server- - Removed Hyperscale/Citus scale-out capabilities. Focus will be on providing a single node Postgres server service. All user experiences have had terms and concepts like `Hyperscale`, `server groups`, `worker nodes`, `coordinator nodes`, and so forth. removed. **BREAKING CHANGE** -- - Only PostgreSQL version 14 is supported for now. Versions 11 and 12 have been removed. Two new images are introduced: `arc-postgres-14` and `arc-postgresql-agent`. 
The `arc-postgres-11` and `arc-postgres-12` container images are removed going forward. - - The postgresql CRD version has been updated to v1beta3. Some properties such as `workers` have been removed or changed. Update any scripts or automation you have as needed to align to the new CRD schema. **BREAKING CHANGE** --- `arcdata` Azure CLI extension- - Columns for desiredVersion and runningVersion are added to the following commands: `az sql mi-arc list` and `kubectl get sqlmi` to easily compare what the runningVersion and desiredVersion are. - - The command group `az postgres arc-server` is renamed to `az postgres server-arc`. **BREAKING CHANGE** - - Some of the `az postgres server-arc` commands have changed to remove things like `--workers`. **BREAKING CHANGE** --## August 9, 2022 --This release is published August 9, 2022. --**Image tag**: `v1.10.0_2022-08-09` --For complete release version information, see [Version log](version-log.md#august-9-2022). --### Arc-enabled SQL Managed Instance --- AES encryption can now be enabled for AD authentication.--### `arcdata` Azure CLI extension --- The Azure CLI help text for the Arc data controller, Arc-enabled SQL Managed Instance, and Active Directory connector command groups has been updated to reflect new naming conventions. Indirect mode arguments are now referred to as _Kubernetes API - targeted_ arguments, and direct mode arguments are now referred to as _Azure Resource Manager - targeted_ arguments.--## July 12, 2022 --This release is published July 12, 2022 --**Image tag**: `v1.9.0_2022-07-12` --For complete release version information, see [Version log](version-log.md#july-12-2022). --### Miscellaneous --- Extended the disk metrics reported in monitoring dashboards to include more queue length stats and more counters for IOPS. All disks are in scope for data collection that start with `vd` or `sd` now.--### Arc-enabled SQL Managed Instance --- Added buffer cache hit ratio to `collectd` and surface it in monitoring dashboards.-- Improvements to the formatting of the legends on some dashboards.-- Added process level CPU and memory metrics to the monitoring dashboards for the SQL managed instance process.-- `syncSecondaryToCommit` property is now available to be viewed and edited in Azure portal and Azure Data Studio.-- Added ability to set the DNS name for the readableSecondaries service in Azure CLI and Azure portal.-- The service now collects the `agent.log`, `security.log` and `sqlagentstartup.log` for Arc-enabled SQL Managed instance to ElasticSearch so they're searchable via Kibana. If you choose, you can upload them to Azure Log Analytics.-- There are more additional notifications when provisioning new SQL managed instances is blocked due to not exporting/uploading billing data to Azure.--### Data controller --- Permissions required to deploy the Arc data controller have been reduced to a least-privilege level.-- When deployed via the Azure CLI, the Arc data controller is now installed via a K8s job that uses a helm chart to do the installation. There's no change to the user experience.-- Resource Sync rule is created automatically when Data Controller is deployed in Direct connected mode. This enables customers to deploy an Azure Arc enabled SQL Managed Instance by directly talking to the kubernetes APIs.--## June 14, 2022 --This release is published June 14, 2022. --**Image tag**: `v1.8.0_2022-06-14` --For complete release version information, see [Version log](version-log.md#june-14-2022). 
--### Miscellaneous --- Canada Central and West US 3 regions are fully supported.--### Data controller --- Control DB SQL instance version is upgraded to latest version.-- Additional compatibility checks are run prior to executing an upgrade request.-- Upload status is now shown in the data controller list view in the Azure portal.-- Show the usage upload message value in the Overview blade banner in the Azure portal if the value isn't **Success**.--### SQL Managed Instance -- - You can now configure a SQL managed instance to use an AD connector at the time the SQL managed instance is provisioned from the Azure portal. - - BACKUP DATABASE TO URL to S3-compatible storage is introduced for preview. Limited to COPY_ONLY. [Documentation](/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-s3-compatible-object-storage). - - `az sql mi-arc create` and `update` commands have a new `--sync-secondary-commit` parameter which is the number of secondary replicas that must be synchronized to fail over. Default is `-1` which sets the number of required synchronized secondaries to (# of replicas - 1) / 2. Allowed values: `-1`, `1`, or `2`. Arc SQL MI custom resource property added called `syncSecondaryToCommit`. - - Billing estimate in Azure portal is updated to reflect the number of readable secondaries that are selected. - - Added SPNs for readable secondary service. --## May 24, 2022 --This release is published May 24, 2022. --**Image tag**: `v1.7.0_2022-05-24` --For complete release version information, see [Version log](version-log.md#may-24-2022). --### Data controller reminders and warnings --Reminders and warnings are implemented in Azure portal, custom resource status, and through CLI when the billing data related to all resources managed by the data controller hasn't been uploaded or exported for an extended period. --### SQL Managed Instance --General Availability of Business Critical service tier. SQL Managed Instance enabled by Azure Arc instances that have a version greater than or equal to v1.7.0 will be charged through Azure billing meters. --### User experience improvements --#### Azure portal --Added ability to create AD Connectors from Azure portal. --Preview expected costs for SQL Managed Instance enabled by Azure Arc Business Critical tier when you create new instances. --#### Azure Data Studio --Added ability to upgrade instances from Azure Data Studio in the indirect and direct connectivity modes. --Preview expected costs for SQL Managed Instance enabled by Azure Arc Business Critical tier when you create new instances. --## May 4, 2022 --This release is published May 4, 2022. --**Image tag**: `v1.6.0_2022-05-02` --For complete release version information, see [Version log](version-log.md#may-4-2022). --### Data controller --Added: --- Create, update, and delete AD connector -- Create SQL Managed Instance with AD connectivity to the Azure CLI extension in direct connectivity mode.--Data controller sends controller logs to the Log Analytics Workspace if logs upload is enabled. --Removed the `--ad-connector-namespace` parameter from `az sql mi-arc create` command because for now the AD connector resource must always be in the same namespace as the SQL Managed Instance resource. --Updated Elasticsearch to latest version `7.9.1-36fefbab37-205465`. Also Grafana, Kibana, Telegraf, Fluent Bit, Go. --All container image sizes were reduced by approximately 40% on average. --Introduced new `create-sql-keytab.ps1` PowerShell script to aid in creation of keytabs. 
--### SQL Managed Instance --Separated the availability group and failover group status into two different sections on Kubernetes. --Updated SQL engine binaries to the latest version. --Add support for `NodeSelector`, `TopologySpreadConstraints` and `Affinity`. Only available through Kubernetes yaml/json file create/edit currently. No Azure CLI, Azure portal, or Azure Data Studio user experience yet. --Add support for specifying labels and annotations on the secondary service endpoint. `REQUIRED_SECONDARIES_TO_COMMIT` is now a function of the number of replicas. --- If three replicas: `REQUIRED_SECONDARIES_TO_COMMIT = 1`. -- If one or two replicas: `REQUIRED_SECONDARIES_TO_COMMIT = 0`.--In this release, the default value of the readable secondary service is `Cluster IP`. The secondary service type can be set in the Kubernetes yaml/json at `spec.services.readableSecondaries.type`. In the next release, the default value will be the same as the primary service type. --### User experience improvements --Notifications added in Azure portal if billing data hasn't been uploaded to Azure recently. --#### Azure Data Studio --Added upgrade experience for Data Controller in direct and indirect connectivity mode. --## April 6, 2022 --This release is published April 6, 2022. --**Image tag**: `v1.5.0_2022-04-05` --For complete release version information, see [Version log](version-log.md#april-6-2022). --### Data controller --- Logs are retained in ElasticSearch for 2 weeks by default now.-- Upgrades are now limited to only upgrading to the next incremental minor or major version. For example:- - Supported version upgrades: - - 1.1 -> 1.2 - - 1.3 -> 2.0 - - Not supported version upgrade. - - 1.1. -> 1.4 - Not supported because one or more minor versions are skipped. -- Updates to open source projects included in Azure Arc-enabled data services to patch vulnerabilities.--### SQL Managed Instance enabled by Azure Arc --You can create a maintenance window on the data controller, and if you have SQL managed instances with a desired version set to `auto`, they will be upgraded in the next maintenance windows after a data controller upgrade. --Metrics for each replica in a Business Critical instance are now sent to the Azure portal so you can view them in the monitoring charts. --AD authentication connectors can now be set up in an `automatic mode` or *system-managed keytab* which will use a service account to automatically create SQL service accounts, SPNs, and DNS entries as an alternative to the AD authentication connectors which use the *customer-managed keytab* mode. --> [!NOTE] -> In some early releases customer-managed keytab mode was called *bring your own keytab* mode. --Backup and point-in-time-restore when a database has Transparent Data Encryption (TDE) enabled is now supported. --Change Data Capture (CDC) is now enabled in SQL Managed Instance enabled by Azure Arc. --Bug fixes for replica scaling in Arc SQL MI Business Critical and database restore when there is insufficient disk space. --Distributed availability groups have been renamed to failover groups. The `az sql mi-arc dag` command group has been moved to `az sql instance-failover-group-arc`. Before upgrade, delete all resources of the `dag` resource type. 
--### User experience improvements --You can now use the Azure CLI `az arcdata dc create` command to create: -- A custom location-- A data services extension-- A data controller in one command.--New enforcements of constraints: --- The data controller and managed instance resources it manages must be in the same resource group.-- There can only be one data controller in a given custom location.--#### Azure Data Studio --During direct connected mode data controller creation, you can now specify the log analytics workspace information for auto sync upload of the logs. --## March 2022 --This release is published March 8, 2022. --**Image tag**: `v1.4.1_2022-03-08` --For complete release version information, see [Version log](version-log.md#march-8-2022). --### Data Controller -- Fixed the issue "ConfigMap sql-config-[SQL MI] does not exist" from the February 2022 release. This issue occurs when deploying a SQL Managed Instance with service type of `loadBalancer` with certain load balancers. --## February 2022 --This release is published February 25, 2022. --**Image tag**: `v1.4.0_2022-02-25` --For complete release version information, see [Version log](version-log.md#february-25-2022). --> [!CAUTION] -> There's a known issue with this release where deployment of Arc SQL MI hangs, and sends the controldb pods of Arc Data Controller into a -> `CrashLoopBackOff` state, when the SQL MI is deployed with `loadBalancer` service type. This issue is fixed in a release on March 08, 2022. --### SQL Managed Instance --- Support for readable secondary replicas:- - To set readable secondary replicas use `--readable-secondaries` when you create or update an Arc-enabled SQL Managed Instance deployment. - - Set `--readable-secondaries` to any value between 0 and the number of replicas minus 1. - - `--readable-secondaries` only applies to Business Critical tier. -- Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary. -- [ReadWriteMany (RWX) capable storage class](/azure/aks/concepts-storage#azure-disk) is required for backups, for both General Purpose and Business Critical service tiers. Specifying a non-ReadWriteMany storage class will cause the SQL Managed Instance to be stuck in "Pending" status during deployment.-- Billing support when using multiple read replicas.--For additional information about service tiers, see [High Availability with SQL Managed Instance enabled by Azure Arc (preview)](managed-instance-high-availability.md). --### User experience improvements --The following improvements are available in [Azure Data Studio](/azure-data-studio/download-azure-data-studio). --- Azure Arc and Azure CLI extensions now generally available. -- Changed edit commands for SQL Managed Instance for Azure Arc dashboard to use `update`, reflecting Azure CLI changes. This works in indirect or direct mode. -- Data controller deployment wizard step for connectivity mode is now earlier in the process.-- Removed an extra backups field in SQL MI deployment wizard.--## January 2022 --This release is published January 27, 2022. --**Image tag**: `v1.3.0_2022-01-27` --For complete release version information, see [Version log](version-log.md#january-27-2022). 
--### Data controller --- Initiate an upgrade of the data controller from the portal in the direct connected mode-- Removed block on data controller upgrade if there are Business Critical instances that exist-- Better handling of delete user experiences in Azure portal--### SQL Managed Instance --- SQL Managed Instance enabled by Azure Arc Business Critical instances can be upgraded from the January release and going forward (preview)-- Business critical distributed availability group failover can now be done through a Kubernetes-native experience or the Azure CLI (indirect mode only) (preview)-- Added support for `LicenseType: DisasterRecovery` which will ensure that instances which are used for Business Critical distributed availability group secondary replicas:- - Are not billed for - - Automatically seed the system databases from the primary replica when the distributed availability group is created. (preview) -- New option added to `desiredVersion` called `auto` - automatically upgrades a given SQL instance when there is a new upgrade available (preview)-- Update the configuration of SQL instances using Azure CLI in the direct connected mode--## Related content --> **Just want to try things out?** -> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on AKS, AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. --- [Install the client tools](install-client-tools.md)-- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) (requires installing the client tools first)-- [Create an Azure SQL Managed Instance on Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first)-- [Create an Azure Database for PostgreSQL server on Azure Arc](create-postgresql-server.md) (requires creation of an Azure Arc data controller first)-- [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md) |
azure-arc | Reprovision Replica | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reprovision-replica.md | - Title: Reprovision replica -description: This article explains how to rebuild a broken SQL Managed Instance enabled by Azure Arc replica. A replica may break due to storage corruption, for example. ------- Previously updated : 10/05/2022---# Reprovision replica - SQL Managed Instance enabled by Azure Arc --This article describes how to provision a new replica to replace an existing replica in SQL Managed Instance enabled by Azure Arc. --When you reprovision a replica, you rebuild a new managed instance replica for a SQL Managed Instance enabled by Azure Arc deployment. Use this task to replace a replica that is failing to synchronize, for example, due to corruption of the data on the persistent volumes (PV) for that instance, or due to some recurring SQL issue. --You can reprovision a replica [via `az` CLI](#via-az-cli) or [via `kubectl`](#via-kubectl). You can't reprovision a replica from the Azure portal. --## Prerequisites --You can only reprovision a replica on a multi-replica instance. --## Via `az` CLI --Azure CLI `az sql mi-arc` command group includes `reprovision-replica`. To reprovision a replica, update the following example. Replace `<instance_name-replica_number>` with the instance name and replica number of the replica you want to replace. Replace `<namespace>`. --```az -az sql mi-arc reprovision-replica -n <instance_name-replica_number> -k <namespace> --use-k8s -``` --For example, to reprovision replica 2 of instance `mySqlInstance` in namespace `arc`, use: --```az -az sql mi-arc reprovision-replica -n mySqlInstance-2 -k arc --use-k8s -``` --The command runs until completion, at which point the console returns the name of the Kubernetes task: --```output -sql-reprov-replica-mySqlInstance-2-1664217002.376132 is Ready -``` --At this point, you can either examine the task or delete it. --### Examine the task --The following example returns information about the state of the Kubernetes task: --```console -kubectl describe SqlManagedInstanceReprovisionReplicaTask sql-reprov-replica-mySqlInstance-2-1664217002.376132 -n arc -``` --> [!IMPORTANT] -> After a replica is reprovisioned, you must delete the task before another reprovision can run on the same instance. For more information, see [Limitations](#limitations). --### Delete the task --The following example deletes the Kubernetes task: --```console -kubectl delete SqlManagedInstanceReprovisionReplicaTask sql-reprov-replica-mySqlInstance-2-1664217002.376132 -n arc -``` --### Option parameter: `--no-wait` --There's an optional `--no-wait` parameter for the command. If you send the request with `--no-wait`, the output includes the name of the task to be monitored. For example: --```az -az sql mi-arc reprovision-replica -n mySqlInstance-2 -k arc --use-k8s --no-wait -Reprovisioning replica mySqlInstance-2 in namespace `arc`. Please use -`kubectl get -n arc SqlManagedInstanceReprovisionReplicaTask sql-reprov-replica-mySqlInstance-2-1664217434.531035` -to check its status or -`kubectl get -n arc SqlManagedInstanceReprovisionReplicaTask` -to view all reprovision tasks. -``` --## Via kubectl --To reprovision with `kubectl`, create a custom resource. 
To create a custom resource to reprovision, you can create a .yaml file with this structure: --```yaml -apiVersion: tasks.sql.arcdata.microsoft.com/v1beta1 -kind: SqlManagedInstanceReprovisionReplicaTask -metadata: - name: <task name you make up> - namespace: <namespace> -spec: - replicaName: instance_name-replica_number -``` --To use the same example as above, `mySqlinstance` replica 2, the payload is: --```yaml -apiVersion: tasks.sql.arcdata.microsoft.com/v1beta1 -kind: SqlManagedInstanceReprovisionReplicaTask -metadata: - name: my-reprovision-task-mySqlInstance-2 - namespace: arc -spec: - replicaName: mySqlInstance-2 -``` --### Monitor or delete the task --Once the yaml is applied via kubectl apply, you can monitor or delete the task via kubectl: --```console -kubectl get -n arc SqlManagedInstanceReprovisionReplicaTask my-reprovision-task-mySqlInstance-2 -kubectl describe -n arc SqlManagedInstanceReprovisionReplicaTask my-reprovision-task-mySqlInstance-2 -kubectl delete -n arc SqlManagedInstanceReprovisionReplicaTask my-reprovision-task-mySqlInstance-2 -``` --> [!IMPORTANT] -> After a replica is reprovisioned, you must delete the task before another reprovision can run on the same instance. For more information, see [Limitations](#limitations). ---## Limitations --- The task rejects attempts to reprovision the current primary replica. If the current primary replica is corrupted and in need of reprovisioning, fail over to a different replica, and then request the reprovisioning.--- Reprovisioning of multiple replicas in the same instance runs serially. The tasks queue and are held in `Creating` state until the currently active task finishes **and is deleted**. There's no auto-cleanup of a completed task, so this serialization will affect you even if you run the `az sql mi-arc reprovision-replica` command synchronously and wait for it to complete before requesting another reprovision. In all cases, you have to remove the task via `kubectl` before another reprovision on the same instance can run. --More details about serialization of reprovision tasks: If you have multiple requests to reprovision a replica in one instance, you may see something like this in the output from a `kubectl get SqlManagedInstanceReprovisionReplicaTask`: --```console -kubectl get SqlManagedInstanceReprovisionReplicaTask -n arc -NAME STATUS AGE -sql-reprov-replica-c-sql-djlexlmty-1-1664217344.304601 Completed 13m -sql-reprov-replica-c-sql-kkncursza-1-1664217002.376132 Completed 19m -sql-reprov-replica-c-sql-kkncursza-1-1664217434.531035 Creating 12m -``` --That last entry for replica c-sql-kkncursza-1, `sql-reprov-replica-c-sql-kkncursza-1-1664217434.531035`, will stay in status `Creating` until the completed one `sql-reprov-replica-c-sql-kkncursza-1-1664217002.376132` is removed. |
azure-arc | Reserved Capacity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reserved-capacity-overview.md | - Title: Save costs with reserved capacity -description: Learn how to buy SQL Managed Instance enabled by Azure Arc reserved capacity to save costs. ------- Previously updated : 10/27/2021---# Reserved capacity - SQL Managed Instance enabled by Azure Arc --Save money with SQL Managed Instance enabled by Azure Arc by committing to a reservation for Azure Arc services at a discount compared to pay-as-you-go prices. With reserved capacity, you make a commitment for SQL Managed Instance enabled by Azure Arc use for one or three years to get a significant discount on the service fee. To purchase reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. --You do not need to assign the reservation to a specific database or managed instance. Existing deployments that are already running, or ones that are newly deployed, automatically get the benefit when they match the reservation attributes. By purchasing a reservation, you commit to usage for the Azure Arc services cost for one or three years. As soon as you buy a reservation, the service charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. --A reservation applies to the Azure Arc services cost only and does not cover SQL IP costs or any other charges. At the end of the reservation term, the billing benefit expires and the managed instance is billed at the pay-as-you-go price. Reservations do not automatically renew. For pricing information, see the [reserved capacity offering](https://azure.microsoft.com/pricing/details/sql-database/managed/). --You can buy reserved capacity in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/reservationsBrowse). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy reserved capacity: --- You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.-- For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com). Or, if that setting is disabled, you must be an EA Admin on the subscription.--For more information about how enterprise customers and pay-as-you-go customers are charged for reservation purchases, see [Understand Azure reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [Understand Azure reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md). --## Determine correct size before purchase --The size of the reservation should be based on the total amount of compute resources, measured in vCores, used by the existing or soon-to-be-deployed managed instances within a specific region reservation scope.
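To total your current usage, you can check how many vCores each existing managed instance is configured with before you pick a quantity. For example, a quick inventory check (a sketch; `sqlmi1` and `arc` are placeholder values):

```console
# List the SQL managed instances deployed across all namespaces
kubectl get sqlmi -A

# Show the full specification, including configured core limits, for a specific instance
kubectl describe sqlmi sqlmi1 -n arc
```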
--The following list demonstrates a scenario to project how you would reserve resources: --* **Current**: - - One General Purpose, 16 vCore managed instance - - Two Business Critical, 8-vCore managed instances --* **In the next year you will add**: - - One more General Purpose, 16 vCore managed instance - - One more Business Critical, 32 vCore managed instance --* **Purchase reservations for**: - - 32 (2x16) vCore one-year reservation for General Purpose managed instance - - 48 (2x8 + 32) vCore one-year reservation for Business Critical managed instance --## Buy reserved capacity --1. Sign in to the [Azure portal](https://portal.azure.com). -2. Select **All services** > **Reservations**. -3. Select **Add** and then in the **Purchase Reservations** pane, select **SQL Managed Instance** to purchase a new reservation for SQL Managed Instance enabled by Azure Arc. -4. Fill in the required fields. Existing SQL Managed Instance resources that match the attributes you select qualify to get the reserved capacity discount. The actual number of databases or managed instances that get the discount depends on the scope and quantity selected. -- The following table describes required fields. - - | Field | Description| - ||--| - |Subscription|The subscription used to pay for the capacity reservation. The payment method on the subscription is charged the upfront costs for the reservation. The subscription type must be an enterprise agreement (offer number MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer number MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.| - |Scope |The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select <br/><br/>**Shared**, the vCore reservation discount is applied to the database or managed instance running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.<br/><br/>**Single subscription**, the vCore reservation discount is applied to the databases or managed instances in this subscription. <br/><br/>**Single resource group**, the reservation discount is applied to managed instances in the selected subscription and the selected resource group within that subscription.</br></br>**Management group**, the reservation discount is applied to the managed instances in the list of subscriptions that are a part of both the management group and billing scope.| - |Region |The Azure region that's covered by the capacity reservation.| - |Deployment type|The SQL resource type that you want to buy the reservation for.| - |Performance Tier|The service tier for the databases or managed instances. | - |Term |One year or three years.| - |Quantity |The amount of compute resources being purchased within the capacity reservation. The quantity is the number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount.
For example, if you run or plan to run multiple managed instances with the total compute capacity of Gen5 16 vCores in the East US region, then specify the quantity as 16 to maximize the benefit for all the databases. | --1. Review the cost of the capacity reservation in the **Costs** section. -1. Select **Purchase**. -1. Select **View this Reservation** to see the status of your purchase. --## Cancel, exchange, or refund reservations --You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md). --## vCore size flexibility --vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit. By keeping an unapplied buffer in your reservation, you can effectively manage the performance spikes without exceeding your budget. --## Limitation --Reserved capacity pricing is only supported for features and products that are in General Availability state. --## Need help? Contact us --If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). --## Related content --The vCore reservation discount is applied automatically to the number of managed instances that match the capacity reservation scope and attributes. You can update the scope of the capacity reservation through the [Azure portal](https://portal.azure.com), PowerShell, Azure CLI, or the API. --To learn about service tiers for SQL Managed Instance enabled by Azure Arc, see [SQL Managed Instance enabled by Azure Arc service tiers](service-tiers.md). --- For information on Azure SQL Managed Instance service tiers for the vCore model, see [Azure SQL Managed Instance - Compute Hardware in the vCore Service Tier](/azure/azure-sql/managed-instance/service-tiers-managed-instance-vcore)--To learn how to manage the capacity reservation, see [manage reserved capacity](../../cost-management-billing/reservations/manage-reserved-vm-instance.md). --To learn more about Azure Reservations, see the following articles: --- [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)-- [Manage Azure Reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)-- [Understand Azure Reservations discount](../../cost-management-billing/reservations/understand-reservation-charges.md)-- [Understand reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md)-- [Understand reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)-- [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations) |
azure-arc | Resize Persistent Volume Claim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resize-persistent-volume-claim.md | - Title: Resize persistent volume claim (PVC) for Azure Arc-enabled data services volume -description: Explains how to resize a persistent volume claim for a volume used for Azure Arc-enabled data services. ------ Previously updated : 07/19/2023----# Resize persistent volume to increase size --This article explains how to resize an existing persistent volume to increase its size by editing the `PersistentVolumeClaim` (PVC) object. --> [!NOTE] -> Resizing PVCs using this method only works if your `StorageClass` supports `AllowVolumeExpansion=True`. --When you deploy a SQL Managed Instance enabled by Azure Arc, you can configure the size of the persistent volume (PV) for `data`, `logs`, `datalogs`, and `backups`. The deployment creates these volumes based on the values set by parameters `--volume-size-data`, `--volume-size-logs`, `--volume-size-datalogs`, and `--volume-size-backups`. When these volumes become full, you will need to resize the `PersistentVolumes`. SQL Managed Instance enabled by Azure Arc is deployed as part of a `StatefulSet` for both the General Purpose and Business Critical service tiers. Kubernetes supports automatic resizing for persistent volumes but not for volumes attached to a `StatefulSet`. --Following are the steps to resize persistent volumes attached to a `StatefulSet`: --1. Scale the `StatefulSet` replicas to 0 -2. Patch the PVC to the new size -3. Scale the `StatefulSet` replicas back to the original size --During the patching of the `PersistentVolumeClaim`, the status of the persistent volume claim will likely change from: `Attached` to `Resizing` to `FileSystemResizePending` to `Attached`. The exact states will depend on the storage provisioner. --> [!NOTE] -> Ensure the managed instance is in a healthy state before you proceed. Run `kubectl get sqlmi -n <namespace>` and check the status of the managed instance. --## 1. Scale the `StatefulSet` replicas to 0 --There is one `StatefulSet` deployed for each Arc SQL MI. The number of replicas in the `StatefulSet` is equal to the number of replicas in the Arc SQL MI. For the General Purpose service tier, this is 1. For the Business Critical service tier it could be 1, 2, or 3 depending on how many replicas were specified. Run the below command to get the number of `StatefulSet` replicas if you have a Business Critical instance. --```console -kubectl get sts --namespace <namespace> -``` --For example, if the namespace is `arc`, run: --```console -kubectl get sts --namespace arc -``` --Notice the number of ready replicas under the `READY` column for the SQL managed instance(s). --Run the below command to scale the `StatefulSet` replicas to 0: --```console -kubectl scale statefulsets <statefulset> --namespace <namespace> --replicas=<number> -``` --For example: --```console -kubectl scale statefulsets sqlmi1 --namespace arc --replicas=0 -``` --## 2. Patch the PVC to the new size --Run the below command to get the name of the `PersistentVolumeClaim` which needs to be resized: --```console -kubectl get pvc --namespace <namespace> -``` --For example: --```console -kubectl get pvc --namespace arc -``` ---Once the `StatefulSet` replicas have completed scaling down to 0, patch the PVC.
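Before you patch, you can optionally confirm that the replicas have finished scaling down and note the current capacity of the volume. For example (a quick check; substitute your own PVC name and namespace):

```console
# Confirm no managed instance pods remain for the StatefulSet
kubectl get pods --namespace <namespace>

# Show the current capacity and status of the PVC you are about to resize
kubectl get pvc <name of PVC> --namespace <namespace>
```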
Run the following command: --```console -$newsize='{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"<newsize>Gi\"}}}}' -kubectl patch pvc <name of PVC> --namespace <namespace> --type merge --patch $newsize -``` --For example, the following command resizes the data PVC to 50Gi: --```console -$newsize='{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"50Gi\"}}}}' -kubectl patch pvc data-a6gt3be7mrtq60eao0gmgxgd-sqlmi1-0 --namespace arcns --type merge --patch $newsize -``` --## 3. Scale the `StatefulSet` replicas to original size --Once the resize completes, scale the `StatefulSet` replicas back to the original count by running the below command: --```console -kubectl scale statefulsets <statefulset> --namespace <namespace> --replicas=<number> -``` --For example, the following command sets the `StatefulSet` replicas to 3: --```console -kubectl scale statefulsets sqlmi1 --namespace arc --replicas=3 -``` -Ensure the Arc-enabled SQL managed instance is back to ready status by running: --```console -kubectl get sqlmi -A -``` --## See also --[Sizing Guidance](sizing-guidance.md) |
azure-arc | Resource Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resource-sync.md | - Title: Resource sync -description: Synchronize resources for Azure Arc-enabled data services in directly connected mode ------ Previously updated : 07/14/2022----# Resource sync --Resource sync lets you create, update, or delete resources directly on the Kubernetes cluster using Kubernetes APIs in the direct connected mode, and automatically synchronizes those changes to Azure. This article explains resource sync. ---When you deploy Azure Arc-enabled data services in direct connected mode, the deployment creates a *resource sync* rule. This resource sync rule ensures that Arc resources, such as a SQL managed instance, that are created or updated by directly calling the Kubernetes APIs are updated appropriately in the mapped resources in Azure, and that the resource metadata is continually synced back to Azure. This rule is created within the same resource group as the data controller. -- > [!NOTE] - > The resource sync rule is created by default during the Azure Arc Data Controller deployment and is only applicable in direct connected mode. --Without the resource sync rule, the SQL managed instance is created using the following command: --```azurecli -az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location> --storage-class-backups <RWX capable storageclass> -``` --In this scenario, first the Azure ARM APIs are called and the mapped Azure resource is created. Once this mapped resource is created successfully, then the Kubernetes API is called to create the SQL managed instance on the Kubernetes cluster. ---With the resource sync rule, you can use the Kubernetes API to create the Arc-enabled SQL managed instance, as follows: --```azurecli -az sql mi-arc create --name <name> --k8s-namespace <namespace> --use-k8s --storage-class-backups <RWX capable storageclass> -``` --In this scenario, the SQL managed instance is directly created in the Kubernetes cluster. The resource sync rule ensures that the equivalent resource in Azure is created as well. --If the resource sync rule is deleted accidentally, you can add it back to restore the sync functionality by using the below REST API. Refer to the Azure REST API reference for guidance on executing REST APIs. Make sure to use the data controller's Azure resource subscription and resource group. ---```rest -https://management.azure.com/subscriptions/{{subscription}}/resourcegroups/{{resource_group}}/providers/microsoft.extendedlocation/customlocations/{{custom_location_name}}/resourcesyncrules/defaultresourcesyncrule?api-version=2021-08-31-preview -``` ----```json -{ - "location": "{{Azure region}}", - "properties": { - "targetResourceGroup": "/subscriptions/{{subscription}}/resourcegroups/{{resource_group_of_data_controller}}", - "priority": 100, - "selector": { - "matchLabels": { - "management.azure.com/resourceProvider": "Microsoft.AzureArcData" //Mandatory - } - } - } -} -``` --## Limitations --- Resource sync rule does not project the Azure Arc Data controller. The Azure Arc Data controller must be deployed via the ARM API. -- Resource sync only applies to data services, such as Arc-enabled SQL managed instance, after the data controller is deployed.
-- Resource sync rule does not project Azure Arc enabled PostgreSQL-- Resource sync rule does not project Azure Arc Active Directory connector-- Resource sync rule does not project Azure Arc Instance Failover Groups--## Related content --[Create Azure Arc data controller in direct connectivity mode using CLI](create-data-controller-direct-cli.md) - |
azure-arc | Restore Adventureworks Sample Db Into Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-server.md | - Title: Import the AdventureWorks sample database to Azure Arc-enabled PostgreSQL server -description: Restore the AdventureWorks sample database to Azure Arc-enabled PostgreSQL server ----- Previously updated : 06/02/2021----# Import the AdventureWorks sample database to Azure Arc-enabled PostgreSQL server --[AdventureWorks](/sql/samples/adventureworks-install-configure) is a sample database containing an OLTP database used in tutorials and examples. It's provided and maintained by Microsoft as part of the [SQL Server samples GitHub repository](https://github.com/microsoft/sql-server-samples/tree/master/samples/databases). --An open-source project has converted the AdventureWorks database to be compatible with Azure Arc-enabled PostgreSQL server. -- [Original project](https://github.com/lorint/AdventureWorks-for-Postgres)-- [Follow-on project that pre-converts the CSV files to be compatible with PostgreSQL](https://github.com/NorfolkDataSci/adventure-works-postgres)--This document describes a simple process to get the AdventureWorks sample database imported into your Azure Arc-enabled PostgreSQL server. ---## Download the AdventureWorks backup file --Download the AdventureWorks .sql file into your PostgreSQL server container. In this example, we'll use the `kubectl exec` command to remotely execute a command in the PostgreSQL server container to download the file into the container. You could download this file from any location accessible by `curl`. Use this same method if you have other database backup files you want to pull into the PostgreSQL server container. Once it's in the PostgreSQL server container, it's easy to create the database and schema, and populate the data. --Run a command like this to download the file. Replace the values of the pod name and namespace name before you run it: --> [!NOTE] -> Your container will need to have Internet connectivity over 443 to download the file from GitHub. --> [!NOTE] -> Use the pod name of the Coordinator node of the PostgreSQL server. Its name is \<server group name\>c-0 (for example postgres01c-0, where c stands for Coordinator node). If you are not sure of the pod name, run the command `kubectl get pod`. --```console -kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/cluster_api/capi_azure/arm_template/artifacts/AdventureWorks2019.sql" --#Example: -#kubectl exec postgres02-0 -n arc -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/cluster_api/capi_azure/arm_template/artifacts/AdventureWorks2019.sql" -``` --## Import the AdventureWorks database --Similarly, you can run a `kubectl exec` command to use the `psql` CLI tool that is included in the PostgreSQL server containers to create and load the database. --Run a command like this to create the empty database first, substituting the values of the pod name and the namespace name before you run it.
--```console -kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username postgres -c 'CREATE DATABASE "adventureworks";' --#Example -#kubectl exec postgres02-0 -n arc -c postgres -- psql --username postgres -c 'CREATE DATABASE "adventureworks";' -``` --Then, run a command like this to import the database substituting the value of the pod name and the namespace name before you run it. --```console -kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username postgres -d adventureworks -f /tmp/AdventureWorks.sql --#Example -#kubectl exec postgres02-0 -n arc -c postgres -- psql --username postgres -d adventureworks -f /tmp/AdventureWorks.sql -``` ---## Suggested next steps -- Read the concepts and How-to guides of Azure Database for PostgreSQL to distribute your data across multiple PostgreSQL server nodes and to benefit from all the power of Azure Database for PostgreSQL. :- * [Nodes and tables](/azure/postgresql/hyperscale/concepts-nodes) - * [Determine application type](/azure/postgresql/hyperscale/howto-app-type) - * [Choose a distribution column](/azure/postgresql/hyperscale/howto-choose-distribution-column) - * [Table colocation](/azure/postgresql/hyperscale/concepts-colocation) - * [Distribute and modify tables](/azure/postgresql/hyperscale/howto-modify-distributed-tables) - * [Design a multi-tenant database](/azure/postgresql/hyperscale/tutorial-design-database-multi-tenant)* - * [Design a real-time analytics dashboard](/azure/postgresql/hyperscale/tutorial-design-database-realtime)* -- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL server offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL server. - |
azure-arc | Restore Adventureworks Sample Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-adventureworks-sample-db.md | - Title: Restore the AdventureWorks sample database into SQL Managed Instance -description: Restore the AdventureWorks sample database into SQL Managed Instance ------ Previously updated : 07/30/2021----# Restore the AdventureWorks sample database into SQL Managed Instance - Azure Arc --[AdventureWorks](/sql/samples/adventureworks-install-configure) is a sample database containing an OLTP database that is often used in tutorials and examples. It is provided and maintained by Microsoft as part of the [SQL Server samples GitHub repository](https://github.com/microsoft/sql-server-samples/tree/master/samples/databases). --This document describes a simple process to get the AdventureWorks sample database restored into your SQL Managed Instance - Azure Arc. ---## Download the AdventureWorks backup file --Download the AdventureWorks backup (.bak) file into your SQL Managed Instance container. In this example, use the `kubectl exec` command to remotely execute a command inside of the SQL Managed Instance container to download the .bak file into the container. You can download this file from any location accessible by `wget`. Use this same method if you have other database backup files you want to pull into the SQL Managed Instance container. Once it is inside of the SQL Managed Instance container, it is easy to restore using standard `RESTORE DATABASE` T-SQL. --Run a command like this to download the .bak file, substituting the values of the pod name and namespace name before you run it. -> [!NOTE] -> Your container will need to have internet connectivity over 443 to download the file from GitHub. --```console -kubectl exec <SQL pod name> -n <namespace name> -c arc-sqlmi -- wget https://github.com/Microsoft/sql-server-samples/releases/download/adventureworks/AdventureWorks2019.bak -O /var/opt/mssql/data/AdventureWorks2019.bak -``` --Example --```console -kubectl exec sqltest1-0 -n arc -c arc-sqlmi -- wget https://github.com/Microsoft/sql-server-samples/releases/download/adventureworks/AdventureWorks2019.bak -O /var/opt/mssql/data/AdventureWorks2019.bak -``` --## Restore the AdventureWorks database --Similarly, you can run a `kubectl exec` command to use the `sqlcmd` CLI tool that is included in the SQL Managed Instance container to execute the T-SQL command to RESTORE DATABASE. --Run a command like this to restore the database. Replace the value of the pod name, the password, and the namespace name before you run it. --```console -kubectl exec <SQL pod name> -n <namespace name> -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P <password> -Q "RESTORE DATABASE AdventureWorks2019 FROM DISK = N'/var/opt/mssql/data/AdventureWorks2019.bak' WITH MOVE 'AdventureWorks2017_Log' TO '/var/opt/mssql/data/AdventureWorks2019_Log.ldf'" -``` -Example --```console -kubectl exec sqltest1-0 -n arc -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P MyPassword! -Q "RESTORE DATABASE AdventureWorks2019 FROM DISK = N'/var/opt/mssql/data/AdventureWorks2019.bak' WITH MOVE 'AdventureWorks2017_Log' TO '/var/opt/mssql/data/AdventureWorks2019_Log.ldf'" -``` |
azure-arc | Restore Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-postgresql.md | - Title: Restore Azure Arc-enabled PostgreSQL server -description: Explains how to restore Arc-enabled PostgreSQL server. You can restore to a point-in-time or restore a whole server. ------- Previously updated : 03/13/2023----# Restore Azure Arc-enabled PostgreSQL servers --Restoring an Azure Arc-enabled PostgreSQL server creates a new server by copying the configuration of the existing server (for example, resource requests/limits, extensions, and so on). Configurations that could cause conflicts (for example, the primary endpoint port) aren't copied. The storage configuration for the new resource can be defined by passing `--storage-class-*` and `--volume-size-*` parameters to the `restore` command. ---Restore an Azure Arc-enabled PostgreSQL server to a new server with the `restore` command: --```azurecli -az postgres server-arc restore -n <destination-server-name> --source-server <source-server-name> --k8s-namespace <namespace> --use-k8s -``` --## Examples --### Restore using latest backups --Create a new Arc-enabled PostgreSQL server `pg02` by restoring `pg01` using the latest backups: --```azurecli -az postgres server-arc restore -n pg02 --source-server pg01 --k8s-namespace arc --use-k8s -``` --### Restore using latest backup and modify the storage requirement --Create a new Arc-enabled PostgreSQL server `pg02` by restoring `pg01` using the latest backups, defining new storage requirements for pg02: --```azurecli -az postgres server-arc restore -n pg02 --source-server pg01 --k8s-namespace arc --storage-class-data azurefile-csi-premium --volume-size-data 10Gi --storage-class-logs azurefile-csi-premium --volume-size-logs 2Gi --use-k8s --storage-class-backups azurefile-csi-premium --volume-size-backups 15Gi -``` --### Restore to a specific point in time --Create a new Arc-enabled PostgreSQL server `pg02` by restoring `pg01` to its state at `2023-02-01T00:00:00Z`: -```azurecli -az postgres server-arc restore -n pg02 --source-server pg01 --k8s-namespace arc -t 2023-02-01T00:00:00Z --use-k8s -``` --## Help --For details about all the parameters available for restore, review the output of the command: -```azurecli -az postgres server-arc restore --help -``` --## Related content --- [Configure automated backup - Azure Arc-enabled PostgreSQL servers](backup-restore-postgresql.md)-- [Scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server. |
azure-arc | Rotate Customer Managed Keytab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-customer-managed-keytab.md | - Title: Rotate customer-managed keytab -description: How to rotate a customer-managed keytab ------ Previously updated : 05/05/2023---# Rotate SQL Managed Instance enabled by Azure Arc customer-managed keytab --This article describes how to rotate customer-managed keytabs for SQL Managed Instance enabled by Azure Arc. These keytabs are used to enable Active Directory logins for the managed instance. --## Prerequisites: --Before you proceed with this article, you must have an active directory connector in customer-managed keytab mode and a SQL Managed Instance enabled by Azure Arc created. --- [Deploy a customer-managed keytab active directory connector](./deploy-customer-managed-keytab-active-directory-connector.md)-- [Deploy and connect a SQL Managed Instance enabled by Azure Arc](./deploy-active-directory-sql-managed-instance.md)--## How to rotate customer-managed keytabs in a managed instance --The following steps need to be followed to rotate the keytab: --1. Get `kvno` value for the current generation of credentials for the SQL MI Active Directory account. -1. Create a new keytab file with entries for the current generation of credentials. Specifically, the `kvno` value should match from step (1.) above. -1. Update the new keytab file with new entries for the new credentials for the SQL MI Active Directory account. -1. Create a kubernetes secret holding the new keytab file contents in the same namespace as the SQL MI. -1. Edit the SQL MI spec to point the Active Directory keytab secret setting to this new secret. -1. Change the password in the Active Directory domain. --We have provided the following PowerShell and bash scripts that will take care of steps 1-5 for you: -- [`rotate-sqlmi-keytab.sh`](https://github.com/microsoft/azure_arc/blob/main/arc_data_services/deploy/scripts/rotate-sql-keytab.sh) - This bash script uses `ktutil` or `adutil` (if the `--use-adutil` flag is specified) to generate the new keytab for you.-- [`rotate-sqlmi-keytab.ps1`](https://github.com/microsoft/azure_arc/blob/main/arc_data_services/deploy/scripts/rotate-sql-keytab.ps1) - This PowerShell script uses `ktpass.exe` to generate the new keytab for you.--Executing the above script would result in the following keytab file for the user `arcsqlmi@CONTOSO.COM`, secret `sqlmi-keytab-secret-kvno-2-3` and namespace `test`: --```text -KVNO Timestamp Principal -- - - 2 02/16/2023 17:12:05 arcsqlmiuser@CONTOSO.COM (aes256-cts-hmac-sha1-96) - 2 02/16/2023 17:12:05 arcsqlmiuser@CONTOSO.COM (arcfour-hmac) - 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (aes256-cts-hmac-sha1-96) - 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (arcfour-hmac) - 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (aes256-cts-hmac-sha1-96) - 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (arcfour-hmac) - 3 02/16/2023 17:13:41 arcsqlmiuser@CONTOSO.COM (aes256-cts-hmac-sha1-96) - 3 02/16/2023 17:13:41 arcsqlmiuser@CONTOSO.COM (arcfour-hmac) - 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (aes256-cts-hmac-sha1-96) - 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (arcfour-hmac) - 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (aes256-cts-hmac-sha1-96) - 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (arcfour-hmac) -``` --And the 
following updated-secret.yaml spec: -```yaml -apiVersion: v1 -kind: Secret -type: Opaque -metadata: - name: sqlmi-keytab-secret-kvno-2-3 - namespace: test -data: - keytab: - <keytab-contents> -``` --Finally, change the password for `arcsqlmi` user account in the domain controller for the Active Directory domain `contoso.com`: --1. Open **Server Manager** on the domain controller for the Active Directory domain `contoso.com`. You can either search for *Server Manager* or open it through the Start menu. -1. Go to **Tools** > **Active Directory Users and Computers** -- :::image type="content" source="media/rotate-customer-managed-keytab/active-directory-users-and-computers.png" alt-text="Screenshot of Active Directory Users and Computers."::: --1. Select the user that you want to change password for. Right-click to select the user. Select **Reset password**: -- :::image type="content" source="media/rotate-customer-managed-keytab/reset-password.png" alt-text="Screenshot of the control to reset the password for an Active Directory user account."::: --1. Enter new password and select `OK`. --### Troubleshooting errors after rotation --In case there are errors when trying to use Active Directory Authentication after completing keytab rotation, the following files in the `arc-sqlmi` container in the SQL MI pod are a good place to start investigating the root cause: -- `security.log` file located at `/var/opt/mssql/log` - This log file has logs for SQL's interactions with the Active Directory domain.-- `errorlog` file located at `/var/opt/mssql/log` - This log file contains logs from the SQL Server running on the container.-- `mssql.keytab` file located at `/var/run/secrets/managed/keytabs/mssql` - Verify that this keytab file contains the newly updated entries and matches the keytab file created by using the scripts provided above. The keytab file can be read using the `klist` command i.e. `klist -k mssql.keytab -e`--Additionally, after getting the kerberos Ticket-Granting Ticket (TGT) by using `kinit` command, verify the `kvno` of the SQL user matches the highest `kvno` in the `mssql.keytab` file in the `arc-sqlmi` container. For example, for `arcsqlmi@CONTOSO.COM` user: --- Get the kerberos TGT from the Active Directory domain by running `kinit arcsqlmi@CONTOSO.COM`. This will prompt a user input for the password for `arcsqlmi` user.-- Once this succeeds, the `kvno` can be queried by running `kvno arcsqlmi@CONTOSO.COM`.--We can also enable debug logging for the `kinit` command by running the following: `KRB5_TRACE=/dev/stdout kinit -V arcsqlmi@CONTOSO.COM`. This increases the verbosity and outputs the logs to stdout as the command is being executed. --## Related content --- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)-- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
azure-arc | Rotate Sql Managed Instance Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-sql-managed-instance-credentials.md | - Title: Rotate SQL Managed Instance service-managed credentials (preview) -description: Rotate SQL Managed Instance service-managed credentials (preview) ------ Previously updated : 03/06/2023---# Rotate SQL Managed Instance enabled by Azure Arc service-managed credentials (preview) --This article describes how to rotate service-managed credentials for SQL Managed Instance enabled by Azure Arc. Arc data services generate various service-managed credentials like certificates and SQL logins used for Monitoring, Backup/Restore, High Availability etc. These credentials are considered custom resource credentials managed by Azure Arc data services. --Service-managed credential rotation is a user-triggered operation that you initiate during a security issue or when periodic rotation is required for compliance. --## Limitations --Consider the following limitations when you rotate a managed instance service-managed credentials: --- SQL Server failover groups aren't supported.-- Automatically prescheduled rotation isn't supported.-- The service-managed DPAPI symmetric keys, keytab, active directory accounts, and service-managed TDE credentials aren't included in this credential rotation.--## General Purpose tier --During General Purpose SQL Managed Instance service-managed credential rotation, the managed instance Kubernetes pod is terminated and reprovisioned with rotated credentials. This process causes a short amount of downtime as the new managed instance pod is created. To handle the interruption, build resiliency into your application such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on how to architect resiliency and [retry guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet). --## Business Critical tier --During Business Critical SQL Managed Instance service-managed credential rotation with more than one replica: --- The secondary replica pods are terminated and reprovisioned with the rotated service-managed credentials-- After the replicas are reprovisioned, the primary will fail over to a reprovisioned replica-- The previous primary pod is terminated and reprovisioned with the rotated service-managed credentials, and becomes a secondary--There's a brief moment of downtime when the failover occurs. --## Prerequisites: --Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created. --- [a SQL Managed Instance enabled by Azure Arc created](./create-sql-managed-instance.md)--## How to rotate service-managed credentials in a managed instance --Service-managed credentials are associated with a generation within the managed instance. To rotate all service-managed credentials for a managed instance, the generation must be increased by 1. --Run the following commands to get current service-managed credentials generation from spec and generate the new generation of service-managed credentials. This action triggers service-managed credential rotation. 
--```console -rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) + 1)) -``` ---```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }' -``` ---The `managedCredentialsGeneration` identifies the target generation for service-managed credentials. The rest of the features like configuration and the kubernetes topology remain the same. --## How to roll back service-managed credentials in a managed instance --> [!NOTE] -> Rollback is required when credential rotation fails. Rollback to previous credentials generation is supported only once to n-1 where n is the current generation. -> -> If rollback is triggered while credential rotation is in progress and all the replicas have not been reprovisioned then the rollback __may__ take about 30 minutes to complete for the managed instance to be in a **Ready** state. --Run the following two commands to get current service-managed credentials generation from spec and rollback to the previous generation of service-managed credentials: --```console -rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) - 1)) -``` --```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }' -``` --Triggering rollback is the same as triggering a rotation of service-managed credentials except that the target generation is previous generation and doesn't generate a new generation or credentials. --## Related content --- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)-- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
azure-arc | Rotate User Tls Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-user-tls-certificate.md | - Title: Rotate user-provided TLS certificate in indirectly connected SQL Managed Instance enabled by Azure Arc -description: Rotate user-provided TLS certificate in indirectly connected SQL Managed Instance enabled by Azure Arc ------- Previously updated : 12/15/2021---# Rotate certificate SQL Managed Instance enabled by Azure Arc (indirectly connected) --This article describes how to rotate user-provided Transport Layer Security(TLS) certificate for SQL Managed Instance enabled by Azure Arc in indirectly connected mode using Azure CLI or `kubectl` commands. --Examples in this article use OpenSSL. [OpenSSL](https://www.openssl.org/) is an open-source command-line toolkit for general-purpose cryptography and secure communication. --## Prerequisite --* [Install openssl utility ](https://www.openssl.org/source/) -* a SQL Managed Instance enabled by Azure Arc in indirectly connected mode --## Generate certificate request using `openssl` --If the managed instance uses a self-signed certificate, add all needed Subject Alternative Names (SANs). The SAN is an extension to X.509 that allows various values to be associated with a security certificate using a `subjectAltName` field, the SAN field lets you specify additional host names (sites, IP addresses, common names, and etc.) to be protected by a single SSL certificate, such as a multi-domain SAN or extended validation multi-domain SSL certificate. --To generate certificate on your own, you need to create a certificate signing request (CSR). Verify the configuration for the certificate has a common name with required SANs and has a CA issuer. For example: --```console -openssl req -newkey rsa:2048 -keyout your-private-key.key -out your-csr.csr -``` --Run the following command to check the required SANs: --```console -openssl x509 -in /<cert path>/<filename>.pem -text -``` --The following example demonstrates this command: --```console -openssl x509 -in ./mssql-certificate.pem -text -``` --The command returns the following output: --```output -Certificate: - Data: - Version: 3 (0x2) - Serial Number: 7686530591430793847 (0x6aac0ad91167da77) - Signature Algorithm: sha256WithRSAEncryption - Issuer: CN = Cluster Certificate Authority - Validity - Not Before: Mmm dd hh:mm:ss yyyy GMT - Not After: Mmm dd hh:mm:ss yyyy GMT - Subject: CN = mi4-svc - Subject Public Key Info: - Public Key Algorithm: rsaEncryption - RSA Public-Key: (2048 bit) - Modulus: - 00:ad:7e:16:3e:7d:b3:1e: ... - Exponent: 65537 (0x10001) - X509v3 extensions: - X509v3 Extended Key Usage: critical - TLS Web Client Authentication, TLS Web Server Authentication - X509v3 Key Usage: critical - Digital Signature, Key Encipherment - X509v3 Subject Alternative Name: - DNS:mi4-svc, DNS:mi4-svc.test.svc.cluster.local, DNS:mi4-svc.test.svc - Signature Algorithm: sha256WithRSAEncryption - 7a:f8:a1:25:5c:1d:e2:b4: ... BEGIN CERTIFICATE---MIIDNjCCAh6gAwIB ...== END CERTIFICATE---``` --Example output: --```output -X509v3 Subject Alternative Name: -DNS:mi1-svc, DNS:mi1-svc.test.svc.cluster.local, DNS:mi1-svc.test.svc -``` --## Create Kubernetes secret yaml specification for your service certificate --1. Encode a file using the following command with base64 in any Linux distribution, data are encoded and decoded to make the data transmission and storing process easier. 
-- ```console - base64 /<path>/<file> > cert.txt - ``` -- For Windows users, use the [certutil](/windows-server/administration/windows-commands/certutil) utility to perform Base64 encoding and decoding, as in the following command: -- ```console - $certutil -encode -f input.txt b64-encoded.txt - ``` -- Remove the header in the output file manually, or use the following command: -- ```console - $findstr /v CERTIFICATE b64-encoded.txt > updated-b64.txt - ``` --1. Add the base64 encoded cert and private key to the yaml specification file to create a Kubernetes secret: -- ```yaml - apiVersion: v1 - kind: Secret - metadata: - name: <secretName> - type: Opaque - data: - certificate.pem: < base64 encoded certificate > - privatekey.pem: < base64 encoded private key > - ``` --## Rotating certificate via Azure CLI --Use the following command, providing the Kubernetes secret that you created previously, to rotate the certificate: --```azurecli -az sql mi-arc update -n <managed instance name> --k8s-namespace <namespace> --use-k8s --service-cert-secret <your-cert-secret> -``` --For example: --```azurecli -az sql mi-arc update -n mysqlmi --k8s-namespace arc --use-k8s --service-cert-secret mymi-cert-secret -``` --Use the following command to rotate the certificate with the PEM formatted certificate public and private keys. The command generates a default service certificate name. --```azurecli -az sql mi-arc update -n <managed instance name> --k8s-namespace <your-k8s-namespace> --use-k8s --cert-public-key-file <path-to-my-cert-public-key> --cert-private-key-file <path-to-my-cert-private-key> -``` --For example: --```azurecli -az sql mi-arc update -n mysqlmi --k8s-namespace arc --use-k8s --cert-public-key-file ./mi1-1-cert --cert-private-key-file ./mi1-1-pvt -``` --You can also provide a Kubernetes service cert secret name for the `--service-cert-secret` parameter. In this case, it's taken as an updated secret name. The command checks if the secret exists. If not, the command creates a secret with that name and then rotates the secret in the managed instance.
--```azurecli -az sql mi-arc update -n <managed instance name> --k8s-namespace <namespace> --use-k8s --cert-public-key-file <path-to-my-cert-public-key> --cert-private-key-file <path-to-my-cert-private-key> --service-cert-secret <path-to-mymi-cert-secret> -``` --For example: --```azurecli -az sql mi-arc update -n mysqlmi --k8s-namespace arc --use-k8s --cert-public-key-file ./mi1-1-cert --cert-private-key-file ./mi1-1-pvt --service-cert-secret mi1-12-1-cert-secret -``` --## Rotate the certificate with `kubectl` command --Once you've created the Kubernetes secret, you can bind it to the SQL Managed Instance yaml definition in the `security` section, where `serviceCertificateSecret` is located, as follows: --```yaml - security: - adminLoginSecret: <your-admin-login-secret> - serviceCertificateSecret: <your-cert-secret> -``` --The following `.yaml` file is an example that rotates the service certificate in a SQL instance named `mysqlmi` by updating the spec with a Kubernetes secret named `my-service-cert`: --```yaml -apiVersion: sql.arcdata.microsoft.com/v1 -kind: sqlmanagedinstance -metadata: - name: mysqlmi - namespace: my-arc-namespace -spec: - dev: false - licenseType: LicenseIncluded - replicas: 1 - security: - adminLoginSecret: mysqlmi-admin-login-secret - # Update the serviceCertificateSecret with name of the K8s secret - serviceCertificateSecret: my-service-cert - - primary: - type: NodePort - storage: - data: - volumes: - - size: 5Gi - logs: - volumes: - - size: 5Gi - tier: GeneralPurpose -``` --You can use the following kubectl command to apply this setting: --```console - kubectl apply -f <my-sql-mi-yaml-file> -``` --## Related content -- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)-- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
azure-arc | Scale Up Down Postgresql Server Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/scale-up-down-postgresql-server-using-cli.md | - Title: Scale up and down an Azure Database for PostgreSQL server using CLI (az or kubectl) -description: Scale up and down an Azure Database for PostgreSQL server using CLI (az or kubectl) ------ Previously updated : 11/03/2021---# Scale up and down an Azure Database for PostgreSQL server using CLI (az or kubectl) --There are times when you may need to change the characteristics or the definition of a server. For example: --- Scale up or down the number of vCores that the server uses-- Scale up or down the memory that the server uses--This guide explains how to scale vCore and/or memory. --Scaling up or down the vCore or memory settings of your server means you have the possibility to set a minimum and/or a maximum for each of the vCore and memory settings. If you want to configure your server to use a specific number of vCore or a specific amount of memory, you would set the minimum settings equal to the maximum settings. Before increasing the value set for vCores and Memory, you must ensure that -- you have enough resources available in the physical infrastructure that hosts your deployment and -- workloads collocated on the same system are not competing for the same vCores or Memory.---## Show the current definition of the server --To show the current definition of your server and see what are the current vCore and Memory settings, run either of the following command: --### With Azure CLI (az) --```azurecli -az postgres server-arc show -n <server name> --k8s-namespace <namespace> --use-k8s -``` -### CLI with kubectl --```console -kubectl describe postgresql/<server name> -n <namespace name> -``` --It returns the configuration of your server group. If you have created the server with the default settings, you should see the definition as follows: --```json -Spec: - Dev: false - Scheduling: - Default: - Resources: - Requests: - Memory: 256Mi -... -``` --## Interpret the definition of the server --In the definition of a server, the section that carries the settings of minimum or maximum vCore per node and minimum or maximum memory per node is the **"scheduling"** section. In that section, the maximum settings will be persisted in a subsection called **"limits"** and the minimum settings are persisted in the subsection called **"requests"**. --If you set minimum settings that are different from the maximum settings, the configuration guarantees that your server is allocated the requested resources if it needs. It will not exceed the limits you set. --The resources (vCores and memory) that will actually be used by your server are up to the maximum settings and depend on the workloads and the resources available on the cluster. If you do not cap the settings with a max, your server may use up to all the resources that the Kubernetes cluster allocates to the Kubernetes nodes your server is scheduled on. --In a default configuration, only the minimum memory is set to 256Mi as it is the minimum amount of memory that is recommended to run PostgreSQL server. --> [!NOTE] -> Setting a minimum does not mean the server will necessarily use that minimum. It means that if the server needs it, it is guaranteed to be allocated at least this minimum. For example, let's consider we set `--minCpu 2`. It does not mean that the server will be using at least 2 vCores at all times. 
It instead means that the server may start using less than 2 vCores if it does not need that much, and it is guaranteed to be allocated at least 2 vCores if it needs them later on. It implies that the Kubernetes cluster allocates resources to other workloads in such a way that it can allocate 2 vCores to the server if it ever needs them. Also, scaling up and down is not an online operation as it requires the restart of the Kubernetes pods. -->[!NOTE] ->Before you modify the configuration of your system, please make sure to familiarize yourself with the Kubernetes resource model [here](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities). --## Scale up and down the server --Scaling up refers to increasing the values for the vCores and/or memory settings of your server. -Scaling down refers to decreasing the values for the vCores and/or memory settings of your server. --The settings you are about to set have to be considered within the configuration you set for your Kubernetes cluster. Make sure you are not setting values that your Kubernetes cluster won't be able to satisfy. That could lead to errors or unpredictable behavior like unavailability of the database instance. For example, if your server stays in the _updating_ status for a long time after you change the configuration, it may be an indication that you set the below parameters to values that your Kubernetes cluster cannot satisfy. If that is the case, revert the change or read the _troubleshooting_ section. --What settings should you set? -- To set minimum vCore, set `--cores-request`.-- To set maximum vCore, set `--cores-limit`.-- To set minimum memory, set `--memory-request`.-- To set maximum memory, set `--memory-limit`.---> [!CAUTION] -> With Kubernetes, configuring a limit setting without configuring the corresponding request setting forces the request value to be the same value as the limit. This could potentially lead to the unavailability of your server as its pods may not be rescheduled if there isn't a Kubernetes node available with sufficient resources. As such, to avoid this situation, the below examples show how to set both the request and the limit settings. ---**The general syntax is:** --```azurecli -az postgres server-arc edit -n <server name> --memory-limit/memory-request/cores-request/cores-limit <val> --k8s-namespace <namespace> --use-k8s -``` --The value you indicate for the memory setting is a number followed by a unit of volume. For example, to indicate 1Gb, you would indicate 1024Mi or 1Gi. -To indicate a number of cores, you just pass a number without a unit. --### Examples using the Azure CLI --**Configure the server to not exceed 2 cores:** --```azurecli - az postgres server-arc edit -n postgres01 --cores-request 1 --cores-limit 2 --k8s-namespace arc --use-k8s -``` ---> [!NOTE] -> For details about those parameters, run `az postgres server-arc update --help`. --### Example using Kubernetes native tools like `kubectl` --Run the command: -```console -kubectl edit postgresql/<server name> -n <namespace name> -``` --This takes you into the `vi` editor, where you can navigate and change the configuration. Use the following to map the desired setting to the name of the field in the specification: --> [!CAUTION] -> Below is an example provided to illustrate how you could edit the configuration. Before updating the configuration, make sure to set the parameters to values that the Kubernetes cluster can honor.
--For example, if you want to set the following settings for both the coordinator and the worker roles: -- Minimum vCore = `2` -- Maximum vCore = `4`-- Minimum memory = `512Mb`-- Maximum memory = `1Gb` --You would set the definition of your server group so that it matches the below configuration: --```json -... - spec: - dev: false - scheduling: - default: - resources: - requests: - cpu: "2" - memory: 512Mi - limits: - cpu: "4" - memory: 1Gi -... -``` --If you are not familiar with the `vi` editor, see a description of the commands you may need [here](https://www.computerhope.com/unix/uvi.htm): -- Edit mode: `i`-- Move around with arrows-- Stop editing: `esc`-- Exit without saving: `:qa!`-- Exit after saving: `:wq!`---## Reset to default values -To reset the core/memory limits/requests parameters to their default values, edit them and pass an empty string instead of an actual value. For example, if you want to reset the core request and core limit parameters, run the following commands: --```azurecli -az postgres server-arc edit -n postgres01 --cores-request '' --k8s-namespace arc --use-k8s -az postgres server-arc edit -n postgres01 --cores-limit '' --k8s-namespace arc --use-k8s -``` --or -```azurecli -az postgres server-arc edit -n postgres01 --cores-request '' --cores-limit '' --k8s-namespace arc --use-k8s -``` --## Related content --- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities) |
azure-arc | Service Tiers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/service-tiers.md | - Title: SQL Managed Instance enabled by Azure Arc service tiers -description: Explains the service tiers available for SQL Managed Instance enabled by Azure Arc deployments. ------ Previously updated : 07/19/2023----# SQL Managed Instance enabled by Azure Arc service tiers --As part of the family of Azure SQL products, SQL Managed Instance enabled by Azure Arc is available in two [vCore](/azure/azure-sql/database/service-tiers-vcore) service tiers. --- **General Purpose** is a budget-friendly tier designed for most workloads with common performance and availability features.-- **Business Critical** tier is designed for performance-sensitive workloads with higher availability features.--In Azure, storage and compute is provided by Microsoft with guaranteed service level agreements (SLAs) for performance, throughput, availability, and etc. across each of the service tiers. With Azure Arc-enabled data services, customers provide the storage and compute. Hence, there are no guaranteed SLAs provided to customers with Azure Arc-enabled data services. However, customers get the flexibility to bring their own performant hardware irrespective of the service tier. --## Service tier comparison --Following is a description of the various capabilities available from Azure Arc-enabled data services across the two service tiers: ---Area | Business Critical | General Purpose --|--|-SQL Feature set | Same as SQL Server Enterprise Edition | Same as SQL Server Standard Edition -CPU limit/instance | Unlimited | 24 cores -Memory limit/instance | Unlimited | 128 GB -Scale up/down | Available | Available -Monitoring | Built-in available locally, and optionally export to Azure Monitor | Built-in available locally, and optionally export to Azure Log Analytics -Logging | Built-in available locally, and optionally export to Azure Log Analytics | Built-in available locally, and optionally export to Azure Monitor -Point in time Restore | Built-in | Built-in -High availability | Contained Availability groups over kubernetes redeployment | Single instance w/ Kubernetes redeploy + shared storage. -Read scale out | Availability group | None -Disaster Recovery | Available via Failover Groups | Available via Failover Groups -AHB exchange rates for IP component of price | 1:1 Enterprise Edition <br> 4:1 Standard Edition | 1:4 Enterprise Edition​ <br> 1:1 Standard Edition -Dev/Test pricing | No cost | No cost --## How to choose between the service tiers --Since customers bring their own hardware with performance and availability requirements based on their business needs, the primary differentiators between the service tiers are what is provided at the software level. --### Choose General Purpose if --- CPU/memory requirements meet or are within the limits of the General Purpose service tier-- The high availability options provided by Kubernetes, such as pod redeployments, is sufficient for the workload-- Application does not need read scale out-- The application does not require any of the features found in the Business Critical service tier (same as SQL Server Enterprise Edition)--### Choose Business Critical if --- CPU/memory requirements exceed the limits of the General Purpose service tier-- Application requires a higher level of High Availability such as built-in Availability Groups to handle application failovers than what is offered by Kubernetes. 
-- Application can take advantage of read scale out to offload read workloads to the secondary replicas-- Application requires features found only in the Business Critical service tier (same as SQL Server Enterprise Edition) |
azure-arc | Show Configuration Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/show-configuration-postgresql-server.md | - Title: Show the configuration of an Azure Arc-enabled PostgreSQL server- -description: Show the configuration of an Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Show the configuration of an Azure Arc-enabled PostgreSQL server --This article explains how to display the configuration of your server. It does so by anticipating some questions you may be asking to yourself and it answers them. At times, there may be several valid answers. This article pitches the most common or useful ones. It groups those questions by theme: --- From a Kubernetes point of view-- From an Azure Arc-enabled data services point of view---## From a Kubernetes point of view --### What are the Postgres servers deployed and how many pods are they using? --List the Kubernetes resources of type Postgres. Run the command: --```console -kubectl get postgresqls -n <namespace> -``` --The output of this command shows the list of server groups created. For each, it indicates the number of pods. For example: --```output -NAME STATE READY-PODS PRIMARY-ENDPOINT AGE -postgres01 Ready 1/1 20.101.12.221:5432 12d -``` --This example shows that one server is created. It runs on one pod. --### What pods are used by Azure Arc-enabled PostgreSQL servers? --Run: --```console -kubectl get pods -n <namespace> -``` --The command returns the list of pods. You will see the pods used by your servers based on the names you gave to those servers. For example: --```console -NAME READY STATUS RESTARTS AGE -bootstrapper-4jrtl 1/1 Running 0 12d -control-kz8gh 2/2 Running 0 12d -controldb-0 2/2 Running 0 12d -logsdb-0 3/3 Running 0 12d -logsui-qjkgz 3/3 Running 0 12d -metricsdb-0 2/2 Running 0 12d -metricsdc-4jslw 2/2 Running 0 12d -metricsdc-4tl2g 2/2 Running 0 12d -metricsdc-fkxv2 2/2 Running 0 12d -metricsdc-hs4h5 2/2 Running 0 12d -metricsdc-tvz22 2/2 Running 0 12d -metricsui-7pcch 2/2 Running 0 12d -postgres01-0 3/3 Running 0 2d19h -``` --### What is the status of the pods? --Run `kubectl get pods -n <namespace>` and look at the column `STATUS` --### What persistent volume claims (PVCs) are being used? --To understand what PVCs are used, and which are used for data, and logs, run: --```console -kubectl get pvc -n <namespace> -``` --By default, the prefix of the name of a PVC indicates its usage: --- `data-`...: is PVC used for data files-- `logs-`...: is a PVC used for transaction logs/WAL files--For example: --```output -NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -data-few7hh0k4npx9phsiobdc3hq-postgres01-0 Bound local-pv-3c1a8cc5 1938Gi RWO local-storage 6d6h -data-few7hh0k4npx9phsiobdc3hq-postgres01-1 Bound local-pv-8303ab19 1938Gi RWO local-storage 6d6h -data-few7hh0k4npx9phsiobdc3hq-postgres01-2 Bound local-pv-55572fe6 1938Gi RWO local-storage 6d6h -... -logs-few7hh0k4npx9phsiobdc3hq-postgres01-0 Bound local-pv-5e852b76 1938Gi RWO local-storage 6d6h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-1 Bound local-pv-55d309a7 1938Gi RWO local-storage 6d6h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-2 Bound local-pv-5ccd02e6 1938Gi RWO local-storage 6d6h -... -``` --### How much memory and vCores are being used by a server? --Use kubectl to describe Postgres resources. To do so, you need its kind (name of the Kubernetes resource (CRD) for Postgres in Azure Arc) and the name of the server group. 
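If you're not sure what the Postgres CRD is called on your cluster, one quick way to look it up is to list the resources in the Arc data services API group. This lookup step is a sketch rather than part of the documented procedure; it assumes the `arcdata.microsoft.com` API group shown elsewhere in this article.

```console
# List the Arc data services resource kinds, including the Postgres CRD and its short names
kubectl api-resources --api-group=arcdata.microsoft.com
```
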
--The general format of this command is: --```console -kubectl describe <CRD name>/<server name> -n <namespace> -``` --For example: --```console -kubectl describe postgresql/postgres01 -n arc -``` --This command shows the configuration of the server group: --```output -Name: postgres01 -Namespace: arc -Labels: <none> -Annotations: <none> -API Version: arcdata.microsoft.com/v1beta2 -Kind: PostgreSql -Metadata: - Creation Timestamp: 2021-10-13T01:09:25Z - Generation: 29 - Managed Fields: - API Version: arcdata.microsoft.com/v1beta2 - Fields Type: FieldsV1 - fieldsV1: - f:spec: - .: - f:dev: - f:scheduling: - .: - f:default: - .: - f:resources: - .: - f:limits: - .: - f:cpu: - f:memory: - f:requests: - .: - f:cpu: - f:memory: - f: - .: - f:primary: - .: - f:port: - f:type: - f:storage: - .: - f:data: - .: - f:volumes: - f:logs: - .: - f:volumes: - - Operation: Update - Time: 2021-10-22T22:37:51Z - API Version: arcdata.microsoft.com/v1beta2 - Fields Type: FieldsV1 - fieldsV1: - f:IsValid: - f:status: - .: - f:lastUpdateTime: - f:logSearchDashboard: - f:metricsDashboard: - f:observedGeneration: - f:primaryEndpoint: - f:readyPods: - f:state: - - Operation: Update - Time: 2021-10-22T22:37:53Z - Resource Version: 1541521 - UID: 23565e53-2e7a-4cd6-8f80-3a79397e1d7a -Spec: - Dev: false - Scheduling: - Default: - Resources: - Limits: - Cpu: 2 - Memory: 1Gi - Requests: - Cpu: 1 - Memory: 256Mi - - Primary: - Port: 5432 - Type: LoadBalancer - Storage: - Data: - Volumes: - Class Name: managed-premium - Size: 5Gi - Logs: - Volumes: - Class Name: managed-premium - Size: 5Gi -Status: - Last Update Time: 2021-10-22T22:37:53.000000Z - Log Search Dashboard: https://12.235.78.99:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:postgres01')) - Metrics Dashboard: https://12.346.578.99:3000/d/postgres-metrics?var-Namespace=arc&var-Name=postgres01 - Observed Generation: 29 - Primary Endpoint: 20.101.12.221:5432 - Ready Pods: 1/1 - State: Ready -Events: <none> -``` --#### Interpret the configuration information --Let's call out some specific points of interest in the description of the `server` shown above. What does it tell us about this server? --- It was created during on October 13 2021:-- ```output - Metadata: - Creation Timestamp: 2021-10-13T01:09:25Z - ``` --- Resource configuration: in this example, its guaranteed 256Mi of memory. The server can not use more that 1Gi of memory. It is guaranteed one vCore and can't consume more than two vCores.-- ```console - Scheduling: - Default: - Resources: - Limits: - Cpu: 2 - Memory: 1Gi - Requests: - Cpu: 1 - Memory: 256Mi - ``` --- What's the status of the server? Is it available for my applications?-- Yes, the pods is ready -- ```console - Ready Pods: 1/1 - ``` --## From an Azure Arc-enabled data services point of view --Use Az CLI commands. --### What are the Postgres servers deployed? --Run the following command. -- ```azurecli - az postgres server-arc list --k8s-namespace <namespace> --use-k8s - ``` --It lists the servers that are deployed. -- ```output - [ - { - "name": "postgres01", - "state": "Ready" - } - ] - ``` ---### How much memory and vCores are being used? --Run either of the following commands --```azurecli -az postgres server-arc show -n <server name> --k8s-namespace <namespace> --use-k8s -``` --For example: --```azurecli -az postgres server-arc show -n postgres01 --k8s-namespace arc --use-k8s -``` --Returns the information in a format and content similar to the one returned by kubectl. 
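If you only want the resource settings rather than the full description, you can also query the custom resource with a jsonpath expression. This is a sketch that assumes the `spec.scheduling.default.resources` layout shown in the output above.

```console
# Print only the requests and limits section of the server's spec
kubectl get postgresql/postgres01 -n arc -o jsonpath='{.spec.scheduling.default.resources}'
```
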
Use the tool of your choice to interact with the system. --## Related content --- [Read about how to scale up/down (increase or reduce memory and/or vCores) a server group](scale-up-down-postgresql-server-using-cli.md)-- [Read about storage configuration](storage-configuration.md)-- [Read how to monitor a database instance](monitor-grafana-kibana.md)-- [Configure security for your Azure Arc-enabled PostgreSQL server](configure-security-postgresql.md) |
azure-arc | Sizing Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/sizing-guidance.md | - Title: Sizing guidance -description: Plan for the size of a deployment of Azure Arc-enabled data services. ------ Previously updated : 07/30/2021----# Sizing Guidance ---## Overview of sizing guidance --When planning for the deployment of Azure Arc data services, plan the correct amount of: --- Compute-- Memory-- Storage --These resources are required for: --- The data controller-- SQL managed instances-- PostgreSQL servers--Because Azure Arc-enabled data services deploy on Kubernetes, you have the flexibility of adding more capacity to your Kubernetes cluster over time by compute nodes or storage. This guide explains minimum requirements and recommends sizes for some common requirements. --## General sizing requirements --> [!NOTE] -> If you are not familiar with the concepts in this article, you can read more about [Kubernetes resource governance](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) and [Kubernetes size notation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes). --Cores numbers must be an integer value greater than or equal to one. --When you deploy with Azure CLI (az), use a power of two number to set the memory values. Specifically, use the suffixes: --- `Ki`-- `Mi`-- `Gi`--Limit values must always be greater than to the request value, if specified. --Limit values for cores are the billable metric on SQL managed instance and PostgreSQL servers. --## Minimum deployment requirements --A minimum size Azure Arc-enabled data services deployment could be considered to be the Azure Arc data controller plus one SQL managed instance plus one PostgreSQL server. For this configuration, you need at least 16-GB RAM and 4 cores of _available_ capacity on your Kubernetes cluster. You should ensure that you have a minimum Kubernetes node size of 8-GB RAM and 4 cores and a sum total capacity of 16-GB RAM available across all of your Kubernetes nodes. For example, you could have 1 node at 32-GB RAM and 4 cores or you could have 2 nodes with 16-GB RAM and 4 cores each. --See the [storage-configuration](storage-configuration.md) article for details on storage sizing. --## Data controller sizing details --The data controller is a collection of pods that are deployed to your Kubernetes cluster to provide an API, the controller service, the bootstrapper, and the monitoring databases and dashboards. This table describes the default values for memory and CPU requests and limits. --|Pod name|CPU request|Memory request|CPU limit|Memory limit| -|||||| -|**`bootstrapper`**|`100m`|`100Mi`|`200m`|`200Mi`| -|**`control`**|`400m`|`2Gi`|`1800m`|`2Gi`| -|**`controldb`**|`200m`|`4Gi`|`800m`|`6Gi`| -|**`logsdb`**|`200m`|`1600Mi`|`2`|`1600Mi`| -|**`logsui`**|`100m`|`500Mi`|`2`|`2Gi`| -|**`metricsdb`**|`200m`|`800Mi`|`400m`|`2Gi`| -|**`metricsdc`**|`100m`|`200Mi`|`200m`|`300Mi`| -|**`metricsui`**|`20m`|`200Mi`|`500m`|`200Mi`| --`metricsdc` is a `daemonset`, which is created on each of the Kubernetes nodes in your cluster. The numbers in the table are _per node_. If you set `allowNodeMetricsCollection = false` in your deployment profile file before you create the data controller, this `daemonset` isn't created. --You can override the default settings for the `controldb` and control pods in your data controller YAML file. 
Example: --```yaml - resources: - controller: - limits: - cpu: "1000m" - memory: "3Gi" - requests: - cpu: "800m" - memory: "2Gi" - controllerDb: - limits: - cpu: "800m" - memory: "8Gi" - requests: - cpu: "200m" - memory: "4Gi" -``` --See the [storage-configuration](storage-configuration.md) article for details on storage sizing. --## SQL managed instance sizing details --Each SQL managed instance must have the following minimum resource requests and limits: --|Service tier|General Purpose|Business Critical| -|||| -|CPU request|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 3<br/> Maximum: unlimited<br/> Default: 4| -|CPU limit|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 3<br/> Maximum: unlimited<br/> Default: 4| -|Memory request|Minimum: `2Gi`<br/> Maximum: `128Gi`<br/> Default: `4Gi`|Minimum: `2Gi`<br/> Maximum: unlimited<br/> Default: `4Gi`| -|Memory limit|Minimum: `2Gi`<br/> Maximum: `128Gi`<br/> Default: `4Gi`|Minimum: `2Gi`<br/> Maximum: unlimited<br/> Default: `4Gi`| --Each SQL managed instance pod that is created has three containers: --|Container name|CPU Request|Memory Request|CPU Limit|Memory Limit|Notes| -||||||| -|`fluentbit`|`100m`|`100Mi`|Not specified|Not specified|The `fluentbit` container resource requests are _in addition to_ the requests specified for the SQL managed instance.| -|`arc-sqlmi`|User specified or not specified.|User specified or not specified.|User specified or not specified.|User specified or not specified.| -|`collectd`|Not specified|Not specified|Not specified|Not specified| --The default volume size for all persistent volumes is `5Gi`. --## PostgreSQL server sizing details --Each PostgreSQL server node must have the following minimum resource requests: -- Memory: `256Mi`-- Cores: 1--Each PostgreSQL server pod that is created has three containers: --|Container name|CPU Request|Memory Request|CPU Limit|Memory Limit|Notes| -||||||| -|`fluentbit`|`100m`|`100Mi`|Not specified|Not specified|The `fluentbit` container resource requests are _in addition to_ the requests specified for the PostgreSQL server.| -|`postgres`|User specified or not specified.|User specified or `256Mi` (default).|User specified or not specified.|User specified or not specified.|| -|`arc-postgresql-agent`|Not specified|Not specified|Not specified|Not specified|| --## Cumulative sizing --The overall size of an environment required for Azure Arc-enabled data services is primarily a function of the number and size of the database instances. The overall size can be difficult to predict ahead of time knowing that the number of instances may grow and shrink and the amount of resources that are required for each database instance can change. --The baseline size for a given Azure Arc-enabled data services environment is the size of the data controller, which requires 4 cores and 16-GB RAM. From there, add the cumulative total of cores and memory required for the database instances. SQL Managed Instance requires one pod for each instance. PostgreSQL server creates one pod for each server. --In addition to the cores and memory you request for each database instance, you should add `250m` of cores and `250Mi` of RAM for the agent containers. --### Example sizing calculation --Requirements: --- **"SQL1"**: 1 SQL managed instance with 16-GB RAM, 4 cores-- **"SQL2"**: 1 SQL managed instance with 256-GB RAM, 16 cores-- **"Postgres1"**: 1 PostgreSQL server at 12-GB RAM, 4 cores--Sizing calculations: --- The size of "SQL1" is: `1 pod * ([16Gi RAM, 4 cores] + [250Mi RAM, 250m cores])`. 
Including the agent containers, the pod uses `16.25 Gi` RAM and 4.25 cores.-- The size of "SQL2" is: `1 pod * ([256Gi RAM, 16 cores] + [250Mi RAM, 250m cores])`. Including the agent containers, the pod uses `256.25 Gi` RAM and 16.25 cores.-- The total size of SQL 1 and SQL 2 is: - - `(16.25 Gi + 256.25 Gi) = 272.5-GB RAM` - - `(4.25 cores + 16.25 cores) = 20.5 cores` --- The size of "Postgres1" is: `1 pod * ([12Gi RAM, 4 cores] + [250Mi RAM, 250m cores])`. Including the agent containers, the pod uses `12.25 Gi` RAM and `4.25` cores.--- The total capacity required for the database instances:- - For SQL: - - 272.5-GB RAM - - 20.5 cores - - For PostgreSQL server: - - 12.25-GB RAM - - 4.25 cores - - In total: - - 284.75-GB RAM - - 24.75 cores --- The total capacity required for the database instances plus the data controller is: - - For the database instances - - 284.75-GB RAM - - 24.75 cores - - For the data controller - - 16-GB RAM - - 4 cores - - In total: - - 300.75-GB RAM - - 28.75 cores --See the [storage-configuration](storage-configuration.md) article for details on storage sizing. --## Other considerations --Keep in mind that a given database instance size request for cores or RAM cannot exceed the available capacity of the Kubernetes nodes in the cluster. For example, if the largest Kubernetes node you have in your Kubernetes cluster is 256-GB RAM and 24 cores, you can't create a database instance with a request of 512-GB RAM and 48 cores. --Maintain at least 25% of available capacity across the Kubernetes nodes. This capacity allows Kubernetes to: --- Efficiently schedule pods to be created-- Enable elastic scaling-- Support rolling upgrades of the Kubernetes nodes-- Facilitate longer-term growth on demand--In your sizing calculations, add the resource requirements of the Kubernetes system pods and any other workloads that may be sharing capacity with Azure Arc-enabled data services on the same Kubernetes cluster. --To maintain high availability during planned maintenance and to provide continuity during a disaster, plan for at least one of the Kubernetes nodes in your cluster to be unavailable at any given point in time. Kubernetes attempts to reschedule the pods that were running on a given node that was taken down for maintenance or due to a failure. If there is no available capacity on the remaining nodes, those pods won't be rescheduled for creation until there is available capacity again. Be extra careful with large database instances. For example, if there is only one Kubernetes node big enough to meet the resource requirements of a large database instance and that node fails, then Kubernetes won't schedule that database instance pod onto another Kubernetes node. --Keep the [maximum limits for a Kubernetes cluster size](https://kubernetes.io/docs/setup/best-practices/cluster-large/) in mind. --Your Kubernetes administrator may have set up [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) on your namespace/project. Keep these quotas in mind when planning your database instance sizes. |
azure-arc | Storage Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/storage-configuration.md | - Title: Storage configuration -description: Explains Azure Arc-enabled data services storage configuration options ------ Previously updated : 07/30/2021----# Storage Configuration ---## Kubernetes storage concepts --Kubernetes provides an infrastructure abstraction layer over the underlying virtualization tech stack (optional) and hardware. The way that Kubernetes abstracts away storage is through **[Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/)**. When you provision a pod, you can specify a storage class for each volume. At the time the pod is provisioned, the storage class **[provisioner](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)** is called to provision the storage, and then a **[persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)** is created on that provisioned storage and then the pod is mounted to the persistent volume by a **[persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)**. --Kubernetes provides a way for storage infrastructure providers to plug in drivers (also called "Addons") that extend Kubernetes. Storage addons must comply with the **[Container Storage Interface standard](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/)**. There are dozens of addons that can be found in this non-definitive **[list of CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html)**. The specific CSI driver you use depends on factors such as whether you're running in a cloud-hosted, managed Kubernetes service or which OEM provider you use for your hardware. 
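As a quick check of which CSI drivers are registered on a given cluster, you can list the `CSIDriver` objects. This is an optional sketch; the output depends entirely on your distribution and the storage add-ons installed.

```console
# Cluster-scoped list of registered CSI drivers
kubectl get csidrivers
```
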
--To view the storage classes configured in your Kubernetes cluster, run this command: --```console -kubectl get storageclass -``` --Example output from an Azure Kubernetes Service (AKS) cluster: --```console -NAME PROVISIONER AGE -azurefile kubernetes.io/azure-file 15d -azurefile-premium kubernetes.io/azure-file 15d -default (default) kubernetes.io/azure-disk 4d3h -managed-premium kubernetes.io/azure-disk 4d3h -``` --You can get details about a storage class by running this command: --```console -kubectl describe storageclass/<storage class name> -``` --Example: --```console -kubectl describe storageclass/azurefile --Name: azurefile -IsDefaultClass: No -Annotations: kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true"},"name":"azurefile"},"parameters":{"sku -Name":"Standard_LRS"},"provisioner":"kubernetes.io/azure-file"} --Provisioner: kubernetes.io/azure-file -Parameters: skuName=Standard_LRS -AllowVolumeExpansion: True -MountOptions: <none> -ReclaimPolicy: Delete -VolumeBindingMode: Immediate -Events: <none> -``` --You can see the currently provisioned persistent volumes and persistent volume claims by running the following commands: --```console -kubectl get persistentvolumes -n <namespace> --kubectl get persistentvolumeclaims -n <namespace> -``` --Example of showing persistent volumes: --```console --kubectl get persistentvolumes -n arc --NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-07fc7b9f-9a37-4796-9442-4405147120da 15Gi RWO Delete Bound arc/sqldemo11-data-claim default 7d3h -pvc-3e772f20-ed89-4642-b34d-8bb11b088afa 15Gi RWO Delete Bound arc/data-metricsdb-0 default 7d14h -pvc-41b33bbd-debb-4153-9a41-02ce2bf9c665 10Gi RWO Delete Bound arc/sqldemo11-logs-claim default 7d3h -pvc-4ccda3e4-fee3-4a89-b92d-655c04fa62ad 15Gi RWO Delete Bound arc/data-controller default 7d14h -pvc-63e6bb4c-7240-4de5-877e-7e9ea4e49c91 10Gi RWO Delete Bound arc/logs-controller default 7d14h -pvc-8a1467fe-5eeb-4d73-b99a-f5baf41eb493 10Gi RWO Delete Bound arc/logs-metricsdb-0 default 7d14h -pvc-8e2cacbc-e953-4901-8591-e77df9af309c 10Gi RWO Delete Bound arc/sqldemo10-logs-claim default 7d14h -pvc-9fb79ba3-bd3e-42aa-aa09-3090135d4513 15Gi RWO Delete Bound arc/sqldemo10-data-claim default 7d14h -pvc-a39c85d4-5cd9-4249-9915-68a70a9bb5e5 15Gi RWO Delete Bound arc/data-controldb default 7d14h -pvc-c9cbd74a-76ca-4be5-b598-0c7a45749bfb 10Gi RWO Delete Bound arc/logs-controldb default 7d14h -pvc-d576e9d4-0a09-4dd7-b806-be8ed461f8a4 10Gi RWO Delete Bound arc/logs-logsdb-0 default 7d14h -pvc-ecd7d07f-2c2c-421d-98d7-711ec5d4a0cd 15Gi RWO Delete Bound arc/data-logsdb-0 default 7d14h -``` --Example of showing persistent volume claims: --```console --kubectl get persistentvolumeclaims -n arc --NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -data-controldb Bound pvc-a39c85d4-5cd9-4249-9915-68a70a9bb5e5 15Gi RWO default 7d14h -data-controller Bound pvc-4ccda3e4-fee3-4a89-b92d-655c04fa62ad 15Gi RWO default 7d14h -data-logsdb-0 Bound pvc-ecd7d07f-2c2c-421d-98d7-711ec5d4a0cd 15Gi RWO default 7d14h -data-metricsdb-0 Bound pvc-3e772f20-ed89-4642-b34d-8bb11b088afa 15Gi RWO default 7d14h -logs-controldb Bound pvc-c9cbd74a-76ca-4be5-b598-0c7a45749bfb 10Gi RWO default 7d14h -logs-controller Bound pvc-63e6bb4c-7240-4de5-877e-7e9ea4e49c91 10Gi RWO default 7d14h -logs-logsdb-0 Bound pvc-d576e9d4-0a09-4dd7-b806-be8ed461f8a4 10Gi 
RWO default 7d14h -logs-metricsdb-0 Bound pvc-8a1467fe-5eeb-4d73-b99a-f5baf41eb493 10Gi RWO default 7d14h -sqldemo10-data-claim Bound pvc-9fb79ba3-bd3e-42aa-aa09-3090135d4513 15Gi RWO default 7d14h -sqldemo10-logs-claim Bound pvc-8e2cacbc-e953-4901-8591-e77df9af309c 10Gi RWO default 7d14h -sqldemo11-data-claim Bound pvc-07fc7b9f-9a37-4796-9442-4405147120da 15Gi RWO default 7d4h -sqldemo11-logs-claim Bound pvc-41b33bbd-debb-4153-9a41-02ce2bf9c665 10Gi RWO default 7d4h --``` --## Factors to consider when choosing your storage configuration --Selecting the right storage class is important to data resiliency and performance. Choosing the wrong storage class can put your data at risk of total data loss in the event of a hardware failure or could result in less optimal performance. --There are generally two types of storage: --- **Local storage** - storage provisioned on local hard drives on a given node. This kind of storage can be ideal in terms of performance, but requires specifically designing for data redundancy by replicating the data across multiple nodes.-- **Remote, shared storage** - storage provisioned on some remote storage device - for example, a SAN, NAS, or cloud storage service like EBS or Azure Files. This kind of storage generally provides for data redundancy automatically, but is not as fast as local storage can be.--## NFS based storage classes --Depending on the configuration of your NFS server and storage class provisioner, you may need to set the `supplementalGroups` in the pod configurations for database instances, and you may need to change the NFS server configuration to use the group IDs passed in by the client (as opposed to looking group IDs up on the server using the passed-in user ID). Consult your NFS administrator to determine if this is the case. --The `supplementalGroups` property takes an array of values you can set at deployment. Azure Arc data controller applies these to any database instances it creates. --To set this property, run the following command: --```azurecli -az arcdata dc config add --path custom/control.json --json-values 'spec.security.supplementalGroups="1234556"' -``` --### Data controller storage configuration --Some services in Azure Arc for data services depend upon being configured to use remote, shared storage because the services don't have an ability to replicate the data. These services are found in the collection of data controller pods: --|**Service**|**Persistent Volume Claims**| -||| -|**OpenSearch**|`<namespace>/logs-logsdb-0`, `<namespace>/data-logsdb-0`| -|**InfluxDB**|`<namespace>/logs-metricsdb-0`, `<namespace>/data-metricsdb-0`| -|**Controller SQL instance**|`<namespace>/logs-controldb`, `<namespace>/data-controldb`| -|**Controller API service**|`<namespace>/data-controller`| --At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified by either passing the --storage-class | -sc parameter to the `az arcdata dc create` command or by setting the storage classes in the control.json deployment template file that is used. If you're using the Azure portal to create the data controller in the directly connected mode, the deployment template that you choose either has the storage class predefined in the template or you can select a template that does not have a predefined storage class. If your template does not define a storage class, the portal prompts you for one. If you use a custom deployment template, then you can specify the storage class. 
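For illustration, a data controller created with the Azure CLI in the indirect connectivity mode might pass the storage class as follows. This is a sketch only; the data controller name, namespace, profile name, and storage class are placeholders, and the exact set of required parameters depends on your release and environment.

```azurecli
az arcdata dc create --name arc-dc --k8s-namespace arc --connectivity-mode indirect --profile-name azure-arc-aks-premium-storage --storage-class managed-premium --use-k8s
```
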
--The deployment templates that are provided out of the box have a default storage class specified that is appropriate for the target environment, but it can be overridden during deployment. See the detailed steps to [create custom configuration templates](create-custom-configuration-template.md) to change the storage class configuration for the data controller pods at deployment time. --If you set the storage class using the `--storage-class` or `-sc`parameter, that storage class is used for both log and data storage classes. If you set the storage classes in the deployment template file, you can specify different storage classes for logs and data. --Important factors to consider when choosing a storage class for the data controller pods: --- You **must** use a remote, shared storage class in order to ensure data durability and so that if a pod or node dies that when the pod is brought back up it can connect again to the persistent volume.-- The data being written to the controller SQL instance, metrics DB, and logs DB is typically fairly low volume and not sensitive to latency so ultra-fast performance storage is not critical. If you have users that are frequently using the Grafana and Kibana interfaces and you have a large number of database instances, then your users might benefit from faster performing storage.-- The storage capacity required is variable with the number of database instances that you have deployed because logs and metrics are collected for each database instance. Data is retained in the logs and metrics DB for two (2) weeks before it is purged. -- Changing the storage class post deployment is difficult, not documented, and not supported. Be sure to choose the storage class correctly at deployment time.--> [!NOTE] -> If no storage class is specified, the default storage class is used. There can be only one default storage class per Kubernetes cluster. You can [change the default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/). --### Database instance storage configuration --Each database instance has data, logs, and backup persistent volumes. The storage classes for these persistent volumes can be specified at deployment time. If no storage class is specified the default storage class is used. --When you create an instance using either `az sql mi-arc create` or `az postgres server-arc create`, there are four parameters that you can use to set the storage classes: --|Parameter name, short name|Used for| -||| -|`--storage-class-data`, `-d`|Storage class for all data files (.mdf, ndf). If not specified, defaults to storage class for data controller.| -|`--storage-class-logs`, `-g`|Storage class for all log files. If not specified, defaults to storage class for data controller.| -|`--storage-class-data-logs`|Storage class for the database transaction log files. If not specified, defaults to storage class for data controller.| -|`--storage-class-backups`|Storage class for all backup files. If not specified, defaults to storage class for data (`--storage-class-data`).<br/><br/> Use a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). | --> [!WARNING] -> If you don't specify a storage class for backups, the deployment uses the storage class specified for data. If this storage class isn't RWX capable, the point-in-time restore may not work as desired. 
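To illustrate the parameters described above, a SQL managed instance created with explicit storage classes might look like the following. This is a sketch; the instance name, namespace, and storage class names are placeholders, and the backup storage class you choose must be RWX capable (for example, a file-based storage class such as `azurefile` on AKS).

```azurecli
az sql mi-arc create --name sqlmi1 --k8s-namespace arc --use-k8s --storage-class-data managed-premium --storage-class-logs managed-premium --storage-class-backups azurefile
```
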
--The table below lists the paths inside the Azure SQL Managed Instance container that is mapped to the persistent volume for data and logs: --|Parameter name, short name|Path inside `mssql-miaa` container|Description| -|||| -|`--storage-class-data`, `-d`|/var/opt|Contains directories for the mssql installation and other system processes. The mssql directory contains default data (including transaction logs), error log & backup directories| -|`--storage-class-logs`, `-g`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container| --The table below lists the paths inside the PostgreSQL instance container that is mapped to the persistent volume for data and logs: --|Parameter name, short name|Path inside postgres container|Description| -|||| -|`--storage-class-data`, `-d`|/var/opt/postgresql|Contains data and log directories for the postgres installation| -|`--storage-class-logs`, `-g`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container| --Each database instance has a separate persistent volume for data files, logs, and backups. This means that there is separation of the I/O for each of these types of files subject to how the volume provisioner provisions storage. Each database instance has its own persistent volume claims and persistent volumes. --If there are multiple databases on a given database instance, all of the databases use the same persistent volume claim, persistent volume, and storage class. All backups - both differential log backups and full backups use the same persistent volume claim and persistent volume. The persistent volume claims for the database instance pods are shown below: --|**Instance**|**Persistent Volume Claims**| -||| -|**Azure SQL Managed Instance**|`<namespace>/logs-<instance name>-0`, `<namespace>/data-<instance name>-0`| -|**Azure database for PostgreSQL instance**|`<namespace>/logs--<instance name>-0`, `<namespace>/data--<instance name>-0`| -|**Azure PostgreSQL**|`<namespace>/logs-<instance name>-<ordinal>`, `<namespace>/data-<instance name>-0` --Important factors to consider when choosing a storage class for the database instance pods: --- Starting with the February, 2022 release of Azure Arc data services, you need to specify a **ReadWriteMany** (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If no storage class is specified for backups, the default storage class in kubernetes is used and if this is not RWX capable, an Azure SQL managed instance deployment may not succeed.-- Database instances can be deployed in either a single pod pattern or a multiple pod pattern. An example of a single pod pattern is a General Purpose pricing tier Azure SQL managed instance. An example of a multiple pod pattern is a highly available Business Critical pricing tier Azure SQL managed instance. Database instances deployed with the single pod pattern **must** use a remote, shared storage class in order to ensure data durability and so that if a pod or node dies that when the pod is brought back up it can connect again to the persistent volume. In contrast, a highly available Azure SQL managed instance uses Always On Availability Groups to replicate the data from one instance to another either synchronously or asynchronously. 
Especially in the case where the data is replicated synchronously, there is always multiple copies of the data - typically three copies. Because of this, it is possible to use local storage or remote, shared storage classes for data and log files. If utilizing local storage, the data is still preserved even in the case of a failed pod, node, or storage hardware because there are multiple copies of the data. Given this flexibility, you might choose to use local storage for better performance.-- Database performance is largely a function of the I/O throughput of a given storage device. If your database is heavy on reads or heavy on writes, then you should choose a storage class with hardware designed for that type of workload. For example, if your database is mostly used for writes, you might choose local storage with RAID 0. If your database is mostly used for reads of a small amount of "hot data", but there is a large overall storage volume of cold data, then you might choose a SAN device capable of tiered storage. Choosing the right storage class is not any different than choosing the type of storage you would use for any database.-- If you're using a local storage volume provisioner, ensure that the local volumes that are provisioned for data, logs, and backups are each landing on different underlying storage devices to avoid contention on disk I/O. The OS should also be on a volume that is mounted to a separate disk(s). This is essentially the same guidance as would be followed for a database instance on physical hardware.-- Because all databases on a given instance share a persistent volume claim and persistent volume, be sure not to colocate busy database instances on the same database instance. If possible, separate busy databases on to their own database instances to avoid I/O contention. Further, use node label targeting to land database instances onto separate nodes so as to distribute overall I/O traffic across multiple nodes. If you're using virtualization, be sure to consider distributing I/O traffic not just at the node level but also the combined I/O activity happening by all the node VMs on a given physical host.--## Estimating storage requirements -Every pod that contains stateful data uses at least two persistent volumes - one persistent volume for data and another persistent volume for logs. The table below lists the number of persistent volumes required for a single Data Controller, Azure SQL Managed instance, Azure Database for PostgreSQL instance and Azure PostgreSQL HyperScale instance: --|Resource Type|Number of stateful pods|Required number of persistent volumes| -|||| -|Data Controller|4 (`control`, `controldb`, `logsdb`, `metricsdb`)|4 * 2 = 8| -|Azure SQL Managed Instance|1|2| -|Azure PostgreSQL|1|2| --The table below shows the total number of persistent volumes required for a sample deployment: --|Resource Type|Number of instances|Required number of persistent volumes| -|||| -|Data Controller|1|4 * 2 = 8| -|Azure SQL Managed Instance|5|5 * 2 = 10| -|Azure PostgreSQL|5|5 * 2 = 10| -|***Total Number of persistent volumes***||8 + 10 + 10 = 28| --This calculation can be used to plan the storage for your Kubernetes cluster based on the storage provisioner or environment. For example, if local storage provisioner is used for a Kubernetes cluster with five (5) nodes then for the sample deployment above every node requires at least storage for 10 persistent volumes. 
Similarly, when provisioning an Azure Kubernetes Service (AKS) cluster with five (5) nodes picking an appropriate VM size for the node pool such that 10 data disks can be attached is critical. More details on how to size the nodes for storage needs for AKS nodes can be found [here](/azure/aks/operator-best-practices-storage#size-the-nodes-for-storage-needs). --## Choosing the right storage class --### On-premises and edge sites --Microsoft and its OEM, OS, and Kubernetes partners have a validation program for Azure Arc data services. This program provides comparable test results from a certification testing toolkit. The tests evaluate feature compatibility, stress testing results, and performance and scalability. The test results indicate the OS used, Kubernetes distribution used, HW used, the CSI add-on used, and the storage classes used. This helps customers choose the best storage class, OS, Kubernetes distribution, and hardware for their requirements. More information on this program and test results can be found [here](validation-program.md). --#### Public cloud, managed Kubernetes services --For public cloud-based, managed Kubernetes services we can make the following recommendations: --|Public cloud service|Recommendation| -||| -|**Azure Kubernetes Service (AKS)**|Azure Kubernetes Service (AKS) has two types of storage - Azure Files and Azure Managed Disks. Each type of storage has two pricing/performance tiers - standard (HDD) and premium (SSD). Thus, the four storage classes provided in AKS are `azurefile` (Azure Files standard tier), `azurefile-premium` (Azure Files premium tier), `default` (Azure Disks standard tier), and `managed-premium` (Azure Disks premium tier). The default storage class is `default` (Azure Disks standard tier). There are substantial **[pricing differences](https://azure.microsoft.com/pricing/details/storage/)** between the types and tiers that you should consider. For production workloads with high-performance requirements, we recommend using `managed-premium` for all storage classes. For dev/test workloads, proofs of concept, etc. where cost is a consideration, then `azurefile` is the least expensive option. All four of the options can be used for situations requiring remote, shared storage as they are all network-attached storage devices in Azure. Read more about [AKS Storage](/azure/aks/concepts-storage).| -|**AWS Elastic Kubernetes Service (EKS)**| Amazon's Elastic Kubernetes Service has one primary storage class - based on the [EBS CSI storage driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html). This is recommended for production workloads. There is a new storage driver - [EFS CSI storage driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html) - that can be added to an EKS cluster, but it is currently in a beta stage and subject to change. Although AWS says that this storage driver is supported for production, we don't recommend using it because it is still in beta and subject to change. The EBS storage class is the default and is called `gp2`. Read more about [EKS Storage](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html).| -|**Google Kubernetes Engine (GKE)**|Google Kubernetes Engine (GKE) has just one storage class called `standard`. This class is used for [GCE persistent disks](https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk). Being the only one, it is also the default. 
Although there is a [local, static volume provisioner](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#run-local-volume-static-provisioner) for GKE that you can use with direct-attached SSDs, we don't recommend using it as it is not maintained or supported by Google. Read more about [GKE storage](https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes). |
azure-arc | Support Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/support-policy.md | - -description: "Explains the support policy for Azure Arc-enabled data services" Title: "Azure Arc-enabled data services support policy" Previously updated : "08/08/2022"---------# Azure Arc-enabled data services support policy. --This article describes the support policies and troubleshooting boundaries for Azure Arc-enabled data services. This article specifically explains support for Azure Arc data controller and SQL Managed Instance enabled by Azure Arc. --## Support policy -- Azure Arc-enabled data services follow [Microsoft Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy).-- Read the original [Modern Lifecycle Policy announcement](https://support.microsoft.com/help/447912/announcing-microsoft-modern-lifecycle-policy).-- For additional information, see [Modern Policy FAQs](https://support.microsoft.com/help/30882/modern-lifecycle-policy-faq).--## Support versions --Microsoft supports Azure Arc-enabled data services for one year from the date of the release of that specific version. This support applies to the data controller, and any supported data services. For example, this support also applies to SQL Managed Instance enabled by Azure Arc. --For descriptions, and instructions on how to identify a version release date, see [Supported versions](upgrade-overview.md#supported-versions). --Microsoft releases new versions periodically. [Version log](version-log.md) shows the history of releases. --To plan updates, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md). --## Support by components --Microsoft supports Azure Arc-enabled data services, including the data controller, and the data services (like SQL Managed Instance enabled by Azure Arc) that we provide. Arc-enabled data services require a Kubernetes distribution deployed in a customer operated environment. Microsoft does not provide support for the Kubernetes distribution. Support for the environment and hardware that hosts Kubernetes is provided by the operator of the environment and hardware. --Microsoft has worked with industry partners to validate specific distributions for Azure Arc-enabled data services. You can see a list of partners and validated solutions in [Azure Arc-enabled data services Kubernetes validation](validation-program.md). --Microsoft recommends that you run Azure Arc-enabled data services on a validated solution. --## See also --[SQL Server running in Linux containers](/troubleshoot/sql/general/support-policy-sql-server) |
azure-arc | Supported Versions Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/supported-versions-postgresql.md | - Title: Supported versions PostgreSQL with Azure Arc-enabled PostgreSQL server -description: Supported versions PostgreSQL with Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Supported versions of PostgreSQL with Azure Arc-enabled PostgreSQL server -The list of supported versions evolves over time as we progress on ensuring parity with PostgreSQL managed services in Azure PaaS. Today, the major version that is supported is PostgreSQL 14. ---## How to choose between versions? -It's recommended you look at what versions your applications have been designed for and what are the capabilities of each of the versions. -To learn more, read about each version on the official PostgreSQL site: -- [PostgreSQL 14 (default)](https://www.postgresql.org/docs/14/https://docsupdatetracker.net/index.html)--## How to create a particular version in Azure Arc-enabled PostgreSQL server? -Currently only PostgreSQL version 14 is supported. --There's only one PostgreSQL Custom Resource Definition (CRD) in your Kubernetes cluster no matter what versions we support. -For example, run the following command: -```console -kubectl get crds -``` --It returns an output like: -```console -NAME CREATED AT -dags.sql.arcdata.microsoft.com 2021-10-12T23:53:40Z -datacontrollers.arcdata.microsoft.com 2021-10-13T01:00:27Z -exporttasks.tasks.arcdata.microsoft.com 2021-10-12T23:53:39Z -healthstates.azmon.container.insights 2021-10-12T19:04:44Z -monitors.arcdata.microsoft.com 2021-10-13T01:00:26Z -postgresqls.arcdata.microsoft.com 2021-10-12T23:53:37Z -sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com 2021-10-12T23:53:38Z -sqlmanagedinstances.sql.arcdata.microsoft.com 2021-10-12T23:53:37Z -``` --In this example, this output indicates there is one CRD related to PostgreSQL: `postgresqls.arcdata.microsoft.com`, shortname `postgresqls`. The CRD isn't a PostgreSQL server. The presence of a CRD isn't an indication that you have - or not - created a server. The CRD is an indication of what kind of resources can be created in the Kubernetes cluster. --## How can I be notified when other versions are available? -Come back and read this article. It's updated as appropriate. ---## Related content: -- [Read about creating Azure Arc-enabled PostgreSQL server](create-postgresql-server.md)-- [Read about getting a list of the Azure Arc-enabled PostgreSQL servers created in your Arc Data Controller](list-servers-postgresql.md) |
azure-arc | Troubleshoot Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-guide.md | - Title: Troubleshoot Azure Arc-enabled data services -description: Introduction to troubleshooting resources ------ Previously updated : 07/07/2022----# Troubleshooting resources --This article identifies troubleshooting resources for Azure Arc-enabled data services. --## Uploads --### Logs Upload related errors --If you deployed Azure Arc data controller in the `direct` connectivity mode using `kubectl`, and have not created a secret for the Log Analytics workspace credentials, you may see the following error messages in the Data Controller CR (Custom Resource): --``` -status": { - "azure": { - "uploadStatus": { - "logs": { - "lastUploadTime": "YYYY-MM-HHTMM:SS:MS.SSSSSSZ", - "message": "spec.settings.azure.autoUploadLogs is true, but failed to get log-workspace-secret secret." - }, --``` --To resolve the above error, create a secret with the Log Analytics Workspace credentials containing the `WorkspaceID` and the `SharedAccessKey` as follows: --``` -apiVersion: v1 -data: - primaryKey: <base64 encoding of Azure Log Analytics workspace primary key> - workspaceId: <base64 encoding of Azure Log Analytics workspace Id> -kind: Secret -metadata: - name: log-workspace-secret - namespace: <your datacontroller namespace> -type: Opaque --``` --### Metrics upload related errors in direct connected mode --If you configured automatic upload of metrics, in the direct connected mode and the permissions needed for the MSI have not been properly granted (as described in [Upload metrics](upload-metrics.md)), you might see an error in your logs as follows: --```output -'Metric upload response: {"error":{"code":"AuthorizationFailed","message":"Check Access Denied Authorization for AD object XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX over scope /subscriptions/XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX/resourcegroups/my-resource-group/providers/microsoft.azurearcdata/sqlmanagedinstances/arc-dc, User Tenant Id: XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX. Microsoft.Insights/Metrics/write was not allowed, Microsoft.Insights/Telemetry/write was notallowed. Warning: Principal will be blocklisted if the service principal is not granted proper access while it hits the GIG endpoint continuously."}} -``` --To resolve above error, retrieve the MSI for the Azure Arc data controller extension, and grant the required roles as described in [Upload metrics](upload-metrics.md). --### Usage upload related errors in direct connected mode --If you deployed your Azure Arc data controller in the direct connected mode the permissions needed to upload your usage information are automatically granted for the Azure Arc data controller extension MSI. If the automatic upload process runs into permissions related issues you might see an error in your logs as follows: --``` -identified that your data controller stopped uploading usage data to Azure. The error was: --{"lastUploadTime":"2022-05-05T20:10:47.6746860Z","message":"Data controller upload response: {\"error\":{\"code\":\"AuthorizationFailed\",\"message\":\"The client 'XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX' with object id 'XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX' does not have authorization to perform action 'microsoft.azurearcdata/datacontrollers/write' over scope '/subscriptions/XXXXXXXXX-XXXX-XXXX-XXXXX-XXXXXXXXXXX/resourcegroups/my-resource-group/providers/microsoft.azurearcdata/datacontrollers/arc-dc' or the scope is invalid. 
If access was recently granted, please refresh your credentials.\"}}"} -``` --To resolve the permissions issue, retrieve the MSI and grant the required roles as described in [Upload metrics](upload-metrics.md)). --## Upgrades --### Incorrect image tag --If you are using `az` CLI to upgrade and you pass in an incorrect image tag you will see an error within two minutes. --```output -Job Still Active : Failed to await bootstrap job complete after retrying for 2 minute(s). -Failed to await bootstrap job complete after retrying for 2 minute(s). -``` --When you view the pods you will see the bootstrap job status as `ErrImagePull`. --```output -STATUS -ErrImagePull -``` --When you describe the pod you will see --```output -Failed to pull image "<registry>/<repository>/arc-bootstrapper:<incorrect image tag>": [rpc error: code = NotFound desc = failed to pull and unpack image -``` --To resolve, reference the [Version log](version-log.md) for the correct image tag. Re-run the upgrade command with the correct image tag. --### Unable to connect to registry or repository --If you are trying to upgrade and the upgrade job has not produced an error but runs for longer than fifteen minutes, you can view the progress of the upgrade by watching the pods. Run --```console -kubectl get pods -n <namespace> -``` --When you view the pods you will see the bootstrap job status as `ErrImagePull`. --```output -STATUS -ErrImagePull -``` --Describe the bootstrap job pod to view the Events. --```console -kubectl describe pod <pod name> -n <namespace> -``` --When you describe the pod you will see an error that says --```output -failed to resolve reference "<registry>/<repository>/arc-bootstrapper:<image tag>" -``` --This is common if your image was deployed from a private registry, you're using Kubernetes to upgrade via a yaml file, and the yaml file references mcr.microsoft.com instead of the private registry. To resolve, cancel the upgrade job. To find the registry you deployed from, run --```console -kubectl describe pod <controller in format control-XXXXX> -n <namespace> -``` --Look for Containers.controller.Image, where you will see the registry and repository. Capture those values, enter into your yaml file, and re-run the upgrade. --### Not enough resources --If you are trying to upgrade and the upgrade job has not produced an error but runs for longer than fifteen minutes, you can view the progress of the upgrade by watching the pods. Run --```console -kubectl get pods -n <namespace> -``` --Look for a pod that shows some of the containers are ready, but not - for example, this metricsdb-0 pod has only one of two containers: --```output -NAME READY STATUS RESTARTS AGE -bootstrapper-848f8f44b5-7qxbx 1/1 Running 0 16m -control-7qxw8 2/2 Running 0 16m -controldb-0 2/2 Running 0 16m -logsdb-0 3/3 Running 0 18d -logsui-hvsrm 3/3 Running 0 18d -metricsdb-0 1/2 Running 0 18d -``` --Describe the pod to see Events. --```console -kubectl describe pod <pod name> -n <namespace> -``` --If there are no events, get the container names and view the logs for the containers. --```console -kubectl get pods <pod name> -n <namespace> -o jsonpath='{.spec.containers[*].name}*' --kubectl logs <pod name> <container name> -n <namespace> -``` --If you see a message about insufficient CPU or memory, you should add more nodes to your Kubernetes cluster, or add more resources to your existing nodes. 
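Before adding nodes, you can confirm whether the cluster is actually short on allocatable CPU or memory. The following is a quick check; `kubectl top` assumes a metrics server is installed in the cluster.

```console
# Compare what is already allocated on each node against its capacity
kubectl describe nodes | grep -A 8 "Allocated resources"

# Show current CPU and memory usage per node (requires metrics-server)
kubectl top nodes
```
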
--## Resources by type --[Scenario: Troubleshooting PostgreSQL servers](troubleshoot-postgresql-server.md) --[View logs and metrics using Kibana and Grafana](monitor-grafana-kibana.md) --## Related content --[Scenario: View inventory of your instances in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
azure-arc | Troubleshoot Managed Instance Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance-configuration.md | - Title: Troubleshoot configuration - SQL Managed Instance enabled by Azure Arc -description: Describes how to troubleshoot configuration. Includes steps to provide configuration files for SQL Managed Instance enabled by Azure Arc Azure Arc-enabled data services --- Previously updated : 04/10/2023---# User-provided configuration files --Arc data services provide management of configuration settings and files in the system. The system generates configuration files such as `mssql.conf`, `mssql.json`, `krb5.conf` using the user-provided settings in the custom resource spec and some system-determined settings. The scope of what settings are supported and what changes can be made to the configuration files using the custom resource spec evolves over time. You may need to try changes in the configuration files that aren't possible through the settings on the custom resource spec. --To alleviate this problem, you can provide configuration file content for a selected set of files through a Kubernetes `ConfigMap`. The information in the `ConfigMap` effectively overrides the file content that the system would have otherwise generated. This content allows you to try some configuration settings. --For Arc SQL Managed Instance, the supported configuration files that you can override using this method are: --- `mssql.conf`-- `mssql.json`-- `krb5.conf`--## Steps to provide override configuration files --1. Prepare the content of the configuration file -- Prepare the content of the file that you would like to provide an override for. --1. Create a `ConfigMap` -- Create a `ConfigMap` spec to store the content of the configuration file. The key in the `ConfigMap` dictionary should be the name of the file, and the value should be the content. -- You can provide file overrides for multiple configuration files in one `ConfigMap`. -- The `ConfigMap` must be in the same namespace as the SQL Managed Instance. -- The following spec shows an example of how to provide an override for mssql.conf file: -- ```json - apiVersion: v1 - kind: ConfigMap - metadata: - name: sqlmifo-cm - namespace: test - data: - mssql.conf: "[language]\r\nlcid = 1033\r\n\r\n[licensing]\r\npid = GeneralPurpose\r\n\r\n[network]\r\nforceencryption = 0\r\ntlscert = /var/run/secrets/managed/certificates/mssql/mssql-certificate.pem\r\ntlsciphers = ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384\r\ntlskey = /var/run/secrets/managed/certificates/mssql/mssql-privatekey.pem\r\ntlsprotocols = 1.2\r\n\r\n[sqlagent]\r\nenabled = False\r\n\r\n[telemetry]\r\ncustomerfeedback = false\r\n\r\n" - ``` -- Apply the `ConfigMap` in Kubernetes using `kubectl apply -f <filename>`. --1. Provide the name of the ConfigMap in SQL Managed Instance spec -- In SQL Managed Instance spec, provide the name of the ConfigMap in the field `spec.fileOverrideConfigMap`. -- The SQL Managed Instance `apiVersion` must be at least v12 (released in April 2023). -- The following SQL Managed Instance spec shows an example of how to provide the name of the ConfigMap. 
-- ```json - apiVersion: sql.arcdata.microsoft.com/v12 - kind: SqlManagedInstance - metadata: - name: sqlmifo - namespace: test - spec: - fileOverrideConfigMap: sqlmifo-cm - ... - ``` -- Apply the SQL Managed Instance spec in Kubernetes. This action leads to the delivery of the provided configuration files to Arc SQL Managed Instance container. --1. Check that the files are downloaded in the `arc-sqlmi` container. -- The locations of supported files in the container are: -- - `mssql.conf`: `/var/run/config/mssql/mssql.conf` - - `mssql.json`: `/var/run/config/mssql/mssql.json` - - `krb5.conf`: `/etc/krb5.conf` --## Related content --[Get logs to troubleshoot Azure Arc-enabled data services](troubleshooting-get-logs.md) |
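To create the `ConfigMap` from a local file instead of writing the spec by hand, and to confirm the override actually reached the instance, a hedged sketch using the example names above (`sqlmifo` in the `test` namespace, and the usual `<instance name>-0` pod naming) might look like this:

```console
## Create the ConfigMap from a local mssql.conf; the file name becomes the dictionary key
kubectl create configmap sqlmifo-cm --from-file=mssql.conf -n test

## Read the delivered file back from the arc-sqlmi container on the first replica
kubectl exec -ti -n test sqlmifo-0 -c arc-sqlmi -- cat /var/run/config/mssql/mssql.conf
```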
azure-arc | Troubleshoot Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance.md | - Title: Troubleshoot connection to failover group - SQL Managed Instance enabled by Azure Arc -description: Describes how to troubleshoot issues with connections to failover group resources in Azure Arc-enabled data services --- Previously updated : 03/15/2023---# Troubleshoot SQL Managed Instance enabled by Azure Arc deployments --This article identifies potential issues, and describes how to diagnose root causes for these issues for deployments of Azure Arc-enabled data services. --## Connection to SQL Managed Instance enabled by Azure Arc failover group --This section describes how to troubleshoot issues connecting to a failover group. --### Check failover group connections & synchronization state --```console -kubectl -n $nameSpace get fog $fogName -o jsonpath-as-json='{.status}' -``` --**Results**: --On each side, there are two replicas for one failover group. Check the value of `connectedState`, and `synchronizationState` for each replica. --If one of `connectedState` isn't equal to `CONNECTED`, see the instructions under [Check parameters](#check-parameters). --If one of `synchronizationState` isn't equal to `HEALTHY`, focus on the instance which `synchronizationState` isn't equal to `HEALTHY`". Refer to [Can't connect to SQL Managed Instance enabled by Azure Arc](#cant-connect-to-sql-managed-instance-enabled-by-azure-arc). --### Check parameters --On both geo-primary and geo-secondary, check failover spec against `$sqlmiName` instance on other side. --### Command on local --Run the following command against the local instance to get the spec for the local instance. --```console -kubectl -n $nameSpace get fog $fogName -o jsonpath-as-json='{.spec}' -``` --### Command on remote --Run the following command against the remote instance: --```console -kubectl -n $nameSpace get sqlmi $sqlmiName -o jsonpath-as-json='{.status.highAvailability.mirroringCertificate}' -kubectl -n $nameSpace get sqlmi $sqlmiName -o jsonpath-as-json='{.status.endpoints.mirroring}' -``` --**Results**: --Compare the results from the remote instance with the results from the local instance. --* `partnerMirroringURL`, and `partnerMirroringCert` from the local instance has to match remote instance values from: - * `kubectl -n $nameSpace get sqlmi $sqlmiName -o jsonpath-as-json='{.status.endpoints.mirroring}'` - * `kubectl -n $nameSpace get sqlmi $sqlmiName -o jsonpath-as-json='{.status.highAvailability.mirroringCertificate}'` --* `partnerMI` from `kubectl -n $nameSpace get fog $fogName -o jsonpath-as-json='{.spec}'` has to match with `$sqlmiName` from remote instance. --* `sharedName` from `kubectl -n $nameSpace get fog $fogName -o jsonpath-as-json='{.spec}'` is optional. If it isn't presented, it's same as `sourceMI`. The `sharedName` from both site should be same if presented. --* Role from `kubectl -n $nameSpace get fog $fogName -o jsonpath-as-json='{.spec}'` should be different between two sites. One side should be primary, other should be secondary. --If any one of values described doesn't match the comparison, delete failover group on both sites and re-create. --If nothing is wrong, follow the instructions under [Check mirroring endpoints for both sides](#check-mirroring-endpoints-for-both-sides). --### Check mirroring endpoints for both sides --On both geo-primary and geo-secondary, checks external mirroring endpoint is exposed by following commands. 
--```console -kubectl -n test get services $sqlmiName-external-svc -o jsonpath-as-json='{.spec.ports}' -``` --**Results** --* `port-mssql-mirroring` should be presented on the list. The failover group on the other side should use the same value for `partnerMirroringURL`. If the values don't match, correct the mistake and retry from the beginning. --### Verify SQL Server can reach external endpoint of another site --Although you can't ping mirroring endpoint of another site directly, use the following command to reach another side external endpoint of the SQL Server tabular data stream (TDS) port. --```console -kubectl exec -ti -n $nameSpace $sqlmiName-0 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S $remotePrimaryEndpoint -U $remoteUser -P $remotePassword -Q "SELECT @@ServerName" -``` --**Results** --If SQL server can use external endpoint TDS, there is a good chance it can reach external mirroring endpoint because they are defined and activated in the same service, specifically `$sqlmiName-external-svc`. --## Can't connect to SQL Managed Instance enabled by Azure Arc --This section identifies specific steps you can take to troubleshoot connections to SQL Managed Instance enabled by Azure Arc. --> [!NOTE] -> You can't connect to a SQL Managed Instance enabled by Azure Arc if the instance license type is `DisasterRecovery`. --### Check the managed instance status --SQL Managed Instance (SQLMI) status info indicates if the instance is ready or not. --```console -kubectl -n $nameSpace get sqlmi $sqlmiName -o jsonpath-as-json='{.status}' -``` --**Results** --The state should be `Ready`. If the value isn't `Ready`, you need to wait. If state is error, get the message field, collect logs, and contact support. See [Collect the logs](#collect-the-logs). --### Check the routing label for stateful set -The routing label for stateful set is used to route external endpoint to a matched pod. The name of the label is `role.ag.mssql.microsoft.com`. --```console -kubectl -n $nameSpace get pods $sqlmiName-0 -o jsonpath-as-json='{.metadata.labels}' -kubectl -n $nameSpace get pods $sqlmiName-1 -o jsonpath-as-json='{.metadata.labels}' -kubectl -n $nameSpace get pods $sqlmiName-2 -o jsonpath-as-json='{.metadata.labels}' -``` --**Results** --If you didn't find primary, kill the pod that doesn't have any `role.ag.mssql.microsoft.com` label. If this doesn't resolve the issue, collect logs and contact support. See [Collect the logs](#collect-the-logs). --### Get Replica state from local container connection --Use `localhost,1533` to connect sql in each replica of `statefulset`. This connection should always succeed. Use this connection to query the SQL HA replica state. --```console -kubectl exec -ti -n $nameSpace $sqlmiName-0 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost,1533 -U $User -P $Password -Q "SELECT * FROM sys.dm_hadr_availability_replica_states" -kubectl exec -ti -n $nameSpace $sqlmiName-1 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost,1533 -U $User -P $Password -Q "SELECT * FROM sys.dm_hadr_availability_replica_states" -kubectl exec -ti -n $nameSpace $sqlmiName-2 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost,1533 -U $User -P $Password -Q "SELECT * FROM sys.dm_hadr_availability_replica_states" -``` --**Results** --All replicas should be connected & healthy. Here is the detailed description of the query results [sys.dm_hadr_availability_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-hadr-availability-replica-states-transact-sql). 
--If you find it isn't synchronized or not connected unexpectedly, try to kill the pod which has the problem. If problem persists, collect logs and contact support. See [Collect the logs](#collect-the-logs). --> [!NOTE] -> If there are some large database in the instance, the seeding process to secondary could take a while. If this happens, wait for seeding to complete. --## Check SQLMI SQL engine listener --SQL engine listener is the component which routes connections to the failover group. --```console -kubectl exec -ti -n $nameSpace $sqlmiName-0 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U $User -P $Password -Q "SELECT @@ServerName" -kubectl exec -ti -n $nameSpace $sqlmiName-1 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U $User -P $Password -Q "SELECT @@ServerName" -kubectl exec -ti -n $nameSpace $sqlmiName-2 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U $User -P $Password -Q "SELECT @@ServerName" -``` --**Results** --You should get `ServerName` from `Listener` of each replica. If you can't get `ServerName`, kill the pods which have the problem. If the problem persists after recovery, collect logs and contact support. See [Collect the logs](#collect-the-logs). --### Check Kubernetes network connection --Inside Kubernetes cluster, there is kubernetes network on top which allow communication between pods and routing. Check if SQLMI pods can communicate with each other via cluster IP. Run this for all the replicas. ---```console -kubectl exec -ti -n $nameSpace $sqlmiName-0 -c arc-sqlmi -- /opt/mssql-tools/bin/sqlcmd -S $(kubectl -n test get service $sqlmiName-p-svc -o jsonpath={'.spec.clusterIP'}),1533 -U $User -P $Password -Q "SELECT @@ServerName" -``` --**Results** --You should be able to reach any Cluster IP address for the pods of stateful set from another pod. If this isn't the case, refer to [Kubernetes documentation - Cluster networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/) for detailed information or get service provider to resolve the issue. --### Check the Kubernetes load balancer or `nodeport` services --Load balancer or `nodeport` services are the services that expose a service port to the external network. --```console -kubectl -n $nameSpace expose pod $sqlmiName-0 --port=1533 --name=ha-$sqlmiName-0 --type=LoadBalancer -kubectl -n $nameSpace expose pod $sqlmiName-1 --port=1533 --name=ha-$sqlmiName-1 --type=LoadBalancer -kubectl -n $nameSpace expose pod $sqlmiName-2 --port=1533 --name=ha-$sqlmiName-2 --type=LoadBalancer -``` --**Results** --You should be able to connect to exposed external port (which has been confirmed from internal at step 3). If you can't connect to external port, refer to [Kubernetes documentation - Create an external load balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) and get service provider help on the issues. --You can use any client like `SqlCmd`, SQL Server Management Studio (SSMS), or Azure Data Studio (ADS) to test this out. --## Connection between failover groups is lost --If the failover groups between primary and geo-secondary Arc SQL Managed instances is configured to be in `sync` mode and the connection is lost for whatever reason for an extended period of time, then the logs on the primary Arc SQL managed instance cannot be truncated until the transactions are sent to the geo-secondary. This could lead to the logs filling up and potentially running out of space on the primary site. 
To break out of this situation, remove the failover groups and re-configure when the connection between the sites is re-established. --The failover groups can be removed on both primary as well as secondary site as follows: --IF the data controller is deployed in `indirect` mode: -`kubectl delete fog <failovergroup name>` --and if the data controller is deployed in `direct` mode, provide the `sharedname` and the failover group is deleted on both sites: -`az sql instance-failover-group-arc delete --name fogcr --mi <arcsqlmi> --resource-group <resource group>` ---Once the failover group on the primary site is deleted, logs can be truncated to free up space. --## Collect the logs --If the previous steps all succeeded without any problem and you still can't log in, collect the logs and contact support --### Collection controller logs --```console -MyController=$(kubectl -n $nameSpace get pods --selector=app=controller -o jsonpath='{.items[*].metadata.name}') -kubectl -n $nameSpace cp $MyController:/var/log/controller $localFolder/controller -c controller -``` --### Get SQL Server and supervisor logs for each replica --Run the following command for each replica to get SQL Server and supervisor logs --```console -kubectl -n $nameSpace cp $sqlmiName-0:/var/opt/mssql/log $localFolder/$sqlmiName-0/log -c arc-sqlmi -kubectl -n $nameSpace cp $sqlmiName-0:/var/log/arc-ha-supervisor $localFolder/$sqlmiName-0/arc-ha-supervisor -c arc-ha-supervisor -``` --### Get orchestrator logs --```console -kubectl -n $nameSpace cp $sqlmiName-ha-0:/var/log $localFolder/$sqlmiName-ha-0/log -c arc-ha-orchestrator -``` ---## Related content --[Get logs to troubleshoot Azure Arc-enabled data services](troubleshooting-get-logs.md) |
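The commands in this article assume shell variables such as `$nameSpace`, `$sqlmiName`, `$fogName`, `$User`, `$Password`, and `$localFolder`. As a minimal sketch for a bash session, with example values you would replace with your own:

```console
## Example values only - replace with your own namespace, instance, failover group names, and credentials
export nameSpace="arc"
export sqlmiName="sqlmi1"
export fogName="fog1"
export User="<SQL login>"
export Password="<SQL password>"
export localFolder="./logs"
```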
azure-arc | Troubleshoot Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-postgresql-server.md | - Title: Troubleshoot PostgreSQL servers -description: Troubleshoot PostgreSQL servers with a Jupyter Notebook ------ Previously updated : 07/30/2021----# Troubleshooting PostgreSQL servers -This article describes some techniques you may use to troubleshoot your server group. In addition to this article you may want to read how to use [Kibana](monitor-grafana-kibana.md) to search the logs or use [Grafana](monitor-grafana-kibana.md) to visualize metrics about your server group. --## Getting more details about the execution of a CLI command -You may add the parameter **--debug** to any CLI command you execute. Doing so will display to your console additional information about the execution of that command. You should find it useful to get details to help you understand the behavior of that command. -For example you could run -```azurecli -az postgres server-arc create -n postgres01 -w 2 --debug --k8s-namespace <namespace> --use-k8s -``` --or -```azurecli -az postgres server-arc update -n postgres01 --extension --k8s-namespace <namespace> --use-k8s SomeExtensionName --debug -``` --In addition, you may use the parameter --help on any CLI command to display some help, list of parameters for a specific command. For example: -```azurecli -az postgres server-arc create --help -``` ---## Collecting logs of the data controller and your server groups -Read the article about [getting logs for Azure Arc-enabled data services](troubleshooting-get-logs.md) ----## Interactive troubleshooting with Jupyter notebooks in Azure Data Studio --Notebooks can document procedures by including markdown content to describe what to do/how to do it. It can also provide executable code to automate a procedure. This pattern is useful for everything from standard operating procedures to troubleshooting guides. --For example, let's troubleshoot a PostgreSQL server that might have some problems using Azure Data Studio. ----### Install tools --Install Azure Data Studio, `kubectl`, and Azure (`az`) CLI with the `arcdata` extension on the client machine you are using to run the notebook in Azure Data Studio. To do this, please follow the instructions at [Install client tools](install-client-tools.md) --### Update the PATH environment variable --Make sure that these tools can be invoked from anywhere on this client machine. For example, on a Windows client machine, update the PATH system environment variable and add the folder in which you installed kubectl. --### Log into your Kubernetes cluster with kubectl --To do this, you may want to use the example commands provided in [this](https://blog.christianposta.com/kubernetes/logging-into-a-kubernetes-cluster-with-kubectl/) blog post. -You would run commands like: --```console -kubectl config view -kubectl config set-credentials kubeuser/my_kubeuser --username=<your Arc Data Controller Admin user name> --password=<password> -kubectl config set-cluster my_kubeuser --server=https://<IP address>:<port> -kubectl config set-context default/my_kubeuser/ArcDataControllerAdmin --user=ArcDataControllerAdmin/my_kubeuser --namespace=arc --cluster=my_kubeuser -kubectl config use-context default/my_kubeuser/ArcDataControllerAdmin -``` --#### The troubleshooting notebook --Launch Azure Data Studio and open the troubleshooting notebook. 
--Implement the steps described in [033-manage-Postgres-with-AzureDataStudio.md](manage-postgresql-server-with-azure-data-studio.md) to: --1. Connect to your Arc Data Controller -2. Right-click your Postgres instance and choose **[Manage]** -3. Select the **[Diagnose and solve problems] dashboard** -4. Select the **[Troubleshoot] link** ---The **TSG100 - The Azure Arc-enabled PostgreSQL server troubleshooter notebook** opens. --#### Run the scripts -Select the 'Run All' button at the top to execute the entire notebook at once, or step through and execute each code cell one by one. --View the output from the execution of the code cells for any potential issues. --We'll add more details to the notebook over time about how to recognize common problems and how to solve them. --## Next steps -- Read about [getting logs for Azure Arc-enabled data services](troubleshooting-get-logs.md)-- Read about [searching logs with Kibana](monitor-grafana-kibana.md)-- Read about [monitoring with Grafana](monitor-grafana-kibana.md)-- Create your own notebooks |
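Outside of the notebook, it can also help to inspect the PostgreSQL server's custom resource directly with kubectl. A minimal check, assuming a server group named `postgres01` in the `arc` namespace, might look like:

```console
## List PostgreSQL server custom resources and their state
kubectl get postgresqls -n arc

## Show detailed status and recent events for one server group
kubectl describe postgresqls postgres01 -n arc
```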
azure-arc | Troubleshooting Get Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshooting-get-logs.md | - Title: Get logs to troubleshoot Azure Arc-enabled data services -description: Learn how to get log files from a data controller to troubleshoot Azure Arc-enabled data services. ------ Previously updated : 11/03/2021----# Get logs to troubleshoot Azure Arc-enabled data services ---## Prerequisites --Before you proceed, you need: --* Azure CLI (`az`) with the `arcdata` extension. For more information, see [Install client tools for deploying and managing Azure Arc data services](./install-client-tools.md). -* An administrator account to sign in to the Azure Arc-enabled data controller. --## Get log files --You can get service logs across all pods or specific pods for troubleshooting purposes. One way is to use standard Kubernetes tools such as the `kubectl logs` command. In this article, you'll use the Azure (`az`) CLI `arcdata` extension, which makes it easier to get all of the logs at once. --Run the following command to dump the logs: -- ```azurecli - az arcdata dc debug copy-logs --exclude-dumps --skip-compress --use-k8s --k8s-namespace - ``` -- For example: -- ```azurecli - #az arcdata dc debug copy-logs --exclude-dumps --skip-compress --use-k8s --k8s-namespace - ``` --The data controller creates the log files in the current working directory in a subdirectory called `logs`. --## Options --The `az arcdata dc debug copy-logs` command provides the following options to manage the output: --* Output the log files to a different directory by using the `--target-folder` parameter. -* Compress the files by omitting the `--skip-compress` parameter. -* Trigger and include memory dumps by omitting `--exclude-dumps`. We don't recommend this method unless Microsoft Support has requested the memory dumps. Getting a memory dump requires that the data controller setting `allowDumps` is set to `true` when the data controller is created. -* Filter to collect logs for just a specific pod (`--pod`) or container (`--container`) by name. -* Filter to collect logs for a specific custom resource by passing the `--resource-kind` and `--resource-name` parameters. The `resource-kind` parameter value should be one of the custom resource definition names. You can retrieve those names by using the command `kubectl get customresourcedefinition`. --With these parameters, you can replace the `<parameters>` in the following example: --```azurecli -az arcdata dc debug copy-logs --target-folder <desired folder> --exclude-dumps --skip-compress -resource-kind <custom resource definition name> --resource-name <resource name> --use-k8s --k8s-namespace -``` --For example: --```azurecli -az arcdata dc debug copy-logs --target-folder C:\temp\logs --exclude-dumps --skip-compress --resource-kind postgresql-12 --resource-name pg1 --use-k8s --k8s-namespace -``` --The following folder hierarchy is an example. It's organized by pod name, then container, and then by directory hierarchy within the container. 
--```output -<export directory> -Γö£ΓöÇΓöÇΓöÇdebuglogs-arc-20200827-180403 -Γöé Γö£ΓöÇΓöÇΓöÇbootstrapper-vl8j2 -Γöé Γöé ΓööΓöÇΓöÇΓöÇbootstrapper -Γöé Γöé Γö£ΓöÇΓöÇΓöÇapt -Γöé Γöé ΓööΓöÇΓöÇΓöÇfsck -Γöé Γö£ΓöÇΓöÇΓöÇcontrol-j2dm5 -Γöé Γöé Γö£ΓöÇΓöÇΓöÇcontroller -Γöé Γöé Γöé ΓööΓöÇΓöÇΓöÇcontroller -Γöé Γöé Γöé Γö£ΓöÇΓöÇΓöÇ2020-08-27 -Γöé Γöé Γöé ΓööΓöÇΓöÇΓöÇ2020-08-28 -Γöé Γöé ΓööΓöÇΓöÇΓöÇfluentbit -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇfluentbit -Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γö£ΓöÇΓöÇΓöÇcontroldb-0 -Γöé Γöé Γö£ΓöÇΓöÇΓöÇfluentbit -Γöé Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γöé Γö£ΓöÇΓöÇΓöÇfluentbit -Γöé Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γöé ΓööΓöÇΓöÇΓöÇmssql-server -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇmssql -Γöé Γöé Γö£ΓöÇΓöÇΓöÇmssql-server -Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γö£ΓöÇΓöÇΓöÇcontrolwd-ln6j8 -Γöé Γöé ΓööΓöÇΓöÇΓöÇcontrolwatchdog -Γöé Γöé ΓööΓöÇΓöÇΓöÇcontrolwatchdog -Γöé Γö£ΓöÇΓöÇΓöÇlogsdb-0 -Γöé Γöé ΓööΓöÇΓöÇΓöÇopensearch -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇopensearch -Γöé Γöé Γö£ΓöÇΓöÇΓöÇprovisioner -Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γö£ΓöÇΓöÇΓöÇlogsui-7gg2d -Γöé Γöé ΓööΓöÇΓöÇΓöÇkibana -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇapt -Γöé Γöé Γö£ΓöÇΓöÇΓöÇfsck -Γöé Γöé Γö£ΓöÇΓöÇΓöÇkibana -Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γö£ΓöÇΓöÇΓöÇmetricsdb-0 -Γöé Γöé ΓööΓöÇΓöÇΓöÇinfluxdb -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇinfluxdb -Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γö£ΓöÇΓöÇΓöÇmetricsdc-2f62t -Γöé Γöé ΓööΓöÇΓöÇΓöÇtelegraf -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇapt -Γöé Γöé Γö£ΓöÇΓöÇΓöÇfsck -Γöé Γöé Γö£ΓöÇΓöÇΓöÇsupervisor -Γöé Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γöé ΓööΓöÇΓöÇΓöÇtelegraf -Γöé Γö£ΓöÇΓöÇΓöÇmetricsdc-jznd2 -Γöé Γöé ΓööΓöÇΓöÇΓöÇtelegraf -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇapt -Γöé Γöé Γö£ΓöÇΓöÇΓöÇfsck -Γöé Γöé Γö£ΓöÇΓöÇΓöÇsupervisor -Γöé Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γöé ΓööΓöÇΓöÇΓöÇtelegraf -Γöé Γö£ΓöÇΓöÇΓöÇmetricsdc-n5vnx -Γöé Γöé ΓööΓöÇΓöÇΓöÇtelegraf -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇapt -Γöé Γöé Γö£ΓöÇΓöÇΓöÇfsck -Γöé Γöé Γö£ΓöÇΓöÇΓöÇsupervisor -Γöé Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé Γöé ΓööΓöÇΓöÇΓöÇtelegraf -Γöé Γö£ΓöÇΓöÇΓöÇmetricsui-h748h -Γöé Γöé ΓööΓöÇΓöÇΓöÇgrafana -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇgrafana -Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé ΓööΓöÇΓöÇΓöÇmgmtproxy-r5zxs -Γöé Γö£ΓöÇΓöÇΓöÇfluentbit -Γöé Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γöé Γö£ΓöÇΓöÇΓöÇfluentbit -Γöé Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé Γöé ΓööΓöÇΓöÇΓöÇlog -Γöé ΓööΓöÇΓöÇΓöÇservice-proxy -Γöé Γö£ΓöÇΓöÇΓöÇagent -Γöé Γö£ΓöÇΓöÇΓöÇnginx -Γöé ΓööΓöÇΓöÇΓöÇsupervisor -Γöé ΓööΓöÇΓöÇΓöÇlog -ΓööΓöÇΓöÇΓöÇdebuglogs-kube-system-20200827-180431 - Γö£ΓöÇΓöÇΓöÇcoredns-8bbb65c89-kklt7 - Γöé ΓööΓöÇΓöÇΓöÇcoredns - Γö£ΓöÇΓöÇΓöÇcoredns-8bbb65c89-z2vvr - Γöé ΓööΓöÇΓöÇΓöÇcoredns - Γö£ΓöÇΓöÇΓöÇcoredns-autoscaler-5585bf8c9f-g52nt - Γöé ΓööΓöÇΓöÇΓöÇautoscaler - Γö£ΓöÇΓöÇΓöÇkube-proxy-5c9s2 - Γöé ΓööΓöÇΓöÇΓöÇkube-proxy - Γö£ΓöÇΓöÇΓöÇkube-proxy-h6x56 - Γöé ΓööΓöÇΓöÇΓöÇkube-proxy - Γö£ΓöÇΓöÇΓöÇkube-proxy-nd2b7 - Γöé ΓööΓöÇΓöÇΓöÇkube-proxy - Γö£ΓöÇΓöÇΓöÇmetrics-server-5f54b8994-vpm5r - Γöé ΓööΓöÇΓöÇΓöÇmetrics-server - ΓööΓöÇΓöÇΓöÇtunnelfront-db87f4cd8-5xwxv - Γö£ΓöÇΓöÇΓöÇtunnel-front - Γöé Γö£ΓöÇΓöÇΓöÇapt - Γöé ΓööΓöÇΓöÇΓöÇjournal - ΓööΓöÇΓöÇΓöÇtunnel-probe - Γö£ΓöÇΓöÇΓöÇapt - Γö£ΓöÇΓöÇΓöÇjournal - ΓööΓöÇΓöÇΓöÇopenvpn -``` |
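As a hedged end-to-end example, collecting logs for a single pod and container only (both names are placeholders you would look up with `kubectl get pods`) might look like:

```azurecli
az arcdata dc debug copy-logs --k8s-namespace <namespace> --use-k8s --pod <pod name> --container <container name> --target-folder ./arc-logs --exclude-dumps --skip-compress
```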
azure-arc | Uninstall Azure Arc Data Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/uninstall-azure-arc-data-controller.md | - Title: Uninstall Azure Arc-enabled data services -description: Uninstall Azure Arc-enabled data services ------- Previously updated : 07/28/2022----# Uninstall Azure Arc-enabled data services --This article describes how to delete Azure Arc-enabled data service resources from Azure. --> [!WARNING] -> When you delete resources as described in this article, these actions are irreversible. --Deploying Azure Arc-enabled data services involves deploying an Azure Arc data controller and instances of data services SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgresQL server. Deployment creates several artifacts, such as: -- Custom Resource Definitions (CRDs)-- Cluster roles-- Cluster role bindings-- API services -- Namespace, if it didn't exist before --In directly connected mode, there are additional artifacts such as: -- Cluster extensions-- Custom locations--## Before --Before you delete a resource such as SQL Managed Instance enabled by Azure Arc or data controller, ensure you complete the following actions first: --1. For an indirectly connected data controller, export and upload the usage information to Azure for accurate billing calculation by following the instructions described in [Upload billing data to Azure - Indirectly connected mode](view-billing-data-in-azure.md#upload-billing-data-to-azureindirectly-connected-mode). --2. Ensure all the data services that have been create on the data controller are uninstalled as described in: -- - [Delete SQL Managed Instance enabled by Azure Arc](delete-managed-instance.md) - - [Delete an Azure Arc-enabled PostgreSQL server](delete-postgresql-hyperscale-server-group.md). ---After deleting any existing instances of SQL Managed Instance enabled by Azure Arc and/or Azure Arc-enabled PostgreSQL server, delete the data controller using one of the appropriate method for connectivity mode. --> [!Note] -> If you deployed the data controller in directly connected mode then follow the steps to: -> - [Delete data controller in directly connected mode using Azure portal](#delete-data-controller-in-directly-connected-mode-using-azure-portal) -> or -> - [Delete data controller in directly connected mode using Azure CLI](#delete-data-controller-in-directly-connected-mode-using-azure-cli) and then Delete the data controller either from Azure portal or CLI and then (2) Delete Kubernetes cluster artifacts. -> ->If you deployed the data controller in indirectly connected mode then follow the steps to [Delete data controller in indirectly connected mode](#delete-data-controller-in-indirectly-connected-mode). --## Delete data controller in directly connected mode using Azure portal --From Azure portal: -1. Browse to the resource group and delete the data controller. -2. Select the Azure Arc-enabled Kubernetes cluster, go to the Overview page: - - Select **Extensions** under Settings - - In the Extensions page, select the Azure Arc data services extension (of type `microsoft.arcdataservices`) and select on **Uninstall** -3. Optionally, delete the custom location that the data controller is deployed to. -4. Optionally, you can also delete the namespace on your Kubernetes cluster if there are no other resources created in the namespace. --See [Manage Azure resources by using the Azure portal](../../azure-resource-manager/management/manage-resources-portal.md). 
---## Delete data controller in directly connected mode using Azure CLI --To delete the data controller in directly connected mode with the Azure CLI, there are three steps: --1. [Delete the data controller](#delete-the-data-controller) -1. [Delete the data controller extension](#delete-the-data-controller-extension) -1. [Delete the custom location](#delete-the-custom-location) --### Delete the data controller -After connecting to your Kubernetes cluster, run the following command to delete the data controller: --```azurecli -az arcdata dc delete --name <name of datacontroller> --resource-group <name of resource-group> --## Example -az arcdata dc delete --name arcdc --resource-group myrg -``` --### Delete the data controller extension --After you have deleted the data controller, delete the data controller extension as described below. To get the name of the Arc data controller extension, you can either browse to the Overview page of your connected cluster in Azure portal and look under the Extensions tab or use the below command to get a list of all extensions on the cluster: --```azurecli -az k8s-extension list --resource-group <name of resource-group> --cluster-name <name of connected cluster> --cluster-type connectedClusters --## Example -az k8s-extension list --resource-group myrg --cluster-name mycluster --cluster-type connectedClusters -``` -Once you have the name of the Arc data controller extension, delete it by running: --```azurecli -az k8s-extension delete --resource-group <name of resource-group> --cluster-name <name of connected cluster> --cluster-type connectedClusters --name <name of your Arc data controller extension> --## Example -az k8s-extension delete --resource-group myrg --cluster-name mycluster --cluster-type connectedClusters --name myadsextension -``` --Wait for a few minutes for above actions to complete. Ensure the data controller is deleted by running the below command to verify the status: --```console -kubectl get datacontrollers -A -``` --### Delete the custom location --If there are no other extensions associated with this custom location, proceed to delete the custom location as follows: --```azurecli -az customlocation delete --name <Name of customlocation> --resource-group <Name of resource group> --## Example -az customlocation delete --name myCL --resource-group myrg -``` --## Delete data controller in indirectly connected mode --By definition, with an indirectly connected data controller deployment, Azure portal is unaware of your Kubernetes cluster. Hence, in order to delete the data controller, you need to delete it on the Kubernetes cluster as well as Azure portal in two steps. --1. [Delete data controller in indirectly connected mode from cluster](#delete-data-controller-in-indirectly-connected-mode-from-cluster) -1. [Delete data controller in indirectly connected mode from Azure portal](#delete-data-controller-in-indirectly-connected-mode-from-azure-portal) --### Delete data controller in indirectly connected mode from cluster --Delete the data controller form the Kubernetes cluster by running the following command: --```azurecli -az arcdata dc delete --name <name of datacontroller> --k8s-namespace <namespace of data controller> --use-k8s --## Example -az arcdata dc delete --name arcdc --k8s-namespace arc --use-k8s -``` --### Delete data controller in indirectly connected mode from Azure portal --From the Azure portal, browse to the resource group containing the data controller, and delete. 
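If you prefer not to use the portal for this last step, the data controller's Azure resource can usually be removed with a generic `az resource delete` call. This is only a sketch: the resource type string is an assumption based on the `microsoft.azurearcdata/datacontrollers` provider path, and the documented path remains deleting the resource from the portal.

```azurecli
## Sketch only: resource type assumed as Microsoft.AzureArcData/dataControllers
az resource delete --resource-group <name of resource group> --name <name of datacontroller> --resource-type "Microsoft.AzureArcData/dataControllers"
```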
--## Delete Kubernetes cluster artifacts --After deleting the data controller as described above, follow the below steps to completely remove all artifacts related to Azure Arc-enabled data services. Removing all artifacts could be needed in situations where you have a partial or failed deployment, or simply want to reinstall the Azure Arc-enabled data services. --```console -## Substitute your namespace into the variable -export mynamespace="arc" ---## Delete Custom Resource Definitions -kubectl delete crd datacontrollers.arcdata.microsoft.com -kubectl delete crd postgresqls.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com -kubectl delete crd dags.sql.arcdata.microsoft.com -kubectl delete crd exporttasks.tasks.arcdata.microsoft.com -kubectl delete crd monitors.arcdata.microsoft.com -kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com -kubectl delete crd failovergroups.sql.arcdata.microsoft.com -kubectl delete crd kafkas.arcdata.microsoft.com -kubectl delete crd otelcollectors.arcdata.microsoft.com --## Delete Cluster roles and Cluster role bindings -kubectl delete clusterrole arcdataservices-extension -kubectl delete clusterrole $mynamespace:cr-arc-metricsdc-reader -kubectl delete clusterrole $mynamespace:cr-arc-dc-watch -kubectl delete clusterrole cr-arc-webhook-job -kubectl delete clusterrole $mynamespace:cr-upgrade-worker --kubectl delete clusterrolebinding $mynamespace:crb-arc-metricsdc-reader -kubectl delete clusterrolebinding $mynamespace:crb-arc-dc-watch -kubectl delete clusterrolebinding crb-arc-webhook-job -kubectl delete clusterrolebinding $mynamespace:crb-upgrade-worker --## API services Up to May 2021 release -kubectl delete apiservice v1alpha1.arcdata.microsoft.com -kubectl delete apiservice v1alpha1.sql.arcdata.microsoft.com --## June 2021 release -kubectl delete apiservice v1beta1.arcdata.microsoft.com -kubectl delete apiservice v1beta1.sql.arcdata.microsoft.com --## GA/July 2021 release -kubectl delete apiservice v1.arcdata.microsoft.com -kubectl delete apiservice v1.sql.arcdata.microsoft.com --## Delete mutatingwebhookconfiguration -kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-$mynamespace -``` --Optionally, also delete the namespace as follows: -``` -kubectl delete namespace <name of namespace> --## Example: -kubectl delete namespace arc -``` --## Verify all objects are deleted --1. Run `kubectl get crd` and ensure there are no results containing `*.arcdata.microsoft.com`. -2. Run `kubectl get clusterrole` and ensure there are no cluster roles in the format `<namespace>:cr-*`. -3. Run `kubectl get clusterrolebindings` and ensure there are no cluster role bindings in the format `<namespace>:crb-*`. |
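As a quick, hedged follow-up to the verification steps above, you can filter each listing with grep; empty output means the corresponding artifacts are gone:

```console
kubectl get crd | grep arcdata.microsoft.com
kubectl get clusterrole | grep ":cr-"
kubectl get clusterrolebindings | grep ":crb-"
```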
azure-arc | Update Service Principal Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/update-service-principal-credentials.md | - Title: Update service principal credentials -description: Update credential for a service principal ------ Previously updated : 04/16/2024----# Update service principal credentials --This article explains how to update the secrets in the data controller. --For example, if you: --- Deployed the data controller using a specific set of values for service principal tenant ID, client ID, and client secret-- Change one or more of these values--You need to update the secrets in the data controller. --## Background --The service principal was created at [Create service principal](upload-metrics-and-logs-to-azure-monitor.md#create-service-principal). --## Steps --1. Access the service principal secret in the default editor. -- ```console - kubectl edit secret/upload-service-principal-secret -n <name of namespace> - ``` -- For example, to edit the service principal secret to a data controller in the `arc` namespace, run the following command: -- ```console - kubectl edit secret/upload-service-principal-secret -n arc - ``` -- The `kubectl edit` command opens the credentials .yml file in the default editor. ---1. Edit the service principal secret. -- In the default editor, replace the values in the data section with the updated credential information. -- For instance: -- ```console - # Please edit the object below. Lines beginning with a '#' will be ignored, - # and an empty file will abort the edit. If an error occurs while saving this file will be - # reopened with the relevant failures. - # - apiVersion: v1 - data: - authority: <authority id> - clientId: <client id> - clientSecret: <client secret>== - tenantId: <tenant id> - kind: Secret - metadata: - creationTimestamp: "2020-12-02T05:02:04Z" - name: upload-service-principal-secret - namespace: arc - resourceVersion: "7235659" - selfLink: /api/v1/namespaces/arc/secrets/upload-service-principal-secret - uid: <globally unique identifier> - type: Opaque - ``` -- Edit the values for `clientID`, `clientSecret` and/or `tenantID` as appropriate. --> [!NOTE] ->The values need to be base64 encoded. -Do not edit any other properties. --If an incorrect value is provided for `clientId`, `clientSecret`, or `tenantID` the command returns an error message as follows in the `control-xxxx` pod/controller container logs: --```output -YYYY-MM-DD HH:MM:SS.mmmm | ERROR | [AzureUpload] Upload task exception: A configuration issue is preventing authentication - check the error message from the server for details.You can modify the configuration in the application registration portal. See https://aka.ms/msal-net-invalid-client for details. Original exception: AADSTS7000215: Invalid client secret is provided. -``` --## Related content --- [Create service principal](upload-metrics-and-logs-to-azure-monitor.md#create-service-principal) |
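Because the values in the secret's `data` section must be base64 encoded, it can help to prepare the encoded strings before opening the editor. A minimal sketch on a Linux or macOS shell (the `-n` flag keeps a trailing newline out of the encoded value):

```console
## Encode the new values before pasting them into the secret
echo -n '<new client id>' | base64
echo -n '<new client secret>' | base64
echo -n '<new tenant id>' | base64
```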
azure-arc | Upgrade Active Directory Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-active-directory-connector.md | - Title: Upgrade Active Directory connector for Azure SQL Managed Instance direct or indirect mode connected to Azure Arc -description: The article describes how to upgrade an active directory connector for direct or indirect mode connected to SQL Managed Instance enabled by Azure Arc ------ Previously updated : 10/11/2022----# Upgrade Active Directory connector --This article describes how to upgrade the Active Directory connector. --## Prerequisites --Before you can proceed with the tasks in this article, you need: --- To connect and authenticate to a Kubernetes cluster-- An existing Kubernetes context selected-- Azure Arc data controller deployed, either in `direct` or `indirect` mode-- Active Directory connector deployed--### Install tools --To upgrade the Active Directory connector (adc), you need to have the Kubernetes tools such as kubectl installed. --The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or helm if you're familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/) ---## Limitations --Auto upgrade of Active Directory connector is applicable from imageTag `v1.12.0_2022-10-11` and above and the Arc data controller must be at least `v1.11.0_2022-09-13` version. --The active directory connector (adc) must be at the same version as the data controller before a data controller is upgraded. --There is no batch upgrade process available at this time. --## Upgrade Active Directory connector for previous versions --For imageTag versions `v1.11.0_2022-09-13` or lower, the Active Directory connector must be upgraded manually as below: --Use a kubectl command to view the existing spec in yaml. --```console -kubectl get adc <adc-name> --namespace <namespace> --output yaml -``` --Run kubectl patch to update the desired version. --```console -kubectl patch adc <adc-name> --namespace <namespace> --type merge --patch '{"spec": {"update": {"desiredVersion": "v1.11.0_2022-09-13"}}}' -``` --## Monitor --You can monitor the progress of the upgrade with kubectl as follows: --```console -kubectl describe adc <adc-name> --namespace <namespace> -``` --### Output --The output for the command will show the resource information. Upgrade information will be in Status. --During the upgrade, ```State``` will show ```Updating``` and ```Running Version``` will be the current version: --```output -Status: - Last Update Time: 2022-09-20T16:01:48.449512Z - Observed Generation: 1 - Running Version: v1.10.0_2022-08-09 - State: Updating -``` --When the upgrade is complete, ```State``` will show ```Ready``` and ```Running Version``` will be the new version: --```output -Status: - Last Update Time: 2022-09-20T16:01:54.279612Z - Observed Generation: 2 - Running Version: v1.11.0_2022-09-13 - State: Ready -``` - |
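Because the connector must be at the same version as the data controller before the data controller is upgraded, it can help to compare both versions first. The data controller command below is taken from the upgrade articles; the jsonpath for the connector's running version is an assumption based on the `Running Version` field shown in the describe output above:

```console
## Data controller image tag
kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag

## Active Directory connector running version (field name assumed from the status shown above)
kubectl get adc <adc-name> -n <namespace> -o jsonpath='{.status.runningVersion}'
```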
azure-arc | Upgrade Data Controller Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md | - Title: Upgrade directly connected Azure Arc data controller using the CLI -description: Article describes how to upgrade a directly connected Azure Arc data controller using the CLI ------- Previously updated : 07/07/2022----# Upgrade a directly connected Azure Arc data controller using the CLI --This article describes how to upgrade a directly connected Azure Arc-enabled data controller using the Azure CLI (`az`). --During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller won't cause downtime for the data services (SQL Managed Instance or PostgreSQL server). --## Prerequisites --You'll need a directly connected data controller with the imageTag v1.0.0_2021-07-30 or later. --To check the version, run: --```console -kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag -``` --## Install tools --Before you can proceed with the tasks in this article, you need to install: --- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)---The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md). --## View available images and chose a version --Pull the list of available images for the data controller with the following command: -- ```azurecli - az arcdata dc list-upgrades --k8s-namespace <namespace> - ``` --The command above returns output like the following example: --```output -Found 2 valid versions. The current datacontroller version is v1.0.0_2021-07-30. -v1.1.0_2021-11-02 -v1.0.0_2021-07-30 -``` --## Upgrade data controller --This section shows how to upgrade a directly connected data controller. --> [!NOTE] -> Some of the data services tiers and modes are generally available and some are in preview. -> If you install GA and preview services on the same data controller, you can't upgrade in place. -> To upgrade, delete all non-GA database instances. You can find the list of generally available -> and preview services in the [Release Notes](./release-notes.md). --For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md). --### Authenticate --You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller. --```kubectl -kubectl config use-context <Kubernetes cluster name> -``` --### Upgrade data controller --You can perform a dry run first. The dry run validates the registry exists, the version schema, and the private repository authorization token (if used). To perform a dry run, use the `--dry-run` parameter in the `az arcdata dc upgrade` command. For example: --```azurecli -az arcdata dc upgrade --resource-group <resource group> --name <data controller name> --desired-version <version> --dry-run [--no-wait] -``` --The output for the preceding command is: --```output -Preparing to upgrade dc arcdc in namespace arc to version <version-tag>. 
-****Dry Run**** -Arcdata Control Plane would be upgraded to: <version-tag> -``` --After the Arc data controller extension has been upgraded, run the `az arcdata dc upgrade` command, specifying the image tag with `--desired-version`. --```azurecli -az arcdata dc upgrade --resource-group <resource group> --name <data controller name> --desired-version <version> [--no-wait] -``` --Example: --```azurecli -az arcdata dc upgrade --resource-group rg-arcds --name dc01 --desired-version v1.7.0_2022-05-24 [--no-wait] -``` --## Monitor the upgrade status --You can monitor the progress of the upgrade with CLI. --### CLI --```azurecli - az arcdata dc status show --resource-group <resource group> -``` --The upgrade is a two-part process. First the controller is upgraded, then the monitoring stack is upgraded. When the upgrade is complete, the output will be: --```output -Ready -``` - |
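Once the status shows `Ready`, you can confirm the controller is running the requested version by re-running the image tag check from the prerequisites:

```console
kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag
```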
azure-arc | Upgrade Data Controller Direct Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-portal.md | - Title: Upgrade directly connected Azure Arc data controller using the portal -description: Article describes how to upgrade a directly connected Azure Arc data controller using the portal ------ Previously updated : 07/07/2022----# Upgrade a directly connected Azure Arc data controller using the portal --This article describes how to upgrade a directly connected Azure Arc-enabled data controller using the Azure portal. --During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL server). --## Prerequisites --You will need a directly connected data controller with the imageTag v1.0.0_2021-07-30 or later. --To check the version, run: --```console -kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag -``` --## Upgrade data controller --This section shows how to upgrade a directly connected data controller. --> [!NOTE] -> Some of the data services tiers and modes are generally available and some are in preview. -> If you install GA and preview services on the same data controller, you can't upgrade in place. -> To upgrade, delete all non-GA database instances. You can find the list of generally available -> and preview services in the [Release Notes](./release-notes.md). --For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md). ---### Upgrade --Open your data controller resource. If an upgrade is available, you will see a notification on the **Overview** blade that says, "One or more upgrades are available for this data controller." --Under **Settings**, select the **Upgrade Management** blade. --In the table of available versions, choose the version you want to upgrade to and click "Upgrade Now". --In the confirmation dialog box, click "Upgrade". --## Monitor the upgrade status --To view the status of your upgrade in the portal, go to the resource group of the data controller and select the **Activity log** blade. --You will see a "Validate Deploy" option that shows the status. - |
azure-arc | Upgrade Data Controller Indirect Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md | - Title: Upgrade indirectly connected Azure Arc data controller using the CLI -description: Article describes how to upgrade an indirectly connected Azure Arc data controller using the CLI ------- Previously updated : 07/07/2022----# Upgrade an indirectly connected Azure Arc data controller using the CLI --This article describes how to upgrade an indirectly connected Azure Arc-enabled data controller using the Azure CLI (`az`). --During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller won't cause downtime for the data services (SQL Managed Instance or PostgreSQL server). --## Prerequisites --You'll need an indirectly connected data controller with the imageTag v1.0.0_2021-07-30 or later. --To check the version, run: --```console -kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag -``` --## Install tools --Before you can proceed with the tasks in this article, you need to install: --- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)---The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md). --## View available images and chose a version --Pull the list of available images for the data controller with the following command: -- ```azurecli - az arcdata dc list-upgrades --k8s-namespace <namespace> - ``` --The command above returns output like the following example: --```output -Found 2 valid versions. The current datacontroller version is v1.0.0_2021-07-30. -v1.1.0_2021-11-02 -v1.0.0_2021-07-30 -``` --## Upgrade data controller --This section shows how to upgrade an indirectly connected data controller. --> [!NOTE] -> Some of the data services tiers and modes are generally available and some are in preview. -> If you install GA and preview services on the same data controller, you can't upgrade in place. -> To upgrade, delete all non-GA database instances. You can find the list of generally available -> and preview services in the [Release Notes](./release-notes.md). --For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md). ---### Upgrade --You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller. --You can perform a dry run first. The dry run validates the registry exists, the version schema, and the private repository authorization token (if used). To perform a dry run, use the `--dry-run` parameter in the `az arcdata dc upgrade` command. For example: --```azurecli -az arcdata dc upgrade --desired-version <version> --k8s-namespace <namespace> --dry-run --use-k8s -``` --The output for the preceding command is: --```output -Preparing to upgrade dc arcdc in namespace arc to version <version-tag>. -Preparing to upgrade dc arcdc in namespace arc to version <version-tag>. 
-****Dry Run**** -Arcdata Control Plane would be upgraded to: <version-tag> -``` --To upgrade the data controller, run the `az arcdata dc upgrade` command, specifying the image tag with `--desired-version`. --```azurecli -az arcdata dc upgrade --name <data controller name> --desired-version <image tag> --k8s-namespace <namespace> --use-k8s -``` --Example: --```azurecli -az arcdata dc upgrade --name arcdc --desired-version v1.7.0_2022-05-24 --k8s-namespace arc --use-k8s -``` --The output for the preceding command shows the status of the steps: --```output -Preparing to upgrade dc arcdc in namespace arc to version <version-tag>. -Preparing to upgrade dc arcdc in namespace arc to version <version-tag>. -Creating service account: arc:cr-upgrade-worker -Creating cluster role: arc:cr-upgrade-worker -Creating cluster role binding: arc:crb-upgrade-worker -Cluster role binding: arc:crb-upgrade-worker created successfully. -Cluster role: arc:cr-upgrade-worker created successfully. -Service account arc:cr-upgrade-worker has been created successfully. -Creating privileged job arc-elevated-bootstrapper-job -``` --## Monitor the upgrade status --The upgrade is a two-part process. First the controller is upgraded, then the monitoring stack is upgraded. You can monitor the progress of the upgrade with CLI. --### CLI --```azurecli - az arcdata dc status show --name <data controller name> --k8s-namespace <namespace> --use-k8s -``` --When the upgrade is complete, the output will be: --```output -Ready -``` - |
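Because the upgrade output above shows a privileged `arc-elevated-bootstrapper-job` being created, you can also follow progress from the Kubernetes side while the CLI command runs:

```console
## Watch the bootstrapper job and data controller pods during the upgrade
kubectl get jobs -n <namespace>
kubectl get pods -n <namespace> -w
```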
azure-arc | Upgrade Data Controller Indirect Kubernetes Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md | - Title: Upgrade indirectly connected data controller for Azure Arc using Kubernetes tools -description: Article describes how to upgrade an indirectly connected data controller for Azure Arc using Kubernetes tools ------ Previously updated : 07/07/2022----# Upgrade an indirectly connected Azure Arc-enabled data controller using Kubernetes tools --This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools. --During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL server). --In this article, you'll apply a .yaml file to: --1. Create the service account for running upgrade. -1. Upgrade the bootstrapper. -1. Upgrade the data controller. --> [!NOTE] -> Some of the data services tiers and modes are generally available and some are in preview. -> If you install GA and preview services on the same data controller, you can't upgrade in place. -> To upgrade, delete all non-GA database instances. You can find the list of generally available -> and preview services in the [Release Notes](./release-notes.md). --## Prerequisites --Prior to beginning the upgrade of the data controller, you'll need: --- To connect and authenticate to a Kubernetes cluster-- An existing Kubernetes context selected--You need an indirectly connected data controller with the `imageTag: v1.0.0_2021-07-30` or greater. --## Install tools --To upgrade the data controller using Kubernetes tools, you need to have the Kubernetes tools installed. --The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools -such as the Kubernetes dashboard, `oc`, or helm if you're familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/) --## View available images and chose a version --Pull the list of available images for the data controller with the following command: --```azurecli -az arcdata dc list-upgrades --k8s-namespace <namespace> - ``` --The command above returns output like the following example: --```output -Found 2 valid versions. The current datacontroller version is <current-version>. -<available-version> -... -``` --## Upgrade data controller --This section shows how to upgrade an indirectly connected data controller. --> [!NOTE] -> Some of the data services tiers and modes are generally available and some are in preview. -> If you install GA and preview services on the same data controller, you can't upgrade in place. -> To upgrade, delete all non-GA database instances. You can find the list of generally available -> and preview services in the [Release Notes](./release-notes.md). --For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md). ---### Upgrade --You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the data controller. 
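For example, you can list the available contexts and switch to the one for the target cluster with standard kubectl commands:

```console
kubectl config get-contexts
kubectl config use-context <Kubernetes cluster name>
```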
---### Create the service account for running upgrade -- > [!IMPORTANT] - > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account. --Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace of the data controller, for example: `arc`. Run the following command to create the deployer service account with the edited file. --```console -kubectl apply --namespace arc -f arcdata-deployer.yaml -``` ---### Upgrade the bootstrapper --The following command creates a job for upgrading the bootstrapper and related Kubernetes objects. -- > [!IMPORTANT] - > The yaml file in the following command defaults to mcr.microsoft.com/arcdata. Please save a copy of the yaml file and update it to a use a different registry/repository if necessary. --```console -kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/upgrade/yaml/bootstrapper-upgrade-job.yaml -``` --### Upgrade the data controller --The following command patches the image tag to upgrade the data controller. --```console -kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/upgrade/yaml/data-controller-upgrade.yaml -``` ---## Monitor the upgrade status --You can monitor the progress of the upgrade with kubectl. --### kubectl --```console -kubectl get datacontrollers --namespace <namespace> -w -kubectl get monitors --namespace <namespace> -w -``` --The upgrade is a two-part process. First the controller is upgraded, then the monitoring stack is upgraded. During the upgrade, use ```kubectl get monitors -n <namespace> -w``` to view the status. The output will be: --```output -NAME STATUS AGE -monitorstack Updating 36m -monitorstack Updating 36m -monitorstack Updating 39m -monitorstack Updating 39m -monitorstack Updating 41m -monitorstack Ready 41m -``` - |
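The service-account step in this article asks you to download `arcdata-deployer.yaml` and replace the `{{NAMESPACE}}` placeholder by hand. If you prefer to script that edit, a minimal sketch (assuming the target namespace is `arc` and that `curl` and `sed` are available) is:

```console
# Download the deployer manifest, substitute the namespace placeholder,
# and apply it directly from the pipeline.
curl -sL https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/arcdata-deployer.yaml \
  | sed 's/{{NAMESPACE}}/arc/g' \
  | kubectl apply --namespace arc -f -
```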
azure-arc | Upgrade Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-overview.md | - Title: Overview - upgrade Azure Arc-enabled data services -description: Explains how to upgrade Azure Arc-enabled data controller, and other data services. ------ Previously updated : 08/15/2022----# Upgrade Azure Arc-enabled data services --This article describes the paths and options to upgrade Azure Arc-enabled data controller and data services. --## Supported versions --Each release contains an image tag. Use the image tag to identify when Microsoft released the component. Microsoft supports the component for one full year after the release. --Identify your current version by image tag. The image tag version scheme is: -- `<Major>.<Minor>.<optional:revision>_<date>`.-- `<date>` identifies the year, month, and day of the release. The pattern is: YYYY-MM-DD. --For example, a complete image tag for the release in June 2022 is: `v1.8.0_2022-06-06`. --The example image released on June 6, 2022. --Microsoft supports this release through June 5, 2023. --> [!NOTE] -> The latest current branch version is always in the **Full Support** servicing phase. This support statement means that if you encounter a code defect that warrants a critical update, you must have the latest current branch version installed in order to receive a fix. --## Upgrade path --Upgrades are limited to the next incremental minor or major version. For example: --- Supported version upgrades:- - 1.1 -> 1.2 - - 1.3 -> 2.0 -- Unsupported version upgrades:- - 1.1 -> 1.4 Not supported because one or more minor versions are skipped. --## Upgrade order --Upgrade the data controller before you upgrade any data service. SQL Managed Instance enabled by Azure Arc is an example of a data service. --A data controller may be up to one version ahead of a data service. A data service major version may not be one version ahead, or more than one version behind a data controller. --The following list displays supported and unsupported configurations, based on image tag. --- Supported configurations.- - Data controller and data service at same version: - - Data controller: `v1.9.0_2022-07-12` - - Data service: `v1.9.0_2022-07-12` - - Data controller ahead of data service by one version: - - Data controller: `v1.9.0_2022-07-12` - - Data service: `v1.8.0_2022-06-14` --- Unsupported configurations:- - Data controller behind data service: - - Data controller: `v1.8.0_2022-06-14` - - Data service: `v1.9.0_2022-07-12` - - Data controller ahead of data service by more than one version: - - Data controller: `v1.9.0_2022-07-12` - - Data service: `v1.6.0_2022-05-02` --## Schedule maintenance --The upgrade will cause a service interruption (downtime). --The amount of time to upgrade the data service depends on the service tier. --The data controller upgrade does not cause application downtime. --- General Purpose: A single replica is not available during the upgrade.-- Business Critical: A SQL managed instance incurs a brief service interruption (downtime) once during an upgrade. After the data controller upgrades a secondary replica, the service fails over to an upgraded replica. The controller then upgrades the previous primary replica.--> [!TIP] -> Upgrade the data services during scheduled maintenance time. --### Automatic upgrades --When a SQL managed instance `desiredVersion` is set to `auto`, the data controller will automatically upgrade the managed instance. |
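Before planning an upgrade, it helps to confirm the image tags that are currently running so you can check them against the supported configurations above. A sketch for an indirectly connected deployment, assuming the `arc` namespace and an instance named `sqlmi-1`:

```azurecli
# The list-upgrades output reports the current data controller version.
az arcdata dc list-upgrades --k8s-namespace arc

# The Status section of the instance output includes "Running Version".
az sql mi-arc show --name sqlmi-1 --k8s-namespace arc --use-k8s
```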
azure-arc | Upgrade Sql Managed Instance Auto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-auto.md | - Title: Enable automatic upgrades - Azure SQL Managed Instance for Azure Arc -description: Article describes how to enable automatic upgrades for Azure SQL Managed Instance deployed for Azure Arc ------ Previously updated : 05/27/2022----# Enable automatic upgrades of an Azure SQL Managed Instance for Azure Arc ---You can set the `--desired-version` parameter of the `spec.update.desiredVersion` property of a SQL Managed Instance enabled by Azure Arc to `auto` to ensure that your managed instance will be upgraded after a data controller upgrade, with no interaction from a user. This setting simplifies management, as you don't need to manually upgrade every instance for every release. --After setting the `--desired-version` parameter of the `spec.update.desiredVersion` property to `auto` the first time, the Azure Arc-enabled data service will begin an upgrade of the managed instance to the newest image version within five minutes, or within the next [Maintenance Window](maintenance-window.md). Thereafter, within five minutes of a data controller being upgraded, or within the next maintenance window, the managed instance will begin the upgrade process. This setting works for both directly connected and indirectly connected modes. --If the `spec.update.desiredVersion` property is pinned to a specific version, automatic upgrades won't take place. This property allows you to let most instances automatically upgrade, while manually managing instances that need a more hands-on approach. --## Prerequisites --Your managed instance version must be equal to the data controller version before enabling auto mode. --## Enable with Kubernetes tools (kubectl) --Use kubectl to view the existing spec in yaml. --```console -kubectl --namespace <namespace> get sqlmi <sqlmi-name> --output yaml -``` --Run `kubectl patch` to set `desiredVersion` to `auto`. --```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{"spec": {"update": {"desiredVersion": "auto"}}}' -``` --## Enable with CLI --To set the `--desired-version` to `auto`, use the following command: --Indirectly connected: --````cli -az sql mi-arc upgrade --name <instance name> --desired-version auto --k8s-namespace <namespace> --use-k8s -```` --Example: --````cli -az sql mi-arc upgrade --name instance1 --desired-version auto --k8s-namespace arc1 --use-k8s -```` --Directly connected: --````cli -az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --desired-version auto [--no-wait] -```` --Example: --````cli -az sql mi-arc upgrade --resource-group rgarc --name instance1 --desired-version auto -```` |
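After enabling `auto`, you can confirm that the setting took effect by reading the property back from the custom resource. A minimal sketch, assuming an instance named `sqlmi-1` in the `arc` namespace and the `spec.update.desiredVersion` field shown above:

```console
# Print only the desiredVersion field; expect "auto" after the change.
kubectl get sqlmi sqlmi-1 --namespace arc -o jsonpath='{.spec.update.desiredVersion}'
```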
azure-arc | Upgrade Sql Managed Instance Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md | - Title: Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using the CLI -description: Article describes how to upgrade an Azure SQL Managed Instance indirectly connected to Azure Arc-enabled using the CLI ------ Previously updated : 10/11/2022----# Upgrade Azure SQL Managed Instance indirectly connected Azure Arc using the CLI --This article describes how to upgrade a SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller using the Azure CLI (`az`). --## Prerequisites --### Install tools --Before you can proceed with the tasks in this article, install: --- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md). --## Limitations --The Azure Arc Data Controller must be upgraded to the new version before the managed instance can be upgraded. --If Active Directory integration is enabled then Active Directory connector must be upgraded to the new version before the managed instance can be upgraded. --The managed instance must be at the same version as the data controller and active directory connector before a data controller is upgraded. --There's no batch upgrade process available at this time. --## Upgrade the managed instance --A dry run can be performed first. The dry run validates the version schema and lists which instance(s) will be upgraded. --For example: --```azurecli -az sql mi-arc upgrade --name <instance name> --k8s-namespace <namespace> --dry-run --use-k8s -``` --The output will be: --```output -Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version. -****Dry Run****1 instance(s) would be upgraded by this commandsqlmi-1 would be upgraded to <version-tag>. -``` ----### Upgrade --To upgrade the managed instance, use the following command: --```azurecli -az sql mi-arc upgrade --name <instance name> --desired-version <version> --k8s-namespace <namespace> --use-k8s -``` --Example: --```azurecli -az sql mi-arc upgrade --name instance1 --desired-version v1.0.0.20211028 --k8s-namespace arc1 --use-k8s -``` --## Monitor --### CLI --You can monitor the progress of the upgrade with the `show` command. --```cli -az sql mi-arc show --name <instance name> --k8s-namespace <namespace> --use-k8s -``` --### Output --The output for the command will show the resource information. Upgrade information will be in Status. 
--During the upgrade, ```State``` will show ```Updating``` and ```Running Version``` will be the current version: --```output -Status: - Log Search Dashboard: https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1')) - Metrics Dashboard: https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0 - Observed Generation: 2 - Primary Endpoint: 30.76.129.38,1433 - Ready Replicas: 1/1 - Running Version: v1.0.0_2021-07-30 - State: Updating -``` --When the upgrade is complete, ```State``` will show ```Ready``` and ```Running Version``` will be the new version: --```output -Status: - Log Search Dashboard: https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1')) - Metrics Dashboard: https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0 - Observed Generation: 2 - Primary Endpoint: 30.76.129.38,1433 - Ready Replicas: 1/1 - Running Version: <version-tag> - State: Ready -``` - |
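Rather than rerunning the `show` command manually, you can refresh it on an interval until `State` changes from `Updating` to `Ready`. A sketch, assuming the instance is named `sqlmi-1` in the `arc` namespace and that the `watch` utility is available:

```console
# Refresh the instance status every 30 seconds.
watch -n 30 "az sql mi-arc show --name sqlmi-1 --k8s-namespace arc --use-k8s"
```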
azure-arc | Upgrade Sql Managed Instance Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md | - Title: Upgrade a directly connected Azure SQL Managed Instance for Azure Arc using the CLI -description: Article describes how to upgrade a directly connected Azure SQL Managed Instance for Azure Arc using the CLI ------ Previously updated : 10/11/2022----# Upgrade an Azure SQL Managed Instance directly connected to Azure Arc using the CLI --This article describes how to upgrade an Azure SQL Managed Instance deployed on a directly connected Azure Arc-enabled data controller using the Azure CLI (`az`). --## Prerequisites --### Install tools --Before you can proceed with the tasks in this article, install: --- The [Azure CLI (`az`)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--The `arcdata` extension version and the image version are related. Check that you have the correct `arcdata` extension version that corresponds to the image version you want to upgrade to in the [Version log](version-log.md). --## Limitations --The Azure Arc data controller must be upgraded to the new version before the managed instance can be upgraded. --If Active Directory integration is enabled then Active Directory connector must be upgraded to the new version before the managed instance can be upgraded. --The managed instance must be at the same version as the data controller and active directory connector before a data controller is upgraded. --There's no batch upgrade process available at this time. --## Upgrade the managed instance --You can perform a dry run first. The dry run validates the version schema and lists which instance(s) will be upgraded. Use `--dry-run`. For example: --```azurecli -az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --dry-run -``` --The output will be: --```output -Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version. -****Dry Run****1 instance(s) would be upgraded by this commandsqlmi-1 would be upgraded to <version-tag>. -``` ----### Upgrade --To upgrade the managed instance, use the following command: --```azurecli -az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --desired-version <imageTag> [--no-wait] -``` --Example: --```azurecli -az sql mi-arc upgrade --resource-group myresource-group --name sql1 --desired-version v1.6.0_2022-05-02 [--no-wait] -``` --## Monitor --You can monitor the progress of the upgrade with CLI. --### CLI example --```cli -az sql mi-arc show --resource-group <resource group> --name <instance name> -``` --### Output --The output for the command will show the resource information. Upgrade information will be in Status. 
--During the upgrade, ```State``` will show ```Updating``` and ```Running Version``` will be the current version: --```output -Status: - Log Search Dashboard: https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1')) - Metrics Dashboard: https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0 - Observed Generation: 2 - Primary Endpoint: 30.76.129.38,1433 - Ready Replicas: 1/1 - Running Version: v1.5.0_2022-04-05 - State: Updating -``` --When the upgrade is complete, ```State``` will show ```Ready``` and ```Running Version``` will be the new version: --```output -Status: - Log Search Dashboard: https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1')) - Metrics Dashboard: https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0 - Observed Generation: 2 - Primary Endpoint: 30.76.129.38,1433 - Ready Replicas: 1/1 - Running Version: v1.6.0_2022-05-02 - State: Ready -``` - |
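If you don't want the CLI to block while the upgrade runs, the optional `--no-wait` flag shown in brackets above returns immediately; you can then check progress with `show`. A sketch with placeholder names:

```azurecli
# Start the upgrade and return immediately.
az sql mi-arc upgrade --resource-group myresource-group --name sql1 --desired-version v1.6.0_2022-05-02 --no-wait

# Check progress later; Running Version and State appear under Status.
az sql mi-arc show --resource-group myresource-group --name sql1
```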
azure-arc | Upgrade Sql Managed Instance Direct Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-portal.md | - Title: Upgrade Azure SQL Managed Instance directly connected to Azure Arc using the portal -description: Article describes how to upgrade Azure SQL Managed Instance directly connected to Azure Arc using the Azure portal ------ Previously updated : 10/11/2022----# Upgrade an Azure SQL Managed Instance directly connected to Azure Arc using the portal --This article describes how to upgrade an Azure SQL Managed Instance deployed on a directly connected Azure Arc-enabled data controller using the portal. --## Limitations --The Azure Arc data controller must be upgraded to the new version before the managed instance can be upgraded. --If Active Directory integration is enabled, the Active Directory connector must be upgraded to the new version before the managed instance can be upgraded. --The managed instance must be at the same version as the data controller and Active Directory connector before a data controller is upgraded. --There's no batch upgrade process available at this time. --## Upgrade the managed instance ----### Upgrade --Open your SQL Managed Instance - Azure Arc resource. --Under **Settings**, select **Upgrade Management**. --In the table of available versions, choose the version you want to upgrade to and select **Upgrade Now**. --In the confirmation dialog box, select **Upgrade**. --## Monitor the upgrade status --To view the status of your upgrade in the portal, go to the resource group of the SQL Managed Instance and select **Activity log**. --A **Validate Deploy** entry shows the status of the upgrade. - |
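If you prefer the command line, the same Activity log can be queried with the Azure CLI. A minimal sketch, assuming the managed instance lives in a resource group named `myresource-group`:

```azurecli
# List recent activity for the resource group that holds the managed instance.
az monitor activity-log list --resource-group myresource-group --offset 1h --output table
```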
azure-arc | Upgrade Sql Managed Instance Indirect Kubernetes Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md | - Title: Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using Kubernetes tools -description: Article describes how to upgrade an indirectly connected SQL Managed Instance enabled by Azure Arc using Kubernetes tools ------ Previously updated : 10/11/2022----# Upgrade Azure SQL Managed Instance indirectly connected to Azure Arc using Kubernetes tools --This article describes how to upgrade Azure SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller using Kubernetes tools. --## Prerequisites --### Install tools --Before you can proceed with the tasks in this article, you need: --- To connect and authenticate to a Kubernetes cluster-- An existing Kubernetes context selected--You need an indirectly connected data controller with the `imageTag v1.0.0_2021-07-30` or greater. --## Limitations --The Azure Arc Data Controller must be upgraded to the new version before the managed instance can be upgraded. --If Active Directory integration is enabled then Active Directory connector must be upgraded to the new version before the managed instance can be upgraded. --The managed instance must be at the same version as the data controller and active directory connector before a data controller is upgraded. --There's no batch upgrade process available at this time. --## Upgrade the managed instance ----### Upgrade --Use a kubectl command to view the existing spec in yaml. --```console -kubectl --namespace <namespace> get sqlmi <sqlmi-name> --output yaml -``` --Run kubectl patch to update the desired version. --```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{"spec": {"update": {"desiredVersion": "v1.1.0_2021-11-02"}}}' -``` --## Monitor --You can monitor the progress of the upgrade with kubectl as follows: --```console -kubectl describe sqlmi --namespace <namespace> -``` --### Output --The output for the command will show the resource information. Upgrade information will be in Status. --During the upgrade, ```State``` will show ```Updating``` and ```Running Version``` will be the current version: --```output -Status: - Log Search Dashboard: https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1')) - Metrics Dashboard: https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0 - Observed Generation: 2 - Primary Endpoint: 30.76.129.38,1433 - Ready Replicas: 1/1 - Running Version: v1.0.0_2021-07-30 - State: Updating -``` --When the upgrade is complete, ```State``` will show ```Ready``` and ```Running Version``` will be the new version: --```output -Status: - Log Search Dashboard: https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1')) - Metrics Dashboard: https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0 - Observed Generation: 2 - Primary Endpoint: 30.76.129.38,1433 - Ready Replicas: 1/1 - Running Version: <version-tag> - State: Ready -``` - |
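If the full `describe` output is more than you need, you can read just the state from the custom resource. This is a sketch that assumes the state is exposed at `.status.state` (inferred from the Status output above); the exact field path may differ between releases.

```console
# Print only the instance state, for example "Updating" or "Ready".
kubectl get sqlmi sqlmi-1 --namespace arc -o jsonpath='{.status.state}'
```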
azure-arc | Upload Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-logs.md | - Title: Upload logs to Azure Monitor -description: Upload logs for Azure Arc-enabled data services to Azure Monitor ------- Previously updated : 05/27/2022----# Upload logs to Azure Monitor --Periodically, you can export logs and then upload them to Azure. Exporting and uploading logs also creates and updates the data controller, SQL managed instance, and PostgreSQL server resources in Azure. --## Before you begin --Before you can upload logs, you need to: --1. [Create a log analytics workspace](#create-a-log-analytics-workspace) -1. [Assign ID and shared key to environment variables](#assign-id-and-shared-key-to-environment-variables) ---## Create a log analytics workspace --To create a log analytics workspace, execute these commands to create a Log Analytics Workspace and set the access information into environment variables. --> [!NOTE] -> Skip this step if you already have a workspace. --```azurecli -az monitor log-analytics workspace create --resource-group <resource group name> --workspace-name <some name you choose> -``` --Example output: --```output -{ - "customerId": "00000000-0000-0000-0000-000000000000", - "eTag": null, - "id": "/subscriptions/<Subscription ID>/resourcegroups/user-arc-demo/providers/microsoft.operationalinsights/workspaces/user-logworkspace", - "location": "eastus", - "name": "user-logworkspace", - "portalUrl": null, - "provisioningState": "Succeeded", - "resourceGroup": "user-arc-demo", - "retentionInDays": 30, - "sku": { - "lastSkuUpdate": "Thu, 30 Jul 2020 22:37:53 GMT", - "maxCapacityReservationLevel": 3000, - "name": "pergb2018" - }, - "source": "Azure", - "tags": null, - "type": "Microsoft.OperationalInsights/workspaces" -} -``` --## Assign ID and shared key to environment variables --Save the log workspace analytics `customerId` as an environment variable to be used later: --# [Windows](#tab/windows) --```console -SET WORKSPACE_ID=<customerId> -``` --# [PowerShell](#tab/powershell) --```PowerShell -$Env:WORKSPACE_ID='<customerId>' -``` -# [macOS & Linux](#tab/linux) --```console -export WORKSPACE_ID='<customerId>' -``` ----This command returns the access keys required to connect to your log analytics workspace: --```azurecli -az monitor log-analytics workspace get-shared-keys --resource-group MyResourceGroup --workspace-name MyLogsWorkpace -``` --Example output: --```output -{ - "primarySharedKey": "<primarySharedKey>==", - "secondarySharedKey": "<secondarySharedKey>==" -} -``` --Save the primary key in an environment variable to be used later: --# [Windows](#tab/windows) --```console -SET WORKSPACE_SHARED_KEY=<primarySharedKey> -``` --# [PowerShell](#tab/powershell) --```console -$Env:WORKSPACE_SHARED_KEY='<primarySharedKey>' -``` --# [macOS & Linux](#tab/linux) --```console -export WORKSPACE_SHARED_KEY='<primarySharedKey>' -``` -----## Verify environment variables --Check to make sure that all environment variables required are set if you want: --# [Windows](#tab/windows) --```console -echo %WORKSPACE_ID% -echo %WORKSPACE_SHARED_KEY% -``` --# [PowerShell](#tab/powershell) --```PowerShell -$Env:WORKSPACE_ID -$Env:WORKSPACE_SHARED_KEY -``` --# [macOS & Linux](#tab/linux) --```console -echo $WORKSPACE_ID -echo $WORKSPACE_SHARED_KEY -``` ----With the environment variables set, you can upload logs to the log workspace. 
--## Configure automatic upload of logs to Azure Log Analytics Workspace in direct mode using `az` CLI --In the **direct** connected mode, logs upload can only be set up in **automatic** mode. This automatic upload of logs can be set up either during deployment or post deployment of Azure Arc data controller. --### Enable automatic upload of logs to Azure Log Analytics Workspace --If the automatic upload of logs was disabled during Azure Arc data controller deployment, run the following command to enable automatic upload of logs. --```azurecli -az arcdata dc update --name <name of datacontroller> --resource-group <resource group> --auto-upload-logs true -#Example -az arcdata dc update --name arcdc --resource-group <myresourcegroup> --auto-upload-logs true -``` --### Disable automatic upload of logs to Azure Log Analytics Workspace --If the automatic upload of logs was enabled during Azure Arc data controller deployment, run the following command to disable automatic upload of logs. -```azurecli -az arcdata dc update --name <name of datacontroller> --resource-group <resource group> --auto-upload-logs false -#Example -az arcdata dc update --name arcdc --resource-group <myresourcegroup> --auto-upload-logs false -``` --## Configure automatic upload of logs to Azure Log Analytics Workspace in **direct** mode using `kubectl` CLI --### Enable automatic upload of logs to Azure Log Analytics Workspace --To configure automatic upload of logs using `kubectl`: --- ensure the Log Analytics Workspace is created as described in the earlier section-- create a Kubernetes secret for the Log Analytics workspace using the `WorkspaceID` and `SharedAccessKey` as follows:--```yaml -apiVersion: v1 -data: - primaryKey: <base64 encoding of Azure Log Analytics workspace primary key> - workspaceId: <base64 encoding of Azure Log Analytics workspace Id> -kind: Secret -metadata: - name: log-workspace-secret - namespace: <your datacontroller namespace> -type: Opaque -``` --- To create the secret, run:-- ```console - kubectl apply -f <myLogAnalyticssecret.yaml> --namespace <mynamespace> - ``` --- To open the settings as a yaml file in the default editor, run:-- ```console - kubectl edit datacontroller <DC name> --namespace <namespace> - ``` --- update the autoUploadLogs property to `"true"`, and save the file----### Disable automatic upload of logs to Azure Log Analytics Workspace --To disable automatic upload of logs, run: --```console -kubectl edit datacontroller <DC name> --namespace <namespace> -``` --- update the autoUploadLogs property to `"false"`, and save the file--## Upload logs to Azure Monitor in **indirect** mode -- To upload logs for SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL servers, run the following CLI commands: --1. Export all logs to the specified file: -- > [!NOTE] - > Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently. -- ```azurecli - az arcdata dc export --type logs --path logs.json --k8s-namespace arc - ``` ---2. Upload logs to an Azure monitor log analytics workspace: -- ```azurecli - az arcdata dc upload --path logs.json - ``` --## View your logs in Azure portal --Once your logs are uploaded, you should be able to query them using the log query explorer as follows: --1. 
Open the Azure portal and then search for your workspace by name in the search bar at the top and then select it. -2. Select Logs in the left panel. -3. Select Get Started (or select the links on the Getting Started page to learn more about Log Analytics if you are new to it). -4. Follow the tutorial to learn more about Log Analytics if this is your first time using Log Analytics. -5. Expand Custom Logs at the bottom of the list of tables and you will see a table called 'sql_instance_logs_CL' or 'postgresInstances_postgresql_logs_CL'. -6. Select the 'eye' icon next to the table name. -7. Select the 'View in query editor' button. -8. You'll now have a query in the query editor that will show the most recent 10 events in the log. -9. From here, you can experiment with querying the logs using the query editor, set alerts, etc. --## Automating uploads (optional) --If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script. --In your favorite text/code editor, add the following script to the file and save as a script executable file - such as `.sh` (Linux/Mac), `.cmd`, `.bat`, or `.ps1` (Windows). --```azurecli -az arcdata dc export --type logs --path logs.json --force --k8s-namespace arc -az arcdata dc upload --path logs.json -``` --Make the script file executable --```console -chmod +x myuploadscript.sh -``` --Run the script every 20 minutes: --```console -watch -n 1200 ./myuploadscript.sh -``` --You could also use a job scheduler like cron or Windows Task Scheduler or an orchestrator like Ansible, Puppet, or Chef. --## Related content --[Upload metrics, and logs to Azure Monitor](upload-metrics.md) --[Upload usage data, metrics, and logs to Azure Monitor](upload-usage-data.md) --[Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md) --[View Azure Arc data controller resource in Azure portal](view-data-controller-in-azure-portal.md) |
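As an alternative to `watch`, the same export-and-upload script can be driven by cron. A sketch of a crontab entry, assuming the script above was saved at the hypothetical path `/opt/arcdata/myuploadscript.sh`:

```console
# crontab -e entry: run the log export/upload every 20 minutes and append output to a log file.
*/20 * * * * /opt/arcdata/myuploadscript.sh >> /var/log/arcdata-upload.log 2>&1
```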
azure-arc | Upload Metrics And Logs To Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics-and-logs-to-azure-monitor.md | - Title: Upload usage data, metrics, and logs to Azure -description: Upload resource inventory, usage data, metrics, and logs to Azure ------- Previously updated : 04/16/2024----# Upload usage data, metrics, and logs to Azure --Periodically, you can export out usage information for billing purposes, monitoring metrics, and logs and then upload it to Azure. The export and upload of any of these three types of data will also create and update the data controller, and SQL managed instance resources in Azure. --Before you can upload usage data, metrics, or logs you need to: --* Install tools -* [Register the `Microsoft.AzureArcData` resource provider](#register-the-resource-provider) -* [Create the service principal](#create-service-principal) ---## Install tools --The required tools include: -* Azure CLI (az) -* `arcdata` extension --See [Install tools](./install-client-tools.md). --## Register the resource provider --Prior to uploading metrics or user data to Azure, you need to ensure that your Azure subscription has the `Microsoft.AzureArcData` resource provider registered. --To verify the resource provider, run the following command: --```azurecli -az provider show -n Microsoft.AzureArcData -o table -``` --If the resource provider is not currently registered in your subscription, you can register it. To register it, run the following command. This command may take a minute or two to complete. --```azurecli -az provider register -n Microsoft.AzureArcData --wait -``` --## Create service principal --The service principal is used to upload usage and metrics data. --Follow these commands to create your metrics upload service principal: --> [!NOTE] -> Creating a service principal requires [certain permissions in Azure](../../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app). --To create a service principal, update the following example. Replace `<ServicePrincipalName>`, `SubscriptionId` and `resourcegroup` with your values and run the command: --```azurecli -az ad sp create-for-rbac --name <ServicePrincipalName> --role Contributor --scopes /subscriptions/<SubscriptionId>/resourceGroups/<resourcegroup> -``` --If you created the service principal earlier, and just need to get the current credentials, run the following command to reset the credential. --```azurecli -az ad sp credential reset --name <ServicePrincipalName> -``` --For example, to create a service principal named `azure-arc-metrics`, run the following command --```azurecli -az ad sp create-for-rbac --name azure-arc-metrics --role Contributor --scopes /subscriptions/<SubscriptionId>/resourceGroups/myresourcegroup -``` --Example output: --```output -"appId": "<appId>", -"displayName": "azure-arc-metrics", -"name": "http://azure-arc-metrics", -"password": "<password>", -"tenant": "<tenant>" -``` --Save the `appId`, `password`, and `tenant` values in an environment variable for use later. These values are in the form of globally unique identifier (GUID). 
--# [Windows](#tab/windows) --```console -SET SPN_CLIENT_ID=<appId> -SET SPN_CLIENT_SECRET=<password> -SET SPN_TENANT_ID=<tenant> -``` --# [macOS & Linux](#tab/linux) --```console -export SPN_CLIENT_ID='<appId>' -export SPN_CLIENT_SECRET='<password>' -export SPN_TENANT_ID='<tenant>' -``` --# [PowerShell](#tab/powershell) --```console -$Env:SPN_CLIENT_ID="<appId>" -$Env:SPN_CLIENT_SECRET="<password>" -$Env:SPN_TENANT_ID="<tenant>" -``` ----After you have created the service principal, assign the service principal to the appropriate role. --## Assign roles to the service principal --Run this command to assign the service principal to the `Monitoring Metrics Publisher` role on the subscription where your database instance resources are located: --# [Windows](#tab/windows) --> [!NOTE] -> You need to use double quotes for role names when running from a Windows environment. --```azurecli -az role assignment create --assignee <appId> --role "Monitoring Metrics Publisher" --scope subscriptions/<SubscriptionID>/resourceGroups/<resourcegroup> --``` --# [macOS & Linux](#tab/linux) --```azurecli -az role assignment create --assignee <appId> --role 'Monitoring Metrics Publisher' --scope subscriptions/<SubscriptionID>/resourceGroups/<resourcegroup> -``` --# [PowerShell](#tab/powershell) --```azurecli -az role assignment create --assignee <appId> --role 'Monitoring Metrics Publisher' --scope subscriptions/<SubscriptionID>/resourceGroups/<resourcegroup> -``` ----Example output: --```output -{ - "canDelegate": null, - "id": "/subscriptions/<Subscription ID>/providers/Microsoft.Authorization/roleAssignments/<globally unique identifier>", - "name": "<globally unique identifier>", - "principalId": "<principal id>", - "principalType": "ServicePrincipal", - "roleDefinitionId": "/subscriptions/<Subscription ID>/providers/Microsoft.Authorization/roleDefinitions/<globally unique identifier>", - "scope": "/subscriptions/<Subscription ID>", - "type": "Microsoft.Authorization/roleAssignments" -} -``` --## Verify service principal role --```azurecli -az role assignment list --scope subscriptions/<SubscriptionID>/resourceGroups/<resourcegroup> -o table -``` --With the service principal assigned to the appropriate role, you can proceed to upload metrics, or user data. ----## Upload logs, metrics, or usage data --The specific steps for uploading logs, metrics, or usage data vary depending about the type of information you are uploading. --[Upload logs to Azure Monitor](upload-logs.md) --[Upload metrics to Azure Monitor](upload-metrics.md) --[Upload usage data to Azure](upload-usage-data.md) --## General guidance on exporting and uploading usage, and metrics --Create, read, update, and delete (CRUD) operations on Azure Arc-enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background. --Upload the usage only once per day. When usage information is exported and uploaded multiple times within the same 24 hour period, only the resource inventory is updated in Azure portal but not the resource usage. --> [!NOTE] -> Note that usage data is automatically uploaded for Azure Arc data controller deployed in **direct** connected mode. --For uploading metrics, Azure monitor only accepts the last 30 minutes of data ([Learn more](/azure/azure-monitor/essentials/metrics-store-custom-rest-api#troubleshooting)). 
The guidance for uploading metrics is to upload the metrics immediately after creating the export file so you can view the entire data set in Azure portal. For instance, if you export the metrics at 2:00 PM but don't run the upload command until 2:50 PM, Azure Monitor accepts only the last 30 minutes of data, so you may not see any data in the portal. --## Related content --[Learn about service principals](/powershell/azure/azurerm/create-azure-service-principal-azureps#what-is-a-service-principal) --[Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md) --[View Azure Arc data controller resource in Azure portal](view-data-controller-in-azure-portal.md) |
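If you script the service principal creation described above, you can capture `appId`, `password`, and `tenant` into the environment variables in one pass instead of copying them by hand. A bash sketch, assuming `jq` is installed and using placeholder names:

```console
# Create the service principal once and keep the JSON output.
spn=$(az ad sp create-for-rbac --name azure-arc-metrics --role Contributor \
  --scopes "/subscriptions/<SubscriptionId>/resourceGroups/<resourcegroup>" --output json)

# Extract the values needed later for metrics and usage uploads.
export SPN_CLIENT_ID=$(echo "$spn" | jq -r .appId)
export SPN_CLIENT_SECRET=$(echo "$spn" | jq -r .password)
export SPN_TENANT_ID=$(echo "$spn" | jq -r .tenant)
```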
azure-arc | Upload Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics.md | - Title: Upload metrics to Azure Monitor -description: Upload Azure Arc-enabled data services metrics to Azure Monitor ------- Previously updated : 11/03/2021----# Upload metrics to Azure Monitor --Periodically, you can export monitoring metrics and then upload them to Azure. The export and upload of data also creates and update the data controller, SQL managed instance, and PostgreSQL server resources in Azure. --With Azure Arc data services, you can optionally upload your metrics to Azure Monitor so you can aggregate and analyze metrics, raise alerts, send notifications, or trigger automated actions. --Sending your data to Azure Monitor also allows you to store metrics data off-site and at huge scale, enabling long-term storage of the data for advanced analytics. --If you have multiple sites that have Azure Arc data services, you can use Azure Monitor as a central location to collect all of your logs and metrics across your sites. --## Upload metrics for Azure Arc data controller in **direct** mode --In the **direct** connected mode, metrics upload can only be set up in **automatic** mode. This automatic upload of metrics can be set up either during deployment of Azure Arc data controller or post deployment. -The Arc data services extension managed identity is used for uploading metrics. The managed identity needs to have the **Monitoring Metrics Publisher** role assigned to it. --> [!NOTE] -> If automatic upload of metrics was disabled during Azure Arc Data controller deployment, you must first retrieve the managed identity of the Arc data controller extension and grant **Monitoring Metrics Publisher** role before enabling automatic upload. Follow the steps below to retrieve the managed identity and grant the required roles. 
---### (1) Retrieve managed identity of the Arc data controller extension --# [PowerShell](#tab/powershell) -```azurecli -$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | convertFrom-json).identity.principalId -#Example -$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | convertFrom-json).identity.principalId -``` --# [macOS & Linux](#tab/linux) -```azurecli -export MSI_OBJECT_ID=`az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | jq '.identity.principalId' | tr -d \"` -#Example -export MSI_OBJECT_ID=`az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | jq '.identity.principalId' | tr -d \"` -``` --# [Windows](#tab/windows) --N/A ----### (2) Assign role to the managed identity --Run the below command to assign the **Monitoring Metrics Publisher** role: -# [PowerShell](#tab/powershell) -```azurecli -az role assignment create --assignee $Env:MSI_OBJECT_ID --role 'Monitoring Metrics Publisher' --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME" -``` --# [macOS & Linux](#tab/linux) -```azurecli -az role assignment create --assignee ${MSI_OBJECT_ID} --role 'Monitoring Metrics Publisher' --scope "/subscriptions/${subscription}/resourceGroups/${resourceGroup}" -``` --# [Windows](#tab/windows) --N/A ----### Automatic upload of metrics can be enabled as follows: -``` -az arcdata dc update --name <name of datacontroller> --resource-group <resource group> --auto-upload-metrics true -#Example -az arcdata dc update --name arcdc --resource-group <myresourcegroup> --auto-upload-metrics true -``` --To disable automatic upload of metrics to Azure Monitor, run the following command: -``` -az arcdata dc update --name <name of datacontroller> --resource-group <resource group> --auto-upload-metrics false -#Example -az arcdata dc update --name arcdc --resource-group <myresourcegroup> --auto-upload-metrics false -``` --## Upload metrics for Azure Arc data controller in **indirect** mode --In the **indirect** connected mode, service principal is used for uploading metrics. --### Prerequisites --Before you proceed, make sure you have created the required service principal and assigned it to an appropriate role. For details, see: -* [Create service principal](upload-metrics-and-logs-to-azure-monitor.md#create-service-principal). 
-* [Assign roles to the service principal](upload-metrics-and-logs-to-azure-monitor.md#assign-roles-to-the-service-principal) --### Set environment variables and confirm --Set the SPN authority URL in an environment variable: --# [PowerShell](#tab/powershell) --```PowerShell -$Env:SPN_AUTHORITY='https://login.microsoftonline.com' -``` --# [macOS & Linux](#tab/linux) --```console -export SPN_AUTHORITY='https://login.microsoftonline.com' -``` --# [Windows](#tab/windows) --```console -SET SPN_AUTHORITY=https://login.microsoftonline.com -``` ----Check to make sure that all environment variables required are set if you want: ---# [PowerShell](#tab/powershell) --```PowerShell -$Env:SPN_TENANT_ID -$Env:SPN_CLIENT_ID -$Env:SPN_CLIENT_SECRET -$Env:SPN_AUTHORITY -``` ---# [macOS & Linux](#tab/linux) --```console -echo $SPN_TENANT_ID -echo $SPN_CLIENT_ID -echo $SPN_CLIENT_SECRET -echo $SPN_AUTHORITY -``` --# [Windows](#tab/windows) --```console -echo %SPN_TENANT_ID% -echo %SPN_CLIENT_ID% -echo %SPN_CLIENT_SECRET% -echo %SPN_AUTHORITY% -``` ----### Upload metrics to Azure Monitor --To upload metrics for SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL, run the following CLI commands: -- -1. Export all metrics to the specified file: --> [!NOTE] -> Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently. -- ```azurecli - az arcdata dc export --type metrics --path metrics.json --k8s-namespace arc - ``` --2. Upload metrics to Azure monitor: -- ```azurecli - az arcdata dc upload --path metrics.json - ``` -- >[!NOTE] - >Wait for at least 30 mins after the Azure Arc-enabled data instances are created for the first upload. - > - >Make sure `upload` the metrics right away after `export` as Azure Monitor only accepts metrics for the last 30 minutes. [Learn more](/azure/azure-monitor/essentials/metrics-store-custom-rest-api#troubleshooting). ---If you see any errors indicating "Failure to get metrics" during export, check if data collection is set to `true` by running the following command: --```azurecli -az arcdata dc config show --k8s-namespace arc --use-k8s -``` --Look under "security section" --```output - "security": { - "allowDumps": true, - "allowNodeMetricsCollection": true, - "allowPodMetricsCollection": true, - }, -``` --Verify if the `allowNodeMetricsCollection` and `allowPodMetricsCollection` properties are set to `true`. --## View the metrics in the Portal --Once your metrics are uploaded, you can view them from the Azure portal. -> [!NOTE] -> Please note that it can take a couple of minutes for the uploaded data to be processed before you can view the metrics in the portal. ---To view your metrics, navigate to the [Azure portal](https://portal.azure.com). Then, search for your database instance by name in the search bar: --You can view CPU utilization on the Overview page or if you want more detailed metrics you can click on metrics from the left navigation panel --Choose sql server or postgres as the metric namespace. --Select the metric you want to visualize (you can also select multiple). --Change the frequency to last 30 minutes. --> [!NOTE] -> You can only upload metrics only for the last 30 minutes. Azure Monitor rejects metrics older than 30 minutes. 
--## Automating uploads (optional) --If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script. --In your favorite text/code editor, add the following script to the file and save as a script executable file such as `.sh` (Linux/Mac), `.cmd`, `.bat`, or `.ps1`. --```azurecli -az arcdata dc export --type metrics --path metrics.json --force --k8s-namespace arc -az arcdata dc upload --path metrics.json -``` --Make the script file executable --```console -chmod +x myuploadscript.sh -``` --Run the script every 20 minutes: --```console -watch -n 1200 ./myuploadscript.sh -``` --You could also use a job scheduler like cron or Windows Task Scheduler or an orchestrator like Ansible, Puppet, or Chef. --## General guidance on exporting and uploading usage, metrics --Create, read, update, and delete (CRUD) operations on Azure Arc-enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background. --Upload the usage only once per day. When usage information is exported and uploaded multiple times within the same 24 hour period, only the resource inventory is updated in Azure portal but not the resource usage. --For uploading metrics, Azure monitor only accepts the last 30 minutes of data ([Learn more](/azure/azure-monitor/essentials/metrics-store-custom-rest-api#troubleshooting)). The guidance for uploading metrics is to upload the metrics immediately after creating the export file so you can view the entire data set in Azure portal. For instance, if you exported the metrics at 2:00 PM and ran the upload command at 2:50 PM. Since Azure Monitor only accepts data for the last 30 minutes, you may not see any data in the portal. --## Related content --[Upload logs to Azure Monitor](upload-logs.md) --[Upload usage data, metrics, and logs to Azure Monitor](upload-usage-data.md) --[Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md) --[View Azure Arc data controller resource in Azure portal](view-data-controller-in-azure-portal.md) |
azure-arc | Upload Usage Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-usage-data.md | - Title: Upload usage data to Azure -description: Upload usage Azure Arc-enabled data services data to Azure ------ Previously updated : 05/27/2022----# Upload usage data to Azure in **indirect** mode --Periodically, you can export out usage information. The export and upload of this information creates and updates the data controller, SQL managed instance, and PostgreSQL resources in Azure. --> [!NOTE] -> Usage information is automatically uploaded for Azure Arc data controller deployed in **direct** connectivity mode. The instructions in this article only apply to uploading usage information for Azure Arc data controller deployed in **indirect** connectivity mode.. ---Wait at least 24 hours after creating the Azure Arc data controller before uploading usage data. --## Create service principal and assign roles --Before you proceed, make sure you have created the required service principal and assigned it to an appropriate role. For details, see: -* [Create service principal](upload-metrics-and-logs-to-azure-monitor.md#create-service-principal). -* [Assign roles to the service principal](upload-metrics-and-logs-to-azure-monitor.md#assign-roles-to-the-service-principal) ---## Upload usage data --Usage information such as inventory and resource usage can be uploaded to Azure in the following two-step way: --1. Export the usage data using `az arcdata dc export` command, as follows: --> [!NOTE] -> Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently. -- ```azurecli - az arcdata dc export --type usage --path usage.json --k8s-namespace <namespace> --use-k8s - ``` - - This command creates a `usage.json` file with all the Azure Arc-enabled data resources such as SQL managed instances and PostgreSQL instances etc. that are created on the data controller. ---For now, the file is not encrypted so that you can see the contents. Feel free to open in a text editor and see what the contents look like. --You will notice that there are two sets of data: `resources` and `data`. The `resources` are the data controller, PostgreSQL, and SQL Managed Instances. The `resources` records in the data capture the pertinent events in the history of a resource - when it was created, when it was updated, and when it was deleted. The `data` records capture how many cores were available to be used by a given instance for every hour. 
--Example of a `resource` entry: --```console - { - "customObjectName": "<resource type>-2020-29-5-23-13-17-164711", - "uid": "4bc3dc6b-9148-4c7a-b7dc-01afc1ef5373", - "instanceName": "sqlInstance001", - "instanceNamespace": "arc", - "instanceType": "<resource>", - "location": "eastus", - "resourceGroupName": "production-resources", - "subscriptionId": "482c901a-129a-4f5d-86e3-cc6b294590b2", - "isDeleted": false, - "externalEndpoint": "32.191.39.83:1433", - "vCores": "2", - "createTimestamp": "05/29/2020 23:13:17", - "updateTimestamp": "05/29/2020 23:13:17" - } -``` --Example of a `data` entry: --```console - { - "requestType": "usageUpload", - "clusterId": "4b0917dd-e003-480e-ae74-1a8bb5e36b5d", - "name": "DataControllerTestName", - "subscriptionId": "482c901a-129a-4f5d-86e3-cc6b294590b2", - "resourceGroup": "production-resources", - "location": "eastus", - "uploadRequest": { - "exportType": "usages", - "dataTimestamp": "2020-06-17T22:32:24Z", - "data": "[{\"name\":\"sqlInstance001\", - \"namespace\":\"arc\", - \"type\":\"<resource type>\", - \"eventSequence\":1, - \"eventId\":\"50DF90E8-FC2C-4BBF-B245-CB20DC97FF24\", - \"startTime\":\"2020-06-17T19:11:47.7533333\", - \"endTime\":\"2020-06-17T19:59:00\", - \"quantity\":1, - \"id\":\"4BC3DC6B-9148-4C7A-B7DC-01AFC1EF5373\"}]", - "signature":"MIIE7gYJKoZIhvcNAQ...2xXqkK" - } - } -``` ----2. Upload the usage data using the `upload` command. -- ```azurecli - az arcdata dc upload --path usage.json - ``` --## Upload frequency --In the **indirect** mode, usage information needs to be uploaded to Azure at least once in every 30 days. It is highly recommended to upload more frequently, such as daily. If usage information is not uploaded past 32 days, you will see some degradation in the service such as being unable to provision any new resources. --There will be two types of notifications for delayed usage uploads - warning phase and degraded phase. In the warning phase there will be a message such as `Billing data for the Azure Arc data controller has not been uploaded in {0} hours. Please upload billing data as soon as possible.`. --In the degraded phase, the message will look like `Billing data for the Azure Arc data controller has not been uploaded in {0} hours. Some functionality will not be available until the billing data is uploaded.`. --> [!NOTE] -> You will see the warning message if usage has not been uploaded for more than 48 hours. --The Azure portal Overview page for Data Controller and the Custom Resource status of the Data controller in your kubernetes cluster will both indicate the last upload date and the status message(s). ----## Automating uploads (optional) --If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script. --In your favorite text/code editor, add the following script to the file and save as a script executable file such as `.sh` (Linux/Mac) or `.cmd`, `.bat`, or `.ps1`. --```azurecli -az arcdata dc export --type usage --path usage.json --force --k8s-namespace <namespace> --use-k8s -az arcdata dc upload --path usage.json -``` --Make the script file executable --```console -chmod +x myuploadscript.sh -``` --Run the script every day for usage: --```console -watch -n 1200 ./myuploadscript.sh -``` --You could also use a job scheduler like cron or Windows Task Scheduler or an orchestrator like Ansible, Puppet, or Chef. 
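Before uploading, you can get a quick summary of what the export produced. This sketch assumes the top-level keys in `usage.json` are `resources` and `data`, as described earlier in this article, and that `jq` is installed:

```console
# Count the resource records and the hourly usage records in the export file.
jq '{resources: (.resources | length), data: (.data | length)}' usage.json
```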
--## Related content --[Upload metrics, and logs to Azure Monitor](upload-metrics.md) --[Upload logs to Azure Monitor](upload-logs.md) --[Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md) --[View Azure Arc data controller resource in Azure portal](view-data-controller-in-azure-portal.md) |
azure-arc | Using Extensions In Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/using-extensions-in-postgresql-server.md | - Title: Use PostgreSQL extensions -description: Use PostgreSQL extensions ------- Previously updated : 11/03/2021----# Use PostgreSQL extensions in your Azure Arc-enabled PostgreSQL server --PostgreSQL is at its best when you use it with extensions. ---## Supported extensions -The following extensions are deployed by default in the containers of your Azure Arc-enabled PostgreSQL server, some of them are standard [`contrib`](https://www.postgresql.org/docs/14/contrib.html) extensions: -- `address_standardizer_data_us` 3.3.1-- `adminpack` 2.1-- `amcheck` 1.3-- `autoinc` 1-- `bloom` 1-- `btree_gin` 1.3-- `btree_gist` 1.6-- `citext` 1.6-- `cube` 1.5-- `dblink` 1.2-- `dict_int` 1-- `dict_xsyn` 1-- `earthdistance` 1.1-- `file_fdw` 1-- `fuzzystrmatch` 1.1-- `hstore` 1.8-- `hypopg` 1.3.1-- `insert_username` 1-- `intagg` 1.1-- `intarray` 1.5-- `isn` 1.2-- `lo` 1.1-- `ltree` 1.2-- `moddatetime` 1-- `old_snapshot` 1-- `orafce` 4-- `pageinspect` 1.9-- `pg_buffercache` 1.3-- `pg_cron` 1.4-1-- `pg_freespacemap` 1.2-- `pg_partman` 4.7.1-- `pg_prewarm` 1.2-- `pg_repack` 1.4.8-- `pg_stat_statements` 1.9-- `pg_surgery` 1-- `pg_trgm` 1.6-- `pg_visibility` 1.2-- `pgaudit` 1.7-- `pgcrypto` 1.3-- `pglogical` 2.4.2-- `pglogical_origin` 1.0.0-- `pgrouting` 3.4.1-- `pgrowlocks` 1.2-- `pgstattuple` 1.5-- `plpgsql` 1-- `postgis` 3.3.1-- `postgis_raster` 3.3.1-- `postgis_tiger_geocoder` 3.3.1-- `postgis_topology` 3.3.1-- `postgres_fdw` 1.1-- `refint` 1-- `seg` 1.4-- `sslinfo` 1.2-- `tablefunc` 1-- `tcn` 1-- `timescaledb` 2.8.1-- `tsm_system_rows` 1-- `tsm_system_time` 1-- `unaccent` 1.1--Updates to this list will be posted as it evolves over time. --## Enable extensions in Arc-enabled PostgreSQL server -You can create an Arc-enabled PostgreSQL server with any of the supported extensions enabled by passing a comma separated list of extensions to the `--extensions` parameter of the `create` command. --```azurecli -az postgres server-arc create -n <name> --k8s-namespace <namespace> --extensions "pgaudit,pg_partman" --use-k8s -``` -*NOTE*: Enabled extensions are added to the configuration ``shared_preload_libraries``. Extensions must be installed in your database before you can use it. To install a particular extension, you should run the [`CREATE EXTENSION`](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database. --For example, connect to your database and issue the following PostgreSQL command to install pgaudit extension: --```SQL -CREATE EXTENSION pgaudit; -``` --## Update extensions -You can add or remove extensions from an existing Arc-enabled PostgreSQL server. --You can run the kubectl describe command to get the current list of enabled extensions: -```console -kubectl describe postgresqls <server-name> -n <namespace> -``` -If there are extensions enabled the output contains a section like this: -```yml - config: - postgreSqlExtensions: pgaudit,pg_partman -``` --Check whether the extension is installed after connecting to the database by running following PostgreSQL command: -```SQL -select * from pg_extension; -``` --Enable new extensions by appending them to the existing list, or remove extensions by removing them from the existing list. Pass the desired list to the update command. 
For example, to add `pgcrypto` and remove `pg_partman` from the server in the example above: --```azurecli -az postgres server-arc update -n <name> --k8s-namespace <namespace> --extensions "pgaudit,pgcrypto" --use-k8s -``` --Once the allowed extensions list is updated, connect to the database and install the newly added extension with the following command: --```SQL -CREATE EXTENSION pgcrypto; -``` --Similarly, to remove an extension from an existing database, issue the [`DROP EXTENSION`](https://www.postgresql.org/docs/current/sql-dropextension.html) command: --```SQL -DROP EXTENSION pg_partman; -``` --## Show the list of installed extensions -Connect to your database with the client tool of your choice and run the standard PostgreSQL query: -```SQL -select * from pg_extension; -``` --## Related content -- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), or in an Azure VM. |
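After updating the allowed list, it can help to confirm from inside PostgreSQL that the libraries were actually preloaded before you run `CREATE EXTENSION`. The following is a minimal sketch using only standard PostgreSQL commands (nothing Arc-specific is assumed):

```SQL
-- Show the libraries preloaded at server start; extensions enabled through
-- the --extensions parameter should appear here once the server has restarted.
SHOW shared_preload_libraries;

-- List the extensions already installed in the current database, with versions.
SELECT extname, extversion FROM pg_extension;
```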
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | - Title: "Azure Arc-enabled data services validation"--- Previously updated : 06/14/2022--- -description: "Describes validation program for Kubernetes distributions for Azure Arc-enabled data services." -keywords: "Kubernetes, Arc, Azure, K8s, validation, data services, SQL Managed Instance" ---# Azure Arc-enabled data services Kubernetes validation --Azure Arc-enabled data services team has worked with industry partners to validate specific distributions and solutions to host Azure Arc-enabled data services. This validation extends the [Azure Arc-enabled Kubernetes validation](../kubernetes/validation-program.md) for the data services. This article identifies partner solutions, versions, Kubernetes versions, SQL engine versions, and PostgreSQL server versions that have been verified to support the data services. --To see how all Azure Arc-enabled components are validated, see [Validation program overview](../validation-program/overview.md) --> [!NOTE] -> At the current time, SQL Managed Instance enabled by Azure Arc is generally available in select regions. -> -> Azure Arc-enabled PostgreSQL server is available for preview in select regions. --## Partners --### DataON --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version -|--|--|--|--|--| -|[DataON AZS-6224](https://www.dataonstorage.com/products-solutions/integrated-systems-for-azure-stack-hci/dataon-integrated-system-azs-6224-for-azure-stack-hci/)|1.24.11| 1.20.0_2023-06-13|16.0.5100.7242|14.5 (Ubuntu 20.04)| --### Dell --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version| -|--|--|--|--|--| -|[PowerStore 4.0](https://www.dell.com/en-us/shop/powerstore/sf/power-store)|1.28.10|1.30.0_2024-06-11|16.0.5349.20214|Not validated| -|[Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated| -|[PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.25.0 |1.21.0_2023-07-11 |16.0.5100.7242 |14.5 (Ubuntu 20.04) | --### Hitachi -|Solution and version |Kubernetes version |Azure Arc-enabled data services version |SQL engine version |PostgreSQL server version| -|--|--|--|--|--| -|[Hitachi UCP with Microsoft AKS-HCI](https://www.hitachivantara.com/en-us/solutions/hybrid-cloud-infrastructure.html)|1.27.3|1.29.0_2024-04-09*|16.0.5290.8214|14.5 (Ubuntu 20.04)| -|[Hitachi UCP with Red Hat OpenShift](https://www.hitachivantara.com/en-us/solutions/hybrid-cloud-infrastructure.html)|1.25.11|1.25.0_2023-11-14|16.0.5100.7246|Not validated| -|Hitachi Virtual Storage Software Block software-defined storage (VSSB)|1.24.12 |1.20.0_2023-06-13 |16.0.5100.7242 |14.5 (Ubuntu 20.04)| -|Hitachi Virtual Storage Platform (VSP) |1.24.12 |1.19.0_2023-05-09 |16.0.937.6221 |14.5 (Ubuntu 20.04)| --*: The solution was validated in indirect mode only (learn more about [the different connectivity modes](../dat)). 
--### HPE --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version| -|--|--|--|--|--| -|HPE Superdome Flex 280 | 1.25.12 | 1.22.0_2023-08-08 | 16.0.5100.7242 |Not validated| -|HPE Apollo 4200 Gen10 Plus | 1.22.6 | 1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)| --### Kublr --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version| -|--|--|--|--|--| -|[Kublr 1.26.0](https://docs.kublr.com/releasenotes/1.26/release-1.26.0/)|1.26.4, 1.25.6, 1.24.13, 1.23.17, 1.22.17|1.21.0_2023-07-11|16.0.5100.7242|14.5 (Ubuntu 20.04)| -|Kublr 1.21.2 | 1.22.10 | 1.9.0_2022-07-12 | 16.0.312.4243 |12.3 (Ubuntu 12.3-1) | --### Lenovo --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version| -|--|--|--|--|--| -|[Lenovo ThinkEdge SE455 V3](https://lenovopress.lenovo.com/lp1724-lenovo-thinkedge-se455-v3-server)|1.26.6|1.24.0_2023-10-10|16.0.5100.7246|Not validated| -|Lenovo ThinkAgile MX1020 |1.26.6|1.24.0_2023-10-10 |16.0.5100.7246|Not validated| -|Lenovo ThinkAgile MX3520 |1.22.6|1.10.0_2022-08-09 |16.0.312.4243| 12.3 (Ubuntu 12.3-1)| --### Nutanix --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version -|--|--|--|--|--| -| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | 1.0.0_2021-07-30 | 15.0.2148.140| 12.3 (Ubuntu 12.3-1)| ---### PureStorage --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version| -|--|--|--|--|--| -|[Portworx Enterprise 3.1](https://www.purestorage.com/products/cloud-native-applications/portworx.html)|1.28.7|1.30.0_2024-06-11|16.0.5349.20214|Not validated| -|Portworx Enterprise 2.7 1.22.5 |1.20.7 |1.1.0_2021-11-02 |15.0.2148.140 |Not validated | -|Portworx Enterprise 2.9 |1.22.5 |1.1.0_2021-11-02 |15.0.2195.191 |12.3 (Ubuntu 12.3-1) | --### Red Hat --|Solution and version |Kubernetes version |Azure Arc-enabled data services version |SQL engine version |PostgreSQL server version| -|--|--|--|--|--| -|[OpenShift 4.15.0](https://docs.openshift.com/container-platform/4.15/release_notes/ocp-4-15-release-notes.html)|1.28.6|1.27.0_2024-02-13|16.0.5100.7246|Not validated| -|[OpenShift 4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html) |1.26.5 |1.21.0_2023-07-11 |16.0.5100.7242 |14.5 (Ubuntu 20.04) | -|OpenShift 4.10.16 |1.23.5 |1.11.0_2022-09-13 |16.0.312.4243 |12.3 (Ubuntu 12.3-1)| --### VMware --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version -|--|--|--|--|--| -|TKGs 2.2|1.25.7|1.23.0_2023-09-12|16.0.5100.7246|14.5 (Ubuntu 20.04)| -|TKGm 2.3|1.26.5|1.23.0_2023-09-12|16.0.5100.7246|14.5 (Ubuntu 20.04)| -|TKGm 2.2|1.25.7|1.19.0_2023-05-09|16.0.937.6223|14.5 (Ubuntu 20.04)| -|TKGm 2.1.0|1.24.9|1.15.0_2023-01-10|16.0.816.19223|14.5 (Ubuntu 20.04)| ----### Wind River --|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version -|--|--|--|--|--| -|[Wind River Cloud Platform 22.12](https://www.windriver.com/studio/operator/cloud-platform)|1.24.4|1.26.0_2023-12-12|16.0.5100.7246|Not validated| -|Wind River Cloud Platform 22.06 | 1.23.1|1.9.0_2022-07-12 
|16.0.312.4243| 12.3 (Ubuntu 12.3-1) | --## Data services validation process --The Sonobuoy Azure Arc-enabled data services plug-in automates the provisioning and testing of Azure Arc-enabled data services on a Kubernetes cluster. --### Prerequisites --- [Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata)-- [kubectl](https://kubernetes.io/docs/home/)-- [Azure Data Studio - Insider build](https://github.com/microsoft/azuredatastudio)--Create a Kubernetes config file configured to access the target Kubernetes cluster and set it as the current context. How this file is generated and brought to your local computer differs from platform to platform. See [Kubernetes.io](https://kubernetes.io/docs/home/). --### Process --The conformance tests run as part of the Azure Arc-enabled data services validation. A prerequisite to running these tests is to pass the Azure Arc-enabled Kubernetes tests for the Kubernetes distribution in use. --These tests verify that the product is compliant with the requirements of running and operating data services. This process helps assess whether the product is enterprise-ready for deployments. --1. Deploy the data controller in both indirect and direct connectivity modes (learn more about [connectivity modes](/azure/azure-arc/data/connectivity)) -2. Deploy [SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) -3. Deploy [Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) --More tests will be added in future releases of Azure Arc-enabled data services. --## Additional information --- [Validation program overview](../validation-program/overview.md)-- [Azure Arc-enabled Kubernetes validation](../kubernetes/validation-program.md)-- [Azure Arc validation program - GitHub project](https://github.com/Azure/azure-arc-validation/)--## Related content --- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md)-- [Create a data controller - indirectly connected with the CLI](create-data-controller-indirect-cli.md)-- To create a directly connected data controller, start with [Prerequisites to deploy the data controller in direct connectivity mode](create-data-controller-direct-prerequisites.md).------ |
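The validation prerequisites above assume that your kubeconfig already points at the cluster being validated. A minimal sketch of setting and verifying the current context with standard kubectl commands (the context name is illustrative):

```console
# List the contexts available in your kubeconfig.
kubectl config get-contexts

# Set the target cluster as the current context (replace with your context name).
kubectl config use-context <target-cluster-context>

# Confirm connectivity to the cluster before starting the validation run.
kubectl get nodes
```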
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | - Title: Azure Arc-enabled data services - release versions -description: A log of versions by release date for Azure Arc-enabled data services -------- - ignite-2023 Previously updated : 08/22/2024--#Customer intent: As a data professional, I want to understand what versions of components align with specific releases. ---# Version log --This article identifies the component versions with each release of Azure Arc-enabled data services. --## September 9, 2024 --|Component|Value| -|--|--| -|Container images tag |`v1.33.0_2024-09-10`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.18 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.33.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 972 | ---## August 13, 2024 --|Component|Value| -|--|--| -|Container images tag |`v1.32.0_2024-08-13`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.17 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.32.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 
([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 972 | --## July 9, 2024 --|Component|Value| -|--|--| -|Container images tag |`v1.31.0_2024-07-09`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.16 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.31.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 970 | --## June 11, 2024 --|Component|Value| -|--|--| -|Container images tag |`v1.30.0_2024-06-11`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.15 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.30.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 970 | --## April 9, 2024 --|Component|Value| -|--|--| -|Container images tag |`v1.29.0_2024-04-09`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| 
v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.11 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.28.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 964 | ---## March 12, 2024 --|Component|Value| -|--|--| -|Container images tag |`v1.28.0_2024-03-12`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.13 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.29.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 964 | --## February 13, 2024 --|Component|Value| -|--|--| -|Container images tag |`v1.27.0_2024-02-13`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| 
-|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.10 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.27.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | ---## December 12, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.26.0_2023-12-12`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.6.0 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.26.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | --## November 14, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.25.0_2023-11-14`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API 
version|2023-11-01-preview| -|`arcdata` Azure CLI extension version|1.5.7 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.25.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | --## October 10, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.24.0_2023-10-10`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-01-15-preview| -|`arcdata` Azure CLI extension version|1.5.6 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.24.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | --## September 12, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.23.0_2023-09-12`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-01-15-preview| -|`arcdata` Azure CLI extension version|1.5.5 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.23.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | --## August 8, 2023 --|Component|Value| -|--|--| 
-|Container images tag |`v1.22.0_2023-08-08`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-01-15-preview| -|`arcdata` Azure CLI extension version|1.5.4 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.22.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | --## July 11, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.21.0_2023-07-11`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-01-15-preview| -|`arcdata` Azure CLI extension version|1.5.3 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.21.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | --## June 13, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.20.0_2023-06-13`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| 
v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-01-15-preview| -|`arcdata` Azure CLI extension version|1.5.2 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.20.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -|SQL Database version | 957 | --## May 9, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.19.0_2023-05-09`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2023-01-15-preview| -|`arcdata` Azure CLI extension version|1.5.0 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.19.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| -| SQL Database version | 931 | --## April 11, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.18.0_2023-04-11`| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v2| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v12| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| 
-|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`redis.arcdata.microsoft.com`| v1beta1| -|Azure Resource Manager (ARM) API version|2023-01-15-preview| -|`arcdata` Azure CLI extension version|1.4.13 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.18.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| --## March 14, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.17.0_2023-03-14 `| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v2| -|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v11| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| -|Azure Resource Manager (ARM) API version|2022-06-15-preview| -|`arcdata` Azure CLI extension version|1.4.12 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.17.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| --## February 14, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.16.0_2023-02-14 `| -|**CRD names and version:**| | -|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| -|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v6| -|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| -|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| -|`kafkas.arcdata.microsoft.com`| v1beta1, v1beta2, v1beta3| -|`monitors.arcdata.microsoft.com`| v1beta1, v1, v2| -|`postgresqls.arcdata.microsoft.com`| v1beta1, v1beta2, v1beta3, v1beta4, v1beta5| -|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| -|`redis.arcdata.microsoft.com`| v1beta1, v1beta2| -|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v10| -|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| -|`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`| v1beta1| -|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| -|`telemetrycollectors.arcdata.microsoft.com` *use to be otelcollectors*| v1beta1, v1beta2, v1beta3, v1beta4| -|`telemetryrouters.arcdata.microsoft.com`| v1beta1, v1beta2, v1beta3, v1beta4, v1beta4, v1beta5| -|Azure Resource Manager (ARM) API version|2022-06-15-preview| -|`arcdata` Azure CLI extension 
version|1.4.11 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.16.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| --## January 13, 2023 --|Component|Value| -|--|--| -|Container images tag |`v1.15.0_2023-01-10`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v7<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1 through v2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`telemetrycollectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3 *use to be otelcollectors*<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3, v1beta4<br/>`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| -|Azure Resource Manager (ARM) API version|2022-06-15-preview| -|`arcdata` Azure CLI extension version|1.4.10 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.15.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|*No Changes*<br/>1.7.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.7.0 ([Download](https://aka.ms/ads-azcli-ext))| --## December 13, 2022 --|Component|Value| -|--|--| -|Container images tag |`v1.14.0_2022-12-13`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v7<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1 through v2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`telemetrycollectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3 *use to be otelcollectors*<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3, v1beta4<br/>`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| -|Azure Resource Manager (ARM) API version|2022-06-15-preview| -|`arcdata` Azure CLI extension version|1.4.9 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.14.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|*No Changes*<br/>1.7.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.7.0 ([Download](https://aka.ms/ads-azcli-ext))| --## November 8, 2022 ---|Component|Value| -|--|--| -|Container images tag |`v1.13.0_2022-11-08`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, 
v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v7<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1 through v2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`telemetrycollectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3 *use to be otelcollectors*<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| -|Azure Resource Manager (ARM) API version|2022-06-15-preview| -|`arcdata` Azure CLI extension version|1.4.8 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.13.0| -|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.7.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.7.0 ([Download](https://aka.ms/ads-azcli-ext))| --## October 11, 2022 --|Component|Value| -|--|--| -|Container images tag |`v1.12.0_2022-10-11`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v7<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1 through v2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`otelcollectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| -|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)| -|`arcdata` Azure CLI extension version|1.4.7| -|Arc enabled Kubernetes helm chart extension version|1.12.0| -|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|*No Changes*<br/>1.5.4 </br>1.5.4 | --## September 13, 2022 --|Component|Value| -|--|--| -|Container images tag |`v1.11.0_2022-09-13`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`otelcollectors.arcdata.microsoft.com`: v1beta1<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1<br/>| -|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)| -|`arcdata` Azure CLI extension version|1.4.6| -|Arc enabled Kubernetes helm chart extension version|1.11.0 (Note: This versioning scheme is new, starting from this release. 
The scheme follows the semantic versioning scheme of the container images.)| -|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.4</br>1.5.4| --## August 9, 2022 --|Component|Value| -|--|--| -|Container images tag |`v1.10.0_2022-08-09`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| -|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)| -|`arcdata` Azure CLI extension version|1.4.5| -|Arc enabled Kubernetes helm chart extension version|1.2.20381002| -|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.1</br>1.5.1| --## July 12, 2022 --|Component|Value| -|--|--| -|Container images tag |`v1.9.0_2022-07-12`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| -|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)| -|`arcdata` Azure CLI extension version|1.4.3| -|Arc enabled Kubernetes helm chart extension version|1.2.20031002| -|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.3.0</br>1.3.0| --## June 14, 2022 --|Component|Value| -|--|--| -|Container images tag |`v1.8.0_2022-06-14`| -|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| -|ARM API version|2022-03-01-preview (No change)| -|`arcdata` Azure CLI extension version|1.4.2| -|Arc enabled Kubernetes helm chart extension version|1.2.19831003| -|Arc Data extension for Azure Data Studio|1.3.0 (No change)| ---## May 24, 2022 --|Component |Value | -|--|| -|Container images tag |`v1.7.0_2022-05-24`| -|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5</br>`postgresqls.arcdata.microsoft.com`: v1beta1, 
v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2| -|ARM API version|2022-03-01-preview (No change)| -|`arcdata` Azure CLI extension version| 1.4.1| -|Arc enabled Kubernetes helm chart extension version|1.2.19581002| -|Arc Data extension for Azure Data Studio|1.3.0| --## May 4, 2022 --|Component |Value | -|--|| -|Container images tag |`v1.6.0_2022-05-02`| -|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v5</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2| -|ARM API version|2022-03-01-preview| -|`arcdata` Azure CLI extension version| 1.4.0| -|Arc enabled Kubernetes helm chart extension version|1.2.19481002| -|Arc Data extension for Azure Data Studio|1.2.0| --## April 6, 2022 --|Component |Value | -|--|| -|Container images tag |`v1.5.0_2022-04-05`| -|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2| -|ARM API version|2021-11-01| -|`arcdata` Azure CLI extension version| 1.3.0| -|Arc enabled Kubernetes helm chart extension version|1.1.19211001| -|Arc Data extension for Azure Data Studio|1.1.0| --## March 8, 2022 --|Component |Value | -|--|| -|Container images tag |`v1.4.1_2022-03-08` -|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1| -|ARM API version|2021-11-01| -|`arcdata` Azure CLI extension version| 1.2.3| -|Arc enabled Kubernetes helm chart extension version|1.1.18911000| -|Arc Data extension for Azure Data Studio|1.0| --## February 25, 2022 --|Component |Value | -|--|| -|Container images tag |`v1.4.0_2022-02-25` -|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, 
v4</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1| -|ARM API version|2021-11-01| -|`arcdata` Azure CLI extension version| 1.2.1| -|Arc enabled Kubernetes helm chart extension version|1.1.18791000| -|Arc Data extension for Azure Data Studio|1.0| --## January 27, 2022 --|Component |Value | -|--|| -|Container images tag |`v1.3.0_2022-01-27` -|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1| -|ARM API version|2021-11-01| -|`arcdata` Azure CLI extension version| 1.2.0| -|Arc enabled Kubernetes helm chart extension version|1.1.18501004| -|Arc Data extension for Azure Data Studio|1.0| --## December 16, 2021 --The following table describes the components in this release. --|Component |Value | -|--|| -|Container images tag | `v1.2.0_2021-12-15` | -|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1 | -|ARM API version | 2021-11-01 | -|`arcdata` Azure CLI extension version | 1.1.2 | -|Arc enabled Kubernetes helm chart extension version | 1.1.18031001 | -|Arc Data extension for Azure Data Studio | 0.11 | --## November 2, 2021 --The following table describes the components in this release. --|Component |Value | -|--|| -|Container images tag | `v1.1.0_2021-11-02` | -|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2 | -|ARM API version | 2021-11-01 | -|`arcdata` Azure CLI extension version | 1.1.0 (Nov 3)</br>1.1.1 (Nov4) | -|Arc enabled Kubernetes helm chart extension version | 1.0.17551005 - Required if upgrade from GA <br/><br/> 1.1.17561007 - GA+1/Nov release chart | -|Arc Data extension for Azure Data Studio | 0.9.7 | --## August 3, 2021 --This release provides an update for the Azure Arc extension for Azure Data Studio. The update aligns with July 30, general availability. The following table describes the updated component. --|Component |Value | -|--|| -|Arc Data extension for Azure Data Studio | 0.9.6 | --All other components are the same as previously released. 
--## July 30, 2021 --This release introduces general availability for SQL Managed Instance enabled by Azure Arc General Purpose and SQL Server enabled by Azure Arc. The following table describes the components in this release. --|Component |Value | -|--|| -|Container images tag | `v1.0.0_2021-07-30` | -|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1 <br/> | -|ARM API version | 2021-08-01 (stable) | -|`arcdata` Azure CLI extension version | 1.0 | -|Arc enabled Kubernetes helm chart extension version | 1.0.16701001, release train: stable | -|Arc Data extension for Azure Data Studio | 0.9.5 | |
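To match a running environment against the entries in this log, one quick check is the version of the `arcdata` Azure CLI extension installed locally. A minimal sketch using the Azure CLI; the `--query` path assumes the standard `az extension` output shape:

```azurecli
# Show the locally installed arcdata extension version.
az extension show --name arcdata --query version --output tsv

# Update to the latest released arcdata extension if needed.
az extension update --name arcdata
```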
azure-arc | View Arc Data Services Inventory In Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-arc-data-services-inventory-in-azure-portal.md | - Title: View inventory of your instances in the Azure portal -description: View inventory of your instances in the Azure portal ------ Previously updated : 11/03/2021----# Inventory of Arc-enabled data services --You can view your Azure Arc-enabled data services in the Azure portal or in your Kubernetes cluster. --## View resources in Azure portal --After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.md), or [usage](view-billing-data-in-azure.md), you can view your deployments of SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL servers in the Azure portal. To view your resource in the [Azure portal](https://portal.azure.com), follow these steps: --1. Go to **All services**. -1. Search for your database instance type. -1. Add the type to your favorites. -1. In the left menu, select the instance type. -1. View your instances in the same view as your other Azure SQL or Azure PostgreSQL server resources (use filters for a granular view). --## View resources in your Kubernetes cluster --If the Azure Arc data controller is deployed in **indirect** connectivity mode, you can run the following command to get a list of all the Azure Arc SQL managed instances: --```azurecli -az sql mi-arc list --k8s-namespace <namespace> --use-k8s -#Example -az sql mi-arc list --k8s-namespace arc --use-k8s -``` --If the Azure Arc data controller is deployed in **direct** connectivity mode, you can run the following command to get a list of all the Azure Arc SQL managed instances: --```azurecli -az sql mi-arc list --resource-group <resourcegroup> -#Example -az sql mi-arc list --resource-group myResourceGroup -``` |
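For a quicker scan of a large inventory, the list commands above can be combined with the Azure CLI's generic `--query` and `--output` options (a sketch; the `name` and `state` property names are assumptions about the command's JSON output and may need adjusting):

```azurecli
# Indirect mode: project only the instance name and state into a table
az sql mi-arc list --k8s-namespace arc --use-k8s --query "[].{name:name,state:state}" --output table

# Direct mode: the same projection against a resource group
az sql mi-arc list --resource-group myResourceGroup --query "[].{name:name,state:state}" --output table
```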
azure-arc | View Billing Data In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-billing-data-in-azure.md | - Title: Upload billing data to Azure and view it in the Azure portal -description: Upload billing data to Azure and view it in the Azure portal ------ Previously updated : 11/03/2021----# Upload billing data to Azure and view it in the Azure portal -----## Connectivity Modes - Implications for billing data --There are two modes in which you can deploy your Azure Arc-enabled data services: --- **Indirectly connected** - There is no direct connection to Azure. Data is sent to Azure only through an export/upload process.-- **Directly connected** - In this mode, there is a dependency on the Azure Arc-enabled Kubernetes service to provide a direct connection between Azure and the Kubernetes cluster on which the Azure Arc-enabled data services are deployed. This enables more capabilities from Azure and also enables you to use the Azure portal to manage your Azure Arc-enabled data services just like you manage your data services in Azure PaaS. --You can read more about the difference between the [connectivity modes](./connectivity.md). --In the indirectly connected mode, billing data is periodically exported out of the Azure Arc data controller to a secure file and then uploaded to Azure and processed. In the directly connected mode, the billing data is automatically sent to Azure approximately once per hour to give a near real-time view into the costs of your services. The process of exporting and uploading the data in the indirectly connected mode can also be automated by using scripts, or Microsoft might provide a service that does this for you. --## Upload billing data to Azure - Indirectly connected mode --> [!NOTE] -> Uploading of usage (billing) data is done automatically in the directly connected mode. The following instructions apply only to the indirectly connected mode. --To upload billing data to Azure, the following should happen first: --1. Create an Azure Arc-enabled data service if you don't have one already. For example, create one of the following: - - [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) - - [Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) -2. Wait at least two hours after the creation of the data service so that the billing telemetry collection process can collect some billing data. -3. Follow the steps described in [Upload resource inventory, usage data, metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md) to set up the prerequisites for uploading usage, billing, and logs data, and then proceed to [Upload usage data to Azure](upload-usage-data.md) to upload the billing data. ---## View billing data in Azure portal --Follow these steps to view billing data in the Azure portal: --1. Open the [Azure portal](https://portal.azure.com). -1. In the search box at the top of the screen, type in **Cost Management** and click on the Cost Management service. -1. Under **Cost Management Overview**, click on the **Cost Management** tab. -1. Click on the **Cost analysis** tab on the left. -1. Click the **Cost by resource** button on the top of the view. -1. Make sure that your Scope is set to the subscription in which your data service resources were created. -1. Select **Cost by resource** in the View drop-down next to the Scope selector near the top of the view. -1.
Make sure the date filter is set to **This month** or some other time range that makes sense given the timing of when you created your data service resources. -1. Click **Add filter** to add a filter by **Resource type** = `Microsoft.AzureArcData/<data service type>` if you want to filter down to just one type of Azure Arc-enabled data service. -1. You will now see a list of all the resources that were created and uploaded to Azure. Since the billing meter is $0, you will see that the cost is always $0. --## Download billing data --You can download billing summary data directly from the Azure portal. --1. In the same **Cost analysis -> view by resource type** view that you reached by following the instructions above, click the Download button near the top. -1. Choose your download file type - Excel or CSV - and click the **Download data** button. -1. Open the file in an appropriate editor given the file type selected. --## Export billing data --You can also periodically, automatically export **detailed** usage and billing data to an Azure Storage container by creating a billing export job. This is useful if you want to see the details of your billing such as how many hours a given instance was billed for in the billing period. --Follow these steps to set up a billing export job: --1. Click **Exports** on the left. -1. Click **Add**. -1. Enter a name and export frequency and click Next. -1. Choose to either create a new storage account or use an existing one and fill out the form to specify the storage account, container, and directory path to export the billing data files to and click Next. -1. Click **Create**. --The billing data export files will be available in approximately 4 hours and will be exported on the schedule you specified when creating the billing export job. --Follow these steps to view the billing data files that are exported: --You can validate the billing data files in the Azure portal. --> [!IMPORTANT] -> After you create the billing export job, wait 4 hours before you proceed with the following steps. --1. In the search box at the top of the portal, type in **Storage accounts** and click on **Storage Accounts**. -3. Click on the storage account which you specified when creating the billing export job above. -4. Click on Containers on the left. -5. Click on the container you specified when creating the billing export job above. -6. Click on the folder you specified when creating the billing export job above. -7. Drill down into the generated folders and files and click on one of the generated .csv files. -8. Click the **Download** button which will save the file to your local Downloads folder. -9. Open the file using a .csv file viewer such as Excel. -10. Filter the results to show only the rows with the **Resource Type** = `Microsoft.AzureArcData/<data service resource type`. -11. You will see the number of hours the instance was used in the current 24 hour period in the UsageQuantity column. |
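If you'd rather inspect an exported billing file from the command line instead of Excel, a sketch like the following may help once the export job has produced a file (the storage account, container, and blob path are placeholders taken from your own export job, and the filter simply keeps rows that mention the Azure Arc data services resource provider):

```bash
# Download one exported billing CSV from the container used by the export job
az storage blob download \
  --account-name <storage-account> \
  --container-name <container> \
  --name <path/to/export.csv> \
  --file billing-export.csv \
  --auth-mode login

# Keep the header row plus rows for Azure Arc-enabled data services resource types
head -n 1 billing-export.csv > arc-billing.csv
grep "Microsoft.AzureArcData" billing-export.csv >> arc-billing.csv
```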
azure-arc | View Data Controller In Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-data-controller-in-azure-portal.md | - Title: View Azure Arc data controller resource in Azure portal -description: View Azure Arc data controller resource in Azure portal ------ Previously updated : 11/03/2021----# View Azure Arc data controller resource in the Azure portal --To view the Azure Arc data controller in the Azure portal, you must export at least one type of data (usage data, metrics, or logs) from your Kubernetes cluster and upload it to Azure. --## Direct connected mode --If the Azure Arc data controller is deployed in **direct** connected mode, usage data is automatically uploaded to Azure, and the Kubernetes resources are projected into Azure. --## Indirect connected mode --In the **indirect** connected mode, you must export and upload at least one type of data (usage data, metrics, or logs) to Azure. For more information on this process, see [Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md). This action creates the appropriate resources in Azure. --## Azure portal --After you complete your first [metrics or logs upload to Azure](upload-metrics-and-logs-to-azure-monitor.md) or [usage data upload](view-billing-data-in-azure.md), you can see the Azure Arc data controller and any SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL server resources in the [Azure portal](https://portal.azure.com). --To find your data controller, search for it by name in the search bar and then select it. |
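In the indirectly connected mode, the export-and-upload step that makes the data controller visible in the portal looks roughly like the following (a sketch; the namespace and file name are placeholders, and it assumes the `arcdata` CLI extension is installed and you're signed in to Azure):

```azurecli
# Export usage data from the data controller (indirect mode)
az arcdata dc export --type usage --path usage.json --k8s-namespace arc --use-k8s

# Upload the exported file so the corresponding resources appear in the Azure portal
az arcdata dc upload --path usage.json
```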
azure-arc | What Is Azure Arc Enabled Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/what-is-azure-arc-enabled-postgresql.md | - Title: What is Azure Arc-enabled PostgreSQL server? -description: What is Azure Arc-enabled PostgreSQL server? ------ Previously updated : 07/19/2023----# What is Azure Arc-enabled PostgreSQL server ---**Azure Arc-enabled PostgreSQL server** is one of the database engines available as part of Azure Arc-enabled data services. --## Compare PostgreSQL solutions provided by Microsoft in Azure --Microsoft offers PostgreSQL database services in Azure in two ways: -- As a managed service in **[Azure PaaS](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)** (Platform As A Service)-- As a customer-managed service with Azure Arc as it is operated by customers or their partners/vendors--### Features --- Manage PostgreSQL simply-- Simplify monitoring, back up, patching/upgrade, access control & more-- Deploy PostgreSQL on any [Kubernetes](https://kubernetes.io/) infrastructure- - On-premises - - Cloud providers like AWS, GCP, and Azure - - Edge deployments (including lightweight Kubernetes [K3S](https://k3s.io/)) -- Integrate with Azure- - Direct connectivity mode - Deploy Azure Arc-enabled PostgreSQL server from the Azure portal - - Indirect connectivity mode - Deploy Azure Arc-enabled PostgreSQL server from the infrastructure that hosts it -- Secure- - Supports Active Directory - - Server and Client TLS - - System and user managed certificates -- Pay for what you use (per usage billing)-- Get support from Microsoft on PostgreSQL--## Architecture --Azure Arc-enabled PostgreSQL server is the community version of the [PostgreSQL 14](https://www.postgresql.org/) server with a curated set of available extensions. Most PostgreSQL applications workloads should be capable of running against Azure Arc-enabled PostgreSQL server using standard drivers. ---## Related content --### Try it out --Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. --### Deploy --Follow these steps to create on your own Kubernetes cluster: -- [Install the client tools](install-client-tools.md)-- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md)-- [Create an Azure Arc-enabled PostgreSQL server on Azure Arc](create-postgresql-server.md) --### Learn -- [Azure Arc](https://aka.ms/azurearc)-- [Azure Arc-enabled Data Services overview](overview.md)-- [Azure Arc Hybrid Data Services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)-- [Connectivity modes](connectivity.md)-- |
azure-arc | Agent Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md | - Title: "Upgrade Azure Arc-enabled Kubernetes agents" Previously updated : 12/13/2023-- -description: "Control agent upgrades for Azure Arc-enabled Kubernetes" ---# Upgrade Azure Arc-enabled Kubernetes agents --Azure Arc-enabled Kubernetes provides both automatic and manual upgrade capabilities for its [agents](conceptual-agent-overview.md) so that agents are upgraded to the [latest version](release-notes.md). If you disable automatic upgrade and instead rely on manual upgrade, a [version support policy](#version-support-policy) applies for Arc agents and the underlying Kubernetes clusters. --## Toggle automatic upgrade on or off when connecting a cluster to Azure Arc --Azure Arc-enabled Kubernetes provides its agents with out-of-the-box automatic upgrade capabilities. When automatic upgrade is enabled, the agent polls Azure hourly to check for a newer version. When a newer version becomes available, it triggers a Helm chart upgrade for the Azure Arc agents. --When you [connect a cluster to Azure Arc](quickstart-connect-cluster.md), the default setting is to enable automatic upgrade. --The following command connects a cluster to Azure Arc with automatic upgrade enabled: --```azurecli -az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest -``` --> [!IMPORTANT] -> Be sure you allow [connectivity to all required endpoints](network-requirements.md). In particular, connectivity to `dl.k8s.io` is required for automatic upgrades. --To opt out of automatic upgrade, specify the `--disable-auto-upgrade` parameter while connecting the cluster to Azure Arc. --The following command connects a cluster to Azure Arc with auto-upgrade disabled: --```azurecli -az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest --disable-auto-upgrade -``` --> [!TIP] -> If you plan to disable automatic upgrade, be aware of the [version support policy](#version-support-policy) for Azure Arc-enabled Kubernetes. --## Toggle automatic upgrade on or off after connecting a cluster to Azure Arc --After you connect a cluster to Azure Arc, you can change the automatic upgrade selection by using the `az connectedk8s update` command and setting `--auto-upgrade` to either true or false. --The following command turns automatic upgrade off for a connected cluster: --```azurecli -az connectedk8s update --name AzureArcTest1 --resource-group AzureArcTest --auto-upgrade false -``` --## Manually upgrade agents --If you've disabled automatic upgrade, you can manually initiate upgrades for the agents by using the `az connectedk8s upgrade` command. When doing so, you must specify the version to which you want to upgrade. --Azure Arc-enabled Kubernetes follows the standard [semantic versioning scheme](https://semver.org/) of `MAJOR.MINOR.PATCH` for versioning its agents. Each number in the version indicates general compatibility with the previous version: --* **Major versions** change when there are incompatible API updates or backwards-compatibility may be broken. -* **Minor versions** change when functionality changes are backwards-compatible to other minor releases. -* **Patch versions** change when backwards-compatible bug fixes are made. --While the schedule may vary, a new minor version of Azure Arc-enabled Kubernetes agents is [released approximately once per month](release-notes.md). 
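Before a manual upgrade, it can be useful to confirm the agent version a specific cluster is currently running; a single-cluster variant of that check might look like this (a sketch using `az connectedk8s show`; the cluster and resource group names are placeholders):

```azurecli
# Report the Arc agent version currently running on one connected cluster
az connectedk8s show --name AzureArcTest1 --resource-group AzureArcTest --query agentVersion --output tsv
```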
--The following command manually upgrades the agents to version 1.8.14: --```azurecli -az connectedk8s upgrade -g AzureArcTest -n AzureArcTest1 --agent-version 1.8.14 -``` --## Check agent version --To list connected clusters and reported agent version, use the following command: --```azurecli -az connectedk8s list --query '[].{name:name,rg:resourceGroup,id:id,version:agentVersion}' -``` --## Check if automatic upgrade is enabled on a cluster --To check whether a cluster is enabled for automatic upgrade, run the following kubectl command. Note that the automatic upgrade configuration is not available in the public API for Azure Arc-enabled Kubernetes. --```console -kubectl -n azure-arc get cm azure-clusterconfig -o jsonpath="{.data['AZURE_ARC_AUTOUPDATE']}" -``` --## Version support policy --When you [create support requests](../../azure-portal/supportability/how-to-create-azure-support-request.md) for Azure Arc-enabled Kubernetes, the following version support policy applies: --* Azure Arc-enabled Kubernetes agents have a support window of "N-2", where 'N' is the latest minor release of agents. - * For example, if Azure Arc-enabled Kubernetes introduces 0.28.a today, versions 0.28.a, 0.28.b, 0.27.c, 0.27.d, 0.26.e, and 0.26.f are supported. --* Kubernetes clusters connecting to Azure Arc have a support window of "N-2", where 'N' is the latest stable minor release of [upstream Kubernetes](https://github.com/kubernetes/kubernetes/releases). - * For example, if Kubernetes introduces 1.20.a today, versions 1.20.a, 1.20.b, 1.19.c, 1.19.d, 1.18.e, and 1.18.f are supported. --If you create a support request and are using a version that is outside of the support policy (older than the "N-2" supported versions of agents and upstream Kubernetes clusters), you'll be asked to upgrade the clusters and agents to a supported version. --## Next steps --* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). -* Already have a Kubernetes cluster connected to Azure Arc? [Create configurations on your Azure Arc-enabled Kubernetes cluster](./tutorial-use-gitops-connected-cluster.md). -* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md). |
azure-arc | Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md | - Title: "Azure RBAC on Azure Arc-enabled Kubernetes clusters" Previously updated : 05/28/2024-- -description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters." ---# Use Azure RBAC on Azure Arc-enabled Kubernetes clusters --Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. By using this feature, you can use Microsoft Entra ID and role assignments in Azure to control authorization checks on the cluster. Azure role assignments let you granularly control which users can read, write, and delete Kubernetes objects such as deployment, pod, and service. --For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md). --## Prerequisites --- [Install or upgrade the Azure CLI](/cli/azure/install-azure-cli) to the latest version.--- Install the latest version of `connectedk8s` Azure CLI extension:-- ```azurecli - az extension add --name connectedk8s - ``` -- If the `connectedk8s` extension is already installed, you can update it to the latest version by using the following command: -- ```azurecli - az extension update --name connectedk8s - ``` --- Connect an existing Azure Arc-enabled Kubernetes cluster:- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. --> [!NOTE] -> Azure RBAC is not available for Red Hat OpenShift or managed Kubernetes offerings where user access to the API server is restricted (ex: Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE)). -> -> Azure RBAC does not currently support Kubernetes clusters operating on ARM64 architecture. Please use [Kubernetes RBAC](identity-access-overview.md#kubernetes-rbac-authorization) to manage access control for ARM64-based Kubernetes clusters. -> -> For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](/azure/aks/manage-azure-rbac) and doesn't require the AKS cluster to be connected to Azure Arc. -## Enable Azure RBAC on the cluster --1. Get the cluster MSI identity by running the following command: -- ```azurecli - az connectedk8s show -g <resource-group> -n <connected-cluster-name> - ``` --1. Get the ID (`identity.principalId`) from the output and run the following command to assign the **Connected Cluster Managed Identity CheckAccess Reader** role to the cluster MSI: -- ```azurecli - az role assignment create --role "Connected Cluster Managed Identity CheckAccess Reader" --assignee "<Cluster MSI ID>" --scope <cluster ARM ID> - ``` --1. Enable Azure role-based access control (RBAC) on your Azure Arc-enabled Kubernetes cluster by running the following command: -- ```azurecli - az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac - ``` -- > [!NOTE] - > Before you run the preceding command, ensure that the `kubeconfig` file on the machine is pointing to the cluster on which you'll enable the Azure RBAC feature. 
- > - > Use `--skip-azure-rbac-list` with the preceding command for a comma-separated list of usernames, emails, and OpenID connections undergoing authorization checks by using Kubernetes native `ClusterRoleBinding` and `RoleBinding` objects instead of Azure RBAC. --### Generic cluster where no reconciler is running on the `apiserver` specification --1. SSH into every master node of the cluster and take the following steps: -- **If your `kube-apiserver` is a [static pod](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/):** -- 1. The `azure-arc-guard-manifests` secret in the `kube-system` namespace contains two files: `guard-authn-webhook.yaml` and `guard-authz-webhook.yaml`. Copy these files to the `/etc/guard` directory of the node. -- ```console - sudo mkdir -p /etc/guard - kubectl get secrets azure-arc-guard-manifests -n kube-system -o json | jq -r '.data."guard-authn-webhook.yaml"' | base64 -d > /etc/guard/guard-authn-webhook.yaml - kubectl get secrets azure-arc-guard-manifests -n kube-system -o json | jq -r '.data."guard-authz-webhook.yaml"' | base64 -d > /etc/guard/guard-authz-webhook.yaml - ``` -- 1. Open the `apiserver` manifest in edit mode: -- ```console - sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml - ``` -- 1. Add the following specification under `volumes`: -- ```yml - - hostPath: - path: /etc/guard - type: Directory - name: azure-rbac - ``` -- 1. Add the following specification under `volumeMounts`: -- ```yml - - mountPath: /etc/guard - name: azure-rbac - readOnly: true - ``` -- **If your `kube-apiserver` is not a static pod:** -- 1. Open the `apiserver` manifest in edit mode: -- ```console - sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml - ``` -- 1. Add the following specification under `volumes`: -- ```yml - - name: azure-rbac - secret: - secretName: azure-arc-guard-manifests - ``` -- 1. Add the following specification under `volumeMounts`: -- ```yml - - mountPath: /etc/guard - name: azure-rbac - readOnly: true - ``` --1. Add the following `apiserver` arguments: -- ```yml - - --authentication-token-webhook-config-file=/etc/guard/guard-authn-webhook.yaml - - --authentication-token-webhook-cache-ttl=5m0s - - --authorization-webhook-cache-authorized-ttl=5m0s - - --authorization-webhook-config-file=/etc/guard/guard-authz-webhook.yaml - - --authorization-webhook-version=v1 - - --authorization-mode=Node,RBAC,Webhook - ``` -- If the Kubernetes cluster is version 1.19.0 or later, you also need to set the following `apiserver` argument: -- ```yml - - --authentication-token-webhook-version=v1 - ``` --1. Save and close the editor to update the `apiserver` pod. --### Cluster created by using Cluster API --1. Copy the guard secret that contains authentication and authorization webhook configuration files from the workload cluster onto your machine: -- ```console - kubectl get secret azure-arc-guard-manifests -n kube-system -o yaml > azure-arc-guard-manifests.yaml - ``` --1. Change the `namespace` field in the *azure-arc-guard-manifests.yaml* file to the namespace within the management cluster where you're applying the custom resources for creation of workload clusters. --1. Apply this manifest: -- ```console - kubectl apply -f azure-arc-guard-manifests.yaml - ``` --1. Edit the `KubeadmControlPlane` object by running `kubectl edit kcp <clustername>-control-plane`: -- 1.
Add the following snippet under `files`: -- ```console - - contentFrom: - secret: - key: guard-authn-webhook.yaml - name: azure-arc-guard-manifests - owner: root:root - path: /etc/kubernetes/guard-authn-webhook.yaml - permissions: "0644" - - contentFrom: - secret: - key: guard-authz-webhook.yaml - name: azure-arc-guard-manifests - owner: root:root - path: /etc/kubernetes/guard-authz-webhook.yaml - permissions: "0644" - ``` -- 1. Add the following snippet under `apiServer` > `extraVolumes`: -- ```console - - hostPath: /etc/kubernetes/guard-authn-webhook.yaml - mountPath: /etc/guard/guard-authn-webhook.yaml - name: guard-authn - readOnly: true - - hostPath: /etc/kubernetes/guard-authz-webhook.yaml - mountPath: /etc/guard/guard-authz-webhook.yaml - name: guard-authz - readOnly: true - ``` -- 1. Add the following snippet under `apiServer` > `extraArgs`: -- ```console - authentication-token-webhook-cache-ttl: 5m0s - authentication-token-webhook-config-file: /etc/guard/guard-authn-webhook.yaml - authentication-token-webhook-version: v1 - authorization-mode: Node,RBAC,Webhook - authorization-webhook-cache-authorized-ttl: 5m0s - authorization-webhook-config-file: /etc/guard/guard-authz-webhook.yaml - authorization-webhook-version: v1 - ``` -- 1. Save and close to update the `KubeadmControlPlane` object. Wait for these changes to appear on the workload cluster. --## Create role assignments for users to access the cluster --Owners of the Azure Arc-enabled Kubernetes resource can use either built-in roles or custom roles to grant other users access to the Kubernetes cluster. --### Built-in roles --| Role | Description | -||| -| [Azure Arc Kubernetes Viewer](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-viewer) | Allows read-only access to see most objects in a namespace. This role doesn't allow viewing secrets, because `read` permission on secrets would enable access to `ServiceAccount` credentials in the namespace. These credentials would in turn allow API access through that `ServiceAccount` value (a form of privilege escalation). | -| [Azure Arc Kubernetes Writer](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-writer) | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing secrets and running pods as any `ServiceAccount` value in the namespace, so it can be used to gain the API access levels of any `ServiceAccount` value in the namespace. | -| [Azure Arc Kubernetes Admin](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-admin) | Allows admin access. It's intended to be granted within a namespace through `RoleBinding`. If you use it in `RoleBinding`, it allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. | -| [Azure Arc Kubernetes Cluster Admin](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-cluster-admin) | Allows superuser access to execute any action on any resource. When you use it in `ClusterRoleBinding`, it gives full control over every resource in the cluster and in all namespaces. 
When you use it in `RoleBinding`, it gives full control over every resource in the role binding's namespace, including the namespace itself.| --You can create role assignments scoped to the Azure Arc-enabled Kubernetes cluster in the Azure portal on the **Access Control (IAM)** pane of the cluster resource. You can also use the following Azure CLI commands: --```azurecli -az role assignment create --role "Azure Arc Kubernetes Cluster Admin" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID -``` --In those commands, `AZURE-AD-ENTITY-ID` can be a username (for example, `testuser@mytenant.onmicrosoft.com`) or even the `appId` value of a service principal. --Here's another example of creating a role assignment scoped to a specific namespace within the cluster: --```azurecli -az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name> -``` --> [!NOTE] -> You can create role assignments scoped to the cluster by using either the Azure portal or the Azure CLI. However, only Azure CLI can be used to create role assignments scoped to namespaces. --### Custom roles --You can choose to create your own role definition for use in role assignments. --Walk through the following example of a role definition that allows a user to only read deployments. For more information, see [the full list of data actions that you can use to construct a role definition](../../role-based-access-control/resource-provider-operations.md#microsoftkubernetes). --Copy the following JSON object into a file called *custom-role.json*. Replace the `<subscription-id>` placeholder with the actual subscription ID. The custom role uses one of the data actions and lets you view all deployments in the scope (cluster or namespace) where the role assignment is created. --```json -{ - "Name": "Arc Deployment Viewer", - "Description": "Lets you view all deployments in cluster/namespace.", - "Actions": [], - "NotActions": [], - "DataActions": [ - "Microsoft.Kubernetes/connectedClusters/apps/deployments/read" - ], - "NotDataActions": [], - "assignableScopes": [ - "/subscriptions/<subscription-id>" - ] -} -``` --1. Create the role definition by running the following command from the folder where you saved *custom-role.json*: -- ```azurecli - az role definition create --role-definition @custom-role.json - ``` --1. Create a role assignment by using this custom role definition: -- ```azurecli - az role assignment create --role "Arc Deployment Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name> - ``` --## Configure kubectl with user credentials --There are two ways to get the *kubeconfig* file that you need to access the cluster: --- You use the [cluster Connect](cluster-connect.md) feature (`az connectedk8s proxy`) of the Azure Arc-enabled Kubernetes cluster.-- The cluster admin shares the *kubeconfig* file with every other user.--### Use cluster connect --Run the following command to start the proxy process: --```azurecli -az connectedk8s proxy -n <clusterName> -g <resourceGroupName> -``` --After the proxy process is running, you can open another tab in your console to [start sending your requests to the cluster](#send-requests-to-the-cluster). --### Use a shared kubeconfig file --Using a shared kubeconfig requires slightly different steps depending on your Kubernetes version. --### [Kubernetes version >= 1.26](#tab/kubernetes-latest) --1. Run the following command to set the credentials for the user. 
Specify `serverApplicationId` as `6256c85f-0aad-4d50-b960-e6e9b21efe35` and `clientApplicationId` as `3f4439ff-e698-4d6d-84fe-09c9d574f06b`: -- ```console - kubectl config set-credentials <testuser>@<mytenant.onmicrosoft.com> \ - --auth-provider=azure \ - --auth-provider-arg=environment=AzurePublicCloud \ - --auth-provider-arg=client-id=<clientApplicationId> \ - --auth-provider-arg=tenant-id=<tenantId> \ - --auth-provider-arg=apiserver-id=<serverApplicationId> - ``` --1. Open the *kubeconfig* file that you created earlier. Under `contexts`, verify that the context associated with the cluster points to the user credentials that you created in the previous step. To set the current context to these user credentials, run the following command: -- ```console - kubectl config set-context --current=true --user=<testuser>@<mytenant.onmicrosoft.com> - ``` --1. Add the **config-mode** setting under `user` > `config`: - - ```console - name: testuser@mytenant.onmicrosoft.com - user: - auth-provider: - config: - apiserver-id: $SERVER_APP_ID - client-id: $CLIENT_APP_ID - environment: AzurePublicCloud - tenant-id: $TENANT_ID - config-mode: "1" - name: azure - ``` -- > [!NOTE] - >[Exec plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) is a Kubernetes authentication strategy that allows `kubectl` to execute an external command to receive user credentials to send to `apiserver`. Starting with Kubernetes version 1.26, the default Azure authorization plugin is no longer included in `client-go` and `kubectl`. With later versions, in order to use the exec plugin to receive user credentials you must use [Azure Kubelogin](https://azure.github.io/kubelogin/), a `client-go` credential (exec) plugin that implements Azure authentication. --1. Install Azure Kubelogin: -- - For Windows or Mac, follow the [Azure Kubelogin installation instructions](https://azure.github.io/kubelogin/install.html#installation). - - For Linux or Ubuntu, download the [latest version of kubelogin](https://github.com/Azure/kubelogin/releases), then run the following commands: -- ```bash - curl -LO https://github.com/Azure/kubelogin/releases/download/"$KUBELOGIN_VERSION"/kubelogin-linux-amd64.zip -- unzip kubelogin-linux-amd64.zip -- sudo mv bin/linux_amd64/kubelogin /usr/local/bin/ -- sudo chmod +x /usr/local/bin/kubelogin - ``` --1. Kubelogin can be used to authenticate with Azure Arc-enabled clusters by requesting a proof-of-possession (PoP) token. [Convert](https://azure.github.io/kubelogin/concepts/azure-arc.html) the kubeconfig using kubelogin to use the appropriate [login mode](https://azure.github.io/kubelogin/concepts/login-modes.html). For example, for [device code login](https://azure.github.io/kubelogin/concepts/login-modes/devicecode.html) with a Microsoft Entra user, the commands would be as follows: -- ```bash - export KUBECONFIG=/path/to/kubeconfig -- kubelogin convert-kubeconfig --pop-enabled --pop-claims "u=<ARM ID of cluster>" - ``` --### [Kubernetes < v1.26](#tab/Kubernetes-earlier) --1.
Specify `serverApplicationId` as `6256c85f-0aad-4d50-b960-e6e9b21efe35` and `clientApplicationId` as `3f4439ff-e698-4d6d-84fe-09c9d574f06b`: -- ```console - kubectl config set-credentials <testuser>@<mytenant.onmicrosoft.com> \ - --auth-provider=azure \ - --auth-provider-arg=environment=AzurePublicCloud \ - --auth-provider-arg=client-id=<clientApplicationId> \ - --auth-provider-arg=tenant-id=<tenantId> \ - --auth-provider-arg=apiserver-id=<serverApplicationId> - ``` --1. Open the *kubeconfig* file that you created earlier. Under `contexts`, verify that the context associated with the cluster points to the user credentials that you created in the previous step. To set the current context to these user credentials, run the following command: -- ```console - kubectl config set-context --current=true --user=<testuser>@<mytenant.onmicrosoft.com> - ``` --1. Add the **config-mode** setting under `user` > `config`: - - ```console - name: testuser@mytenant.onmicrosoft.com - user: - auth-provider: - config: - apiserver-id: $SERVER_APP_ID - client-id: $CLIENT_APP_ID - environment: AzurePublicCloud - tenant-id: $TENANT_ID - config-mode: "1" - name: azure - ``` ----## Send requests to the cluster --1. Run any `kubectl` command. For example: -- - `kubectl get nodes` - - `kubectl get pods` --1. After you're prompted for browser-based authentication, copy the device login URL (`https://microsoft.com/devicelogin`) and open it in your web browser. --1. Enter the code printed on your console. Copy and paste the code on your terminal into the prompt for device authentication input. --1. Enter the username (`testuser@mytenant.onmicrosoft.com`) and the associated password. --1. If you see an error message like this, it means you're unauthorized to access the requested resource: -- ```console - Error from server (Forbidden): nodes is forbidden: User "testuser@mytenant.onmicrosoft.com" cannot list resource "nodes" in API group "" at the cluster scope: User doesn't have access to the resource in Azure. Update role assignment to allow access. - ``` -- An administrator needs to create a new role assignment that authorizes this user to have access on the resource. --<a name='use-conditional-access-with-azure-ad'></a> --## Use Conditional Access with Microsoft Entra ID --When you're integrating Microsoft Entra ID with your Azure Arc-enabled Kubernetes cluster, you can also use [Conditional Access](../../active-directory/conditional-access/overview.md) to control access to your cluster. --> [!NOTE] -> [Microsoft Entra Conditional Access](../../active-directory/conditional-access/overview.md) is a Microsoft Entra ID P2 capability. --To create an example Conditional Access policy to use with the cluster: --1. At the top of the Azure portal, search for and select **Microsoft Entra ID**. -1. On the menu for Microsoft Entra ID on the left side, select **Enterprise applications**. -1. On the menu for enterprise applications on the left side, select **Conditional Access**. -1. On the menu for Conditional Access on the left side, select **Policies** > **New policy**. -- :::image type="content" source="media/azure-rbac/conditional-access-new-policy.png" alt-text="Screenshot showing how to add a conditional access policy in the Azure portal." lightbox="media/azure-rbac/conditional-access-new-policy.png"::: --1. Enter a name for the policy, such as **arc-k8s-policy**. --1. Select **Users and groups**. Under **Include**, choose **Select users and groups**. Then choose the users and groups where you want to apply the policy. 
For this example, choose the same Microsoft Entra group that has administrative access to your cluster. -- :::image type="content" source="media/azure-rbac/conditional-access-users-groups.png" alt-text="Screenshot that shows selecting users or groups to apply the Conditional Access policy." lightbox="media/azure-rbac/conditional-access-users-groups.png"::: --1. Select **Cloud apps or actions**. Under **Include**, choose **Select apps**. Then search for and select the server application that you created earlier. -- :::image type="content" source="media/azure-rbac/conditional-access-apps.png" alt-text="Screenshot showing how to select a server application in the Azure portal." lightbox="media/azure-rbac/conditional-access-apps.png"::: ---1. Under **Access controls**, select **Grant**. Select **Grant access** > **Require device to be marked as compliant**. -- :::image type="content" source="media/azure-rbac/conditional-access-grant-compliant.png" alt-text="Screenshot showing how to allow only compliant devices in the Azure portal." lightbox="media/azure-rbac/conditional-access-grant-compliant.png"::: --1. Under **Enable policy**, select **On** > **Create**. -- :::image type="content" source="media/azure-rbac/conditional-access-enable-policies.png" alt-text="Screenshot showing how to enable a conditional access policy in the Azure portal." lightbox="media/azure-rbac/conditional-access-enable-policies.png"::: --Access the cluster again. For example, run the `kubectl get nodes` command to view nodes in the cluster: --```console -kubectl get nodes -``` --Follow the instructions to sign in again. An error message states that you're successfully logged in, but your admin requires the device that's requesting access to be managed by Microsoft Entra ID in order to access the resource. Follow these steps: --1. In the Azure portal, go to **Microsoft Entra ID**. -1. Select **Enterprise applications**. Then under **Activity**, select **Sign-ins**. -1. An entry at the top shows **Failed** for **Status** and **Success** for **Conditional Access**. Select the entry, and then select **Conditional Access** in **Details**. Notice that your Conditional Access policy is listed. -- :::image type="content" source="media/azure-rbac/conditional-access-sign-in-activity.png" alt-text="Screenshot showing a failed sign-in entry in the Azure portal." lightbox="media/azure-rbac/conditional-access-sign-in-activity.png"::: --<a name='configure-just-in-time-cluster-access-with-azure-ad'></a> --## Configure just-in-time cluster access with Microsoft Entra ID --Another option for cluster access control is to use [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md) for just-in-time requests. -->[!NOTE] -> [Microsoft Entra PIM](../../active-directory/privileged-identity-management/pim-configure.md) is a Microsoft Entra ID P2 capability. For more on Microsoft Entra ID SKUs, see the [pricing guide](https://azure.microsoft.com/pricing/details/active-directory/). --To configure just-in-time access requests for your cluster, complete the following steps: --1. At the top of the Azure portal, search for and select **Microsoft Entra ID**. -1. Take note of the tenant ID. For the rest of these instructions, we'll refer to that ID as `<tenant-id>`. -- :::image type="content" source="media/azure-rbac/jit-get-tenant-id.png" alt-text="Screenshot showing Microsoft Entra ID details in the Azure portal." lightbox="media/azure-rbac/jit-get-tenant-id.png"::: --1. 
On the menu for Microsoft Entra ID on the left side, under **Manage**, select **Groups** > **New group**. --1. Make sure that **Security** is selected for **Group type**. Enter a group name, such as **myJITGroup**. Under **Microsoft Entra roles can be assigned to this group (Preview)**, select **Yes**. Finally, select **Create**. -- :::image type="content" source="media/azure-rbac/jit-new-group-created.png" alt-text="Screenshot showing details for the new group in the Azure portal." lightbox="media/azure-rbac/jit-new-group-created.png"::: --1. You're brought back to the **Groups** page. Select your newly created group and take note of the object ID. For the rest of these instructions, we'll refer to this ID as `<object-id>`. -- :::image type="content" source="media/azure-rbac/jit-get-object-id.png" alt-text="Screenshot showing the object ID for the new group in the Azure portal." lightbox="media/azure-rbac/jit-get-object-id.png"::: --1. Back in the Azure portal, on the menu for **Activity** on the left side, select **Privileged Access (Preview)**. Then select **Enable Privileged Access**. -- :::image type="content" source="media/azure-rbac/jit-enabling-priv-access.png" alt-text="Screenshot showing selections for enabling privileged access in the Azure portal." lightbox="media/azure-rbac/jit-enabling-priv-access.png"::: --1. Select **Add assignments** to begin granting access. -- :::image type="content" source="media/azure-rbac/jit-add-active-assignment.png" alt-text="Screenshot showing how to add active assignments in the Azure portal." lightbox="media/azure-rbac/jit-add-active-assignment.png"::: --1. Select a role of **Member**, and select the users and groups to whom you want to grant cluster access. A group admin can modify these assignments at any time. When you're ready to move on, select **Next**. -- :::image type="content" source="media/azure-rbac/jit-adding-assignment.png" alt-text="Screenshot showing how to add assignments in the Azure portal." lightbox="media/azure-rbac/jit-adding-assignment.png"::: --1. Choose an assignment type of **Active**, choose the desired duration, and provide a justification. When you're ready to proceed, select **Assign**. For more on assignment types, see [Assign eligibility for a privileged access group (preview) in Privileged Identity Management](../../active-directory/privileged-identity-management/groups-assign-member-owner.md#assign-an-owner-or-member-of-a-group). -- :::image type="content" source="media/azure-rbac/jit-set-active-assignment.png" alt-text="Screenshot showing assignment properties in the Azure portal." lightbox="media/azure-rbac/jit-set-active-assignment.png"::: --After you've made the assignments, verify that just-in-time access is working by accessing the cluster. For example, use the `kubectl get nodes` command to view nodes in the cluster: --```console -kubectl get nodes -``` --Note the authentication requirement and follow the steps to authenticate. If authentication is successful, you should see output similar to this: --```output -To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate. --NAME STATUS ROLES AGE VERSION -node-1 Ready agent 6m36s v1.18.14 -node-2 Ready agent 6m42s v1.18.14 -node-3 Ready agent 6m33s v1.18.14 -``` --## Next steps --- Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).-- Read about the [architecture of Azure RBAC on Arc-enabled Kubernetes](conceptual-azure-rbac.md). |
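Once a role assignment such as the **Arc Deployment Viewer** example above is in place, a quick way to confirm the effective permissions is to issue both an allowed and a disallowed request through `kubectl` (a sketch; it assumes your kubeconfig is already pointed at the cluster via cluster connect or a shared kubeconfig):

```console
# Should succeed: the custom role grants read access to deployments in the scoped namespace
kubectl get deployments -n <namespace-name>

# Should be denied: secrets are not covered by the Arc Deployment Viewer data actions
kubectl get secrets -n <namespace-name>
```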
azure-arc | Cluster Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md | - Title: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters." Previously updated : 10/27/2023-- -description: "With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters from anywhere without requiring any inbound port to be enabled on the firewall." ---# Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters --With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters from anywhere without requiring any inbound port to be enabled on the firewall. --Access to the `apiserver` of the Azure Arc-enabled Kubernetes cluster enables the following scenarios: --- Interactive debugging and troubleshooting.-- Cluster access to Azure services for [custom locations](custom-locations.md) and other resources created on top of it.--Before you begin, review the [conceptual overview of the cluster connect feature](conceptual-cluster-connect.md). --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- An existing Azure Arc-enabled Kubernetes connected cluster.- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. - -- Enable the [network requirements for Arc-enabled Kubernetes](network-requirements.md)- -- Enable these endpoints for outbound access:-- | Endpoint | Port | - |-|-| - |`*.servicebus.windows.net` | 443 | - |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 | -- > [!NOTE] - > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder. -----### [Azure CLI](#tab/azure-cli) ---- [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to the latest version.--- Install the latest version of the `connectedk8s` Azure CLI extension:-- ```azurecli - az extension add --name connectedk8s - ``` -- If you've already installed the `connectedk8s` extension, update the extension to the latest version: -- ```azurecli - az extension update --name connectedk8s - ``` --- Replace the placeholders and run the below command to set the environment variables used in this document:-- ```azurecli - CLUSTER_NAME=<cluster-name> - RESOURCE_GROUP=<resource-group-name> - ARM_ID_CLUSTER=$(az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv) - ``` --### [Azure PowerShell](#tab/azure-powershell) --- Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell).--- Replace the placeholders and run the below command to set the environment variables used in this document:-- ```azurepowershell - $CLUSTER_NAME = <cluster-name> - $RESOURCE_GROUP = <resource-group-name> - $ARM_ID_CLUSTER = (Get-AzConnectedKubernetes -ResourceGroupName $RESOURCE_GROUP -Name $CLUSTER_NAME).Id - ``` ----## Set up authentication --On the existing Arc-enabled cluster, create the ClusterRoleBinding with either Microsoft Entra authentication or service account token. 
--<a name='azure-active-directory-authentication-option'></a> --### Microsoft Entra authentication option --#### [Azure CLI](#tab/azure-cli) --1. Get the `objectId` associated with your Microsoft Entra entity. If you are using a single user account, get the user principal name (UPN) associated with your Microsoft Entra entity. -- - For a Microsoft Entra group account: -- ```azurecli - AAD_ENTITY_ID=$(az ad signed-in-user show --query id -o tsv) - ``` -- - For a Microsoft Entra single user account: -- ```azurecli - AAD_ENTITY_ID=$(az ad signed-in-user show --query userPrincipalName -o tsv) - ``` -- - For a Microsoft Entra application: -- ```azurecli - AAD_ENTITY_ID=$(az ad sp show --id <id> --query id -o tsv) - ``` --1. Authorize the entity with appropriate permissions. -- - If you're using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. For example: -- ```console - kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_ID - ``` -- - If you're using Azure RBAC for authorization checks on the cluster, you can create an applicable [Azure role assignment](azure-rbac.md#built-in-roles) mapped to the Microsoft Entra entity. For example: -- ```azurecli - az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_ID --scope $ARM_ID_CLUSTER - az role assignment create --role "Azure Arc Enabled Kubernetes Cluster User Role" --assignee $AAD_ENTITY_ID --scope $ARM_ID_CLUSTER - ``` --#### [Azure PowerShell](#tab/azure-powershell) --1. Get the `objectId` associated with your Microsoft Entra entity. If you are using a single user account, you will get the user principal name (UPN) associated with your Microsoft Entra entity. -- - For a Microsoft Entra group account: -- ```azurepowershell - $AAD_ENTITY_ID = (az ad signed-in-user show --query id -o tsv) - ``` -- - For a Microsoft Entra single user account: -- ```azurepowershell - $AAD_ENTITY_ID = (az ad signed-in-user show --query userPrincipalName -o tsv) - ``` -- - For a Microsoft Entra application: -- ```azurepowershell - $AAD_ENTITY_ID = (az ad sp show --id <id> --query objectId -o tsv) - ``` --1. Authorize the entity with appropriate permissions. -- - If you're using native Kubernetes ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. For example: -- ```console - kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_ID - ``` -- - If you're using [Azure RBAC for authorization checks](azure-rbac.md) on the cluster, you can create an applicable [Azure role assignment](azure-rbac.md#built-in-roles) mapped to the Microsoft Entra entity. For example: -- ```azurepowershell - - az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_ID --scope $ARM_ID_CLUSTER - az role assignment create --role "Azure Arc Enabled Kubernetes Cluster User Role" --assignee $AAD_ENTITY_ID --scope $ARM_ID_CLUSTER - ``` ----### Service account token authentication option --#### [Azure CLI](#tab/azure-cli) --1. 
With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, run this command to create a service account. This example creates the service account in the default namespace, but you can substitute any other namespace for `default`. -- ```console - kubectl create serviceaccount demo-user -n default - ``` --1. Create ClusterRoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). If you used a different namespace in the first command, substitute it here for `default`. -- ```console - kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user - ``` --1. Create a service account token: -- ```console - kubectl apply -f - <<EOF - apiVersion: v1 - kind: Secret - metadata: - name: demo-user-secret - annotations: - kubernetes.io/service-account.name: demo-user - type: kubernetes.io/service-account-token - EOF - ``` -- ```console - TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\n/g') - ``` --1. Get the token to output to console - - ```console - echo $TOKEN - ``` --#### [Azure PowerShell](#tab/azure-powershell) --1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, run this command to create a service account. This example creates the service account in the default namespace, but you can substitute any other namespace for `default`. -- ```console - kubectl create serviceaccount demo-user -n default - ``` --1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). If you used a different namespace in the first command, substitute it here for `default`. -- ```console - kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user - ``` --1. Create a service account token. Create a `demo-user-secret.yaml` file with the following content: -- ```yaml - apiVersion: v1 - kind: Secret - metadata: - name: demo-user-secret - annotations: - kubernetes.io/service-account.name: demo-user - type: kubernetes.io/service-account-token - ``` -- Then run these commands: -- ```console - kubectl apply -f demo-user-secret.yaml - ``` -- ```console - $TOKEN = ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String((kubectl get secret demo-user-secret -o jsonpath='{$.data.token}')))) - ``` - -1. Get the token to output to console. - - ```console - echo $TOKEN - ``` ----## Access your cluster from a client device --Now you can access the cluster from a different client. Run the following steps on another client device. --1. Sign in using either Microsoft Entra authentication or service account token authentication. --1. Get the cluster connect `kubeconfig` needed to communicate with the cluster from anywhere (from even outside the firewall surrounding the cluster), based on the authentication option used: -- - If using Microsoft Entra authentication: -- ```azurecli - az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP - ``` -- - If using service account token authentication: -- ```azurecli - az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN - ``` -- > [!NOTE] - > This command will open the proxy and block the current shell. --1. 
In a different shell session, use `kubectl` to send requests to the cluster: -- ```powershell - kubectl get pods -A - ``` --You should now see a response from the cluster containing the list of all pods under the `default` namespace. --## Known limitations --Use `az connectedk8s show` to check your Arc-enabled Kubernetes agent version. --### [Agent version < 1.11.7](#tab/agent-version) --When making requests to the Kubernetes cluster, if the Microsoft Entra entity used is a part of more than 200 groups, you might see the following error: --`You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.` --This is a known limitation. To get past this error: --1. Create a [service principal](/cli/azure/create-an-azure-service-principal-azure-cli), which is less likely to be a member of more than 200 groups. -1. [Sign in](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running the `az connectedk8s proxy` command. --### [Agent version >= 1.11.7](#tab/agent-version-latest) --When making requests to the Kubernetes cluster, if the Microsoft Entra service principal used is a part of more than 200 groups, you might see the following error: --`Overage claim (users with more than 200 group membership) for SPN is currently not supported. For troubleshooting, please refer to aka.ms/overageclaimtroubleshoot` --This is a known limitation. To get past this error: --1. Create a [service principal](/cli/azure/create-an-azure-service-principal-azure-cli), which is less likely to be a member of more than 200 groups. -1. [Sign in](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running the `az connectedk8s proxy` command. ----## Next steps --- Set up [Microsoft Entra RBAC](azure-rbac.md) on your clusters.-- Deploy and manage [cluster extensions](extensions.md). |
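Putting the service account token flow together, an end-to-end session from a client machine might look like the following (a sketch that only reuses commands shown above; `$TOKEN` is the service account token created earlier, and because the proxy blocks its shell, `kubectl` runs in a second session):

```console
# Session 1: open the cluster connect proxy with the service account token
az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN

# Session 2: once the proxy reports it is ready, send requests through it
kubectl get pods -A
kubectl get nodes
```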
azure-arc | Conceptual Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md | - Title: "Azure Arc-enabled Kubernetes agent overview" Previously updated : 08/24/2023- -description: "Learn about the Azure Arc agents deployed on the Kubernetes clusters when connecting them to Azure Arc." ---# Azure Arc-enabled Kubernetes agent overview --[Azure Arc-enabled Kubernetes](overview.md) provides a centralized, consistent control plane to manage policy, governance, and security across Kubernetes clusters in different environments. --Azure Arc agents are deployed on Kubernetes clusters when you [connect them to Azure Arc](quickstart-connect-cluster.md). This article provides an overview of these agents. --## Deploy agents to your cluster --Most on-premises datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc-enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall. Azure Arc agents require outbound communication to a [set list of network endpoints](network-requirements.md). --This diagram provides a high-level view of Azure Arc components. Kubernetes clusters in on-premises datacenters or different clouds are connected to Azure through the Azure Arc agents. This allows the clusters to be managed in Azure using management tools and Azure services. The clusters can also be accessed through offline management tools. ---The following high-level steps are involved in [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md): --1. Create a Kubernetes cluster on your choice of infrastructure (VMware vSphere, Amazon Web Services, Google Cloud Platform, etc.). The cluster must already exist before you connect it to Azure Arc. --1. Start the Azure Arc registration for your cluster. -- * The agent Helm chart is deployed on the cluster. - * The cluster nodes initiate an outbound communication to the [Microsoft Container Registry](https://github.com/microsoft/containerregistry), pulling the images needed to create the following agents in the `azure-arc` namespace: -- | Agent | Description | - | -- | -- | - | `deployment.apps/clusteridentityoperator` | Azure Arc-enabled Kubernetes currently supports only [system assigned identities](../../active-directory/managed-identities-azure-resources/overview.md). `clusteridentityoperator` initiates the first outbound communication. This first communication fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure. | - | `deployment.apps/config-agent` | Watches the connected cluster for source control configuration resources applied on the cluster. Updates the compliance state. | - | `deployment.apps/controller-manager` | An operator of operators that orchestrates interactions between Azure Arc components. | - | `deployment.apps/metrics-agent` | Collects metrics of other Arc agents to verify optimal performance. | - | `deployment.apps/cluster-metadata-operator` | Gathers cluster metadata, including cluster version, node count, and Azure Arc agent version. | - | `deployment.apps/resource-sync-agent` | Syncs the above-mentioned cluster metadata to Azure. | - | `deployment.apps/flux-logs-agent` | Collects logs from the Flux operators deployed as a part of [source control configuration](conceptual-gitops-flux2.md). | - | `deployment.apps/extension-manager` | Installs and manages lifecycle of extension Helm charts. 
| - | `deployment.apps/kube-aad-proxy` | Used for authentication of requests sent to the cluster using cluster connect. | - | `deployment.apps/clusterconnect-agent` | Reverse proxy agent that enables the cluster connect feature to provide access to the `apiserver` of the cluster. Optional component deployed only if the [cluster connect](conceptual-cluster-connect.md) feature is enabled. | - | `deployment.apps/guard` | Authentication and authorization webhook server used for Microsoft Entra RBAC. Optional component deployed only if [Azure RBAC](conceptual-azure-rbac.md) is enabled on the cluster. | --1. Once all the Azure Arc-enabled Kubernetes agent pods are in the `Running` state, verify that your cluster is connected to Azure Arc. You should see: -- * An Azure Arc-enabled Kubernetes resource in [Azure Resource Manager](../../azure-resource-manager/management/overview.md). Azure tracks this resource as a projection of the customer-managed Kubernetes cluster, not the actual Kubernetes cluster itself. - * Cluster metadata (such as Kubernetes version, agent version, and number of nodes) appearing on the Azure Arc-enabled Kubernetes resource as metadata. --## Next steps --* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). -* View release notes to see [details about the latest agent versions](release-notes.md). -* Learn about [upgrading Azure Arc-enabled Kubernetes agents](agent-upgrade.md). -* Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md). |
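As a quick check of that final step, the following commands (a sketch; the cluster and resource group names are placeholders) list the agent deployments created in the `azure-arc` namespace and show the connected cluster resource projected into Azure:

```console
kubectl get deployments,pods -n azure-arc
```

```azurecli
az connectedk8s show --name AzureArcTest1 --resource-group AzureArcTest
```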
azure-arc | Conceptual Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-azure-rbac.md | - Title: "Azure RBAC on Azure Arc-enabled Kubernetes" Previously updated : 05/22/2024- -description: "This article provides a conceptual overview of the Azure RBAC capability on Azure Arc-enabled Kubernetes." ---# Azure RBAC on Azure Arc-enabled Kubernetes clusters --Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. With Azure role-based access control (Azure RBAC), you can use Microsoft Entra ID and role assignments in Azure to control authorization checks on the cluster. This allows the benefits of Azure role assignments, such as activity logs showing all Azure RBAC changes to an Azure resource, to be used with your Azure Arc-enabled Kubernetes cluster. --## Architecture ---In order to route all authorization access checks to the authorization service in Azure, a webhook server ([guard](https://github.com/appscode/guard)) is deployed on the cluster. --The `apiserver` of the cluster is configured to use [webhook token authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) and [webhook authorization](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) so that `TokenAccessReview` and `SubjectAccessReview` requests are routed to the guard webhook server. The `TokenAccessReview` and `SubjectAccessReview` requests are triggered by requests for Kubernetes resources sent to the `apiserver`. --Guard then makes a `checkAccess` call on the authorization service in Azure to see if the requesting Microsoft Entra entity has access to the resource of concern. --If that entity has a role that permits this access, an `allowed` response is sent from the authorization service to guard. Guard, in turn, sends an `allowed` response to the `apiserver`, enabling the calling entity to access the requested Kubernetes resource. --If the entity doesn't have a role that permits this access, a `denied` response is sent from the authorization service to guard. Guard sends a `denied` response to the `apiserver`, giving the calling entity a 403 forbidden error on the requested resource. --## Next steps --* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). -* [Set up Azure RBAC](./azure-rbac.md) on your Azure Arc-enabled Kubernetes cluster. |
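To make the flow concrete, here's a sketch of granting a Microsoft Entra user a built-in role scoped to the connected cluster resource; the cluster name, resource group, and user are placeholders, and the role shown is one example of the Azure Arc Kubernetes built-in roles:

```azurecli
# Get the Azure Resource Manager ID of the connected cluster to use as the role assignment scope.
ARM_ID=$(az connectedk8s show -n AzureArcTest1 -g AzureArcTest --query id -o tsv)

# Assign a built-in role, such as "Azure Arc Kubernetes Viewer", to a user or group.
az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee user@contoso.com --scope $ARM_ID
```

After the assignment propagates, a request such as `kubectl get pods` from that user triggers the `SubjectAccessReview` flow described above, and guard allows the request only if the `checkAccess` call succeeds.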
azure-arc | Conceptual Cluster Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-cluster-connect.md | - Title: "Cluster connect access to Azure Arc-enabled Kubernetes clusters" Previously updated : 02/28/2024- -description: "Cluster connect allows developers to access their Azure Arc-enabled Kubernetes clusters from anywhere for interactive development and debugging." ---# Cluster connect access to Azure Arc-enabled Kubernetes clusters --The Azure Arc-enabled Kubernetes *cluster connect* feature provides connectivity to the `apiserver` of the cluster without requiring any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner. --Cluster connect allows developers to access their clusters from anywhere for interactive development and debugging. It also lets cluster users and administrators access or manage their clusters from anywhere. You can even use hosted agents/runners of Azure Pipelines, GitHub Actions, or any other hosted CI/CD service to deploy applications to on-premises clusters, without requiring self-hosted agents. --## Architecture ---On the cluster side, a reverse proxy agent called `clusterconnect-agent`, deployed as part of the agent Helm chart, makes outbound calls to the Azure Arc service to establish the session. --When the user calls `az connectedk8s proxy`: --1. The Azure Arc proxy binary is downloaded and spun up as a process on the client machine. -1. The Azure Arc proxy fetches a `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster on which the `az connectedk8s proxy` is invoked. - * The Azure Arc proxy uses the caller's Azure access token and the Azure Resource Manager ID name. -1. The `kubeconfig` file, saved on the machine by the Azure Arc proxy, points the server URL to an endpoint on the Azure Arc proxy process. --When a user sends a request using this `kubeconfig` file: --1. The Azure Arc proxy maps the endpoint receiving the request to the Azure Arc service. -1. The Azure Arc service then forwards the request to the `clusterconnect-agent` running on the cluster. -1. The `clusterconnect-agent` passes on the request to the `kube-aad-proxy` component, which performs Microsoft Entra authentication on the calling entity. -1. After Microsoft Entra authentication, `kube-aad-proxy` uses Kubernetes [user impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) to forward the request to the cluster's `apiserver`. --## Next steps --* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). -* [Access your cluster](./cluster-connect.md) securely from anywhere using cluster connect. |
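The `clusterconnect-agent` and `kube-aad-proxy` components described above are deployed only when the feature is turned on. As a sketch (cluster and resource group names are placeholders), enabling the feature and opening a proxied session looks like this:

```azurecli
# Enable the cluster connect feature on an existing Arc-enabled cluster.
az connectedk8s enable-features --features cluster-connect -n AzureArcTest1 -g AzureArcTest

# Download the cluster connect kubeconfig and start the local Azure Arc proxy process.
az connectedk8s proxy -n AzureArcTest1 -g AzureArcTest
```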
azure-arc | Conceptual Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md | - Title: "GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes" Previously updated : 05/08/2023- -description: "This article provides a conceptual overview of GitOps and configurations capability of Azure Arc-enabled Kubernetes." ---# GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes --> [!IMPORTANT] -> The documents in this section are for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. -> -> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. --In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. The Git repository can contain: --* YAML-format manifests describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc. -* Helm charts for deploying applications. --[Flux](https://docs.fluxcd.io/), a popular open-source tool in the GitOps space, can be deployed on the Kubernetes cluster to ease the flow of configurations from a Git repository to a Kubernetes cluster. Flux supports the deployment of its operator at both the cluster and namespace scopes. A flux operator deployed with namespace scope can only deploy Kubernetes objects within that specific namespace. The ability to choose between cluster or namespace scope helps you achieve multi-tenant deployment patterns on the same Kubernetes cluster. --## Configurations --[ ![Configurations architecture](./media/conceptual-configurations.png) ](./media/conceptual-configurations.png#lightbox) --The connection between your cluster and a Git repository is created as a configuration resource (`Microsoft.KubernetesConfiguration/sourceControlConfigurations`) on top of the Azure Arc-enabled Kubernetes resource (represented by `Microsoft.Kubernetes/connectedClusters`) in Azure Resource Manager. --The configuration resource properties are used to deploy Flux operator on the cluster with the appropriate parameters, such as the Git repo from which to pull manifests and the polling interval at which to pull them. The configuration resource data is stored encrypted at rest in an Azure Cosmos DB database to ensure data confidentiality. --The `config-agent` running in your cluster is responsible for: -* Tracking new or updated configuration resources on the Azure Arc-enabled Kubernetes resource. -* Deploying a Flux operator to watch the Git repository for each configuration resource. -* Applying any updates made to any configuration resource. --You can create multiple namespace-scoped configuration resources on the same Azure Arc-enabled Kubernetes cluster to achieve multi-tenancy. 
--> [!NOTE] -> * `config-agent` monitors for new or updated configuration resources to be available on the Azure Arc-enabled Kubernetes resource. Thus agents require connectivity for the desired state to be pulled down to the cluster. If agents are unable to connect to Azure, there is a delay in propagating the desired state to the cluster. -> * Sensitive customer inputs like private key, known hosts content, HTTPS username, and token/password are not stored for more than 48 hours in the Azure Arc-enabled Kubernetes services. If you are using sensitive inputs for configurations, bring the clusters online as regularly as possible. --## Apply configurations at scale --Since Azure Resource Manager manages your configurations, you can automate creating the same configuration across all Azure Arc-enabled Kubernetes resources using Azure Policy, within scope of a subscription or a resource group. --This at-scale enforcement ensures a common baseline configuration (containing configurations like ClusterRoleBindings, RoleBindings, and NetworkPolicy) can be applied across an entire fleet or inventory of Azure Arc-enabled Kubernetes clusters. --## Next steps --* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). -* [Create configurations](./tutorial-use-gitops-connected-cluster.md) on your Azure Arc-enabled Kubernetes cluster. -* [Use Azure Policy to apply configurations at scale](./use-azure-policy.md). |
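For illustration, a single Flux v1 `sourceControlConfigurations` resource can be created with the Azure CLI; this is a sketch using a public sample repository and placeholder cluster names, shown only because this article covers Flux v1 (new configurations should use Flux v2 instead):

```azurecli
az k8s-configuration create \
    --name cluster-config \
    --cluster-name AzureArcTest1 \
    --resource-group AzureArcTest \
    --cluster-type connectedClusters \
    --operator-instance-name cluster-config \
    --operator-namespace cluster-config \
    --repository-url https://github.com/Azure/arc-k8s-demo \
    --scope cluster
```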
azure-arc | Conceptual Connectivity Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-connectivity-modes.md | - Title: "Azure Arc-enabled Kubernetes connectivity modes" Previously updated : 03/26/2024- -description: "This article provides an overview of the connectivity modes supported by Azure Arc-enabled Kubernetes" ---# Azure Arc-enabled Kubernetes connectivity modes --Azure Arc-enabled Kubernetes requires deployment of Azure Arc agents on your Kubernetes clusters so that capabilities such as [configurations (GitOps)](conceptual-gitops-flux2.md), extensions, [cluster connect](conceptual-cluster-connect.md), and [custom location](conceptual-custom-locations.md) are made available on the cluster. Because Kubernetes clusters deployed on the edge may not have constant network connectivity, the agents may not always be able to reach the Azure Arc services while in a semi-connected mode. --## Understand connectivity modes --When working with Azure Arc-enabled Kubernetes clusters, it's important to understand how network connectivity modes impact your operations. --- **Fully connected**: With ongoing network connectivity, agents can consistently communicate with Azure. In this mode, there is typically little delay with tasks such as propagating GitOps configurations, enforcing Azure Policy and Gatekeeper policies, or collecting workload metrics and logs in Azure Monitor.--- **Semi-connected**: Azure Arc agents can pull desired state specification from the Arc services, then later realize this state on the cluster.-- > [!IMPORTANT] - > The managed identity certificate pulled down by the `clusteridentityoperator` is valid for up to 90 days before it expires. The agents will try to renew the certificate during this time period; however, if there is no network connectivity, the certificate may expire, and the Azure Arc-enabled Kubernetes resource will stop working. Because of this, we recommend ensuring that the connected cluster has network connectivity at least once every 30 days. If the certificate expires, you'll need to delete and then recreate the Azure Arc-enabled Kubernetes resource and agents in order to reactivate Azure Arc features on the cluster. --- **Disconnected**: Kubernetes clusters in disconnected environments that are unable to access Azure are not currently supported by Azure Arc-enabled Kubernetes.--## Connectivity status --The connectivity status of a cluster is determined by the time of the latest heartbeat received from the Arc agents deployed on the cluster: --| Status | Description | -| | -- | -| Connecting | The Azure Arc-enabled Kubernetes resource has been created in Azure, but the service hasn't received the agent heartbeat yet. | -| Connected | The Azure Arc-enabled Kubernetes service received an agent heartbeat within the previous 15 minutes. | -| Offline | The Azure Arc-enabled Kubernetes resource was previously connected, but the service hasn't received any agent heartbeat for at least 15 minutes. | -| Expired | The managed identity certificate of the cluster has expired. In this state, Azure Arc features will no longer work on the cluster. For more information on how to address expired Azure Arc-enabled Kubernetes resources, see the [FAQ](./faq.md#how-do-i-address-expired-azure-arc-enabled-kubernetes-resources). 
| --## Next steps --- Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).-- Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md).-- Review the [Azure Arc networking requirements](network-requirements.md). |
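These status values surface on the connected cluster resource itself, so you can check them from the CLI; a minimal sketch, assuming the `connectivityStatus` property name and placeholder resource names:

```azurecli
az connectedk8s show --name AzureArcTest1 --resource-group AzureArcTest --query connectivityStatus -o tsv
```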
azure-arc | Conceptual Custom Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-custom-locations.md | - Title: "Custom locations with Azure Arc-enabled Kubernetes" Previously updated : 03/26/2024- -description: "This article provides a conceptual overview of the custom locations capability of Azure Arc-enabled Kubernetes." ---# Custom locations with Azure Arc-enabled Kubernetes --As an extension of the Azure location construct, the *custom locations* feature provides a way for tenant administrators to use their Azure Arc-enabled Kubernetes clusters as target locations for deploying Azure service instances. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL server. --Similar to Azure locations, end users within the tenant who have access to custom locations can deploy resources there using their company's private compute. ---You can visualize custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters, cluster connect, and cluster extensions. Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster. These other Azure services require cluster access to manage deployed resources. --## Architecture --When the admin [enables the custom locations feature on the cluster](custom-locations.md), a `ClusterRoleBinding` is created on the cluster, authorizing the Microsoft Entra application used by the custom locations resource provider. Once authorized, the custom locations resource provider can create `ClusterRoleBinding` or `RoleBinding` objects that are needed by other Azure resource providers to create custom resources on this cluster. The cluster extensions installed on the cluster determine the list of resource providers to authorize. ---When the user creates a data service instance on the cluster: --1. The PUT request is sent to Azure Resource Manager. -1. The PUT request is forwarded to the Azure Arc-enabled data services resource provider. -1. The resource provider fetches the `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster on which the custom location exists. - * The custom location is referenced as `extendedLocation` in the original PUT request. -1. The Azure Arc-enabled data services resource provider uses the `kubeconfig` to communicate with the cluster to create a custom resource of the Azure Arc-enabled data services type on the namespace mapped to the custom location. - * The Azure Arc-enabled data services operator was deployed via cluster extension creation before the custom location existed. -1. The Azure Arc-enabled data services operator reads the new custom resource created on the cluster and creates the data controller, translating into realization of the desired state on the cluster. --The sequence of steps to create a SQL managed instance or PostgreSQL instance is identical to the sequence described above. --## Next steps --* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). -* [Create a custom location](./custom-locations.md) on your Azure Arc-enabled Kubernetes cluster. |
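As a sketch of how an administrator typically enables this capability, the commands below first authorize the custom locations resource provider on the cluster and then create a custom location mapped to a namespace; all names, the subscription ID, and the cluster extension resource ID are placeholders:

```azurecli
# Allow the custom locations resource provider to create role bindings on the cluster.
az connectedk8s enable-features -n AzureArcTest1 -g AzureArcTest --features custom-locations

# Create the custom location, mapping it to a namespace on the connected cluster.
az customlocation create -n my-custom-location -g AzureArcTest \
    --namespace arc-data \
    --host-resource-id /subscriptions/<subscription-id>/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1 \
    --cluster-extension-ids <cluster-extension-resource-id>
```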
azure-arc | Conceptual Data Exchange | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-data-exchange.md | - Title: "Data exchanged between Azure Arc-enabled Kubernetes cluster and Azure" Previously updated : 08/08/2023- -description: "The scenarios enabled by Azure Arc-enabled Kubernetes involve exchange of desired state configurations, metadata, and other scenario specific operational data." ---# Data exchanged between Azure Arc-enabled Kubernetes cluster and Azure --Azure Arc-enabled Kubernetes scenarios involve exchange of desired state configurations, metadata, and other scenario specific operational data between the Azure Arc-enabled Kubernetes cluster environment and Azure service. For all types of data, the Azure Arc agents initiate outbound communication to Azure services and thus require only egress access to endpoints listed under the [network prerequisites](network-requirements.md). Enabling inbound ports on firewall is not required for Azure Arc agents. --The following table presents a per-scenario breakdown of the data exchanged between these environments. --## Data exchange between cluster and Azure --| Scenario | Metadata | Communication mode | -| | -- | | -| Cluster metadata | Kubernetes cluster version | Agent pushes to Azure | -| Cluster metadata | Number of nodes in the cluster | Agent pushes to Azure | -| Cluster metadata | Agent version | Agent pushes to Azure | -| Cluster metadata | Kubernetes distribution type | Azure CLI pushes to Azure | -| Cluster metadata | Infrastructure type (AWS/GCP/vSphere/...) | Azure CLI pushes to Azure | -| Cluster metadata | vCPU count of nodes in the cluster | Agent pushes to Azure | -| Resource Health | Agent heartbeat | Agent pushes to Azure | -| Diagnostics and supportability | Resource consumption (memory/CPU) by agents | Agent pushes to Azure | -| Diagnostics and supportability | Logs of all agent containers | Agent pushes to Azure | -| Agent upgrade | Agent upgrade availability | Agent pulls from Azure | -| Configuration (GitOps) | Desired state of configuration: Git repository URL, flux operator parameters, private key, known hosts content, HTTPS username, token, or password | Agent pulls from Azure | -| Configuration (GitOps) | Status of flux operator installation | Agent pushes to Azure | -| Extensions | Desired state of extension: extension type, configuration settings, protected configuration settings, release train, auto-upgrade settings | Agent pulls from Azure | -| Azure Policy | Azure Policy assignments that need Gatekeeper enforcement within cluster | Agent pulls from Azure | -| Azure Policy | Audit and compliance status of in-cluster policy enforcements | Agent pushes to Azure | -| Azure Monitor | Metrics and logs of customer workloads | Agent pushes to Log Analytics workspace resource in customer's tenant and subscription | -| Cluster Connect | Requests sent to cluster | Outbound session established with Arc service by clusterconnect-agent used to send requests to cluster | -| Custom Location | Metadata on namespace and ClusterRoleBinding/RoleBinding for authorization | Outbound session established with Arc service by clusterconnect-agent used to send requests to cluster | -| Resources on top of custom location | Desired specifications of databases or application instances | Outbound session established with Arc service by clusterconnect-agent used to send requests to cluster | --## Next steps --* Walk through our quickstart to [connect a Kubernetes cluster to Azure 
Arc](./quickstart-connect-cluster.md). -* Learn about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md). -- |
azure-arc | Conceptual Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-extensions.md | - Title: "Cluster extensions - Azure Arc-enabled Kubernetes" Previously updated : 03/22/2024- -description: "This article provides a conceptual overview of the Azure Arc-enabled Kubernetes cluster extensions capability." ---# Cluster extensions --[Helm charts](https://helm.sh/) help you manage Kubernetes applications by providing the building blocks needed to define, install, and upgrade even the most complex Kubernetes applications. --The cluster extension feature builds on top of the packaging components of Helm. With extensions, you use an Azure Resource Manager-driven experience for installation and lifecycle management of different capabilities on top of your Kubernetes cluster. --A cluster operator or admin can [use the cluster extensions feature](extensions.md) to: --- Install and manage key management, data, and application offerings on your Kubernetes cluster.-- Use Azure Policy to automate at-scale deployment of cluster extensions across all clusters in your environment.-- Subscribe to release trains (for example, preview or stable) for each extension.-- Set up auto-upgrade for extensions or pin to a specific version and manually upgrade versions.-- Update extension properties or delete extension instances.--Extensions are available to support a wide range of Azure services and scenarios. For a list of currently supported extensions, see [Available extensions for Azure Arc-enabled Kubernetes clusters](extensions-release.md). --## Architecture ---The cluster extension instance is created as an extension Azure Resource Manager resource (`Microsoft.KubernetesConfiguration/extensions`) on top of the Azure Arc-enabled Kubernetes resource (represented by `Microsoft.Kubernetes/connectedClusters`) in Azure Resource Manager. --This representation in Azure Resource Manager allows you to author a policy that checks for all Azure Arc-enabled Kubernetes resources with or without a specific cluster extension. Once you've determined which clusters are missing the cluster extensions with desired property values, you can remediate these non-compliant resources using Azure Policy. --The `config-agent` running in your cluster tracks new and updated extension resources on the Azure Arc-enabled Kubernetes resource. The `extensions-manager` agent running in your cluster reads the extension type that needs to be installed, then pulls the associated Helm chart from Azure Container Registry or Microsoft Container Registry and installs it on the cluster. --Both the `config-agent` and `extensions-manager` components running in the cluster handle extension instance updates, version updates and extension instance deletion. These agents use the system-assigned managed identity of the cluster to securely communicate with Azure services. --> [!NOTE] -> `config-agent` checks for new or updated extension instances on top of Azure Arc-enabled Kubernetes cluster. The agents require connectivity for the desired state of the extension to be pulled down to the cluster. If agents are unable to connect to Azure, propagation of the desired state to the cluster is delayed. -> -> Protected configuration settings for an extension instance are stored for up to 48 hours in the Azure Arc-enabled Kubernetes services. 
As a result, if the cluster remains disconnected during the 48 hours after the extension resource was created on Azure, the extension changes from a `Pending` state to a `Failed` state. To prevent this, we recommend bringing clusters online regularly. --> [!IMPORTANT] -> Currently, Azure Arc-enabled Kubernetes cluster extensions aren't supported on ARM64-based clusters, except for [Flux (GitOps)](conceptual-gitops-flux2.md). To [install and use other cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`. --## Extension scope --Each extension type defines the scope at which it operates on the cluster. Extension installations on Arc-enabled Kubernetes clusters are either *cluster-scoped* or *namespace-scoped*. --A cluster-scoped extension is installed in the `release-namespace` specified during extension creation. Typically, only one instance of the cluster-scoped extension and its components, such as pods, operators, and Custom Resource Definitions (CRDs), is installed in the release namespace on the cluster. --A namespace-scoped extension can be installed in a given namespace provided using the `--namespace` property. Because the extension can be deployed at a namespace scope, multiple instances of the namespace-scoped extension and its components can run on the cluster. Each extension instance has permissions in the namespace where it's deployed. --All of the [currently available extensions](extensions-release.md) are cluster-scoped, except for [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md). --## Next steps --- Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).-- [Deploy cluster extensions](./extensions.md) on your Azure Arc-enabled Kubernetes cluster. |
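For instance, installing and then listing a cluster-scoped extension follows the Azure Resource Manager-driven pattern described above; the extension type shown (Azure Monitor for containers) and the resource names are illustrative placeholders:

```azurecli
# Install an extension instance on an Arc-enabled cluster.
az k8s-extension create --name azuremonitor-containers \
    --extension-type Microsoft.AzureMonitor.Containers \
    --cluster-type connectedClusters \
    --cluster-name AzureArcTest1 \
    --resource-group AzureArcTest

# List the extension instances created on the cluster.
az k8s-extension list --cluster-type connectedClusters --cluster-name AzureArcTest1 --resource-group AzureArcTest
```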
azure-arc | Conceptual Gitops Ci Cd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-ci-cd.md | - Title: "CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes" Previously updated : 05/08/2023- -description: "This article provides a conceptual overview of a CI/CD workflow using GitOps with Flux" ---# CI/CD workflow using GitOps - Azure Arc-enabled Kubernetes --> [!IMPORTANT] -> The workflow described in this document uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about CI/CD workflow using GitOps with Flux v2](./conceptual-gitops-flux2-ci-cd.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. -> -> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. --Modern Kubernetes deployments house multiple applications, clusters, and environments. With GitOps, you can manage these complex setups more easily, tracking the desired state of the Kubernetes environments declaratively with Git. Using common Git tooling to track cluster state, you can increase accountability, facilitate fault investigation, and enable automation to manage environments. --This conceptual overview explains how GitOps fits into the full application change lifecycle using Azure Arc, Azure Repos, and Azure Pipelines. [Jump to an example](#example-workflow) of a single application change to GitOps-controlled Kubernetes environments. --## Architecture --Consider an application deployed to one or more Kubernetes environments. --![GitOps CI/CD architecture](./media/gitops-arch.png) --### Application repo -The application repo contains the application code that developers work on during their inner loop. The application's deployment templates live in this repo in a generic form, like Helm or Kustomize. Environment-specific values aren't stored. Changes to this repo invoke a PR or CI pipeline that starts the deployment process. -### Container Registry -The container registry holds all the first- and third-party images used in the Kubernetes environments. Tag first-party application images with human-readable tags and the Git commit used to build the image. Cache third-party images for security, speed, and resilience. Set a plan for timely testing and integration of security updates. For an example, see the [ACR Consume and maintain public content](../../container-registry/tasks-consume-public-content.md) guide. -### PR Pipeline -PRs to the application repo are gated on a successful run of the PR pipeline. This pipeline runs the basic quality gates, such as linting and unit tests on the application code. The pipeline tests the application and lints Dockerfiles and Helm templates used for deployment to a Kubernetes environment. Docker images should be built and tested, but not pushed. Keep the pipeline duration relatively short to allow for rapid iteration. -### CI Pipeline -The application CI pipeline runs all the PR pipeline steps and expands the testing and deployment checks. The pipeline can be run for each commit, or at a regular cadence with a group of commits. 
At this stage, perform application testing that is too long for a PR pipeline. Push Docker images to the Container Registry after building in preparation for deployment. The replaced template can be linted with a set of testing values. Images used at service runtime should be linted, built, and tested at this point. In the CI build specifically, artifacts are published for the CD step to consume in preparation for deployment. -### Flux -Flux is a service that runs in each cluster and is responsible for maintaining the desired state. The service frequently polls the GitOps repo for changes to its cluster and applies them. -### CD Pipeline -The CD pipeline is automatically triggered by successful CI builds. It uses the previously published templates, substitutes environment values, and opens a PR to the GitOps repo to request a change to the desired state of one or more Kubernetes clusters. Cluster administrators review the state change PR and approve the merge to the GitOps repo. The pipeline then waits for the PR to complete, which allows Flux to pick up the state change. -### GitOps repo -The GitOps repo represents the current desired state of all environments across clusters. Any change to this repo is picked up by the Flux service in each cluster and deployed. PRs are created with changes to the desired state, reviewed, and merged. These PRs contain changes to both deployment templates and the resulting rendered Kubernetes manifests. Low-level rendered manifests allow more careful inspection of changes typically unseen at the template-level. -### Kubernetes clusters -At least one Azure Arc-enabled Kubernetes cluster serves the different environments needed by the application. For example, a single cluster can serve both a dev and QA environment through different namespaces. A second cluster can provide easier separation of environments and more fine-grained control. -## Example workflow -As an application developer, Alice: -* Writes application code. -* Determines how to run the application in a Docker container. -* Defines the templates that run the container and dependent services in a Kubernetes cluster. --While Alice knows the application needs the capability to run in multiple environments, she doesn't know the specific settings for each environment. --Suppose Alice wants to make an application change that alters the Docker image used in the application deployment template. --1. Alice changes the deployment template, pushes it to a remote branch on the application repo, and opens a PR for review. -2. Alice asks her team to review the change. - * The PR pipeline runs validation. - * After a successful pipeline run, the team signs off and the change is merged. -3. The CI pipeline validates Alice's change and successfully completes. - * The change is safe to deploy to the cluster, and the artifacts are saved to the CI pipeline run. -4. Alice's change merges and triggers the CD pipeline. - * The CD pipeline picks up the artifacts stored by Alice's CI pipeline run. - * The CD pipeline substitutes the templates with environment-specific values, and stages any changes against the existing cluster state in the GitOps repo. - * The CD pipeline creates a PR to the GitOps repo with the desired changes to the cluster state. -5. Alice's team reviews and approves her PR. - * The change is merged into the target branch corresponding to the environment. -6. Within minutes, Flux notices a change in the GitOps repo and pulls Alice's change. 
- * Because of the Docker image change, the application pod requires an update. - * Flux applies the change to the cluster. -7. Alice tests the application endpoint to verify the deployment successfully completed. - > [!NOTE] - > For more environments targeted for deployment, the CD pipeline iterates by creating a PR for the next environment and repeats steps 4-7. The process may need extra approval for riskier deployments or environments, such as a security-related change or a production environment. -8. Once all the environments have received successful deployments, the pipeline completes. --## Next steps -Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md). |
azure-arc | Conceptual Gitops Flux2 Ci Cd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2-ci-cd.md | - Title: "CI/CD Workflow using GitOps (Flux v2) - Azure Arc-enabled Kubernetes" -description: "This article provides a conceptual overview of a CI/CD workflow using GitOps." Previously updated : 03/26/2024-----# CI/CD workflow using GitOps (Flux v2) --Modern Kubernetes deployments contain multiple applications, clusters, and environments. With GitOps, you can manage these complex setups more easily, tracking the desired state of the Kubernetes environments declaratively with Git. Using common Git tooling to declare cluster state, you can increase accountability, facilitate fault investigation, and enable automation to manage environments. --This article describes how GitOps fits into the full application change lifecycle using Azure Arc, Azure Repos, and Azure Pipelines. It also provides an example of a single application change to GitOps-controlled Kubernetes environments. --## Architecture --This diagram shows the CI/CD workflow for an application deployed to one or more Kubernetes environments. ---### Application code repository --The application repository contains the application code that developers work on during their inner loop. The application's deployment templates live in this repository in a generic form, such as Helm or Kustomize. Environment-specific values aren't stored in the repository. --Changes to this repo invoke a PR or CI pipeline that starts the deployment process. --### Container registry --The container registry holds all the first- and third-party images used in the Kubernetes environments. First-party application images are tagged with human-readable tags and the Git commit used to build the image. Third-party images may be cached to help with security, speed, and resilience. Set a plan for timely testing and integration of security updates. --For more information, see [How to consume and maintain public content with Azure Container Registry Tasks](../../container-registry/tasks-consume-public-content.md). --### PR pipeline --Pull requests from developers made to the application repository are gated on a successful run of the PR pipeline. This pipeline runs the basic quality gates, such as linting and unit tests on the application code. The pipeline tests the application and lints Dockerfiles and Helm templates used for deployment to a Kubernetes environment. Docker images should be built and tested, but not pushed. Keep the pipeline duration relatively short to allow for rapid iteration. --### CI pipeline --The application CI pipeline runs all the PR pipeline steps, expanding the testing and deployment checks. The pipeline can be run for each commit to main, or it can run at a regular cadence with a group of commits. --At this stage, application tests that are too consuming for the PR pipeline can be performed, including: --* Pushing images to container registry -* Image building, linting, and testing -* Template generation of raw YAML files --By the end of the CI build, artifacts are generated. These artifacts can be used by the CD step to consume in preparation for deployment. --### Flux cluster extension --Flux is an agent that runs in each cluster as a cluster extension. This Flux cluster extension is responsible for maintaining the desired state. The agent polls the GitOps repository at a user-defined interval, then reconciles the cluster state with the state declared in the Git repository. 
--For more information, see [Tutorial: Deploy applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md). --### CD pipeline --The CD pipeline is automatically triggered by successful CI builds. In this pipeline environment, environment-specific values are substituted into the previously published templates, and a new pull request is raised against the GitOps repository with these values. This pull request contains the proposed changes to the desired state of one or more Kubernetes clusters. Cluster administrators review the pull request and approve the merge to the GitOps repository. The pipeline waits for the pull request to merge, after which Flux syncs and applies the state changes. --### GitOps repository --The GitOps repository represents the current desired state of all environments across clusters. Any change to this repository is picked up by the Flux service in each cluster and deployed. Changes to the desired state of the clusters are presented as pull requests, which are then reviewed, and finally merged upon approval of the changes. These pull requests contain changes to deployment templates and the resulting rendered Kubernetes manifests. Low-level rendered manifests allow more careful inspection of changes typically unseen at the template-level. --### GitOps connector --[GitOps Connector](https://github.com/microsoft/gitops-connector) creates a connection between the Flux agent and the GitOps Repository/CD pipeline. While changes are applied to the cluster, Flux notifies the GitOps connector of every phase change and health check performed. This component serves as an adapter. It understands how to communicate to a Git repository, and it updates the Git commit status so the synchronization progress is visible in the GitOps repository. When the deployment finishes (whether it succeeds or fails), the connector notifies the CD pipeline to continue so the pipeline can perform post-deployment activities, such as integration testing. --### Kubernetes clusters --At least one Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) cluster serves the different environments needed by the application. For example, a single cluster can serve both a dev and QA environment through different namespaces. A second cluster can provide easier separation of environments and more fine-grained control. --## Example workflow --As an application developer, Alice: --* Writes application code. -* Determines how to run the application in a Docker container. -* Defines the templates that run the container and dependent services in a Kubernetes cluster. --Alice wants to make sure the application has the capability to run in multiple environments, but she doesn't know the specific settings for each environment. --Suppose Alice wants to make an application change that alters the Docker image used in the application deployment template. --1. Alice changes the deployment template, pushes it to a remote branch called `alice` in the Application Repo, and opens a pull request for review against the `main` branch. --1. Alice asks her team to review the change. -- * The PR pipeline runs validation. - * After a successful PR pipeline run and team approval, the change is merged. --1. The CI pipeline then kicks off and validates Alice's change and successfully completes. -- * The change is safe to deploy to the cluster, and the artifacts are saved to the CI pipeline run. --1. The successful CI pipeline run triggers the CD pipeline. 
-- * The CD pipeline picks up the artifacts stored by Alice's CI pipeline run. - * The CD pipeline substitutes the templates with environment-specific values and stages any changes against the existing cluster state in the GitOps repository. - * The CD pipeline creates a pull request against the production branch of the GitOps repository with the desired changes to the cluster state. --1. Alice's team reviews and approves her pull request. -- * The change is merged into the target branch corresponding to the environment. --1. Within minutes, Flux notices a change in the GitOps repository and pulls Alice's change. -- * Because of the Docker image change, the application pod requires an update. - * Flux applies the change to the cluster. - * Flux reports the deployment status back to the GitOps repository via [GitOps Connector](https://github.com/microsoft/gitops-connector). --1. The CD pipeline runs automated tests to verify the new deployment successfully completed and works as expected. -- > [!NOTE] - > For additional environments targeted for deployment, the CD pipeline iterates by creating a pull request for the next environment and repeats steps 4-7. The process may need extra approval for riskier deployments or environments, such as a security-related change or a production environment. --1. When all the environments have received successful deployments, the pipeline completes. --## Next steps --* Walk through our [tutorial to implement CI/CD with GitOps](tutorial-gitops-ci-cd.md). -* Learn about [creating connections between your cluster and a Git repository with Flux configurations](conceptual-gitops-flux2.md). |
azure-arc | Conceptual Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md | - Title: "Application deployments with GitOps (Flux v2)" -description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 03/27/2024-----# Application deployments with GitOps (Flux v2) for AKS and Azure Arc-enabled Kubernetes --Azure provides an automated application deployments capability using GitOps that works with Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. The key benefits provided by adopting GitOps for deploying applications to Kubernetes clusters include: --* Continual visibility into the status of applications running on clusters. -* Separation of concerns between application development teams and infrastructure teams. Application teams don't need to have experience with Kubernetes deployments. Platform engineering teams typically create a self-serve model for application teams, empowering them to run deployments with higher confidence. -* Ability to recreate clusters with the same desired state in case of a crash or to scale out. --With GitOps, you declare the desired state of your Kubernetes clusters in files in Git repositories. The Git repositories may contain the following files: --* [YAML-formatted manifests](https://yaml.org/) that describe Kubernetes resources (such as Namespaces, Secrets, Deployments, and others) -* [Helm charts](https://helm.sh/docs/topics/charts/) for deploying applications -* [Kustomize files](https://kustomize.io/) to describe environment-specific changes --Because these files are stored in a Git repository, they're versioned, and changes between versions are easily tracked. Kubernetes controllers run in the clusters and continually reconcile the cluster state with the desired state declared in the Git repository. These operators pull the files from the Git repositories and apply the desired state to the clusters. The operators also continuously assure that the cluster remains in the desired state. --GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets, Azure Blob Storage) and template types (YAML, Helm, and Kustomize). Flux also supports [multi-tenancy](#multi-tenancy) and deployment dependency management, among other features. --Flux is deployed directly on the cluster, and each cluster's control plane is logically separated. This makes it scale well to hundreds and thousands of clusters. Flux enables pure pull-based GitOps application deployments. No access to clusters is needed by the source repo or by any other cluster. --## Flux cluster extension --GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created. The extension is installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in a cluster, or you can install it manually using the portal, the Azure CLI (`az k8s-extension create --extensionType=microsoft.flux`), ARM template, or REST API. 
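For example, creating a first Flux configuration from the Azure CLI installs the `microsoft.flux` extension automatically if it isn't already present; this sketch uses the public sample repository from the GitOps tutorial and placeholder cluster names:

```azurecli
az k8s-configuration flux create \
    --resource-group AzureArcTest \
    --cluster-name AzureArcTest1 \
    --cluster-type connectedClusters \
    --name cluster-config \
    --namespace cluster-config \
    --scope cluster \
    --url https://github.com/Azure/gitops-flux2-kustomize-helm-mt \
    --branch main \
    --kustomization name=infra path=./infrastructure prune=true
```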
--### Controllers --By default, the `microsoft.flux` extension installs the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig Custom Resource Definition (CRD), `fluxconfig-agent`, and `fluxconfig-controller`. Optionally, you can also install the Flux `image-automation` and `image-reflector` controllers, which provide functionality for updating and retrieving Docker images. --* [Flux Source controller](https://toolkit.fluxcd.io/components/source/controller/): Watches the `source.toolkit.fluxcd.io` custom resources. Handles synchronization between the Git repositories, Helm repositories, Buckets and Azure Blob storage. Handles authorization with the source for private Git, Helm repos and Azure blob storage accounts. Surfaces the latest changes to the source through a tar archive file. -* [Flux Kustomize controller](https://toolkit.fluxcd.io/components/kustomize/controller/): Watches the `kustomization.toolkit.fluxcd.io` custom resources. Applies Kustomize or raw YAML files from the source onto the cluster. -* [Flux Helm controller](https://toolkit.fluxcd.io/components/helm/controller/): Watches the `helm.toolkit.fluxcd.io` custom resources. Retrieves the associated chart from the Helm Repository source surfaced by the Source controller. Creates the `HelmChart` custom resource and applies the `HelmRelease` with given version, name, and customer-defined values to the cluster. -* [Flux Notification controller](https://toolkit.fluxcd.io/components/notification/controller/): Watches the `notification.toolkit.fluxcd.io` custom resources. Receives notifications from all Flux controllers. Pushes notifications to user-defined webhook endpoints. -* Flux Custom Resource Definitions: -- * `kustomizations.kustomize.toolkit.fluxcd.io` - * `imagepolicies.image.toolkit.fluxcd.io` - * `imagerepositories.image.toolkit.fluxcd.io` - * `imageupdateautomations.image.toolkit.fluxcd.io` - * `alerts.notification.toolkit.fluxcd.io` - * `providers.notification.toolkit.fluxcd.io` - * `receivers.notification.toolkit.fluxcd.io` - * `buckets.source.toolkit.fluxcd.io` - * `gitrepositories.source.toolkit.fluxcd.io` - * `helmcharts.source.toolkit.fluxcd.io` - * `helmrepositories.source.toolkit.fluxcd.io` - * `helmreleases.helm.toolkit.fluxcd.io` - * `fluxconfigs.clusterconfig.azure.com` --* FluxConfig CRD: Custom Resource Definition for `fluxconfigs.clusterconfig.azure.com` custom resources that define `FluxConfig` Kubernetes objects. -* `fluxconfig-agent`: Responsible for watching Azure for new or updated `fluxConfigurations` resources, and for starting the associated Flux configuration in the cluster. Also responsible for pushing Flux status changes in the cluster back to Azure for each `fluxConfigurations` resource. -* `fluxconfig-controller`: Watches the `fluxconfigs.clusterconfig.azure.com` custom resources and responds to changes with new or updated configuration of GitOps machinery in the cluster. --> [!NOTE] -> The `microsoft.flux` extension is installed in the `flux-system` namespace and has [cluster-wide scope](conceptual-extensions.md#extension-scope). You can't install this extension at namespace scope. --## Flux configurations ---You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos, Bucket sources or Azure Blob storage. 
When you create a `fluxConfigurations` resource, the values you supply for the [parameters](gitops-flux2-parameters.md), such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service. --The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `microsoft.flux` extension, manage the GitOps configuration process. --`fluxconfig-agent` is responsible for the following tasks: --* Polls the Kubernetes Configuration data plane service for new or updated `fluxConfigurations` resources. -* Creates or updates `FluxConfig` custom resources in the cluster with the configuration information. -* Watches `FluxConfig` custom resources and pushes status changes back to the associated Azure fluxConfiguration resources. --`fluxconfig-controller` is responsible for the following tasks: --* Watches status updates to the Flux custom resources created by the managed `fluxConfigurations`. -* Creates private/public key pair that exists for the lifetime of the `fluxConfigurations`. This key is used for authentication if the URL is SSH based and if the user doesn't provide their own private key during creation of the configuration. -* Creates custom authentication secret based on user-provided private-key/http basic-auth/known-hosts/no-auth data. -* Sets up role-based access control (service account provisioned, role binding created/assigned, role created/assigned). -* Creates `GitRepository` or `Bucket` custom resource and `Kustomization` custom resources from the information in the `FluxConfig` custom resource. --Each `fluxConfigurations` resource in Azure is associated with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources in a Kubernetes cluster. When you create a `fluxConfigurations` resource, you specify the URL to the source (Git repository, Bucket or Azure Blob storage) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. You can also create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams. --> [!NOTE] -> The `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent can't connect to Azure, changes in the cluster wait until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time out, and the changes will need to be reapplied in Azure. -> -> Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, make sure that your clusters connect with Azure within 48 hours. --You can monitor Flux configuration status and compliance in the Azure portal, or use dashboards to monitor status, compliance, resource consumption, and reconciliation activity. For more information, see [Monitor GitOps (Flux v2) status and activity](monitor-gitops-flux-2.md). --### Version support --The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. 
We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported. --> [!NOTE] -> If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. -> -> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. --## GitOps with private link --If you've added support for [private link to an Azure Arc-enabled Kubernetes cluster](private-link.md), then the `microsoft.flux` extension works out-of-the-box with communication back to Azure. For connections to your Git repository, Helm repository, or any other endpoints that are needed to deploy your Kubernetes manifests, you must provision these endpoints behind your firewall, or list them on your firewall, so that the Flux Source controller can successfully reach them. --## Data residency --The Azure GitOps service (Azure Kubernetes Configuration Management) stores/processes customer data. By default, customer data is replicated to the paired region. For the regions Singapore, East Asia, and Brazil South, all customer data is stored and processed in the region. --## Apply Flux configurations at scale --Because Azure Resource Manager manages your configurations, you can automate creating the same configuration across all Azure Kubernetes Service and Azure Arc-enabled Kubernetes resources using Azure Policy, within the scope of a subscription or a resource group. This at-scale enforcement ensures that specific configurations are applied consistently across entire groups of clusters. --For more information, see [Deploy applications consistently at scale using Flux v2 configurations and Azure Policy](./use-azure-policy-flux-2.md). --## Parameters --To see all the parameters supported by Flux v2 in Azure, see the [`az k8s-configuration` documentation](/cli/azure/k8s-configuration). The Azure implementation doesn't currently support every parameter that Flux supports. --For information about available parameters and how to use them, see [GitOps (Flux v2) supported parameters](gitops-flux2-parameters.md). --## Multi-tenancy --Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) starting in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability is integrated into Flux v2 in Azure. --> [!NOTE] -> For the multi-tenancy feature, you need to know if your manifests contain any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare: -> -> * Upgrade to Kubernetes version 1.20.6 or greater. -> * In your Kubernetes manifests, assure that all `sourceRef` are to objects within the same namespace as the GitOps configuration. -> * If you need time to update your manifests, you can [opt out of multi-tenancy](#opt-out-of-multi-tenancy). However, you still need to upgrade your Kubernetes version. 
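The following commands are an illustrative way to check these prerequisites; adjust the manifest path to your repository layout:

```bash
# Verify the cluster is running Kubernetes 1.20.6 or later.
kubectl version

# Flag manifests that set an explicit namespace on a sourceRef, which may indicate a
# cross-namespace reference that needs to be updated before enabling multi-tenancy.
grep -rn -A 3 'sourceRef:' ./manifests --include='*.yaml' | grep 'namespace:'
```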
--### Update manifests for multi-tenancy --LetΓÇÖs say you deploy a `fluxConfiguration` to one of our Kubernetes clusters in the `cluster-config` namespace with cluster scope. You configure the source to sync the `https://github.com/fluxcd/flux2-kustomize-helm-example` repo. This is the same sample Git repo used in the [Deploy applications using GitOps with Flux v2 tutorial](tutorial-use-gitops-flux2.md). --After Flux syncs the repo, it deploys the resources described in the manifests (YAML files). Two of the manifests describe `HelmRelease` and `HelmRepository` objects. --```yaml -apiVersion: helm.toolkit.fluxcd.io/v2beta1 -kind: HelmRelease -metadata: - name: nginx - namespace: nginx -spec: - releaseName: nginx-ingress-controller - chart: - spec: - chart: nginx-ingress-controller - sourceRef: - kind: HelmRepository - name: bitnami - namespace: flux-system - version: "5.6.14" - interval: 1h0m0s - install: - remediation: - retries: 3 - # Default values - # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml - values: - service: - type: NodePort -``` --```yaml -apiVersion: source.toolkit.fluxcd.io/v1beta1 -kind: HelmRepository -metadata: - name: bitnami - namespace: flux-system -spec: - interval: 30m - url: https://charts.bitnami.com/bitnami -``` --By default, the Flux extension deploys the `fluxConfigurations` by impersonating the `flux-applier` service account that is deployed only in the `cluster-config` namespace. Using the above manifests, when multi-tenancy is enabled, the `HelmRelease` would be blocked. This is because the `HelmRelease` is in the `nginx` namespace, but it references a HelmRepository in the `flux-system` namespace. Also, the Flux `helm-controller` can't apply the `HelmRelease`, because there is no `flux-applier` service account in the `nginx` namespace. --To work with multi-tenancy, the correct approach is to deploy all Flux objects into the same namespace as the `fluxConfigurations`. This approach avoids the cross-namespace reference issue, and allows the Flux controllers to get the permissions to apply the objects. Thus, for a GitOps configuration created in the `cluster-config` namespace, these example manifests would change as follows: --```yaml -apiVersion: helm.toolkit.fluxcd.io/v2beta1 -kind: HelmRelease -metadata: - name: nginx - namespace: cluster-config -spec: - releaseName: nginx-ingress-controller - targetNamespace: nginx - chart: - spec: - chart: nginx-ingress-controller - sourceRef: - kind: HelmRepository - name: bitnami - namespace: cluster-config - version: "5.6.14" - interval: 1h0m0s - install: - remediation: - retries: 3 - # Default values - # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml - values: - service: - type: NodePort -``` --```yaml -apiVersion: source.toolkit.fluxcd.io/v1beta1 -kind: HelmRepository -metadata: - name: bitnami - namespace: cluster-config -spec: - interval: 30m - url: https://charts.bitnami.com/bitnami -``` --### Opt out of multi-tenancy --When the `microsoft.flux` extension is installed, multi-tenancy is enabled by default. 
If you need to disable multi-tenancy, you can opt out by creating or updating the `microsoft.flux` extension in your clusters with `--configuration-settings multiTenancy.enforce=false`, as shown in these example commands: --```azurecli -az k8s-extension create --extension-type microsoft.flux --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters> -``` --```azurecli -az k8s-extension update --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters> -``` --## Migrate from Flux v1 --If you're still using Flux v1, we recommend migrating to Flux v2 as soon as possible. --To migrate to using Flux v2 in the same clusters where you've been using Flux v1, you must first delete all Flux v1 `sourceControlConfigurations` from the clusters. Because Flux v2 has a fundamentally different architecture, the `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources in a cluster. The process of removing Flux v1 configurations and deploying Flux v2 configurations shouldn't take more than 30 minutes. --Removing Flux v1 `sourceControlConfigurations` doesn't stop any applications that are running on the clusters. However, during the period when Flux v1 configuration is removed and Flux v2 extension isn't yet fully deployed: --* If there are new changes in the application manifests stored in a Git repository, these changes aren't pulled during the migration, and the application version deployed on the cluster will be stale. -* If there are unintended changes in the cluster state and it deviates from the desired state specified in source Git repository, the cluster won't be able to self-heal. --We recommend testing your migration scenario in a development environment before migrating your production environment. --### View and delete Flux v1 configurations --Use these Azure CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster: --```azurecli -az k8s-configuration list --cluster-name <cluster name> --cluster-type <connectedClusters or managedClusters> --resource-group <resource group name> -az k8s-configuration delete --name <configuration name> --cluster-name <cluster name> --cluster-type <connectedClusters or managedClusters> --resource-group <resource group name> -``` --You can also find and delete existing GitOps configurations for a cluster in the Azure portal. To do so, navigate to the cluster where the configuration was created and select **GitOps** in the left pane. Select the configuration, then select **Delete**. --### Deploy Flux v2 configurations --Use the Azure portal or Azure CLI to [apply Flux v2 configurations](tutorial-use-gitops-flux2.md#apply-a-flux-configuration) to your clusters. --### Flux v1 retirement information --The open-source project of Flux v1 has been archived, and [feature development has stopped indefinitely](https://fluxcd.io/docs/migration/). --Flux v2 was launched as the upgraded open-source project of Flux. It has a new architecture and supports more GitOps use cases. Microsoft launched a version of an extension using Flux v2 in May 2022. Since then, customers have been advised to move to Flux v2 within three years, as support for using Flux v1 is scheduled to end in May 2025. --Key new features introduced in the GitOps extension for Flux v2: --* Flux v1 is a monolithic do-it-all operator. 
Flux v2 separates the functionalities into [specialized controllers](#controllers) (Source controller, Kustomize controller, Helm controller, and Notification controller). -* Supports synchronization with multiple source repositories. -* Supports [multi-tenancy](#multi-tenancy), like applying each source repository with its own set of permissions. -* Provides operational insights through health checks, events and alerts. -* Supports Git branches, pinning on commits and tags, and following SemVer tag ranges. -* Credentials configuration per GitRepository resource: SSH private key, HTTP/S username/password/token, and OpenPGP public keys. --## Next steps --* Use our tutorial to learn how to [enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters](tutorial-use-gitops-flux2.md). -* Learn about [CI/CD workflow using GitOps](conceptual-gitops-flux2-ci-cd.md). |
azure-arc | Conceptual Inner Loop Gitops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-inner-loop-gitops.md | - Title: "Inner Loop Developer Experience for Teams Adopting GitOps" Previously updated : 08/09/2023--- -description: "Learn how an established inner loop can enhance developer productivity and help in a seamless transition for teams adopting GitOps." --# Inner Loop Developer Experience for teams adopting GitOps --This article describes how an established inner loop can enhance developer productivity and help in a seamless transition from the inner dev loop to the outer loop for teams adopting GitOps. --## Inner dev loop frameworks --Building and deploying containers can slow the inner dev experience and impact team productivity. Cloud-native development teams benefit from a robust inner dev loop framework. Inner dev loop frameworks help with the iterative process of writing code, building, and debugging. --Capabilities of inner dev loop frameworks include: --- Automation of repetitive steps such as building code and deploying to the target cluster.-- Enhanced ability to work with remote and local clusters, and support for local tunnel debugging in a hybrid setup.-- Ability to configure custom flows for team-based productivity.-- Handling microservice dependencies.-- Hot reloading, port forwarding, log, and terminal access.--Depending on the maturity and complexity of the service, dev teams can choose their cluster setup to accelerate the inner dev loop: --- All local-- All remote-- Hybrid--Many frameworks support these capabilities. Microsoft offers [Bridge to Kubernetes](/visualstudio/bridge/overview-bridge-to-kubernetes) for [local tunnel debugging](/visualstudio/bridge/bridge-to-kubernetes-vs-code#install-and-use-local-tunnel-debugging). Many other similar market offerings are available, such as DevSpace, Skaffold, and Tilt. --> [!NOTE] -> The market offering [DevSpace](https://github.com/loft-sh/devspace) shouldn't be confused with Microsoft's offering, [Bridge to Kubernetes](/visualstudio/bridge/overview-bridge-to-kubernetes), which was previously named DevSpace. --## Inner loop to outer loop transition --Once you've evaluated and chosen an inner loop dev framework, you can build a seamless inner loop to outer loop transition. --As described in the example scenario covered in [CI/CD workflow using GitOps](conceptual-gitops-flux2-ci-cd.md), an application developer works on application code within an application repository. This application repository also holds high-level deployment Helm and/or Kustomize templates. --The CI/CD pipelines: --- Generate the low-level manifests from the high-level templates, adding environment-specific values.-- Create a pull request that merges the low-level manifests with the GitOps repo that holds the desired state for the specific environment.--Similar low-level manifests can be generated locally for the inner dev loop, using the configuration values local to the developer. Application developers can iterate on code changes and use the low-level manifests to deploy and debug applications. Generation of the low-level manifests can be integrated into an inner loop workflow, using the developer's local configuration. Most inner loop frameworks allow configuring custom flows, either by extending them through custom plugins or by injecting script invocations based on hooks. 
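For example, with DevSpace such a flow can be sketched roughly as follows. The schema is abbreviated, and the `render-manifests.sh` script, variable names, and image are hypothetical placeholders; consult your framework's documentation for the exact configuration format.

```yaml
# Rough sketch of a devspace.yaml (schema abbreviated; see the DevSpace docs for exact fields).
# The render-manifests.sh script and the ENVIRONMENT variable are hypothetical examples of
# generating low-level manifests from the developer's local configuration before deployment.
version: v2beta1
name: sample-app
vars:
  ENVIRONMENT:
    source: env
    default: dev
hooks:
  - command: ./scripts/render-manifests.sh ${ENVIRONMENT}   # hydrate low-level manifests locally
    events: ["before:deploy"]
deployments:
  sample-app:
    kubectl:
      manifests:
        - ./manifests/rendered/
dev:
  sample-app:
    imageSelector: registry.example.com/sample-app
    sync:
      - path: ./src:/app/src   # hot reload via file sync instead of rebuilding the container
    ports:
      - port: "8080"
```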
--## Example inner loop workflow built with DevSpace framework --To illustrate the inner loop workflow, we can look at an example scenario. This example uses the DevSpace framework, but the general workflow can be used with other frameworks. --This diagram shows the workflow for the inner loop. ---This diagram shows the workflow for the inner loop to outer loop transition. ---In this example, as an application developer, Alice: --- Authors a devspace.yaml file to configure the inner loop.-- Writes and tests application code using the inner loop for efficiency.-- Deploys to staging or production with the outer loop.--Suppose Alice wants to update, run, and debug the application in either a local or remote cluster. --1. Alice updates the local configuration for the development environment represented in the .env file. -1. Alice runs `devspace use context` and selects the Kubernetes cluster context. -1. Alice selects a namespace to work with by running `devspace use namespace <namespace_name>`. -1. Alice can iterate on changes to the application code, and deploy and debug the application onto the target cluster by running `devspace dev`. -1. Running `devspace dev` generates low-level manifests based on Alice's local configuration and deploys the application. These low-level manifests are configured with DevSpace hooks in devspace.yaml. -1. Alice doesn't need to rebuild the container every time she makes code changes, since DevSpace enables hot reloading, using file sync to copy her latest changes inside the container. -1. Running `devspace dev` also deploys any dependencies configured in devspace.yaml, such as back-end dependencies of the front-end. -1. Alice tests her changes by accessing the application through the port forwarding configured in devspace.yaml. -1. Once Alice finalizes her changes, she can purge the deployment by running `devspace purge` and create a new pull request to merge her changes to the dev branch of the application repository. --> [!NOTE] -> Find the sample code for this workflow in our [GitHub repo](https://github.com/Azure/arc-cicd-demo-src). --## Next steps --- Learn about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-gitops-flux2.md).-- Learn more about [CI/CD workflow using GitOps](conceptual-gitops-ci-cd.md). |
azure-arc | Conceptual Workload Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-workload-management.md | - Title: "Workload management in a multi-cluster environment with GitOps" -description: "This article provides a conceptual overview of the workload management in a multi-cluster environment with GitOps." Previously updated : 03/29/2023------# Workload management in a multi-cluster environment with GitOps --Developing modern cloud-native applications often includes building, deploying, configuring, and promoting workloads across a group of Kubernetes clusters. With the increasing diversity of Kubernetes cluster types, and the variety of applications and services, the process can become complex and unscalable. Enterprise organizations can be more successful in these efforts by having a well defined structure that organizes people and their activities, and by using automated tools. --This article walks you through a typical business scenario, outlining the involved personas and major challenges that organizations often face while managing cloud-native workloads in a multi-cluster environment. It also suggests an architectural pattern that can make this complex process simpler, observable, and more scalable. --## Scenario overview --This article describes an organization that develops cloud-native applications. Any application needs a compute resource to work on. In the cloud-native world, this compute resource is a Kubernetes cluster. An organization may have a single cluster or, more commonly, multiple clusters. So the organization must decide which applications should work on which clusters. In other words, they must schedule the applications across clusters. The result of this decision, or scheduling, is a model of the desired state of the clusters in their environment. Having that in place, they need somehow to deliver applications to the assigned clusters so that they can turn the desired state into the reality, or, in other words, reconcile it. --Every application goes through a software development lifecycle that promotes it to the production environment. For example, an application is built, deployed to Dev environment, tested and promoted to Stage environment, tested, and finally delivered to production. For a cloud-native application, the application requires and targets different Kubernetes cluster resources throughout its lifecycle. In addition, applications normally require clusters to provide some platform services, such as Prometheus and Fluentbit, and infrastructure configurations, such as networking policy. --Depending on the application, there may be a great diversity of cluster types to which the application is deployed. The same application with different configurations could be hosted on a managed cluster in the cloud, on a connected cluster in an on-premises environment, on a group of clusters on semi-connected edge devices on factory lines or military drones, and on an air-gapped cluster on a starship. Another complexity is that clusters in early lifecycle stages such as Dev and QA are normally managed by the developer, while reconciliation to actual production clusters may be managed by the organization's customers. In the latter case, the developer may be responsible only for promoting and scheduling the application across different rings. 
--## Challenges at scale --In a small organization with a single application and only a few operations, most of these processes can be handled manually with a handful of scripts and pipelines. But for enterprise organizations operating on a larger scale, it can be a real challenge. These organizations often produce hundreds of applications that target hundreds of cluster types, backed up by thousands of physical clusters. In these cases, handling such operations manually with scripts isn't feasible. --The following capabilities are required to perform this type of workload management at scale in a multi-cluster environment: --- Separation of concerns on scheduling and reconciling-- Promotion of the multi-cluster state through a chain of environments-- Sophisticated, extensible and replaceable scheduler-- Flexibility to use different reconcilers for different cluster types depending on their nature and connectivity-- Platform configuration management at scale--## Scenario personas --Before we describe the scenario, let's clarify which personas are involved, what responsibilities they have, and how they interact with each other. --### Platform team --The platform team is responsible for managing the clusters that host applications produced by application teams. --Key responsibilities of the platform team are: --* Define staging environments (Dev, QA, UAT, Prod). -* Define cluster types and their distribution across environments. -* Provision new clusters. -* Manage infrastructure configurations across the clusters. -* Maintain platform services used by applications. -* Schedule applications and platform services on the clusters. --### Application team --The application team is responsible for the software development lifecycle (SDLC) of their applications. They provide Kubernetes manifests that describe how to deploy the application to different targets. They're responsible for owning CI/CD pipelines that create container images and Kubernetes manifests and promote deployment artifacts across environment stages. --Typically, the application team has no knowledge of the clusters that they are deploying to. They aren't aware of the structure of the multi-cluster environment, global configurations, or tasks performed by other teams. The application team primarily understands the success of their application rollout as defined by the success of the pipeline stages. --Key responsibilities of the application team are: --* Develop, build, deploy, test, promote, release, and support their applications. -* Maintain and contribute to source and manifests repositories of their applications. -* Define and configure application deployment targets. -* Communicate to platform team, requesting desired compute resources for successful SDLC operations. --## High level flow --This diagram shows how the platform and application team personas interact with each other while performing their regular activities. ---The primary concept of this whole process is separation of concerns. There are workloads, such as applications and platform services, and there is a platform where these workloads run. The application team takes care of the workloads (*what*), while the platform team is focused on the platform (*where*). --The application team runs SDLC operations on their applications and promotes changes across environments. They don't know which clusters their application is deployed on in each environment. 
Instead, the application team operates with the concept of *deployment target*, which is simply a named abstraction within an environment. For example, deployment targets could be integration on Dev, functional tests and performance tests on QA, early adopters, external users on Prod, and so on. --The application team defines deployment targets for each rollout environment, and they know how to configure their application and how to generate manifests for each deployment target. This process is automated and exists in the application repositories space. It results in generated manifests for each deployment target, stored in a manifests storage such as a Git repository, Helm Repository, or OCI storage. --The platform team has a limited knowledge about the applications, so they aren't involved in the application configuration and deployment process. The platform team is in charge of platform clusters, grouped in cluster types. They describe cluster types with configuration values such as DNS names, endpoints of external services, and so on. The platform team assigns or schedules application deployment targets to various cluster types. With that in place, application behavior on a physical cluster is determined by the combination of the deployment target configuration values, and cluster type configuration values. --The platform team uses a separate platform repository that contains manifests for each cluster type. These manifests define the workloads that should run on each cluster type, and which platform configuration values should be applied. Clusters can fetch that information from the platform repository with their preferred reconciler and then apply the manifests. --Clusters report their compliance state with the platform and application repositories to the Deployment Observability Hub. The platform and application teams can query this information to analyze historical workload deployment across clusters. This information can be used in the dashboards, alerts and in the deployment pipelines to implement progressive rollout. --## Solution architecture --Let's have a look at the high level solution architecture and understand its primary components. ---### Control plane --The platform team models the multi-cluster environment in the control plane. It's designed to be human-oriented and easy to understand, update, and review. The control plane operates with abstractions such as Cluster Types, Environments, Workloads, Scheduling Policies, Configs and Templates. These abstractions are handled by an automated process that assigns deployment targets and configuration values to the cluster types, then saves the result to the platform GitOps repository. Although there may be thousands of physical clusters, the platform repository operates at a higher level, grouping the clusters into cluster types. --The main requirement for the control plane storage is to provide a reliable and secure transaction processing functionality, rather than being hit with complex queries against a large amount of data. Various technologies may be used to store the control plane data. --This architecture design suggests a Git repository with a set of pipelines to store and promote platform abstractions across environments. This design provides a few benefits: --* All advantages of GitOps principles, such as version control, change approvals, automation, pull-based reconciliation. -* Git repositories such as GitHub provide out of the box branching, security and PR review functionality. 
-* Easy implementation of the promotional flows with GitHub Actions Workflows or similar orchestrators. -* No need to maintain and expose a separate control plane service. --### Promotion and scheduling --The control plane repository contains two types of data: --* Data that gets promoted across environments, such as a list of onboarded workloads and various templates. -* Environment-specific configurations, such as included environment cluster types, config values, and scheduling policies. This data isn't promoted, as it's specific to each environment. --The data to be promoted is stored in the `main` branch. Environment-specific data is stored in the corresponding environment branches such as example `dev`, `qa`, and `prod`. Transforming data from the control plane to the GitOps repo is a combination of the promotion and scheduling flows. The promotion flow moves the change across the environments horizontally; the scheduling flow does the scheduling and generates manifests vertically for each environment. ---A commit to the `main` branch starts the promotion flow that triggers the scheduling flow for each environment one by one. The scheduling flow takes the base manifests from `main`, applies config values from a corresponding to this environment branch, and creates a PR with the resulting manifests to the platform GitOps repository. Once the rollout on this environment is complete and successful, the promotion flow goes ahead and performs the same procedure on the next environment. On each environment, the flow promotes the same commit ID of the `main` branch, making sure that the content from `main` goes to the next environment only after successful deployment to the previous environment. --A commit to the environment branch in the control plane repository starts the scheduling flow for this environment. For example, perhaps you have configured cosmo-db endpoint in the QA environment. You only want to update the QA branch of the platform GitOps repository, without touching anything else. The scheduling takes the `main` content, corresponding to the latest commit ID promoted to this environment, applies configurations, and promotes the resulting manifests to the platform GitOps branch. --### Workload assignment --In the platform GitOps repository, each workload assignment to a cluster type is represented by a folder that contains the following items: --* A dedicated namespace for this workload in this environment on a cluster of this type. -* Platform policies restricting workload permissions. -* Consolidated platform config maps with the values that the workload can use. -* Reconciler resources, pointing to a Workload Manifests Storage where the actual workload manifests or Helm charts are stored. For example, Flux GitRepository and Flux Kustomization, ArgoCD Application, Zarf descriptors, and so on. --### Cluster types and reconcilers --Every cluster type can use a different reconciler (such as Flux, ArgoCD, Zarf, Rancher Fleet, and so on) to deliver manifests from the Workload Manifests Storages. Cluster type definition refers to a reconciler, which defines a collection of manifest templates. The scheduler uses these templates to produce reconciler resources, such as Flux GitRepository and Flux Kustomization, ArgoCD Application, Zarf descriptors, and so on. The same workload may be scheduled to the cluster types, managed by different reconcilers, for example Flux and ArgoCD. 
The scheduler generates Flux GitRepository and Flux Kustomization for one cluster and ArgoCD Application for another cluster, but both of them point to the same Workload Manifests Storage containing the workload manifests. --### Platform services --Platform services are workloads (such as Prometheus, NGINX, Fluentbit, and so on) maintained by the platform team. Just like any workloads, they have their source repositories and manifests storage. The source repositories may contain pointers to external Helm charts. CI/CD pipelines pull the charts with containers and perform necessary security scans before submitting them to the manifests storage, from where they're reconciled to the clusters. --### Deployment Observability Hub --Deployment Observability Hub is a central storage that is easy to query with complex queries against a large amount of data. It contains deployment data with historical information on workload versions and their deployment state across clusters. Clusters register themselves in the storage and update their compliance status with the GitOps repositories. Clusters operate at the level of Git commits only. High-level information, such as application versions, environments, and cluster type data, is transferred to the central storage from the GitOps repositories. This high-level information gets correlated in the central storage with the commit compliance data sent from the clusters. --## Platform configuration concepts --### Separation of concerns --Application behavior on a deployment target is determined by configuration values. However, configuration values are not all the same. These values are provided by different personas at different points in the application lifecycle and have different scopes. Generally, there are application and platform configurations. --### Application configurations --Application configurations provided by the application developers are abstracted away from deployment target details. Typically, application developers aren't aware of host-specific details, such as which hosts the application will be deployed to or how many hosts there are. But the application developers do know a chain of environments and rings that the application is promoted through on its way to production. --Orthogonal to that, an application might be deployed multiple times in each environment to play different roles. For example, the same application can serve as a `dispatcher` and as an `exporter`. The application developers may want to configure the application differently for various use cases. For example, if the application is running as a `dispatcher` on a QA environment, it should be configured in this way regardless of the actual host. The configuration values of this type are provided at the development time, when the application developers create deployment descriptors/manifests for various environments/rings and application roles. --### Platform configurations --Besides development time configurations, an application often needs platform-specific configuration values such as endpoints, tags, or secrets. These values may be different on every single host where the application is deployed. The deployment descriptors/manifests, created by the application developers, refer to the configuration objects containing these values, such as config maps or secrets. Application developers expect these configuration objects to be present on the host and available for the application to consume. Commonly, these objects and their values are provided by a platform team. 
Depending on the organization, the platform team persona may be backed up by different departments/people, for example IT Global, Site IT, Equipment owners and such. --The concerns of the application developers and the platform team are totally separated. The application developers are focused on the application; they own and configure it. Similarly, the platform team owns and configures the platform. The key point is that the platform team doesn't configure applications, they configure environments for applications. Essentially, they provide environment variable values for the applications to use. --Platform configurations often consist of common configurations that are irrelevant to the applications consuming them, and application-specific configurations that may be unique for every application. ---### Configuration schema --Although the platform team may have limited knowledge about the applications and how they work, they know what platform configuration is required to be present on the target host. This information is provided by the application developers. They specify what configuration values their application needs, their types and constraints. One of the ways to define this contract is to use a JSON schema. For example: --```json -{ - "$schema": "http://json-schema.org/draft-07/schema#", - "title": "patch-to-core Platform Config Schema", - "description": "Schema for platform config", - "type": "object", - "properties": { - "ENVIRONMENT": { - "type": "string", - "description": "Environment Name" - }, - "TimeWindowShift": { - "type": "integer", - "description": "Time Window Shift" - }, - "QueryIntervalSec": { - "type": "integer", - "description": "Query Interval Sec" - }, - "module": { - "type": "object", - "description": "module", - "properties": { - "drop-threshold": { "type": "number" } - }, - "required": ["drop-threshold"] - } - }, - "required": [ - "ENVIRONMENT", - "module" - ] - } -``` --This approach is well known in the developer community, as the JSON schema is used by Helm to define the possible values to be provided for a Helm chart. --A formal contract also allows for automation. The platform team uses the control plane to provide the configuration values. The control plane analyzes what applications are supposed to be deployed on a host. It uses configuration schemas to advise what values should be provided by the platform team. The control plane composes configuration values for every application instance and validates them against the schema to see if all the values are in place. --The control plane may perform validation in multiple stages at different points in time. For example, the control plane validates a configuration value when it is provided by the platform team to check its type, format and basic constrains. The final and the most important validation is conducted when the control plane composes all available configuration values for the application in the configuration snapshot. Only at this point it is possible to check presence of required configuration values and check integrity constraints that involve multiple values, coming from different sources. --### Configuration graph model - -The control plane composes configuration value snapshots for the application instances on deployment targets. It pulls the values from different configuration containers. The relationship of these containers may represent a hierarchy or a graph. 
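For illustration, a snapshot hydrated from several containers, and valid against the example schema shown earlier, might look like this (the values are hypothetical):

```json
{
  "ENVIRONMENT": "qa",
  "TimeWindowShift": 2,
  "QueryIntervalSec": 30,
  "module": {
    "drop-threshold": 0.95
  }
}
```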
The control plane follows some rules to identify what configuration values from what containers should be hydrated into the application configuration snapshot. It's the platform team's responsibility to define the configuration containers and establish the hydration rules. Application developers aren't aware of this structure. They are aware of configuration values to be provided, and it's not their concern where the values are coming from. --### Label matching approach --A simple and flexible way to implement configuration composition is the label matching approach. ---In this diagram, configuration containers group configuration values at different levels such as **Site**, **Line**, **Environment**, and **Region**. Depending on the organization, the values in these containers may be provided by different personas, such as IT Global, Site IT, Equipment owners, or just the Platform team. Each container is marked with a set of labels that define where values from this container are applicable. Besides the configuration containers, there are abstractions representing an application and a host where the application is to be deployed. Both of them are marked with the labels as well. The combination of the application's and host's labels composes the instance's labels set. This set determines the values of configuration containers that should be pulled into the application configuration snapshot. This snapshot is delivered to the host and fed to the application instance. The control plane iterates over the containers and evaluates if the container's labels match the instance's labels set. If so, the container's values are included in the final snapshot; if not, the container is skipped. The control plane can be configured with different strategies of overriding and merging for the complex objects and arrays. --One of the biggest advantages of this approach is scalability. The structure of configuration containers is abstracted away from the application instance, which doesn't really know where the values are coming from. This lets the platform team easily manipulate the configuration containers, introduce new levels and configuration groups without reconfiguring hundreds of application instances. --### Templating --The control plane composes configuration snapshots for every application instance on every host. The variety of applications, hosts, the underlying technologies and the ways how applications are deployed can be very wide. Furthermore, the same application can be deployed completely differently on its way from dev to production environments. The concern of the control plane is to manage configurations, not to deploy. It should be agnostic from the underlying application/host technologies and generate configuration snapshots in a suitable format for each case (for example, a Kubernetes config map, properties file, Symphony catalog, or other format). --One option is to assign different templates to different host types. These templates are used by the control plane when it generates configuration snapshots for the applications to be deployed on the host. It would be beneficial to apply a standard templating approach, which is well known in the developer community. 
For example, the following templates can be defined with the [Go Templates](https://pkg.go.dev/text/template), which are widely used across the industry: --```yaml -# Standard Kubernetes config map -apiVersion: v1 -kind: ConfigMap -metadata: - name: platform-config - namespace: {{ .Namespace}} -data: -{{ toYaml .ConfigData | indent 2}} -``` --```yaml -# Symphony catalog object -apiVersion: federation.symphony/v1 -kind: Catalog -metadata: - name: platform-config - namespace: {{ .Namespace}} -spec: - type: config - name: platform-config - properties: -{{ toYaml .ConfigData | indent 4}} -``` --```yaml -# JSON file -{{ toJson .ConfigData}} -``` --Then we assign these templates to host A, B and C respectively. Assuming an application with same configuration values is about to be deployed to all three hosts, the control plane generates three different configuration snapshots for each instance: --```yaml -# Standard Kubernetes config map -apiVersion: v1 -kind: ConfigMap -metadata: - name: platform-config - namespace: line1 -data: - FACTORY_NAME: Atlantida - LINE_NAME_LOWER: line1 - LINE_NAME_UPPER: LINE1 - QueryIntervalSec: "911" -``` --```yaml -# Symphony catalog object -apiVersion: federation.symphony/v1 -kind: Catalog -metadata: - name: platform-config - namespace: line1 -spec: - type: config - name: platform-config - properties: - FACTORY_NAME: Atlantida - LINE_NAME_LOWER: line1 - LINE_NAME_UPPER: LINE1 - QueryIntervalSec: "911" -``` --```json -{ - "FACTORY_NAME" : "Atlantida", - "LINE_NAME_LOWER" : "line1", - "LINE_NAME_UPPER": "LINE1", - "QueryIntervalSec": "911" -} -``` --### Configuration storage --The control plane operates with configuration containers that group configuration values at different levels in a hierarchy or a graph. These containers should be stored somewhere. The most obvious approach is to use a database. It could be [etcd](https://etcd.io/), relational, hierarchical or a graph database, providing the most flexible and robust experience. The database gives the ability to granularly track and handle configuration values at the level of each individual configuration container. --Besides the main features such as storage and the ability to query and manipulate the configuration objects effectively, there should be functionality related to change tracking, approvals, promotions, rollbacks, version compares and so on. The control plane can implement all that on top of a database and encapsulate everything in a monolithic managed service. --Alternatively, this functionality can be delegated to Git to follow the "configuration as code" concept. For example, [Kalypso](https://github.com/microsoft/kalypso), being a Kubernetes operator, treats configuration containers as custom Kubernetes resources, that are essentially stored in etcd database. Even though the control plane doesn't dictate that, it is a common practice to originate configuration values in a Git repository, applying all the benefits that it gives out of the box. Then, the configuration values are delivered a Kubernetes etcd storage with a GitOps operator where the control plane can work with them to do the compositions. --### Git repositories hierarchy --It's not necessary to have a single Git repository with configuration values for the entire organization. Such a repository might become a bottleneck at scale, given the variety of the "platform team" personas, their responsibilities, and their access levels. 
Instead, you can use GitOps operator references, such as Flux GitRepository and Flux Kustomization, to build a repository hierarchy and eliminate the friction points: ----### Configuration versioning --Whenever application developers introduce a change in the application, they produce a new application version. Similarly, a new platform configuration value leads to a new version of the configuration snapshot. Versioning allows for tracking changes, explicit rollouts and rollbacks. --A key point is that application configuration snapshots are versioned independently from each other. A single configuration value change at the global or site level doesn't necessarily produce new versions of all application configuration snapshots; it impacts only those snapshots where this value is hydrated. A simple and effective way to track it would be to use a hash of the snapshot content as its version. In this way, if the snapshot content has changed because something has changed in the global configurations, there will be a new version. This new version is a subject to be applied either manually or automatically. In any case, this is a trackable event that can be rolled back if needed. --## Next steps --* Walk through a sample implementation to explore [workload management in a multi-cluster environment with GitOps](workload-management.md). -* Explore a [multi-cluster workload management sample repository](https://github.com/microsoft/kalypso). -* [Concept: CD process with GitOps](https://github.com/microsoft/kalypso/blob/main/docs/cd-concept.md). -* [Sample implementation: Explore CI/CD flow with GitOps](https://github.com/microsoft/kalypso/blob/main/cicd/tutorial/cicd-tutorial.md). |
azure-arc | Custom Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md | - Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes" Previously updated : 03/26/2024-- -description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters" ---# Create and manage custom locations on Azure Arc-enabled Kubernetes -- The *custom locations* feature provides a way to configure your Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL server, or application instances, such as App Services, Functions, Event Grid, Logic Apps, and API Management. --A [custom location](conceptual-custom-locations.md) has a one-to-one mapping to a namespace within the Azure Arc-enabled Kubernetes cluster. The custom location Azure resource combined with Azure role-based access control (Azure RBAC) can be used to grant granular permissions to application developers or database admins, enabling them to deploy resources such as databases or application instances on top of Arc-enabled Kubernetes clusters in a multitenant environment. --In this article, you learn how to enable custom locations on an Arc-enabled Kubernetes cluster, and how to create a custom location. --## Prerequisites --- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.--- Install the latest versions of the following Azure CLI extensions:- - `connectedk8s` - - `k8s-extension` - - `customlocation` - - ```azurecli - az extension add --name connectedk8s - az extension add --name k8s-extension - az extension add --name customlocation - ``` -- If you have already installed the `connectedk8s`, `k8s-extension`, and `customlocation` extensions, update to the **latest version** by using the following command: -- ```azurecli - az extension update --name connectedk8s - az extension update --name k8s-extension - az extension update --name customlocation - ``` --- Verify completed provider registration for `Microsoft.ExtendedLocation`.-- 1. Enter the following commands: -- ```azurecli - az provider register --namespace Microsoft.ExtendedLocation - ``` -- 1. Monitor the registration process. Registration may take up to 10 minutes. -- ```azurecli - az provider show -n Microsoft.ExtendedLocation -o table - ``` -- Once registered, the `RegistrationState` state will have the `Registered` value. --- Verify you have an existing [Azure Arc-enabled Kubernetes connected cluster](quickstart-connect-cluster.md), and [upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. Confirm that the machine on which you will run the commands described in this article has a `kubeconfig` file that points to this cluster.--## Enable custom locations on your cluster --> [!TIP] -> The custom locations feature is dependent on the [cluster connect](cluster-connect.md) feature. Both features must be enabled in the cluster for custom locations to function. 
To enable the custom locations feature, follow the steps below: --If you are signed in to Azure CLI as a Microsoft Entra user, use the following command: --```azurecli -az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features cluster-connect custom-locations -``` --If you run the above command while signed in to Azure CLI using a service principal, you may observe the following warning: --```console -Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation. -``` --This warning occurs because the service principal lacks the necessary permissions to retrieve the `oid` (object ID) of the custom location used by the Azure Arc service. To avoid this error, follow these steps: --1. Sign in to Azure CLI with your user account. --1. Run the following command to fetch the `oid` (object ID) of the custom location, where `--id` is predefined and set to `bc313c14-388c-4e7d-a58e-70017303ee3b`: -- **Important!** Copy and run the command exactly as it is shown below. Do not replace the value passed to the `--id` parameter with a different value. -- ```azurecli - az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv - ``` --1. Sign in to Azure CLI using the service principal. Run the following command to enable the custom locations feature on the cluster, using the `oid` (object ID) value from the previous step for the `--custom-locations-oid` parameter: -- ```azurecli - az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <cl-oid> --features cluster-connect custom-locations - ``` --## Create custom location --1. Deploy the Azure service cluster extension of the Azure service instance you want to install on your cluster: -- - [Azure Arc-enabled data services](../dat) -- > [!NOTE] - > Outbound proxy without authentication and outbound proxy with basic authentication are supported by the Azure Arc-enabled data services cluster extension. Outbound proxy that expects trusted certificates is currently not supported. -- - [Azure App Service on Azure Arc](../../app-service/manage-create-arc-environment.md#install-the-app-service-extension) -- - [Event Grid on Kubernetes](../../event-grid/kubernetes/install-k8s-extension.md) --1. Get the Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster, referenced in later steps as `connectedClusterId`: -- ```azurecli - az connectedk8s show -n <clusterName> -g <resourceGroupName> --query id -o tsv - ``` --1. Get the Azure Resource Manager identifier of the cluster extension you deployed to the Azure Arc-enabled Kubernetes cluster, referenced in later steps as `extensionId`: -- ```azurecli - az k8s-extension show --name <extensionInstanceName> --cluster-type connectedClusters -c <clusterName> -g <resourceGroupName> --query id -o tsv - ``` --1. Create the custom location by referencing the Azure Arc-enabled Kubernetes cluster and the extension: -- ```azurecli - az customlocation create -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionId> - ``` -- - Required parameters: -- | Parameter name | Description | - |-|| - | `--name, --n` | Name of the custom location. | - | `--resource-group, --g` | Resource group of the custom location. | - | `--namespace` | Namespace in the cluster bound to the custom location being created. 
| - | `--host-resource-id` | Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster (connected cluster). | - | `--cluster-extension-ids` | Azure Resource Manager identifier of a cluster extension instance installed on the connected cluster. For multiple extensions, provide a space-separated list of cluster extension IDs | -- - Optional parameters: -- | Parameter name | Description | - |--|| - | `--location, --l` | Location of the custom location Azure Resource Manager resource in Azure. If not specified, the location of the connected cluster is used. | - | `--tags` | Space-separated list of tags in the format `key[=value]`. Use '' to clear existing tags. | - | `--kubeconfig` | Admin `kubeconfig` of cluster. | --## Show details of a custom location --To show the details of a custom location, use the following command: --```azurecli -az customlocation show -n <customLocationName> -g <resourceGroupName> -``` --## List custom locations --To list all custom locations in a resource group, use the following command: --```azurecli -az customlocation list -g <resourceGroupName> -``` --## Update a custom location --Use the `update` command to add new values for `--tags` or associate new `--cluster-extension-ids` to the custom location, while retaining existing values for tags and associated cluster extensions. --```azurecli -az customlocation update -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds> -``` --## Patch a custom location --Use the `patch` command to replace existing values for `--cluster-extension-ids` or `--tags`. Previous values are not retained. --```azurecli -az customlocation patch -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds> -``` --## Delete a custom location --To delete a custom location, use the following command: --```azurecli -az customlocation delete -n <customLocationName> -g <resourceGroupName> -``` --## Troubleshooting --If custom location creation fails with the error `Unknown proxy error occurred`, modify your network policy to allow pod-to-pod internal communication within the `azure-arc` namespace. Be sure to also add the `azure-arc` namespace as part of the no-proxy exclusion list for your configured policy. --## Next steps --- Securely connect to the cluster using [Cluster Connect](cluster-connect.md).-- Continue with [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) for end-to-end instructions on installing extensions, creating custom locations, and creating the App Service Kubernetes environment.-- Create an Event Grid topic and an event subscription for [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).-- Learn more about currently available [Azure Arc-enabled Kubernetes extensions](extensions-release.md). |
azure-arc | Deploy Marketplace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/deploy-marketplace.md | - Title: "Deploy and manage applications from Azure Marketplace on Azure Arc-enabled Kubernetes clusters" Previously updated : 01/26/2024-- -description: "Learn how to discover Kubernetes applications in Azure Marketplace and deploy them to your Arc-enabled Kubernetes clusters." ---# Deploy and manage applications from Azure Marketplace on Azure Arc-enabled Kubernetes clusters --[Azure Marketplace](/marketplace/azure-marketplace-overview) is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace, you can find, try, buy, and deploy the software and services that you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from Microsoft partners. --Included among these solutions are Kubernetes application-based container offers. These offers contain applications that can run on Azure Arc-enabled Kubernetes clusters, represented as [cluster extensions](conceptual-extensions.md). Deploying an offer from Azure Marketplace creates a new instance of the extension on your Arc-enabled Kubernetes cluster. --This article shows you how to: --- Discover applications that support Azure Arc-enabled Kubernetes clusters.-- Purchase an application.-- Deploy the application on your cluster.-- Monitor usage and billing information.--You can use Azure CLI or the Azure portal to perform these tasks. --## Prerequisites --To deploy an application, you must have an existing Azure Arc-enabled Kubernetes connected cluster, with at least one node of operating system and architecture type `linux/amd64`. If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). Be sure to [upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version before you get started. --- An existing Azure Arc-enabled Kubernetes connected cluster, with at least one node of operating system and architecture type `linux/amd64`. If deploying [Flux (GitOps)](extensions-release.md#flux-gitops), you can use an ARM64-based cluster without a `linux/amd64` node.- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. -- If using Azure CLI to review, deploy, and manage Azure Marketplace applications:- - The latest version of [Azure CLI](/cli/azure/install-azure-cli). - - The latest version of the `k8s-extension` Azure CLI extension. Install the extension by running `az extension add --name k8s-extension`. If the `k8s-extension` extension is already installed, make sure it's updated to the latest version by running `az extension update --name k8s-extension`. 
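As a quick, illustrative check of the prerequisites above, you can confirm the CLI extension version and verify that the cluster has at least one `linux/amd64` node:

```bash
# Confirm the k8s-extension Azure CLI extension is installed and note its version.
az extension show --name k8s-extension --query version -o tsv

# Confirm the cluster has at least one linux/amd64 node (requires kubectl access to the cluster).
kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.operatingSystem,ARCH:.status.nodeInfo.architecture
```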
--> [!NOTE] -> This feature is currently supported only in the following regions: -> ->- East US, East US 2, East US 2 EUAP, West US, West US 2, Central US, West Central US, South Central US, West Europe, North Europe, Canada Central, Southeast Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US 3, Norway East, South Africa North, North Central US, Australia Southeast, Switzerland North, Japan West, South India --## Discover Kubernetes applications that support Azure Arc-enabled clusters --### [Azure portal](#tab/azure-portal) --To discover Kubernetes applications in Azure Marketplace from within the Azure portal: --1. In the Azure portal, search for **Marketplace**. In the results, under **Services**, select **Marketplace**. -1. From **Marketplace**, you can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, select **Containers** from the **Categories** section in the left menu. -- > [!IMPORTANT] - > The **Containers** category includes both Kubernetes applications and standalone container images. Be sure to select only Kubernetes application offers when following these steps. Container images have a different deployment process, and generally can't be deployed on Arc-enabled Kubernetes clusters. -- :::image type="content" source="media/deploy-marketplace/marketplace-containers.png" alt-text="Screenshot of Azure Marketplace showing the Containers menu item." lightbox="media/deploy-marketplace/marketplace-containers.png"::: --1. You'll see several Kubernetes application offers displayed on the page. To view all of the Kubernetes application offers, select **See more**. -- :::image type="content" source="media/deploy-marketplace/marketplace-see-more.png" alt-text="Screenshot showing the See more link for the Containers category in Azure Marketplace." lightbox="media/deploy-marketplace/marketplace-see-more.png"::: --1. Alternatively, you can search for a specific `publisherId` to view that publisher's Kubernetes applications in Azure Marketplace. For details on how to find publisher IDs, see the Azure CLI tab for this article. -- :::image type="content" source="media/deploy-marketplace/marketplace-search-by-publisher.png" alt-text="Screenshot showing the option to search by publisher in Azure Marketplace." lightbox="media/deploy-marketplace/marketplace-search-by-publisher.png"::: --Once you find an application that you want to deploy, move on to the next section. --### [Azure CLI](#tab/azure-cli) --You can use Azure CLI to get a list of extensions, including Azure Marketplace applications, that can be deployed on Azure Arc-enabled connected clusters. To do so, run this command, providing the name of your connected cluster and the resource group where the cluster is located. --```azurecli-interactive -az k8s-extension extension-types list-by-cluster --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> -``` --The command returns a list of extension types that can be deployed on the connected cluster, similar to the example shown here. 
--```json -"id": "/subscriptions/{sub}/resourceGroups/{rg} /providers/Microsoft.Kubernetes/connectedClusters/{clustername} /providers/Microsoft.KubernetesConfiguration/extensiontypes/contoso", -"name": "contoso", -"type": "Microsoft.KubernetesConfiguration/extensionTypes", -"properties": { - "extensionType": "contoso", - "description": "Contoso extension", - "isSystemExtension": false, - "publisher": "contoso", - "isManagedIdentityRequired": false, - "supportedClusterTypes": [ - "managedclusters", - "connectedclusters" - ], - "supportedScopes": { - "defaultScope": "namespace", - "clusterScopeSettings": { - "allowMultipleInstances": false, - "defaultReleaseNamespace": null - } - }, - "planInfo": { - "offerId": "contosoOffer", - "planId": "contosoPlan", - "publisherId": "contoso" - } -} -``` --When you find an application that you want to deploy, note the following values from the response received: `planId`, `publisherId`, `offerID`, and `extensionType`. You'll need these values to accept the application's terms and deploy the application. ----## Deploy a Kubernetes application --### [Azure portal](#tab/azure-portal) --Once you've identified an offer you want to deploy, follow these steps: --1. In the **Plans + Pricing** tab, review the options. If there are multiple plans available, find the one that meets your needs. Review the terms on the page to make sure they're acceptable, and then select **Create**. -- :::image type="content" source="media/deploy-marketplace/marketplace-plans-pricing.png" alt-text="Screenshot of the Plans + Pricing page for a Kubernetes offer in Azure Marketplace." lightbox="media/deploy-marketplace/marketplace-plans-pricing.png"::: --1. Select the resource group and Arc-enabled cluster to which you want to deploy the application. -- :::image type="content" source="media/deploy-marketplace/marketplace-select-cluster.png" alt-text="Screenshot showing the option to select a resource group and cluster for the Marketplace offer."::: --1. Complete all pages of the deployment wizard to specify all configuration options that the application requires. -- :::image type="content" source="media/deploy-marketplace/marketplace-configuration.png" alt-text="Screenshot showing configuration options for an Azure Marketplace offer."::: --1. When you're finished, select **Review + Create**, then select **Create** to deploy the offer. --### [Azure CLI](#tab/azure-cli) --#### Accept terms and agreements --Before you can deploy a Kubernetes application, you need to accept its terms and agreements. Be sure to read these terms carefully so that you understand costs and any other requirements. --To view the details of the terms, run the following command, providing the values for `offerID`, `planID`, and `publisherID`: --```azurecli-interactive -az vm image terms show --offer <offerID> --plan <planId> --publisher <publisherId> -``` --To accept the terms, run the following command, using the same values for `offerID`, `planID`, and `publisherID`. --```azurecli-interactive -az vm image terms accept --offer <offerID> --plan <planId> --publisher <publisherId> -``` --> [!NOTE] -> Although this command is for VMs, it also works for containers, including Arc-enabled Kubernetes clusters. For more information, see the [az vm image terms](/cli/azure/vm/image/terms) reference. --#### Deploy the application --To deploy the application (extension) through Azure CLI, follow the steps outlined in [Deploy and manage Azure Arc-enabled Kubernetes cluster extensions](extensions.md). 
An example command might look like this: --```azurecli-interactive -az k8s-extension create --name <offerID> --extension-type <extensionType> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --plan-name <planId> --plan-product <offerID> --plan-publisher <publisherId> -``` ----## Verify the deployment --Deploying an offer from Azure Marketplace creates a new extension instance on your Arc-enabled Kubernetes cluster. You can verify that the deployment succeeded by confirming the extension is running. --### [Azure portal](#tab/azure-portal) --Verify the deployment by navigating to the cluster on which you recently installed the extension, then selecting **Extensions**, where you'll see the extension status. ---If the deployment was successful, the **Status** will be **Succeeded**. If the status is **Creating**, the deployment is still in progress. Wait a few minutes, then check again. --If the deployment fails, see [Troubleshoot the failed deployment of a Kubernetes application offer](/troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer). --### [Azure CLI](#tab/azure-cli) --Verify the deployment by using the following command to list the extensions that are already running or being deployed on your cluster: --```azurecli-interactive -az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters -``` --If the deployment was successful, `provisioningState` is `Succeeded`. If `provisioningState` is `Creating`, the deployment is still in progress. Wait a few minutes, then check again. --If the deployment fails, see [Troubleshoot the failed deployment of a Kubernetes application offer](/troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer). --To view the extension instance from the cluster, run the following command: --```azurecli-interactive -az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters -``` ----## Monitor billing and usage information --You can monitor billing and usage information for a deployed extension in the Azure portal. --1. In the Azure portal, navigate to your cluster's resource group. --1. Select **Cost Management** > **Cost analysis**. Under **Product**, you can see a cost breakdown for the plan that you selected. -- :::image type="content" source="media/deploy-marketplace/extension-cost-analysis.png" alt-text="Screenshot of the Azure portal page for a resource group, with billing information broken down by offer plan." lightbox="media/deploy-marketplace/extension-cost-analysis.png"::: --## Remove an application --You can delete a purchased plan for a Kubernetes offer by deleting the extension instance on the cluster. --### [Azure portal](#tab/azure-portal) --To delete the extension instance in the Azure portal, select **Extensions** within your cluster. Select the application you want to remove, then select **Uninstall**. ---### [Azure CLI](#tab/azure-cli) --The following command deletes an extension from the cluster: --```azurecli-interactive -az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters -``` ----## Troubleshooting --For help with resolving issues, see [Troubleshoot the failed deployment of a Kubernetes application offer](/troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer). 
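Before following the linked guide, it can help to capture the extension's current provisioning state for reference. This is only a sketch that reuses the `az k8s-extension show` command from the verification step; the `--query` filter is illustrative:

```azurecli-interactive
# Optional: record the extension's provisioning state before troubleshooting further
az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --query provisioningState
```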
--## Next steps --- Learn about [extensions for Arc-enabled Kubernetes](conceptual-extensions.md).-- Use our quickstart to [connect a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md). |
azure-arc | Diagnose Connection Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md | - Title: "Diagnose connection issues for Azure Arc-enabled Kubernetes clusters" Previously updated : 12/15/2023-- -description: "Learn how to resolve common issues when connecting Kubernetes clusters to Azure Arc." ---# Diagnose connection issues for Azure Arc-enabled Kubernetes clusters --If you are experiencing issues connecting a cluster to Azure Arc, it's probably due to one of the issues listed here. We provide two flowcharts with guided help: one if you're [not using a proxy server](#connections-without-a-proxy), and one that applies if your network connection [uses a proxy server](#connections-with-a-proxy-server). --> [!TIP] -> The steps in this flowchart apply whether you're using Azure CLI or Azure PowerShell to [connect your cluster](quickstart-connect-cluster.md). However, some of the steps require the use of Azure CLI. If you haven't already [installed Azure CLI](/cli/azure/install-azure-cli), be sure to do so before you begin. --## Connections without a proxy --Review this flowchart in order to diagnose your issue when attempting to connect a cluster to Azure Arc without a proxy server. More details about each step are provided below. ---### Does the Azure identity have sufficient permissions? --Review the [prerequisites for connecting a cluster](quickstart-connect-cluster.md?tabs=azure-cli#prerequisites) and make sure that the identity you're using to connect the cluster has the necessary permissions. --### Are you running the latest version of Azure CLI? --Make sure you [have the latest version installed](/cli/azure/install-azure-cli). --If you connected your cluster by using Azure PowerShell, make sure you are [running the latest version](/powershell/azure/install-azure-powershell). --### Is the `connectedk8s` extension the latest version? --Update the Azure CLI `connectedk8s` extension to the latest version by running this command: --```azurecli -az extension update --name connectedk8s -``` --If you haven't installed the extension yet, you can do so by running the following command: --```azurecli -az extension add --name connectedk8s -``` --### Is kubeconfig pointing to the right cluster? --Run `kubectl config get-contexts` to confirm the target context name. Then set the default context to the right cluster by running `kubectl config use-context <target-cluster-name>`. --### Are all required resource providers registered? --Be sure that the Microsoft.Kubernetes, Microsoft.KubernetesConfiguration, and Microsoft.ExtendedLocation resource providers are [registered](quickstart-connect-cluster.md#register-providers-for-azure-arc-enabled-kubernetes). --### Are all network requirements met? --Review the [network requirements](network-requirements.md) and ensure that no required endpoints are blocked. --### Are all pods in the `azure-arc` namespace running? --If everything is working correctly, your pods should all be in the `Running` state. Run `kubectl get pods -n azure-arc` to confirm whether any pod's state is not `Running`. --### Still having problems? --The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) so we can investigate the problem further. 
--To generate the troubleshooting log file, run the following command: --```azurecli -az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster> -``` --When you [create your support request](../../azure-portal/supportability/how-to-create-azure-support-request.md), in the **Additional details** section, use the **File upload** option to upload the generated log file. --## Connections with a proxy server --If you are using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting steps. Then, if you are still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below. ---### Is the machine executing commands behind a proxy server? --If the machine is executing commands behind a proxy server, you'll need to set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). --For example: --```bash -export HTTP_PROXY="http://<proxyIP>:<proxyPort>" -export HTTPS_PROXY="https://<proxyIP>:<proxyPort>" -export NO_PROXY="<cluster-apiserver-ip-address>:<proxyPort>" -``` --### Does the proxy server only accept trusted certificates? --Be sure to include the certificate file path by including `--proxy-cert <path-to-cert-file>` when running the `az connectedk8s connect` command. --```azurecli -az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file> -``` --### Is the proxy server able to reach required network endpoints? --Review the [network requirements](network-requirements.md) and ensure that no required endpoints are blocked. --### Is the proxy server only using HTTP? --If your proxy server only uses HTTP, you can use `proxy-http` for both parameters. --If your proxy server is set up with both HTTP and HTTPS, run the `az connectedk8s connect` command with the `--proxy-https` and `--proxy-http` parameters specified. Be sure you are using `--proxy-http` for the HTTP proxy and `--proxy-https` for the HTTPS proxy. --```azurecli -az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> -``` --### Does the proxy server require skip ranges for service-to-service communication? --If you require skip ranges, use `--proxy-skip-range <excludedIP>,<excludedCIDR>` in your `az connectedk8s connect` command. --```azurecli -az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> -``` --### Are all pods in the `azure-arc` namespace running? --If everything is working correctly, your pods should all be in the `Running` state. Run `kubectl get pods -n azure-arc` to confirm whether any pod's state is not `Running`. ---### Check whether the DNS resolution is successful for the endpoint --From within the pod, you can run a DNS lookup to the endpoint. --What if you can't run the [kubectl exec](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec) command to connect to the pod and install the DNS Utils package? 
In this situation, you can [start a test pod in the same namespace as the problematic pod](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#create-a-simple-pod-to-use-as-a-test-environment), and then run the tests. --> [!NOTE] -> -> If the DNS resolution or egress traffic doesn't let you install the necessary network packages, you can use the `rishasi/ubuntu-netutil:1.0` docker image. In this image, the required packages are already installed. --Here's an example procedure for checking DNS resolution: --1. Start a test pod in the same namespace as the problematic pod: -- ```bash - kubectl run -it --rm test-pod --namespace <namespace> --image=debian:stable - ``` -- After the test pod is running, you'll gain access to the pod. --1. Run the following `apt-get` commands to install other tool packages: -- ```bash - apt-get update -y - apt-get install dnsutils -y - apt-get install curl -y - apt-get install netcat -y - ``` --1. After the packages are installed, run the [nslookup](/windows-server/administration/windows-commands/nslookup) command to test the DNS resolution to the endpoint: -- ```console - $ nslookup microsoft.com - Server: 10.0.0.10 - Address: 10.0.0.10#53 - ... - ... - Name: microsoft.com - Address: 20.53.203.50 - ``` --1. Try the DNS resolution from the upstream DNS server directly. This example uses Azure DNS: -- ```console - $ nslookup microsoft.com 168.63.129.16 - Server: 168.63.129.16 - Address: 168.63.129.16#53 - ... - ... - Address: 20.81.111.85 - ``` --1. Run the `host` command to check whether the DNS requests are routed to the upstream server: -- ```console - $ host -a microsoft.com - Trying "microsoft.com.default.svc.cluster.local" - Trying "microsoft.com.svc.cluster.local" - Trying "microsoft.com.cluster.local" - Trying "microsoft.com.00idcnmrrm4edot5s2or1onxsc.bx.internal.cloudapp.net" - Trying "microsoft.com" - Trying "microsoft.com" - ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62884 - ;; flags: qr rd ra; QUERY: 1, ANSWER: 27, AUTHORITY: 0, ADDITIONAL: 5 - - ;; QUESTION SECTION: - ;microsoft.com. IN ANY - - ;; ANSWER SECTION: - microsoft.com. 30 IN NS ns1-39.azure-dns.com. - ... - ... - ns4-39.azure-dns.info. 30 IN A 13.107.206.39 - - Received 2121 bytes from 10.0.0.10#53 in 232 ms - ``` --1. Run a test pod in the Windows node pool: -- ```bash - # For a Windows environment, use the Resolve-DnsName cmdlet. - kubectl run dnsutil-win --image='mcr.microsoft.com/windows/servercore:1809' --overrides='{"spec": { "nodeSelector": {"kubernetes.io/os": "windows"}}}' -- powershell "Start-Sleep -s 3600" - ``` --1. Run the [kubectl exec](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec) command to connect to the pod by using PowerShell: -- ```bash - kubectl exec -it dnsutil-win -- powershell - ``` --1. Run the [Resolve-DnsName](/powershell/module/dnsclient/resolve-dnsname) cmdlet in PowerShell to check whether the DNS resolution is working for the endpoint: -- ```console - PS C:\> Resolve-DnsName www.microsoft.com - - Name Type TTL Section NameHost - - - - -- - www.microsoft.com CNAME 20 Answer www.microsoft.com-c-3.edgekey.net - www.microsoft.com-c-3.edgekey. CNAME 20 Answer www.microsoft.com-c-3.edgekey.net.globalredir.akadns.net - net - www.microsoft.com-c-3.edgekey. 
CNAME 20 Answer e13678.dscb.akamaiedge.net - net.globalredir.akadns.net - - Name : e13678.dscb.akamaiedge.net - QueryType : AAAA - TTL : 20 - Section : Answer - IP6Address : 2600:1408:c400:484::356e - - - Name : e13678.dscb.akamaiedge.net - QueryType : AAAA - TTL : 20 - Section : Answer - IP6Address : 2600:1408:c400:496::356e - - - Name : e13678.dscb.akamaiedge.net - QueryType : A - TTL : 12 - Section : Answer - IP4Address : 23.200.197.152 - ``` --If the DNS resolution is not successful, verify the DNS configuration for the cluster. ----### Still having problems? --The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) so we can investigate the problem further. --To generate the troubleshooting log file, run the following command: --```azurecli -az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster> -``` --When you [create your support request](../../azure-portal/supportability/how-to-create-azure-support-request.md), in the **Additional details** section, use the **File upload** option to upload the generated log file. --## Next steps --- View more [troubleshooting tips for using Azure Arc-enabled Kubernetes](troubleshooting.md).-- Review the process to [connect an existing Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md). |
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | - Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 08/08/2024- -description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." ---# Available extensions for Azure Arc-enabled Kubernetes clusters --[Cluster extensions for Azure Arc-enabled Kubernetes](conceptual-extensions.md) provide an Azure Resource Manager-driven experience for installation and lifecycle management of different Azure capabilities on top of your cluster. These extensions can be [deployed to your clusters](extensions.md) to enable different scenarios and improve cluster management. --The following extensions are currently available for use with Arc-enabled Kubernetes clusters. All of these extensions are [cluster-scoped](conceptual-extensions.md#extension-scope), except for Azure API Management on Azure Arc, which is namespace-scoped. --## Azure Monitor Container Insights --- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters--Azure Monitor Container Insights provides visibility into the performance of workloads deployed on the Kubernetes cluster. Use this extension to collect memory and CPU utilization metrics from controllers, nodes, and containers. --For more information, see [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). --## Azure Policy --Azure Policy extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper), an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. --For more information, see [Understand Azure Policy for Kubernetes clusters](../../governance/policy/concepts/policy-for-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). --## Azure Key Vault Secrets Provider --- **Supported distributions**: AKS on Azure Stack HCI, AKS enabled by Azure Arc, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid--The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets. --For more information, see [Use the Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters](tutorial-akv-secrets-provider.md). 
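Like other cluster extensions, the Secrets Provider is deployed with `az k8s-extension create`. The command below is only a hedged sketch; the extension type and instance name shown here follow the linked tutorial, so confirm them there before running:

```azurecli-interactive
# Sketch: install the Azure Key Vault Secrets Provider extension on a connected cluster
az k8s-extension create --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider
```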
--## Microsoft Defender for Containers --- **Supported distributions**: AKS enabled by Azure Arc, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution--Microsoft Defender for Containers is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. It gathers information related to security like audit log data from the Kubernetes cluster, and provides recommendations and threat alerts based on gathered data. --For more information, see [Enable Microsoft Defender for Containers](/azure/defender-for-cloud/defender-for-kubernetes-azure-arc?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). --> [!IMPORTANT] -> Defender for Containers support for Arc-enabled Kubernetes clusters is currently in public preview. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Azure Arc-enabled Open Service Mesh --- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS enabled by Azure Arc, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid--[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. --For more information, see [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md). --## Azure Arc-enabled Data Services --- **Supported distributions**: AKS, AKS on Azure Stack HCI, Azure Red Hat OpenShift, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Container Platform, Amazon Elastic Kubernetes Service--Makes it possible for you to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. This extension enables the *custom locations* feature, providing a way to configure Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. --For more information, see [Azure Arc-enabled Data Services](../dat#create-custom-location). --## Azure App Service on Azure Arc --- **Supported distributions**: AKS, AKS on Azure Stack HCI, Azure Red Hat OpenShift, Google Kubernetes Engine, OpenShift Container Platform--Allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters. --For more information, see [App Service, Functions, and Logic Apps on Azure Arc (Preview)](../../app-service/overview-arc-integration.md). --> [!IMPORTANT] -> App Service on Azure Arc is currently in public preview. Review the [public preview limitations for App Service Kubernetes environments](../../app-service/overview-arc-integration.md#public-preview-limitations) before deploying this extension. 
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Azure Event Grid on Kubernetes --- **Supported distributions**: AKS, Red Hat OpenShift--Event Grid is an event broker used to integrate workloads that use event-driven architectures. This extension lets you create and manage Event Grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters. --For more information, see [Event Grid on Kubernetes with Azure Arc (Preview)](../../event-grid/kubernetes/overview.md). --> [!IMPORTANT] -> Event Grid on Kubernetes with Azure Arc is currently in public preview. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Azure API Management on Azure Arc --- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters.--With the integration between Azure API Management and Azure Arc on Kubernetes, you can deploy the API Management gateway component as an extension in an Azure Arc-enabled Kubernetes cluster. This extension is [namespace-scoped](conceptual-extensions.md#extension-scope), not cluster-scoped. --For more information, see [Deploy an Azure API Management gateway on Azure Arc (preview)](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md). --> [!IMPORTANT] -> API Management self-hosted gateway on Azure Arc is currently in public preview. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Azure Arc-enabled Machine Learning --- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. Not currently supported for ARM 64.--The Azure Machine Learning extension lets you deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. --For more information, see [Introduction to Kubernetes compute target in Azure Machine Learning](/azure/machine-learning/how-to-attach-kubernetes-anywhere) and [Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster](/azure/machine-learning/how-to-deploy-kubernetes-extension). --## Flux (GitOps) --- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters.--[GitOps on AKS and Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md) uses [Flux v2](https://fluxcd.io/docs/), a popular open-source tool set, to help manage cluster configuration and application deployment. GitOps is enabled in the cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` cluster extension resource. --For more information, see [Tutorial: Deploy applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md). --The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. 
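To check which version of the extension a cluster is currently running, you can query the extension instance. This is a minimal sketch that assumes the instance is named `flux`, the default name when the extension is installed automatically by a GitOps configuration:

```azurecli-interactive
# Sketch: show the installed version of the microsoft.flux extension on a connected cluster
az k8s-extension show --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --name flux --query version
```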
--> [!IMPORTANT] -> The [Flux v2.3.0 release](https://fluxcd.io/blog/2024/05/flux-v2.3.0/) includes API changes to the HelmRelease and HelmChart APIs, with deprecated fields removed, and an updated version of the kustomize package. An upcoming minor version update of Microsoft's Flux extension will include these changes, consistent with the upstream OSS Flux project. -> -> The [HelmRelease](https://fluxcd.io/flux/components/helm/helmreleases/) kind will be promoted from `v2beta1` to `v2` (GA). The `v2` API is backwards compatible with `v2beta1`, with the exception of these deprecated fields, which will be removed: -> -> - `.spec.chart.spec.valuesFile`: replaced by `.spec.chart.spec.valuesFiles` -> - `.spec.postRenderers.kustomize.patchesJson6902`: replaced by `.spec.postRenderers.kustomize.patches` -> - `.spec.postRenderers.kustomize.patchesStrategicMerge`: replaced by `.spec.postRenderers.kustomize.patches` -> - `.status.lastAppliedRevision`: replaced by `.status.history.chartVersion` -> -> The [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/) kind will be promoted from `v1beta2` to `v1` (GA). The `v1` API is backwards compatible with `v1beta2`, with the exception of the `.spec.valuesFile` field, which will be replaced by `.spec.valuesFiles`. -> -> Use the new fields which are already available in the current version of the APIs, instead of the fields that will be removed. -> -> The kustomize package will be updated to v5.4.0, which contains the following breaking changes: -> -> - [Kustomization build fails when resources key is missing](https://github.com/kubernetes-sigs/kustomize/issues/5337) -> - [Components are now applied after generators and before transformers](https://github.com/kubernetes-sigs/kustomize/pull/5170) in [v5.1.0](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.1.0) -> - [Null yaml values are replaced by "null"](https://github.com/kubernetes-sigs/kustomize/pull/5519) in [v5.4.0](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.4.0) -> -> To avoid issues due to breaking changes, we recommend updating your manifests as soon as possible to ensure that your Flux configurations remain compliant with this release. ---> [!NOTE] -> When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions. --### 1.11.1 (August 2024) --Flux version: [Release v2.3.0](https://github.com/fluxcd/flux2/releases/tag/v2.3.0) --- source-controller: v1.3.0-- kustomize-controller: v1.3.0-- helm-controller: v1.0.1-- notification-controller: v1.3.0-- image-automation-controller: v0.38.0-- image-reflector-controller: v0.32.0--Changes made for this version: --- Update flux OSS controllers.-- Resolved the continuous restart issue of the Fluent Bit sidecar in `fluxconfig-agent` and `fluxconfig-controller`.-- Addressed security vulnerabilities in `fluxconfig-agent` and `fluxconfig-controller` by updating the Go packages.-- Enabled workload identity for the Kustomize controller. For setup instructions, see [Workload identity in AKS clusters](/azure/azure-arc/kubernetes/tutorial-use-gitops-flux2#workload-identity-in-aks-clusters).-- Flux controller pods can now set the annotation `kubernetes.azure.com/set-kube-service-host-fqdn` in their pod specifications. This allows traffic to the API Server's domain name even when a Layer 7 firewall is present, facilitating deployments during extension installation. 
For more details, see [Configure annotation on Flux extension pods](/azure/azure-arc/kubernetes/tutorial-use-gitops-flux2#configure-annotation-on-flux-extension-pods).--### 1.10.0 (June 2024) --Flux version: [Release v2.1.2](https://github.com/fluxcd/flux2/releases/tag/v2.1.2) --- source-controller: v1.2.5-- kustomize-controller: v1.1.1-- helm-controller: v0.36.2-- notification-controller: v1.1.0-- image-automation-controller: v0.36.1-- image-reflector-controller: v0.30.0--Changes made for this version: --- The `FluxConfig` custom resource now includes support for [OCI repositories](https://fluxcd.io/flux/components/source/ocirepositories/). This enhancement means that Flux configurations can accommodate Git repository, Buckets, Azure Blob storage, or OCI repository as valid source types.--### 1.9.1 (April 2024) --Flux version: [Release v2.1.2](https://github.com/fluxcd/flux2/releases/tag/v2.1.2) --- source-controller: v1.2.5-- kustomize-controller: v1.1.1-- helm-controller: v0.36.2-- notification-controller: v1.1.0-- image-automation-controller: v0.36.1-- image-reflector-controller: v0.30.0--Changes made for this version: --- The log-level parameters for controllers (including `fluxconfig-agent` and `fluxconfig-controller`) are now customizable. For more information, see [Configurable log-level parameters](tutorial-use-gitops-flux2.md#configurable-log-level-parameters).-- Helm chart changes to expose new SSH host key algorithm to connect to Azure DevOps. For more information, see [Azure DevOps SSH-RSA deprecation](tutorial-use-gitops-flux2.md#azure-devops-ssh-rsa-deprecation).--## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes --[Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. The Dapr extension eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. --For more information, see [Dapr extension for AKS and Arc-enabled Kubernetes](/azure/aks/dapr). --## Azure AI Video Indexer --- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters--Azure AI Video Indexer enabled by Arc runs video and audio analysis on edge devices. The solution is designed to run on Azure Stack Edge Profile, a heavy edge device, and supports many video formats, including MP4 and other common formats. It supports several languages in all basic audio-related models. --For more information, see [Try Azure AI Video Indexer enabled by Arc](/azure/azure-video-indexer/azure-video-indexer-enabled-by-arc-quickstart). --## Edge Storage Accelerator --- **Supported distributions**: AKS enabled by Azure Arc, AKS Edge Essentials, Ubuntu--[Edge Storage Accelerator (ESA)](../edge-storage-accelerator/index.yml) is a first-party storage system designed for Arc-connected Kubernetes clusters. ESA can be deployed to write files to a "ReadWriteMany" persistent volume claim (PVC) where they are then transferred to Azure Blob Storage. ESA offers a range of features to support Azure IoT Operations and other Azure Arc Services. --For more information, see [What is Edge Storage Accelerator?](../edge-storage-accelerator/overview.md). 
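Each of these offerings is represented in Azure as a cluster extension resource, so a quick way to see which ones are already installed on a given cluster is to list its extension instances. A minimal sketch:

```azurecli-interactive
# Sketch: list the extension instances installed on a connected cluster
az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
```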
--## Connected registry on Arc-enabled Kubernetes - -- **Supported distributions**: AKS enabled by Azure Arc, Kubernetes using kind.- -The connected registry extension for Azure Arc allows you to synchronize container images between your Azure Container Registry (ACR) and your on-premises Azure Arc-enabled Kubernetes cluster. This extension can be deployed to either a local or remote cluster and utilizes a synchronization schedule and window to ensure seamless syncing of images between the on-premises connected registry and the cloud-based ACR. - -For more information, see [Connected Registry for Arc-enabled Kubernetes clusters](../../container-registry/quickstart-connected-registry-arc-cli.md). --## Next steps --- Read more about [cluster extensions for Azure Arc-enabled Kubernetes](conceptual-extensions.md).-- Learn how to [deploy extensions to an Arc-enabled Kubernetes cluster](extensions.md). |
azure-arc | Extensions Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-troubleshooting.md | - Title: "Troubleshoot extension issues for Azure Arc-enabled Kubernetes clusters" Previously updated : 12/19/2023-- -description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes cluster extensions." ---# Troubleshoot extension issues for Azure Arc-enabled Kubernetes clusters --This document provides troubleshooting tips for common issues related to [cluster extensions](extensions-release.md), such as GitOps (Flux v2) and Open Service Mesh. --For help troubleshooting general issues with Azure Arc-enabled Kubernetes, see [Troubleshoot Azure Arc-enabled Kubernetes issues](troubleshooting.md). --## GitOps (Flux v2) --> [!NOTE] -> The Flux v2 extension can be used in either Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. These troubleshooting tips generally apply regardless of cluster type. --For general help troubleshooting issues with `fluxConfigurations` resources, run these Azure CLI commands with the `--debug` parameter specified: --```azurecli -az provider show -n Microsoft.KubernetesConfiguration --debug -az k8s-configuration flux create <parameters> --debug -``` --### Webhook/dry run errors --If you see Flux fail to reconcile with an error like `dry-run failed, error: admission webhook "<webhook>" does not support dry run`, you can resolve the issue by finding the `ValidatingWebhookConfiguration` or the `MutatingWebhookConfiguration` and setting the `sideEffects` to `None` or `NoneOnDryRun`: --For more information, see [How do I resolve `webhook does not support dry run` errors?](https://fluxcd.io/docs/faq/#how-do-i-resolve-webhook-does-not-support-dry-run-errors) --### Errors installing the `microsoft.flux` extension --The `microsoft.flux` extension installs the Flux controllers and Azure GitOps agents into your Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. If the extension isn't already installed in a cluster and you [create a GitOps configuration resource](tutorial-use-gitops-flux2.md) for that cluster, the extension is installed automatically. --If you experience an error during installation, or if the extension is in a failed state, make sure that the cluster doesn't have any policies that restrict creation of the `flux-system` namespace or resources in that namespace. --For an AKS cluster, ensure that the subscription has the `Microsoft.ContainerService/AKS-ExtensionManager` feature flag enabled. --```azurecli -az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager -``` --After that, run this command to determine if there are other problems. Set the cluster type (`-t`) parameter to `connectedClusters` for an Arc-enabled cluster or `managedClusters` for an AKS cluster. The name of the `microsoft.flux` extension is "flux" if the extension was installed automatically during creation of a GitOps configuration. Look in the `statuses` object for information. --```azurecli -az k8s-extension show -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux -t <connectedClusters or managedClusters> -``` --The displayed results can help you determine what went wrong and how to fix it. 
Possible remediation actions include: --- Force delete the extension by running `az k8s-extension delete --force -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux -t <managedClusters OR connectedClusters>`-- Uninstall the Helm release by running `helm uninstall flux -n flux-system`-- Delete the `flux-system` namespace from the cluster by running `kubectl delete namespaces flux-system`--After that, you can either [recreate a flux configuration](./tutorial-use-gitops-flux2.md), which installs the `microsoft.flux` extension automatically, or you can reinstall the flux extension [manually](extensions.md). --### Errors installing the `microsoft.flux` extension in a cluster with Microsoft Entra Pod Identity enabled --If you attempt to install the Flux extension in a cluster that has Microsoft Entra Pod Identity enabled, an error may occur in the extension-agent pod: --```console -{"Message":"2021/12/02 10:24:56 Error: in getting auth header : error {adal: Refresh request failed. Status Code = '404'. Response body: no azure identity found for request clientID <REDACTED>\n}","LogType":"ConfigAgentTrace","LogLevel":"Information","Environment":"prod","Role":"ClusterConfigAgent","Location":"westeurope","ArmId":"/subscriptions/<REDACTED>/resourceGroups/<REDACTED>/providers/Microsoft.Kubernetes/managedclusters/<REDACTED>","CorrelationId":"","AgentName":"FluxConfigAgent","AgentVersion":"0.4.2","AgentTimestamp":"2021/12/02 10:24:56"} -``` --The extension status also returns as `Failed`. --```console -"{\"status\":\"Failed\",\"error\":{\"code\":\"ResourceOperationFailure\",\"message\":\"The resource operation completed with terminal provisioning state 'Failed'.\",\"details\":[{\"code\":\"ExtensionCreationFailed\",\"message\":\" error: Unable to get the status from the local CRD with the error : {Error : Retry for given duration didn't get any results with err {status not populated}}\"}]}}", -``` --In this case, the extension-agent pod tries to get its token from IMDS on the cluster, but the token request is intercepted by the [pod identity](/azure/aks/use-azure-ad-pod-identity). To fix this issue, [upgrade to the latest version](extensions.md#upgrade-extension-instance) of the `microsoft.flux` extension. --### Issues with kubelet identity when installing the `microsoft.flux` extension in an AKS cluster --With AKS clusters, one of the authentication options is *kubelet identity* using a user-assigned managed identity. Using kubelet identity can reduce operational overhead and increase security when connecting to Azure resources such as Azure Container Registry. --To let Flux use kubelet identity, add the parameter `--config useKubeletIdentity=true` when installing the Flux extension. --```azurecli -az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true -``` --### Ensuring memory and CPU requirements for `microsoft.flux` extension installation are met --The controllers installed in your Kubernetes cluster with the `microsoft.flux` extension require CPU and memory resources to properly schedule on Kubernetes cluster nodes. Be sure that your cluster is able to meet the minimum memory and CPU resources that may be requested. Note also the maximum limits for potential CPU and memory resource requirements shown here. 
--| Container Name | Minimum CPU | Minimum memory | Maximum CPU | Maximum memory | -| -- | -- | -- | -| fluxconfig-agent | 5 m | 30 Mi | 50 m | 150 Mi | -| fluxconfig-controller | 5 m | 30 Mi | 100 m | 150 Mi | -| fluent-bit | 5 m | 30 Mi | 20 m | 150 Mi | -| helm-controller | 100 m | 64 Mi | 1000 m | 1 Gi | -| source-controller | 50 m | 64 Mi | 1000 m | 1 Gi | -| kustomize-controller | 100 m | 64 Mi | 1000 m | 1 Gi | -| notification-controller | 100 m | 64 Mi | 1000 m | 1 Gi | -| image-automation-controller | 100 m | 64 Mi | 1000 m | 1 Gi | -| image-reflector-controller | 100 m | 64 Mi | 1000 m | 1 Gi | --If you enabled a custom or built-in Azure Gatekeeper Policy that limits the resources for containers on Kubernetes clusters, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, ensure that either the resource limits on the policy are greater than the limits shown here, or that the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment. --### Flux v1 --> [!NOTE] -> We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. --To help troubleshoot issues with the `sourceControlConfigurations` resource in Flux v1, run these Azure CLI commands with `--debug` parameter specified: --```azurecli -az provider show -n Microsoft.KubernetesConfiguration --debug -az k8s-configuration create <parameters> --debug -``` --## Azure Monitor Container Insights --This section provides help troubleshooting issues with [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?toc=%2Fazure%2Fazure-arc%2Fkubernetes%2Ftoc.json&bc=%2Fazure%2Fazure-arc%2Fkubernetes%2Fbreadcrumb%2Ftoc.json&tabs=create-cli%2Cverify-portal). --### Enabling privileged mode for Canonical Charmed Kubernetes cluster --Azure Monitor Container Insights requires its DaemonSet to run in privileged mode. To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command: --```console -juju config kubernetes-worker allow-privileged=true -``` --### Unable to install Azure Monitor Agent (AMA) on Oracle Linux 9.x --When trying to install the Azure Monitor Agent (AMA) on an Oracle Linux (RHEL) 9.x Kubernetes cluster, the AMA pods and the AMA-RS pod might not work properly due to the `addon-token-adapter` container in the pod. With this error, when checking the logs of the `ama-logs-rs` pod, `addon-token-adapter container`, you see output similar to the following: --```output -Command: kubectl -n kube-system logs ama-logs-rs-xxxxxxxxxx-xxxxx -c addon-token-adapter - -Error displayed: error modifying iptable rules: error adding rules to custom chain: running [/sbin/iptables -t nat -N aad-metadata --wait]: exit status 3: modprobe: can't change directory to '/lib/modules': No such file or directory --iptables v1.8.9 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?) --Perhaps iptables or your kernel needs to be upgraded. 
-``` --This error occurs because installing the extension requires the `iptable_nat` module, but this module isn't automatically loaded in Oracle Linux (RHEL) 9.x distributions. --To fix this issue, you must explicitly load the `iptables_nat` module on each node in the cluster, using the `modprobe` command `sudo modprobe iptables_nat`. After you have signed into each node and manually added the `iptable_nat` module, retry the AMA installation. --> [!NOTE] -> Performing this step does not make the `iptables_nat` module persistent. --## Azure Arc-enabled Open Service Mesh --This section provides commands that you can use to validate and troubleshoot the deployment of the [Open Service Mesh (OSM)](tutorial-arc-enabled-open-service-mesh.md) extension components on your cluster. --### Check OSM controller deployment --```bash -kubectl get deployment -n arc-osm-system --selector app=osm-controller -``` --If the OSM controller is healthy, you see output similar to: --```output -NAME READY UP-TO-DATE AVAILABLE AGE -osm-controller 1/1 1 1 59m -``` --### Check OSM controller pods --```bash -kubectl get pods -n arc-osm-system --selector app=osm-controller -``` --If the OSM controller is healthy, you see output similar to: --```output -NAME READY STATUS RESTARTS AGE -osm-controller-b5bd66db-wglzl 0/1 Evicted 0 61m -osm-controller-b5bd66db-wvl9w 1/1 Running 0 31m -``` --Even though one controller was *Evicted* at some point, there's another which is `READY 1/1` and `Running` with `0` restarts. If the column `READY` is anything other than `1/1`, the service mesh is in a broken state. Column `READY` with `0/1` indicates the control plane container is crashing. --Use the following command to inspect controller logs: --```bash -kubectl logs -n arc-osm-system -l app=osm-controller -``` --Column `READY` with a number higher than `1` after the `/` indicates that there are sidecars installed. OSM Controller generally won't work properly with sidecars attached. --### Check OSM controller service --```bash -kubectl get service -n arc-osm-system osm-controller -``` --If the OSM controller is healthy, you see the following output: --```output -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67m -``` --> [!NOTE] -> The `CLUSTER-IP` will be different. The service `NAME` and `PORT(S)` should match what is shown here. --### Check OSM controller endpoints --```bash -kubectl get endpoints -n arc-osm-system osm-controller -``` --If the OSM controller is healthy, you see output similar to: --```output -NAME ENDPOINTS AGE -osm-controller 10.240.1.115:9092,10.240.1.115:15128 69m -``` --If the cluster has no `ENDPOINTS` for `osm-controller`, the control plane is unhealthy. This unhealthy state means that the controller pod crashed or that it was never deployed correctly. --### Check OSM injector deployment --```bash -kubectl get deployments -n arc-osm-system osm-injector -``` --If the OSM injector is healthy, you see output similar to: --```output -NAME READY UP-TO-DATE AVAILABLE AGE -osm-injector 1/1 1 1 73m -``` --### Check OSM injector pod --```bash -kubectl get pod -n arc-osm-system --selector app=osm-injector -``` --If the OSM injector is healthy, you see output similar to: --```output -NAME READY STATUS RESTARTS AGE -osm-injector-5986c57765-vlsdk 1/1 Running 0 73m -``` --The `READY` column must be `1/1`. Any other value indicates an unhealthy OSM injector pod. 
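If the injector pod's `READY` column isn't `1/1`, its logs are the next thing to check. A minimal sketch, mirroring the controller log command shown earlier:

```bash
kubectl logs -n arc-osm-system -l app=osm-injector
```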
--### Check OSM injector service --```bash -kubectl get service -n arc-osm-system osm-injector -``` --If the OSM injector is healthy, you see output similar to: --```output -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m -``` --Ensure the port listed for the `osm-injector` service is `9090`. There should be no `EXTERNAL-IP`. --### Check OSM injector endpoints --```bash -kubectl get endpoints -n arc-osm-system osm-injector -``` --If the OSM injector is healthy, you see output similar to: --```output -NAME ENDPOINTS AGE -osm-injector 10.240.1.172:9090 75m -``` --For OSM to function, there must be at least one endpoint for `osm-injector`. The IP address of your OSM injector endpoints will vary, but the port `9090` must be the same. --### Check **Validating** and **Mutating** webhooks --```bash -kubectl get ValidatingWebhookConfiguration --selector app=osm-controller -``` --If the **Validating** webhook is healthy, you see output similar to: --```output -NAME WEBHOOKS AGE -osm-validator-mesh-osm 1 81m -``` --```bash -kubectl get MutatingWebhookConfiguration --selector app=osm-injector -``` --If the **Mutating** webhook is healthy, you see output similar to: --```output -NAME WEBHOOKS AGE -arc-osm-webhook-osm 1 102m -``` --Check for the service and the CA bundle of the **Validating** webhook by using this command: --```bash -kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service' -``` --A well-configured **Validating** webhook configuration will have output similar to: --```json -{ - "name": "osm-config-validator", - "namespace": "arc-osm-system", - "path": "/validate", - "port": 9093 -} -``` --Check for the service and the CA bundle of the **Mutating** webhook by using the following command: --```bash -kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service' -``` --A well-configured **Mutating** webhook configuration will have output similar to: --```json -{ - "name": "osm-injector", - "namespace": "arc-osm-system", - "path": "/mutate-pod-creation", - "port": 9090 -} -``` --Check whether OSM Controller has given the **Validating** (or **Mutating**) webhook a CA Bundle by using the following command: --```bash -kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c -``` --```bash -kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c -``` --Example output: --```bash -1845 -``` --The number in the output indicates the number of bytes, or the size of the CA Bundle. If the output is empty, 0, or a number under 1000, the CA Bundle isn't correctly provisioned. Without a correct CA Bundle, the `ValidatingWebhook` throws an error. 
--### Check the `osm-mesh-config` resource --Check for the existence of the resource: --```azurecli-interactive -kubectl get meshconfig osm-mesh-config -n arc-osm-system -``` --Check the content of the OSM MeshConfig: --```azurecli-interactive -kubectl get meshconfig osm-mesh-config -n arc-osm-system -o yaml -``` --You should see output similar to: --```yaml -apiVersion: config.openservicemesh.io/v1alpha1 -kind: MeshConfig -metadata: - creationTimestamp: "0000-00-00A00:00:00A" - generation: 1 - name: osm-mesh-config - namespace: arc-osm-system - resourceVersion: "2494" - uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31 -spec: - certificate: - certKeyBitSize: 2048 - serviceCertValidityDuration: 24h - featureFlags: - enableAsyncProxyServiceMapping: false - enableEgressPolicy: true - enableEnvoyActiveHealthChecks: false - enableIngressBackendPolicy: true - enableMulticlusterMode: false - enableRetryPolicy: false - enableSnapshotCacheMode: false - enableWASMStats: true - observability: - enableDebugServer: false - osmLogLevel: info - tracing: - enable: false - sidecar: - configResyncInterval: 0s - enablePrivilegedInitContainer: false - logLevel: error - resources: {} - traffic: - enableEgress: false - enablePermissiveTrafficPolicyMode: true - inboundExternalAuthorization: - enable: false - failureModeAllow: false - statPrefix: inboundExtAuthz - timeout: 1s - inboundPortExclusionList: [] - outboundIPRangeExclusionList: [] - outboundPortExclusionList: [] -kind: List -metadata: - resourceVersion: "" - selfLink: "" -``` --`osm-mesh-config` resource values: --| Key | Type | Default Value | Kubectl Patch Command Examples | -|--|||--| -| spec.traffic.enableEgress | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enableEgress":false}}}' --type=merge` | -| spec.traffic.enablePermissiveTrafficPolicyMode | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge` | -| spec.traffic.outboundPortExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"outboundPortExclusionList":[6379,8080]}}}' --type=merge` | -| spec.traffic.outboundIPRangeExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["10.0.0.0/32","1.1.1.1/24"]}}}' --type=merge` | -| spec.traffic.inboundPortExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"inboundPortExclusionList":[6379,8080]}}}' --type=merge` | -| spec.certificate.serviceCertValidityDuration | string | `"24h"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"certificate":{"serviceCertValidityDuration":"24h"}}}' --type=merge` | -| spec.observability.enableDebugServer | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"observability":{"enableDebugServer":false}}}' --type=merge` | -| spec.observability.osmLogLevel | string | `"info"`| `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"observability":{"tracing":{"osmLogLevel": "info"}}}}' --type=merge` | -| spec.observability.tracing.enable | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"observability":{"tracing":{"enable":true}}}}' --type=merge` | -| spec.sidecar.enablePrivilegedInitContainer | bool | `false` | `kubectl patch meshconfig 
osm-mesh-config -n arc-osm-system -p '{"spec":{"sidecar":{"enablePrivilegedInitContainer":true}}}' --type=merge` | -| spec.sidecar.logLevel | string | `"error"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"sidecar":{"logLevel":"error"}}}' --type=merge` | -| spec.featureFlags.enableWASMStats | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableWASMStats":true}}}' --type=merge` | -| spec.featureFlags.enableEgressPolicy | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableEgressPolicy":true}}}' --type=merge` | -| spec.featureFlags.enableMulticlusterMode | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableMulticlusterMode":false}}}' --type=merge` | -| spec.featureFlags.enableSnapshotCacheMode | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableSnapshotCacheMode":false}}}' --type=merge` | -| spec.featureFlags.enableAsyncProxyServiceMapping | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableAsyncProxyServiceMapping":false}}}' --type=merge` | -| spec.featureFlags.enableIngressBackendPolicy | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableIngressBackendPolicy":true}}}' --type=merge` | -| spec.featureFlags.enableEnvoyActiveHealthChecks | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableEnvoyActiveHealthChecks":false}}}' --type=merge` | --### Check namespaces -->[!Note] ->The arc-osm-system namespace will never participate in a service mesh and will never be labeled or annotated with the key/values shown here. --We use the `osm namespace add` command to join namespaces to a given service mesh. When a Kubernetes namespace is part of the mesh, follow these steps to confirm requirements are met. --View the annotations of the namespace `bookbuyer`: --```bash -kubectl get namespace bookbuyer -o json | jq '.metadata.annotations' -``` --The following annotation must be present: --```json -{ - "openservicemesh.io/sidecar-injection": "enabled" -} -``` --View the labels of the namespace `bookbuyer`: --```bash -kubectl get namespace bookbuyer -o json | jq '.metadata.labels' -``` --The following label must be present: --```json -{ - "openservicemesh.io/monitored-by": "osm" -} -``` --If you aren't using the `osm` CLI, you can manually add this annotation and label to your namespaces. If a namespace isn't annotated with `"openservicemesh.io/sidecar-injection": "enabled"`, or isn't labeled with `"openservicemesh.io/monitored-by": "osm"`, the OSM injector won't add Envoy sidecars. --> [!NOTE] -> After `osm namespace add` is called, only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with the `kubectl rollout restart deployment` command. --### Verify the SMI CRDs --Check whether the cluster has the required Custom Resource Definitions (CRDs) by using the following command: --```bash -kubectl get crds -``` --Ensure that the CRDs correspond to the versions available in the release branch. To confirm which CRD versions are in use, visit the [SMI supported versions page](https://docs.openservicemesh.io/docs/overview/smi/) and select your version from the **Releases** dropdown. 
--Get the versions of the installed CRDs with the following command: --```bash -for x in $(kubectl get crds --no-headers | awk '{print $1}' | grep 'smi-spec.io'); do - kubectl get crd $x -o json | jq -r '(.metadata.name, "-" , .spec.versions[].name, "\n")' -done -``` --If CRDs are missing, use the following commands to install them on the cluster. Replace the version in these commands as needed (for example, v1.1.0 would be release-v1.1). --```bash -kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_http_route_group.yaml --kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_tcp_route.yaml --kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_traffic_access.yaml --kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_traffic_split.yaml -``` --To see CRD changes between releases, refer to the [OSM release notes](https://github.com/openservicemesh/osm/releases). --### Troubleshoot certificate management --For information on how OSM issues and manages certificates to Envoy proxies running on application pods, see the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/). --### Upgrade Envoy --When a new pod is created in a namespace monitored by the add-on, OSM injects an [Envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the Envoy version needs to be updated, follow the steps in the [Upgrade Guide](https://docs.openservicemesh.io/docs/guides/upgrade/#envoy) on the OSM docs site. --## Next steps --- Learn more about [cluster extensions](conceptual-extensions.md).-- View general [troubleshooting tips for Arc-enabled Kubernetes clusters](extensions-troubleshooting.md). |
azure-arc | Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md | - Title: "Deploy and manage Azure Arc-enabled Kubernetes cluster extensions"- Previously updated : 02/08/2024- -description: "Create and manage extension instances on Azure Arc-enabled Kubernetes clusters." ---# Deploy and manage Azure Arc-enabled Kubernetes cluster extensions --You can create extension instances in an Arc-enabled Kubernetes cluster, setting required and optional parameters including options related to updates and configurations. You can also view, list, update, and delete extension instances. --Before you begin, read the [conceptual overview of Arc-enabled Kubernetes cluster extensions](conceptual-extensions.md) and review the [list of currently available extensions](extensions-release.md). --## Prerequisites --* The latest version of [Azure CLI](/cli/azure/install-azure-cli). -* The latest versions of the `connectedk8s` and `k8s-extension` Azure CLI extensions. Install these extensions by running the following commands: - - ```azurecli - az extension add --name connectedk8s - az extension add --name k8s-extension - ``` -- If the `connectedk8s` and `k8s-extension` extensions are already installed, make sure they're updated to the latest version using the following commands: -- ```azurecli - az extension update --name connectedk8s - az extension update --name k8s-extension - ``` --* An existing Azure Arc-enabled Kubernetes connected cluster, with at least one node of operating system and architecture type `linux/amd64`. If deploying [Flux (GitOps)](extensions-release.md#flux-gitops), you can use an ARM64-based cluster without a `linux/amd64` node. - * If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. --## Create extension instance --To create a new extension instance, use `k8s-extension create`, passing in values for the required parameters. 
--This example creates an [Azure Monitor Container Insights](extensions-release.md#azure-monitor-container-insights) extension instance on an Azure Arc-enabled Kubernetes cluster: --```azurecli -az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters -``` --**Output:** --```json -{ - "autoUpgradeMinorVersion": true, - "configurationProtectedSettings": null, - "configurationSettings": { - "logAnalyticsWorkspaceResourceID": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-eus" - }, - "creationTime": "2021-04-02T12:13:06.7534628+00:00", - "errorInfo": { - "code": null, - "message": null - }, - "extensionType": "microsoft.azuremonitor.containers", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/demo/providers/Microsoft.Kubernetes/connectedClusters/demo/providers/Microsoft.KubernetesConfiguration/extensions/azuremonitor-containers", - "identity": null, - "installState": "Pending", - "lastModifiedTime": "2021-04-02T12:13:06.753463+00:00", - "lastStatusTime": null, - "name": "azuremonitor-containers", - "releaseTrain": "Stable", - "resourceGroup": "demo", - "scope": { - "cluster": { - "releaseNamespace": "azuremonitor-containers" - }, - "namespace": null - }, - "statuses": [], - "systemData": null, - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "2.8.2" -} -``` --> [!NOTE] -> The service is unable to retain sensitive information for more than 48 hours. If Azure Arc-enabled Kubernetes agents don't have network connectivity for more than 48 hours and can't determine whether to create an extension on the cluster, the extension transitions to `Failed` state. Once that happens, you'll need to run `k8s-extension create` again to create a fresh extension Azure resource. -> -> Azure Monitor Container Insights is a singleton extension (only one required per cluster). You'll need to clean up any previous Helm chart installations of Azure Monitor Container Insights (without extensions) before installing the same via extensions. Follow the instructions for [deleting the Helm chart](/azure/azure-monitor/containers/kubernetes-monitoring-disable#remove-container-insights-with-helm) before running `az k8s-extension create`. --### Required parameters --The following parameters are required when using `az k8s-extension create` to create an extension instance. --| Parameter name | Description | -|-|| -| `--name` | Name of the extension instance | -| `--extension-type` | The [type of extension](extensions-release.md) you want to install on the cluster. For example: Microsoft.AzureMonitor.Containers, microsoft.azuredefender.kubernetes | -| `--scope` | [Scope of installation](conceptual-extensions.md#extension-scope) for the extension: `cluster` or `namespace` | -| `--cluster-name` | Name of the Azure Arc-enabled Kubernetes resource on which the extension instance has to be created | -| `--resource-group` | The resource group containing the Azure Arc-enabled Kubernetes resource | -| `--cluster-type` | The cluster type on which the extension instance has to be created. For most scenarios, use `connectedClusters`, which corresponds to Azure Arc-enabled Kubernetes clusters. 
| --### Optional parameters --Use one or more of these optional parameters as needed for your scenarios, along with the required parameters. --> [!NOTE] -> You can choose to automatically upgrade your extension instance to the latest minor and patch versions by setting `auto-upgrade-minor-version` to `true`, or you can instead set the version of the extension instance manually using the `--version` parameter. We recommend enabling automatic upgrades for minor and patch versions so that you always have the latest security patches and capabilities. -> -> Because major version upgrades may include breaking changes, automatic upgrades for new major versions of an extension instance aren't supported. You can choose when to [manually upgrade extension instances](#upgrade-extension-instance) to a new major version. ---| Parameter name | Description | -|--|| -| `--auto-upgrade-minor-version` | Boolean property that determines whether the extension minor version is automatically upgraded. The default setting is `true`. If this parameter is set to `true`, you can't set the `version` parameter, as the version will be dynamically updated. If set to `false`, the extension won't be automatically upgraded, even for patch versions. | -| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if `auto-upgrade-minor-version` is set to `true`. | -| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. These are passed in as space-separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. | -| `--configuration-settings-file` | Path to a JSON file with `key=value` pairs to be used for passing configuration settings into the extension. If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. | -| `--configuration-protected-settings` | Settings that aren't retrievable using `GET` API calls or `az k8s-extension show` commands. Typically used to pass in sensitive settings. These are passed in as space-separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. | -| `--configuration-protected-settings-file` | Path to a JSON file with `key=value` pairs to be used for passing sensitive settings into the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. | -| `--release-namespace` | This parameter indicates the namespace within which the release will be created. Only relevant if `scope` is set to `cluster`. | -| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. | -| `--target-namespace` | Indicates the namespace within which the release will be created. Permission of the system account created for this extension instance will be restricted to this namespace. Only relevant if `scope` is set to `namespace`. | --## Show extension details --To view details of a currently installed extension instance, use `k8s-extension show`, passing in values for the mandatory parameters. 
--```azurecli -az k8s-extension show --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters -``` --**Output:** --```json -{ - "autoUpgradeMinorVersion": true, - "configurationProtectedSettings": null, - "configurationSettings": { - "logAnalyticsWorkspaceResourceID": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-eus" - }, - "creationTime": "2021-04-02T12:13:06.7534628+00:00", - "errorInfo": { - "code": null, - "message": null - }, - "extensionType": "microsoft.azuremonitor.containers", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/demo/providers/Microsoft.Kubernetes/connectedClusters/demo/providers/Microsoft.KubernetesConfiguration/extensions/azuremonitor-containers", - "identity": null, - "installState": "Installed", - "lastModifiedTime": "2021-04-02T12:13:06.753463+00:00", - "lastStatusTime": "2021-04-02T12:13:49.636+00:00", - "name": "azuremonitor-containers", - "releaseTrain": "Stable", - "resourceGroup": "demo", - "scope": { - "cluster": { - "releaseNamespace": "azuremonitor-containers" - }, - "namespace": null - }, - "statuses": [], - "systemData": null, - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "2.8.2" -} -``` --## List all extensions installed on the cluster --To view a list of all extensions installed on a cluster, use `k8s-extension list`, passing in values for the mandatory parameters. --```azurecli -az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters -``` --**Output:** --```json -[ - { - "autoUpgradeMinorVersion": true, - "creationTime": "2020-09-15T02:26:03.5519523+00:00", - "errorInfo": { - "code": null, - "message": null - }, - "extensionType": "Microsoft.AzureMonitor.Containers", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myRg/providers/Microsoft.Kubernetes/connectedClusters/myCluster/providers/Microsoft.KubernetesConfiguration/extensions/myExtInstanceName", - "identity": null, - "installState": "Pending", - "lastModifiedTime": "2020-09-15T02:48:45.6469664+00:00", - "lastStatusTime": null, - "name": "myExtInstanceName", - "releaseTrain": "Stable", - "resourceGroup": "myRG", - "scope": { - "cluster": { - "releaseNamespace": "myExtInstanceName1" - } - }, - "statuses": [], - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "0.1.0" - }, - { - "autoUpgradeMinorVersion": true, - "creationTime": "2020-09-02T00:41:16.8005159+00:00", - "errorInfo": { - "code": null, - "message": null - }, - "extensionType": "microsoft.azuredefender.kubernetes", - "id": "/subscriptions/0e849346-4343-582b-95a3-e40e6a648ae1/resourceGroups/myRg/providers/Microsoft.Kubernetes/connectedClusters/myCluster/providers/Microsoft.KubernetesConfiguration/extensions/defender", - "identity": null, - "installState": "Pending", - "lastModifiedTime": "2020-09-02T00:41:16.8005162+00:00", - "lastStatusTime": null, - "name": "microsoft.azuredefender.kubernetes", - "releaseTrain": "Stable", - "resourceGroup": "myRg", - "scope": { - "cluster": { - "releaseNamespace": "myExtInstanceName2" - } - }, - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "0.1.0" - } -] -``` --## Update extension instance --> [!NOTE] -> Refer to documentation for the specific extension type to understand the specific settings in 
`--configuration-settings` and `--configuration-protected-settings` that are able to be updated. For `--configuration-protected-settings`, all settings are expected to be provided, even if only one setting is being updated. If any of these settings are omitted, those settings will be considered obsolete and deleted. --To update an existing extension instance, use `k8s-extension update`, passing in values for the mandatory and optional parameters. The mandatory and optional parameters are slightly different than those used to create an extension instance. --This example updates the `auto-upgrade-minor-version` setting for an Azure Machine Learning extension instance to `true`: --```azurecli -az k8s-extension update --name azureml --extension-type Microsoft.AzureML.Kubernetes --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --auto-upgrade-minor-version true --cluster-type managedClusters -``` --### Required parameters for update --| Parameter name | Description | -|-|| -| `--name` | Name of the extension instance | -| `--cluster-name` | Name of the cluster on which the extension instance has to be created | -| `--resource-group` | The resource group containing the cluster | -| `--cluster-type` | The cluster type on which the extension instance has to be created. For Azure Arc-enabled Kubernetes clusters, use `connectedClusters`. For AKS clusters, use `managedClusters`.| --### Optional parameters for update --| Parameter name | Description | -|--|| -| `--auto-upgrade-minor-version` | Boolean property that specifies whether the extension minor version is automatically upgraded. The default setting is `true`. If this parameter is set to true, you can't set the `version` parameter, as the version will be dynamically updated. If set to `false`, the extension won't be automatically upgraded, even for patch versions. | -| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if auto-upgrade-minor-version is set to `true`. | -| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. These are passed in as space-separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. Only the settings that require an update need to be provided. The provided settings will be replaced with the specified values. | -| `--configuration-settings-file` | Path to the JSON file with `key=value` pairs to be used for passing in configuration settings to the extension. If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. | -| `--configuration-protected-settings` | Settings that aren't retrievable using `GET` API calls or `az k8s-extension show` commands. Typically used to pass in sensitive settings. These are passed in as space-separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. When you update a protected setting, all of the protected settings are expected to be specified. If any of these settings are omitted, those settings will be considered obsolete and deleted. | -| `--configuration-protected-settings-file` | Path to a JSON file with `key=value` pairs to be used for passing in sensitive settings to the extension. 
If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. | -| `--scope` | Scope of installation for the extension - `cluster` or `namespace`. | -| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. | --## Upgrade extension instance --As noted earlier, if you set `auto-upgrade-minor-version` to true, the extension will automatically be upgraded when a new minor version is released. For most scenarios, we recommend enabling automatic upgrades. If you set `auto-upgrade-minor-version` to false, you'll have to upgrade the extension manually if you want a newer version. --Manual upgrades are also required to get a new major instance of an extension. You can choose when to upgrade in order to avoid any unexpected breaking changes with major version upgrades. --To manually upgrade an extension instance, use `k8s-extension update` and set the `version` parameter to specify a version. --This example updates an Azure Machine Learning extension instance to version x.y.z: --```azurecli -az k8s-extension update --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --name azureml --version x.y.z -``` --## Delete extension instance --To delete an extension instance on a cluster, use `k8s-extension delete`, passing in values for the mandatory parameters: --```azurecli -az k8s-extension delete --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters -``` --> [!NOTE] -> The Azure resource representing this extension gets deleted immediately. The Helm release on the cluster associated with this extension is only deleted when the agents running on the Kubernetes cluster have network connectivity and can reach out to Azure services again to fetch the desired state. ---## Next steps --* Review the [az k8s-extension CLI reference](/cli/azure/k8s-extension) for a comprehensive list of commands and parameters. -* Learn more about [how extensions work with Arc-enabled Kubernetes clusters](conceptual-extensions.md). -* Review the [cluster extensions currently available for Azure Arc-enabled Kubernetes](extensions-release.md). -* Get help [troubleshooting extension issues](extensions-troubleshooting.md). |
azure-arc | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/faq.md | - Title: "Azure Arc-enabled Kubernetes and GitOps frequently asked questions" Previously updated : 05/04/2023- -description: "This article contains a list of frequently asked questions related to Azure Arc-enabled Kubernetes and Azure GitOps." ----# Frequently Asked Questions - Azure Arc-enabled Kubernetes and GitOps --This article addresses frequently asked questions about Azure Arc-enabled Kubernetes and GitOps. --## What is the difference between Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS)? --AKS is the managed Kubernetes offering by Azure. AKS simplifies deploying a managed Kubernetes cluster in Azure by offloading much of the complexity and operational overhead to Azure. Since the Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. --Azure Arc-enabled Kubernetes allows you to extend AzureΓÇÖs management capabilities (like Azure Monitor and Azure Policy) by connecting Kubernetes clusters to Azure. You maintain the underlying Kubernetes cluster itself. --## Do I need to connect my AKS clusters running on Azure to Azure Arc? --Currently, connecting an Azure Kubernetes Service (AKS) cluster to Azure Arc is not required for most scenarios. You may want to connect a cluster to run certain Azure Arc-enabled services such as App Services and Data Services on top of the cluster. This can be done using the [custom locations](custom-locations.md) feature of Azure Arc-enabled Kubernetes. --## Should I connect my AKS-HCI cluster and Kubernetes clusters on Azure Stack Edge to Azure Arc? --Connecting your AKS-HCI cluster or Kubernetes clusters on Azure Stack Edge to Azure Arc provides clusters with resource representation in Azure Resource Manager. This resource representation extends capabilities like Cluster Configuration, Azure Monitor, and Azure Policy (Gatekeeper) to connected Kubernetes clusters. --If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, AKS on Azure Stack HCI (>= April 2021 update), or AKS on Windows Server 2019 Datacenter (>= April 2021 update), then the Kubernetes configuration is included at no charge. --## How do I address expired Azure Arc-enabled Kubernetes resources? --The system-assigned managed identity associated with your Azure Arc-enabled Kubernetes cluster is only used by the Azure Arc agents to communicate with the Azure Arc services. The certificate associated with this system assigned managed identity has an expiration window of 90 days, and the agents will attempt to renew this certificate between Day 46 to Day 90. To avoid having your managed identity certificate expire, be sure that the cluster comes online at least once between Day 46 and Day 90 so that the certificate can be renewed. --If the managed identity certificate expires, the resource is considered `Expired` and all Azure Arc features (such as configuration, monitoring, and policy) will stop working on the cluster. --To check when the managed identity certificate will expire for a given cluster, run the following command: --```azurecli -az connectedk8s show -n <name> -g <resource-group> -``` --In the output, the value of the `managedIdentityCertificateExpirationTime` indicates when the managed identity certificate will expire (90D mark for that certificate). 
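If you only need the expiration timestamp itself, you can narrow the output with a JMESPath query. This is a minimal sketch; the cluster name and resource group are placeholders.

```azurecli
# Sketch: print only the managed identity certificate expiration time (placeholder names).
az connectedk8s show -n <name> -g <resource-group> --query managedIdentityCertificateExpirationTime -o tsv
```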
--If the value of `managedIdentityCertificateExpirationTime` indicates a timestamp from the past, then the `connectivityStatus` field in the above output will be set to `Expired`. In such cases, to get your Kubernetes cluster working with Azure Arc again: --1. Delete the Azure Arc-enabled Kubernetes resource and agents on the cluster. -- ```azurecli - az connectedk8s delete -n <name> -g <resource-group> - ``` --1. Recreate the Azure Arc-enabled Kubernetes resource by deploying agents on the cluster. -- ```azurecli - az connectedk8s connect -n <name> -g <resource-group> - ``` --> [!NOTE] -> `az connectedk8s delete` will also delete configurations and cluster extensions on top of the cluster. After running `az connectedk8s connect`, recreate the configurations and cluster extensions on the cluster, either manually or using Azure Policy. --## If I am already using CI/CD pipelines, can I still use Azure Arc-enabled Kubernetes or AKS and GitOps configurations? --Yes, you can still use configurations on a cluster receiving deployments via a CI/CD pipeline. Compared to traditional CI/CD pipelines, GitOps configurations feature some extra benefits. --### Drift reconciliation --The CI/CD pipeline applies changes only once during pipeline run. However, the GitOps operator on the cluster continuously polls the Git repository to fetch the desired state of Kubernetes resources on the cluster. If the GitOps operator finds the desired state of resources to be different from the actual state of resources on the cluster, this drift is reconciled. --### Apply GitOps at scale --CI/CD pipelines are useful for event-driven deployments to your Kubernetes cluster (for example, a push to a Git repository). However, to deploy the same configuration to all of your Kubernetes clusters, you need to manually configure each Kubernetes cluster's credentials to the CI/CD pipeline. --For Azure Arc-enabled Kubernetes, since Azure Resource Manager manages your GitOps configurations, you can automate creating the same configuration across all Azure Arc-enabled Kubernetes and AKS resources using Azure Policy, within the scope of a subscription or a resource group. This capability is even applicable to Azure Arc-enabled Kubernetes and AKS resources created after the policy assignment. --This feature applies baseline configurations (like network policies, role bindings, and pod security policies) across the entire Kubernetes cluster inventory to meet compliance and governance requirements. --### Cluster compliance --The compliance state of each GitOps configuration is reported back to Azure. This lets you keep track of any failed deployments. --## Does Azure Arc-enabled Kubernetes store any customer data outside of the cluster's region? --The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo. This is applicable for Azure Arc-enabled Open Service Mesh and Azure Key Vault Secrets Provider extensions supported in Azure Arc-enabled Kubernetes. For other cluster extensions, please see their documentation to learn how they store customer data. For more information, see [Trust Center](https://azure.microsoft.com/global-infrastructure/data-residency/). --## Next steps --* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). 
-* Already have an AKS cluster or an Azure Arc-enabled Kubernetes cluster? [Create GitOps configurations on your Azure Arc-enabled Kubernetes cluster](./tutorial-use-gitops-flux2.md). -* Learn how to [setup a CI/CD pipeline with GitOps](./tutorial-gitops-flux2-ci-cd.md). -* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md). -* Experience Azure Arc-enabled Kubernetes automated scenarios with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_k8s). |
azure-arc | Gitops Flux2 Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/gitops-flux2-parameters.md | - Title: "GitOps (Flux v2) supported parameters" -description: "Understand the supported parameters for GitOps (Flux v2) in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 04/30/2024----# GitOps (Flux v2) supported parameters --Azure provides an automated application deployments capability using GitOps that works with Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. GitOps with Flux v2 lets you use your Git repository as the source of truth for cluster configuration and application deployment. For more information, see [Application deployments with GitOps (Flux v2)](conceptual-gitops-flux2.md) and [Tutorial: Deploy applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md). --GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set that supports many parameters to enable various scenarios. For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). --To see all the parameters supported by Flux in Azure, see the [`az k8s-configuration` documentation](/cli/azure/k8s-configuration). This implementation doesn't currently support every parameter that Flux supports. Let us know if a parameter you need is missing from the Azure implementation. --This article describes some of the parameters and arguments available for the `az k8s-configuration flux create` command. You can also see the full list of parameters for the `az k8s-configuration flux` by using the `-h` parameter in Azure CLI (for example, `az k8s-configuration flux -h` or `az k8s-configuration flux create -h`). --> [!TIP] -> A workaround to deploy Flux resources with non-supported parameters is to define the required Flux custom resources (such as [GitRepository](https://fluxcd.io/flux/components/source/gitrepositories/) or [Kustomization](https://fluxcd.io/flux/components/kustomize/kustomization/)) inside your Git repository. Deploy these resources with the `az k8s-configuration flux create` command. You will then still be able to access your Flux resources through the Azure Arc UI. --## Configuration general arguments --| Parameter | Format | Notes | -| - | - | - | -| `--cluster-name` `-c` | String | Name of the cluster resource in Azure. | -| `--cluster-type` `-t` | Allowed values: `connectedClusters`, `managedClusters`| Use `connectedClusters` for Azure Arc-enabled Kubernetes clusters or `managedClusters` for AKS clusters. | -| `--resource-group` `-g` | String | Name of the Azure resource group that holds the cluster resource. | -| `--name` `-n`| String | Name of the Flux configuration in Azure. | -| `--namespace` `--ns` | String | Name of the namespace to deploy the configuration. Default: `default`. | -| `--scope` `-s` | String | Permission scope for the operators. Possible values are `cluster` (full access) or `namespace` (restricted access). Default: `cluster`. | -| `--suspend` | flag | Suspends all source and kustomize reconciliations defined in this Flux configuration. Reconciliations active at the time of suspension will continue. | --## Source general arguments --| Parameter | Format | Notes | -| - | - | - | -| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`, `azblob`. Default: `git`. 
| -| `--timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Maximum time to attempt to reconcile the source before timing out. Default: `10m`. | -| `--sync-interval` `--interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Time between reconciliations of the source on the cluster. Default: `10m`. | --## Git repository source reference arguments --| Parameter | Format | Notes | -| - | - | - | -| `--branch` | String | Branch within the Git source to sync to the cluster. Default: `master`. Newer repositories might have a root branch named `main`, in which case you need to set `--branch=main`. | -| `--tag` | String | Tag within the Git source to sync to the cluster. Example: `--tag=3.2.0`. | -| `--semver` | String | Git tag `semver` range within the Git source to sync to the cluster. Example: `--semver=">=3.1.0-rc.1 <3.2.0"`. | -| `--commit` | String | Git commit SHA within the Git source to sync to the cluster. Example: `--commit=363a6a8fe6a7f13e05d34c163b0ef02a777da20a`. | --For more information, see the [Flux documentation on Git repository checkout strategies](https://fluxcd.io/docs/components/source/gitrepositories/#checkout-strategies). --### Public Git repository --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | `http[s]://server/repo[.git]` | URL of the Git repository source to reconcile with the cluster. | --### Private Git repository with SSH --> [!IMPORTANT] -> Azure DevOps [announced the deprecation of SSH-RSA](https://aka.ms/ado-ssh-rsa-deprecation) as a supported encryption method for connecting to Azure repositories using SSH. If you use SSH keys to connect to Azure repositories in Flux configurations, we recommend moving to more secure RSA-SHA2-256 or RSA-SHA2-512 keys. For more information, see [Azure DevOps SSH-RSA deprecation](tutorial-use-gitops-flux2.md#azure-devops-ssh-rsa-deprecation). --#### Private Git repository with SSH and Flux-created keys --Add the public key generated by Flux to the user account in your Git service provider. --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | `ssh://user@server/repo[.git]` | `git@` should replace `user@` if the public key is associated with the repository instead of the user account. | --#### Private Git repository with SSH and user-provided keys --Use your own private key directly or from a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with a newline (`\n`). --Add the associated public key to the user account in your Git service provider. --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | ssh://user@server/repo[.git] | `git@` should replace `user@` if the public key is associated with the repository instead of the user account. | -| `--ssh-private-key` | Base64 key in [PEM format](https://aka.ms/PEMformat) | Provide the key directly. | -| `--ssh-private-key-file` | Full path to local file | Provide the full path to the local file that contains the PEM-format key. --#### Private Git host with SSH and user-provided known hosts --The Flux operator maintains a list of common Git hosts in its `known_hosts` file. Flux uses this information to authenticate the Git repository before establishing the SSH connection. If you're using an uncommon Git repository or your own Git host, you can supply the host key so that Flux can identify your repository. --Just like private keys, you can provide your `known_hosts` content directly or in a file. 
When you're providing your own content, use the [known_hosts content format specifications](https://aka.ms/KnownHostsFormat), along with either of the preceding SSH key scenarios. --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | ssh://user@server/repo[.git] | `git@` can replace `user@`. | -| `--known-hosts` | Base64 string | Provide `known_hosts` content directly. | -| `--known-hosts-file` | Full path to local file | Provide `known_hosts` content in a local file. | --### Private Git repository with an HTTPS user and key --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. | -| `--https-user` | Raw string | HTTPS username. | -| `--https-key` | Raw string | HTTPS personal access token or password. --### Private Git repository with an HTTPS CA certificate --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. | -| `--https-ca-cert` | Base64 string | CA certificate for TLS communication. | -| `--https-ca-cert-file` | Full path to local file | Provide CA certificate content in a local file. | --## Bucket source arguments --If you use `bucket` source, here are the bucket-specific command arguments. --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: `http://`, `https://`. | -| `--bucket-name` | String | Name of the `bucket` to sync. | -| `--bucket-access-key` | String | Access Key ID used to authenticate with the `bucket`. | -| `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. | -| `--bucket-insecure` | Boolean | Communicate with a `bucket` without TLS. If not provided, assumed false; if provided, assumed true. | --## Azure Blob Storage Account source arguments --If you use `azblob` source, here are the blob-specific command arguments. --| Parameter | Format | Notes | -| - | - | - | -| `--url` `-u` | URL String | The URL for the `azblob`. | -| `--container-name` | String | Name of the Azure Blob Storage container to sync | -| `--sp_client_id` | String | The client ID for authenticating a service principal with Azure Blob, required for this authentication method | -| `--sp_tenant_id` | String | The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method | -| `--sp_client_secret` | String | The client secret for authenticating a service principal with Azure Blob | -| `--sp_client_cert` | String | The Base64 encoded client certificate for authenticating a service principal with Azure Blob | -| `--sp_client_cert_password` | String | The password for the client certificate used to authenticate a service principal with Azure Blob | -| `--sp_client_cert_send_chain` | String | Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate | -| `--account_key` | String | The Azure Blob Shared Key for authentication | -| `--sas_token` | String | The Azure Blob SAS Token for authentication | -| `--managed-identity-client-id` | String | The client ID of the managed identity for authentication with Azure Blob | --> [!IMPORTANT] -> When using managed identity authentication for AKS clusters and `azblob` source, the managed identity must be assigned at minimum the [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader) role. 
Authentication using a managed identity is not yet available for Azure Arc-enabled Kubernetes clusters. --## Local secret for authentication with source --You can use a local Kubernetes secret for authentication with a `git`, `bucket` or `azBlob` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration. --| Parameter | Format | Notes | -| - | - | - | -| `--local-auth-ref` `--local-ref` | String | Local reference to a Kubernetes secret in the Flux configuration namespace to use for authentication with the source. | --For HTTPS authentication, you create a secret with the `username` and `password`: --```azurecli -kubectl create ns flux-config -kubectl create secret generic -n flux-config my-custom-secret --from-literal=username=<my-username> --from-literal=password=<my-password-or-key> -``` --For SSH authentication, you create a secret with the `identity` and `known_hosts` fields: --```azurecli -kubectl create ns flux-config -kubectl create secret generic -n flux-config my-custom-secret --from-file=identity=./id_rsa --from-file=known_hosts=./known_hosts -``` --> [!IMPORTANT] -> Azure DevOps [announced the deprecation of SSH-RSA](https://aka.ms/ado-ssh-rsa-deprecation) as a supported encryption method for connecting to Azure repositories using SSH. If you use SSH keys to connect to Azure repositories in Flux configurations, we recommend moving to more secure RSA-SHA2-256 or RSA-SHA2-512 keys. For more information, see [Azure DevOps SSH-RSA deprecation](tutorial-use-gitops-flux2.md#azure-devops-ssh-rsa-deprecation). --For both cases, when you create the Flux configuration, use `--local-auth-ref my-custom-secret` in place of the other authentication parameters: --```azurecli -az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -n <config_name> -t connectedClusters --scope cluster --namespace flux-config -u <git-repo-url> --kustomization name=kustomization1 --local-auth-ref my-custom-secret -``` --Learn more about using a local Kubernetes secret with these authentication methods: --* [Git repository HTTPS authentication](https://fluxcd.io/docs/components/source/gitrepositories/#https-authentication) -* [Git repository HTTPS self-signed certificates](https://fluxcd.io/docs/components/source/gitrepositories/#https-self-signed-certificates) -* [Git repository SSH authentication](https://fluxcd.io/docs/components/source/gitrepositories/#ssh-authentication) -* [Bucket static authentication](https://fluxcd.io/docs/components/source/buckets/#static-authentication) --> [!NOTE] -> If you need Flux to access the source through your proxy, you must update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). --## Git implementation --To support various repository providers that implement Git, Flux can be configured to use one of two Git libraries: `go-git` or `libgit2`. For details, see the [Flux documentation](https://fluxcd.io/docs/components/source/gitrepositories/#git-implementation). --The GitOps implementation of Flux v2 automatically determines which library to use for public cloud repositories: --* For GitHub, GitLab, and BitBucket repositories, Flux uses `go-git`. -* For Azure DevOps and all other repositories, Flux uses `libgit2`. --For on-premises repositories, Flux uses `libgit2`. 
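As a worked example that combines several of the arguments described above, the following sketch creates a Flux configuration that syncs a private repository over SSH with a user-provided private key and known hosts file. All names, paths, and the repository URL are placeholders; adjust them for your environment.

```azurecli
# Sketch: Flux configuration syncing a private Git repository over SSH with a user-provided key (placeholder values).
az k8s-configuration flux create \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type connectedClusters \
  --name <config-name> \
  --namespace flux-config \
  --scope cluster \
  --url ssh://git@github.com/<org>/<repo> \
  --branch main \
  --ssh-private-key-file ./id_rsa \
  --known-hosts-file ./known_hosts \
  --kustomization name=apps path=./apps prune=true
```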
--## Kustomization --Kustomization is a setting created for Flux configurations that lets you choose a specific path in the source repo that is reconciled into the cluster. You don't need to create a `kustomization.yaml file on this specified path. By default, all of the manifests in this path are reconciled. However, if you want to have a Kustomize overlay for applications available on this repo path, you should create [Kustomize files](https://kustomize.io/) in git for the Flux configuration to make use of. --By using [`az k8s-configuration flux kustomization create`](/cli/azure/k8s-configuration/flux/kustomization#az-k8s-configuration-flux-kustomization-create), you can create one or more kustomizations during the configuration. --| Parameter | Format | Notes | -| - | - | - | -| `--kustomization` | No value | Start of a string of parameters that configure a kustomization. You can use it multiple times to create multiple kustomizations. | -| `name` | String | Unique name for this kustomization. | -| `path` | String | Path within the Git repository to reconcile with the cluster. Default is the top level of the branch. | -| `prune` | Boolean | Default is `false`. Set `prune=true` to assure that the objects that Flux deployed to the cluster are cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted. Using `prune=true` is important for environments where users don't have access to the clusters and can make changes only through the Git repository. | -| `depends_on` | String | Name of one or more kustomizations (within this configuration) that must reconcile before this kustomization can reconcile. For example: `depends_on=["kustomization1","kustomization2"]`. If you remove a kustomization that has dependent kustomizations, the state of dependent kustomizations becomes `DependencyNotReady`, and reconciliation halts.| -| `timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. | -| `sync_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. | -| `retry_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. | -| `validation` | String | Values: `none`, `client`, `server`. Default: `none`. See [Flux documentation](https://fluxcd.io/docs/) for details.| -| `force` | Boolean | Default: `false`. Set `force=true` to instruct the kustomize controller to re-create resources when patching fails because of an immutable field change. | --You can also use [`az k8s-configuration flux kustomization`](/cli/azure/k8s-configuration/flux/kustomization) to update, list, show, and delete kustomizations in a Flux configuration. --## Next steps --* Learn more about [Application deployments with GitOps (Flux v2) for AKS and Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md). -* Use our tutorial to learn how to [enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters](tutorial-use-gitops-flux2.md). -* Learn about [CI/CD workflow using GitOps](conceptual-gitops-flux2-ci-cd.md). |
azure-arc | Identity Access Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/identity-access-overview.md | - Title: "Azure Arc-enabled Kubernetes identity and access overview" Previously updated : 05/22/2024- -description: "Understand identity and access options for Arc-enabled Kubernetes clusters." ---# Azure Arc-enabled Kubernetes identity and access overview --You can authenticate, authorize, and control access to your Azure Arc-enabled Kubernetes clusters. This topic provides an overview of the options for doing so with your Arc-enabled Kubernetes clusters. --This image shows the ways that these different options can be used: ---You can also use both cluster connect and Azure RBAC together if that is most appropriate for your needs. --## Connectivity options --When planning how users will authenticate and access Arc-enabled Kubernetes clusters, the first decision is whether or not you want to use the cluster connect feature. --### Cluster connect --The Azure Arc-enabled Kubernetes [cluster connect](conceptual-cluster-connect.md) feature provides connectivity to the `apiserver` of the cluster. This connectivity doesn't require any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner. --With cluster connect, your Arc-enabled clusters can be accessed either within Azure or from the internet. This feature can help enable interactive debugging and troubleshooting scenarios. Cluster connect may also require less interaction for updates when permissions are needed for new users. All of the authorization and authentication options described in this article work with cluster connect. --Cluster connect is required if you want to use [custom locations](conceptual-custom-locations.md) or [viewing Kubernetes resources from Azure portal](kubernetes-resource-view.md). --For more information, see [Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters](cluster-connect.md). --<a name='azure-ad-and-azure-rbac-without-cluster-connect'></a> --### Microsoft Entra ID and Azure RBAC without cluster connect --If you don't want to use cluster connect, you can authenticate and authorize users so they can access the connected cluster by using [Microsoft Entra ID](/azure/active-directory/fundamentals/active-directory-whatis) and [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview). Using [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md) lets you control the access that's granted to users in your tenant, managing access directly from Azure using familiar Azure identity and access features. You can also configure roles at the subscription or resource group scope, letting them roll out to all connected clusters within that scope. --Azure RBAC supports [conditional access](azure-rbac.md#use-conditional-access-with-azure-ad), allowing you to enable [just-in-time cluster access](azure-rbac.md#configure-just-in-time-cluster-access-with-azure-ad) or limit access to approved clients or devices. --Azure RBAC also supports a [direct mode of communication](azure-rbac.md#use-a-shared-kubeconfig-file), using Microsoft Entra identities to access connected clusters directly from within the datacenter, rather than requiring all connections to go through Azure. --Azure RBAC on Arc-enabled Kubernetes is currently in public preview. 
For more information, see [Use Azure RBAC on Azure Arc-enabled Kubernetes clusters](azure-rbac.md). --## Authentication options --Authentication is the process of verifying a user's identity. There are two options for authenticating to an Arc-enabled Kubernetes cluster: cluster connect and Azure RBAC. --<a name='azure-ad-authentication'></a> --### Microsoft Entra authentication --The [Azure RBAC on Arc-enabled Kubernetes](conceptual-azure-rbac.md) feature lets you use [Microsoft Entra ID](/azure/active-directory/fundamentals/active-directory-whatis) to allow users in your Azure tenant to access your connected Kubernetes clusters. --You can also use Microsoft Entra authentication with cluster connect. For more information, see [Microsoft Entra authentication option](cluster-connect.md#microsoft-entra-authentication-option). --### Service token authentication --With cluster connect, you can choose to authenticate via [service accounts](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens). --For more information, see [Service account token authentication option](cluster-connect.md#service-account-token-authentication-option). --## Authorization options --Authorization grants an authenticated user the permission to perform specified actions. With Azure Arc-enabled Kubernetes, there are two authorization options, both of which use role-based access control (RBAC): --- [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview) uses Microsoft Entra ID and Azure Resource Manager to provide fine-grained access management to Azure resources. This allows the benefits of Azure role assignments, such as activity logs tracking all changes made, to be used with your Azure Arc-enabled Kubernetes clusters.-- [Kubernetes role-based access control (Kubernetes RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) lets you dynamically configure policies through the Kubernetes API so that users, groups, and service accounts only have access to specific cluster resources.--While Kubernetes RBAC works only on Kubernetes resources within your cluster, Azure RBAC works on resources across your Azure subscription. --### Azure RBAC authorization --[Azure role-based access control (RBAC)](../../role-based-access-control/overview.md) is an authorization system built on Azure Resource Manager and Microsoft Entra ID that provides fine-grained access management of Azure resources. With Azure RBAC, role definitions outline the permissions to be applied. You assign these roles to users or groups via a role assignment for a particular scope. The scope can be across the entire subscription or limited to a resource group or to an individual resource such as a Kubernetes cluster. --If you're using Microsoft Entra authentication without cluster connect, then Azure RBAC authorization is your only option for authorization. --If you're using cluster connect with Microsoft Entra authentication, you have the option to use Azure RBAC for connectivity to the `apiserver` of the cluster. For more information, see [Microsoft Entra authentication option](cluster-connect.md#azure-active-directory-authentication-option). --### Kubernetes RBAC authorization --[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) provides granular filtering of user actions. With Kubernetes RBAC, you assign users or groups permission to create and modify resources or view logs from running application workloads. 
You can create roles to define permissions, and then assign those roles to users with role bindings. Permissions may be scoped to a single namespace or across the entire cluster. --If you're using cluster connect with the [service account token authentication option](cluster-connect.md#service-account-token-authentication-option), you must use Kubernetes RBAC to provide connectivity to the `apiserver` of the cluster. This connectivity doesn't require any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner. --If you're using [cluster connect with Microsoft Entra authentication](cluster-connect.md#azure-active-directory-authentication-option), you also have the option to use Kubernetes RBAC instead of Azure RBAC. --## Next steps --- Learn more about [Azure Microsoft Entra ID](/azure/active-directory/fundamentals/active-directory-whatis) and [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview).-- Learn about [cluster connect access to Azure Arc-enabled Kubernetes clusters](conceptual-cluster-connect.md).-- Learn about [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md)-- Learn about [access and identity options for Azure Kubernetes Service (AKS) clusters](/azure/aks/concepts-identity). |
azure-arc | Kubernetes Resource View | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/kubernetes-resource-view.md | - Title: Access Kubernetes resources from Azure portal Previously updated : 08/07/2023- -description: Learn how to interact with Kubernetes resources to manage an Azure Arc-enabled Kubernetes cluster from the Azure portal. ---# Access Kubernetes resources from Azure portal --The Azure portal includes a Kubernetes resource view for easy access to the Kubernetes resources in your Azure Arc-enabled Kubernetes cluster. Viewing Kubernetes resources from the Azure portal reduces context switching between the Azure portal and the `kubectl` command-line tool, streamlining the experience for viewing and editing your Kubernetes resources. The resource viewer currently includes multiple resource types, including deployments, pods, and replica sets. --## Prerequisites --- An existing Kubernetes cluster [connected](quickstart-connect-cluster.md) to Azure as an Azure Arc-enabled Kubernetes resource.--- An account that can authenticate to the cluster and access the resources in the portal:-- - If using [Azure RBAC](azure-rbac.md), ensure that the Microsoft Entra account that will access the portal has a role that lets it authenticate to the cluster, such as [Azure Arc Kubernetes Viewer](/azure/role-based-access-control/built-in-roles): -- ```azurecli - az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER - ``` -- - If using [cluster connect with service account token authentication](cluster-connect.md#service-account-token-authentication-option), ensure that the account uses a Kubernetes cluster role that can authenticate to the cluster, such as `cluster-admin`: - - ```console - kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID` - ``` -- The same account must have an Azure role such as [Azure Arc Kubernetes Viewer](/azure/role-based-access-control/built-in-roles) in order to authenticate to the Azure portal and view Arc-enabled cluster resources. --## View Kubernetes resources --To see the Kubernetes resources, navigate to your cluster in the Azure portal. The navigation pane on the left is used to access your resources: --- **Namespaces** displays the namespaces of your cluster. The filter at the top of the namespace list provides a quick way to filter and display your namespace resources.-- **Workloads** shows information about deployments, pods, replica sets, stateful sets, daemon sets, jobs, and cron jobs deployed to your cluster.-- **Services and ingresses** shows all of your cluster's service and ingress resources.-- **Storage** shows your Azure storage classes and persistent volume information.-- **Configuration** shows your cluster's config maps and secrets.---## Edit YAML --The Kubernetes resource view also includes a YAML editor. A built-in YAML editor means you can update Kubernetes objects from within the portal and apply changes immediately. -->[!WARNING] -> The Azure portal Kubernetes management capabilities and the YAML editor are built for learning and flighting new deployments in a development and test setting. Performing direct production changes by editing the YAML is not recommended. For production environments, consider using [GitOps to apply configurations](tutorial-use-gitops-flux2.md). --After you edit the YAML, select **Review + save**, confirm the changes, and then save again. 
---## Next steps --- Learn how to [deploy Azure Monitor for containers](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?toc=/azure/azure-arc/kubernetes/toc.json) for more in-depth information about nodes and containers on your clusters.-- Learn about [identity and access options for Azure Arc-enabled Kubernetes](identity-access-overview.md). |
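The Container insights link in the next steps above has the full procedure; as a hedged sketch (the extension type and names are taken as assumptions from that guidance), enabling it on an Arc-enabled cluster from the CLI might look like:

```azurecli
# Assumes the k8s-extension CLI extension is installed; cluster and resource group names are placeholders.
az k8s-extension create \
  --name azuremonitor-containers \
  --cluster-name AzureArcTest1 \
  --resource-group AzureArcTest \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureMonitor.Containers
```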
azure-arc | Monitor Gitops Flux 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md | - Title: Monitor GitOps (Flux v2) status and activity Previously updated : 10/18/2023- -description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. ---# Monitor GitOps (Flux v2) status and activity --To monitor status and activity related to GitOps with Flux v2 in your Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters, you have several options: --- Use the Azure portal to [monitor Flux configurations and resources on individual clusters](#monitor-flux-configurations-in-the-azure-portal).-- Use a Grafana dashboard to [monitor deployment and compliance status](#monitor-deployment-and-compliance-status).-- Use the Flux Control Plane and Flux Cluster Stats dashboards to [monitor resource consumption and reconciliations](#monitor-resource-consumption-and-reconciliations).-- Enable Prometheus scraping from clusters and create your own dashboards using the data in Azure Monitor workspace.-- Create alerts on Azure Monitor using the data available through Prometheus scraping.--This topic describes some of the ways you can monitor your Flux activity and status. --## Monitor Flux configurations in the Azure portal --After you've [created Flux configurations](tutorial-use-gitops-flux2.md#apply-a-flux-configuration) on your cluster, you can view status information in the Azure portal by navigating to a cluster and selecting **GitOps**. --### View details on cluster compliance and objects --The **Compliance** state shows whether the current state of the cluster matches the desired state. Possible values: --- **Compliant**: The cluster's state matches the desired state.-- **Pending**: An updated desired state has been detected, but that state has not yet been reconciled on the cluster.-- **Not Compliant**: The current state doesn't match the desired state.---To help debug reconciliation issues for a cluster, select **Configuration objects**. Here, you can view logs of each of the configuration objects that Flux creates for each Flux configuration. Select an object name to view its logs. ---To view the Kubernetes objects that have been created as a result of Flux configurations being applied, select **Workloads** in the **Kubernetes resources** section of the cluster's left navigation pane. Here, you can view all details of any resources that have been created on the cluster. --By default, you can filter by namespace and service name. You can also add any label filter that you may be using in your applications to help narrow down the search. --### View Flux configuration state and details --For each Flux configuration, the **State** column indicates whether the Flux configuration object has successfully been created on the cluster. --Select any Flux configuration to see its **Overview** page, including the following information: --- Source commit ID for the last synchronization-- Timestamp of the latest source update-- Status update timestamp (indicating when the latest statistics were obtained)-- Repo URL and branch-- Links to view different kustomizations---## Use dashboards to monitor GitOps status and activity --We provide dashboards to help you monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. These JSON dashboards can be imported to Grafana to help you view and analyze your data in real time. 
You can also set up alerts for this information. --To import and use these dashboards, you need: --- One or more existing Arc-enabled Kubernetes clusters or AKS clusters.-- The [microsoft.flux extension](extensions-release.md#flux-gitops) installed on the clusters.-- At least one [Flux configuration](tutorial-use-gitops-flux2.md) created on the clusters.--## Monitor deployment and compliance status --Follow these steps to import dashboards that let you monitor Flux extension deployment and status across clusters, and the compliance status of Flux configuration on those clusters. --> [!NOTE] -> These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana. --1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Grafana Editor** level permissions to view and edit dashboards. You can check your access by going to **Access control (IAM)** on the Grafana instance. -1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it the **Monitoring Reader** role on the subscription(s): -- 1. In the Azure portal, navigate to the subscription that you want to add. - 1. Select **Access control (IAM)**. - 1. Select **Add role assignment**. - 1. Select the **Monitoring Reader** role, then select **Next**. - 1. On the **Members** tab, select **Managed identity**, then choose **Select members**. - 1. From the **Managed identity** list, select the subscription where you created your Azure Managed Grafana Instance. Then select **Azure Managed Grafana** and the name of your Azure Managed Grafana instance. - 1. Select **Review + Assign**. -- If you're using a service principal, grant the **Monitoring Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.) --1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. This connection lets the dashboard access Azure Resource Graph data. -1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json). -1. Follow the steps to [import the JSON dashboard to Grafana](/azure/managed-grafana/how-to-create-dashboard#import-a-json-dashboard). --After you have imported the dashboard, it will display information from the clusters that you're monitoring, with several panels that provide details. For more details on an item, select the link to visit the Azure portal, where you can find more information about configurations, errors and logs. 
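The Grafana setup steps above can also be scripted. A minimal sketch with the Azure CLI, assuming the `amg` extension provides the `az grafana` commands and using placeholder names:

```azurecli
# Create the Azure Managed Grafana instance.
az grafana create --name my-grafana --resource-group my-rg

# Look up the instance's managed identity and grant it Monitoring Reader on the
# subscription so the Azure Monitor data source can query the subscription.
GRAFANA_MI=$(az grafana show --name my-grafana --resource-group my-rg --query identity.principalId -o tsv)
az role assignment create --assignee $GRAFANA_MI --role "Monitoring Reader" --scope /subscriptions/<subscription-id>
```

The data source connection and dashboard import still happen in the Grafana UI as described above.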
---The **Flux Extension Deployment Status** table lists all clusters where the Flux extension is deployed, along with current deployment status. ---The **Flux Configuration Compliance Status** table lists all Flux configurations created on the clusters, along with their compliance status. To see status and error logs for configuration objects such as Helm releases and kustomizations, select the **Non-Compliant** link from the **ComplianceState** column. ---The **Count of Flux Extension Deployments by Status** chart shows the count of clusters, based on their provisioning state. ---The **Count of Flux Configurations by Compliance Status** chart shows the count of Flux configurations, based on their compliance status with respect to the source repository. ---### Filter dashboard data to track application deployments --You can filter data in the **GitOps Flux - Application Deployments Dashboard** to change the information shown. For example, you can show data for only certain subscriptions or resource groups, or limit data to a particular cluster. To do so, select the filter option either from the top level dropdowns or from any column header in the tables. --For example, in the **Flux Configuration Compliance Status** table, you can select a specific commit from the **SourceLastSyncCommit** column. By doing so, you can track the status of a configuration deployment to all of the clusters affected by that commit. --### Create alerts for extension and configuration failures --After you've imported the dashboard as described in the previous section, you can set up alerts. These alerts notify you when Flux extensions or Flux configurations experience failures. --Follow the steps below to create an alert. Example queries are provided to detect extension provisioning or extension upgrade failures, or to detect compliance state failures. --1. In the left navigation menu of the dashboard, select **Alerting**. -1. Select **Alert rules**. -1. Select **+ Create alert rule**. The new alert rule page opens, with the **Grafana managed alerts** option selected by default. -1. In **Rule name**, add a descriptive name. This name is displayed in the alert rule list, and it will be used as the `alertname` label for every alert instance created from this rule. -1. Under **Set a query and alert condition**: -- - Select a data source. The same data source used for the dashboard may be used here. - - For **Service**, select **Azure Resource Graph**. - - Select the subscriptions from the dropdown list. - - Enter the query you want to use. For example, for extension provisioning or upgrade failures, you can enter this query: -- ```kusto - kubernetesconfigurationresources - | where type == "microsoft.kubernetesconfiguration/extensions" - | extend provisioningState = tostring(properties.ProvisioningState) - | where provisioningState == "Failed" - | summarize count() by provisioningState - ``` -- Or for compliance state failures, you can enter this query: -- ```kusto - kubernetesconfigurationresources - | where type == "microsoft.kubernetesconfiguration/fluxconfigurations" - | extend complianceState=tostring(properties.complianceState) - | where complianceState == "Non-Compliant" - | summarize count() by complianceState - ``` -- - For **Threshold box**, select **A** for input type and set the threshold to **0** to receive alerts even if just one extension fails on the cluster. Mark this as the **Alert condition**. 
-- :::image type="content" source="media/monitor-gitops-flux2/application-dashboard-set-alerts.png" alt-text="Screenshot showing the alert creation process." lightbox="media/monitor-gitops-flux2/application-dashboard-set-alerts.png"::: --1. Specify the alert evaluation interval: -- - For **Condition**, select the query or expression to trigger the alert rule. - - For **Evaluate every**, enter the evaluation frequency as a multiple of 10 seconds. - - For **Evaluate for**, specify how long the condition must be true before the alert is created. - - In **Configure no data and error handling**, indicate what should happen when the alert rule returns no data or returns an error. - - To check the results from running the query, select **Preview**. --1. Add the storage location, rule group, and any additional metadata that you want to associate with the rule. -- - For **Folder**, select the folder where the rule should be stored. - - For **Group**, specify a predefined group. - - If desired, add a description and summary to customize alert messages. - - Add Runbook URL, panel, dashboard, and alert IDs as needed. --1. If desired, add any custom labels. Then select **Save**. --You can also [configure contact points](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/manage-contact-points/) and [configure notification policies](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-notification-policy/) for your alerts. --## Monitor resource consumption and reconciliations --Follow these steps to import dashboards that let you monitor Flux resource consumption, reconciliations, API requests, and reconciler status. --1. Follow the steps to [create an Azure Monitor Workspace](/azure/azure-monitor/essentials/azure-monitor-workspace-manage). -1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). -1. Enable Prometheus metrics collection on the [AKS clusters](/azure/azure-monitor/containers/prometheus-metrics-enable) and/or [Arc-enabled Kubernetes clusters](/azure/azure-monitor/essentials/prometheus-metrics-from-arc-enabled-cluster) that you want to monitor. -1. Configure Azure Monitor Agent to scrape the Azure Managed Flux metrics by creating a [configmap](/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration): -- ```yaml - kind: ConfigMap - apiVersion: v1 - data: - schema-version: - #string.used by agent to parse config. supported versions are {v1}. Configs with other schema versions will be rejected by the agent. - v1 - config-version: - #string.used by customer to keep track of this config file's version in their source control/repository (max allowed 10 chars, other chars will be truncated) - ver1 - default-scrape-settings-enabled: |- - kubelet = true - coredns = false - cadvisor = true - kubeproxy = false - apiserver = false - kubestate = true - nodeexporter = true - windowsexporter = false - windowskubeproxy = false - kappiebasic = true - prometheuscollectorhealth = false - # Regex for which namespaces to scrape through pod annotation based scraping. - # This is none by default. Use '.*' to scrape all namespaces of annotated pods. 
- pod-annotation-based-scraping: |- - podannotationnamespaceregex = "flux-system" - default-targets-scrape-interval-settings: |- - kubelet = "30s" - coredns = "30s" - cadvisor = "30s" - kubeproxy = "30s" - apiserver = "30s" - kubestate = "30s" - nodeexporter = "30s" - windowsexporter = "30s" - windowskubeproxy = "30s" - kappiebasic = "30s" - prometheuscollectorhealth = "30s" - podannotations = "30s" - metadata: - name: ama-metrics-settings-configmap - namespace: kube-system - ``` - -1. Download the [Flux Control Plane](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/Flux%20Control%20Plane.json) and [Flux Cluster Stats](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/Flux%20Cluster%20Stats.json) dashboards. -1. [Link the Managed Prometheus workspace to the Managed Grafana instance](/azure/azure-monitor/essentials/azure-monitor-workspace-manage#link-a-grafana-workspace). This takes a few minutes to complete. -1. Follow the steps to [import these JSON dashboards to Grafana](/azure/managed-grafana/how-to-create-dashboard#import-a-json-dashboard). --After you have imported the dashboards, they'll display information from the clusters that you're monitoring. To show information only for a particular cluster or namespace, use the filters near the top of each dashboard. --The **Flux Control Plane** dashboard shows details about status resource consumption, reconciliations at the cluster level, and Kubernetes API requests. ---The **Flux Cluster Stats** dashboard shows details about the number of reconcilers, along with the status and execution duration of each reconciler. ---### Create alerts for resource consumption and reconciliation issues --After you've imported the dashboard as described in the previous section, you can set up alerts. These alerts notify you of resource consumption and reconciliation issues that may require attention. --To enable these alerts, you deploy a Bicep template similar to the one shown here. The alert rules in this template are samples that can be modified as needed. --Once you've downloaded the Bicep template and made your changes, [follow these steps to deploy the template](/azure/azure-resource-manager/bicep/template-specs). 
--```bicep -param azureMonitorWorkspaceName string -param alertReceiverEmailAddress string --param kustomizationLookbackPeriodInMinutes int = 5 -param helmReleaseLookbackPeriodInMinutes int = 5 -param gitRepositoryLookbackPeriodInMinutes int = 5 -param bucketLookbackPeriodInMinutes int = 5 -param helmRepoLookbackPeriodInMinutes int = 5 -param timeToResolveAlerts string = 'PT10M' -param location string = resourceGroup().location --resource azureMonitorWorkspace 'Microsoft.Monitor/accounts@2023-04-03' = { - name: azureMonitorWorkspaceName - location: location -} --resource fluxRuleActionGroup 'Microsoft.Insights/actionGroups@2023-01-01' = { - name: 'fluxRuleActionGroup' - location: 'global' - properties: { - enabled: true - groupShortName: 'fluxGroup' - emailReceivers: [ - { - name: 'emailReceiver' - emailAddress: alertReceiverEmailAddress - } - ] - } -} --resource fluxRuleGroup 'Microsoft.AlertsManagement/prometheusRuleGroups@2023-03-01' = { - name: 'fluxRuleGroup' - location: location - properties: { - description: 'Flux Prometheus Rule Group' - scopes: [ - azureMonitorWorkspace.id - ] - enabled: true - interval: 'PT1M' - rules: [ - { - alert: 'KustomizationNotReady' - expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="Kustomization"}) > 0' - for: 'PT${kustomizationLookbackPeriodInMinutes}M' - labels: { - description: 'Kustomization reconciliation failing for last ${kustomizationLookbackPeriodInMinutes} minutes.' - } - annotations: { - description: 'Kustomization reconciliation failing for last ${kustomizationLookbackPeriodInMinutes} minutes.' - } - enabled: true - severity: 3 - resolveConfiguration: { - autoResolved: true - timeToResolve: timeToResolveAlerts - } - actions: [ - { - actionGroupId: fluxRuleActionGroup.id - } - ] - } - { - alert: 'HelmReleaseNotReady' - expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="HelmRelease"}) > 0' - for: 'PT${helmReleaseLookbackPeriodInMinutes}M' - labels: { - description: 'HelmRelease reconciliation failing for last ${helmReleaseLookbackPeriodInMinutes} minutes.' - } - annotations: { - description: 'HelmRelease reconciliation failing for last ${helmReleaseLookbackPeriodInMinutes} minutes.' - } - enabled: true - severity: 3 - resolveConfiguration: { - autoResolved: true - timeToResolve: timeToResolveAlerts - } - actions: [ - { - actionGroupId: fluxRuleActionGroup.id - } - ] - } - { - alert: 'GitRepositoryNotReady' - expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="GitRepository"}) > 0' - for: 'PT${gitRepositoryLookbackPeriodInMinutes}M' - labels: { - description: 'GitRepository reconciliation failing for last ${gitRepositoryLookbackPeriodInMinutes} minutes.' - } - annotations: { - description: 'GitRepository reconciliation failing for last ${gitRepositoryLookbackPeriodInMinutes} minutes.' - } - enabled: true - severity: 3 - resolveConfiguration: { - autoResolved: true - timeToResolve: timeToResolveAlerts - } - actions: [ - { - actionGroupId: fluxRuleActionGroup.id - } - ] - } - { - alert: 'BucketNotReady' - expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="Bucket"}) > 0' - for: 'PT${bucketLookbackPeriodInMinutes}M' - labels: { - description: 'Bucket reconciliation failing for last ${bucketLookbackPeriodInMinutes} minutes.' 
- } - annotations: { - description: 'Bucket reconciliation failing for last ${bucketLookbackPeriodInMinutes} minutes.' - } - enabled: true - severity: 3 - resolveConfiguration: { - autoResolved: true - timeToResolve: timeToResolveAlerts - } - actions: [ - { - actionGroupId: fluxRuleActionGroup.id - } - ] - } - { - alert: 'HelmRepositoryNotReady' - expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="HelmRepository"}) > 0' - for: 'PT${helmRepoLookbackPeriodInMinutes}M' - labels: { - description: 'HelmRepository reconciliation failing for last ${helmRepoLookbackPeriodInMinutes} minutes.' - } - annotations: { - description: 'HelmRepository reconciliation failing for last ${helmRepoLookbackPeriodInMinutes} minutes.' - } - enabled: true - severity: 3 - resolveConfiguration: { - autoResolved: true - timeToResolve: timeToResolveAlerts - } - actions: [ - { - actionGroupId: fluxRuleActionGroup.id - } - ] - } - ] - } -} --``` ---## Next steps --- Review our tutorial on [using GitOps with Flux v2 to manage configuration and application deployment](tutorial-use-gitops-flux2.md).-- Learn about [Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). |
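If you save the Bicep sample above to a file, one hedged way to deploy it is as an ordinary resource group deployment rather than a template spec; the file name and parameter values here are placeholders:

```azurecli
az deployment group create \
  --resource-group my-rg \
  --template-file flux-alert-rules.bicep \
  --parameters azureMonitorWorkspaceName=my-monitor-workspace alertReceiverEmailAddress=ops@contoso.com
```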
azure-arc | Move Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/move-regions.md | - Title: "Move Arc-enabled Kubernetes clusters between regions" Previously updated : 12/20/2022-- -description: "Manually move your Azure Arc-enabled Kubernetes and connected cluster resources between regions." ---# Move Arc-enabled Kubernetes clusters across Azure regions --In some circumstances, you may want to move your [Arc-enabled Kubernetes clusters](overview.md) to another region. For example, you might want to deploy features or services that are only available in specific regions, or you need to change regions due to internal policy and governance requirements or capacity planning considerations. --This article describes how to move Arc-enabled Kubernetes clusters and any connected cluster resources to a different Azure region. --## Prerequisites --- Ensure that Azure Arc-enabled Kubernetes resources (`Microsoft.Kubernetes/connectedClusters`) are [supported in the target region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc).-- Ensure that any Azure Arc-enabled Kubernetes configuration resources (`Microsoft.KubernetesConfiguration/SourceControlConfigurations`, `Microsoft.KubernetesConfiguration/Extensions`, `Microsoft.KubernetesConfiguration/FluxConfigurations`) are supported in the target region.-- Ensure that the Arc-enabled services you've deployed on top of the cluster are supported in the target region.-- Ensure you have network access to the API server of your underlying Kubernetes cluster.--## Prepare --Before you begin, it's important to understand what moving these resources involves. --The `connectedClusters` resource is the Azure Resource Manager representation of a Kubernetes cluster outside of Azure (such as on-premises, another cloud, or edge). The underlying infrastructure lies in your environment, and Azure Arc provides a representation of the cluster on Azure by installing agents on the cluster. --Moving a connected cluster to a new region means deleting the ARM resource in the source region, cleaning up the agents on your cluster, and then connecting your cluster again in the target region. --Source control configurations, [Flux configurations](conceptual-gitops-flux2.md) and [extensions](conceptual-extensions.md) within the cluster are child resources of the connected cluster resource. To move these resources, you'll need to save details about the resources, then move the parent `connectedClusters` resource. After that, you can recreate the child resources in the target cluster resource. --## Move --1. Do a LIST to get all configuration resources in the source cluster (the cluster to be moved) and save the response body: -- - [Microsoft.KubernetesConfiguration/SourceControlConfigurations](/cli/azure/k8s-configuration?view=azure-cli-latest&preserve-view=true#az-k8sconfiguration-list) - - [Microsoft.KubernetesConfiguration/Extensions](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-list) - - [Microsoft.KubernetesConfiguration/FluxConfigurations](/cli/azure/k8s-configuration/flux?view=azure-cli-latest&preserve-view=true#az-k8s-configuration-flux-list) -- > [!NOTE] - > LIST/GET of configuration resources **do not** return `ConfigurationProtectedSettings`. For such cases, the only option is to save the original request body and reuse them while creating the resources in the new region. --1. 
[Delete](./move-regions.md#clean-up-source-resources) the previous Arc deployment from the underlying Kubernetes cluster. -1. With network access to the underlying Kubernetes cluster, run [this command](./quickstart-connect-cluster.md?tabs=azure-cli#connect-an-existing-kubernetes-cluster) to connect that cluster in the new region. -- > [!NOTE] - > The above command creates the cluster by default in the same location as its resource group. Use the `--location` parameter to explicitly provide the target region value. --1. [Verify](#verify) that the Arc connected cluster is successfully running in the new region. This is the target cluster. -1. Using the response body you saved, recreate each of the configuration resources obtained in the LIST command from the source cluster on the target cluster. --If you don't need to move the cluster, but want to move configuration resources to an Arc-enabled Kubernetes cluster in a different region, do the following: --1. Do a LIST to get all configuration resources in the source cluster as noted above, and save the response body. -1. Delete the resources from the source cluster. -1. In the target cluster, recreate each of the configuration resources obtained in the LIST command from the source cluster. --## Verify --1. Run `az connectedk8s show -n <connected-cluster-name> -g <resource-group>` and ensure the `connectivityStatus` value is `Connected`. -1. Run [this command](./quickstart-connect-cluster.md?tabs=azure-cli#view-azure-arc-agents-for-kubernetes) to verify all Arc agents are successfully deployed on the underlying cluster. -1. Do a LIST of all configuration resources in the target cluster. This should match the original LIST response from the source cluster. --## Clean up source resources --With network access to the underlying Kubernetes cluster, run [this command](./quickstart-connect-cluster.md?tabs=azure-cli#clean-up-resources) to delete the Arc connected cluster. This command deletes the Azure Arc-enabled Kubernetes cluster resource, any associated configuration resources, and any agents running on the cluster. --If you need to delete individual configuration resources in the source cluster without deleting the cluster resource, you can delete these resources individually: --- [Microsoft.KubernetesConfiguration/SourceControlConfigurations](/cli/azure/k8s-configuration?view=azure-cli-latest&preserve-view=true#az-k8s-configuration-delete)-- [Microsoft.KubernetesConfiguration/Extensions](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-delete)-- [Microsoft.KubernetesConfiguration/FluxConfigurations](/cli/azure/k8s-configuration/flux?view=azure-cli-latest&preserve-view=true#az-k8s-configuration-flux-delete) |
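As a sketch of the LIST-and-save step above (cluster and resource group names are placeholders, and as noted, `ConfigurationProtectedSettings` won't be included in the output), you might capture the responses to files before cleaning up the source cluster:

```azurecli
# Save the source cluster's configuration resources so they can be recreated on the target cluster.
az k8s-configuration flux list --cluster-name SourceCluster --resource-group SourceRG --cluster-type connectedClusters > flux-configurations.json
az k8s-extension list --cluster-name SourceCluster --resource-group SourceRG --cluster-type connectedClusters > extensions.json
```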
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/network-requirements.md | - Title: Azure Arc-enabled Kubernetes network requirements -description: Learn about the networking requirements to connect Kubernetes clusters to Azure Arc. Previously updated : 06/11/2024-----# Azure Arc-enabled Kubernetes network requirements --This topic describes the networking requirements for connecting a Kubernetes cluster to Azure Arc and supporting various Arc-enabled Kubernetes scenarios. --## Details ----## Additional endpoints --Depending on your scenario, you may need connectivity to other URLs, such as those used by the Azure portal, management tools, or other Azure services. In particular, review these lists to ensure that you allow connectivity to any necessary endpoints: --- [Azure portal URLs](../../azure-portal/azure-portal-safelist-urls.md)-- [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)--For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements](../network-requirements-consolidated.md). --## Next steps --- Understand [system requirements for Arc-enabled Kubernetes](system-requirements.md).-- Use our [quickstart](quickstart-connect-cluster.md) to connect your cluster.-- Review [frequently asked questions](faq.md) about Arc-enabled Kubernetes. |
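If outbound connectivity is in doubt, newer versions of the `connectedk8s` CLI extension include a diagnostics command; a hedged example, assuming the command is available in your extension version:

```azurecli
# Runs diagnostic checks, including outbound network connectivity, against a connected cluster.
az connectedk8s troubleshoot --name AzureArcTest1 --resource-group AzureArcTest
```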
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md | - Title: "Overview of Azure Arc-enabled Kubernetes" Previously updated : 08/08/2023- -description: "Azure Arc-enabled Kubernetes allows you to attach Kubernetes clusters running anywhere so that you can manage and configure them in Azure." ---# What is Azure Arc-enabled Kubernetes? --Azure Arc-enabled Kubernetes allows you to attach Kubernetes clusters running anywhere so that you can manage and configure them in Azure. By managing all of your Kubernetes resources in a single control plane, you can enable a more consistent development and operation experience to run cloud-native apps anywhere and on any Kubernetes platform. --When the [Azure Arc agents are deployed to the cluster](quickstart-connect-cluster.md), an outbound connection to Azure is initiated, using industry-standard SSL to secure data in transit. --Once clusters are connected to Azure, they're represented as their own resources in Azure Resource Manager, and they can be organized using resource groups and tagging. --## Supported Kubernetes distributions --Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. This includes clusters running on other public cloud providers (such as GCP or AWS) and clusters running on your on-premises data center (such as VMware vSphere or Azure Stack HCI). --The Azure Arc team has worked with key industry partners to [validate conformance of their Kubernetes distributions with Azure Arc-enabled Kubernetes](./validation-program.md). --## Scenarios and enhanced functionality --Once your Kubernetes clusters are connected to Azure, at scale you can: --* View all [connected Kubernetes clusters](quickstart-connect-cluster.md) running outside of Azure for inventory, grouping, and tagging, along with Azure Kubernetes Service (AKS) clusters. --* Configure clusters and deploy applications using [GitOps-based configuration management](tutorial-use-gitops-connected-cluster.md). --* View and monitor your clusters using [Azure Monitor for containers](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?toc=/azure/azure-arc/kubernetes/toc.json). --* Enforce threat protection using [Microsoft Defender for Kubernetes](/azure/defender-for-cloud/defender-for-kubernetes-azure-arc?toc=/azure/azure-arc/kubernetes/toc.json). --* Ensure governance through applying policies with [Azure Policy for Kubernetes](../../governance/policy/concepts/policy-for-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json). --* Grant access and [connect](cluster-connect.md) to your Kubernetes clusters from anywhere, and manage access by using [Azure role-based access control (RBAC)](azure-rbac.md) on your cluster. --* Deploy machine learning workloads using [Azure Machine Learning for Kubernetes clusters](/azure/machine-learning/how-to-attach-kubernetes-anywhere?toc=/azure/azure-arc/kubernetes/toc.json). --* Deploy services that allow you to take advantage of specific hardware, comply with data residency requirements, or enable new scenarios. 
Examples of services include: - * [Azure Arc-enabled data services](../dat) - * [Azure Machine Learning for Kubernetes clusters](/azure/machine-learning/how-to-attach-kubernetes-anywhere?toc=/azure/azure-arc/kubernetes/toc.json) - * [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) - * [App Services on Azure Arc](../../app-service/overview-arc-integration.md) - - [Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) - * Deploy and manage Kubernetes applications targeted for Azure Arc-Enabled Kubernetes clusters from Azure Marketplace. - - [!INCLUDE [azure-lighthouse-supported-service](~/reusable-content/ce-skilling/azure/includes/azure-lighthouse-supported-service.md)] -- ## Next steps --* Learn about best practices and design patterns through the [Cloud Adoption Framework for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-kubernetes/eslz-arc-kubernetes-identity-access-management). -* Try out Arc-enabled Kubernetes without provisioning a full environment by using the [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_k8s). -* [Connect an existing Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md). |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | - Title: Built-in policy definitions for Azure Arc-enabled Kubernetes -description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/06/2024--# ----# Azure Policy built-in definitions for Azure Arc-enabled Kubernetes --This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy -definitions for Azure Arc-enabled Kubernetes. For additional Azure Policy built-ins for other -services, see -[Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md). --The name of each built-in policy definition links to the policy definition in the Azure portal. Use -the link in the **Version** column to view the source on the -[Azure Policy GitHub repo](https://github.com/Azure/azure-policy). --## Azure Arc-enabled Kubernetes ---## Next steps --- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).-- Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../../governance/policy/concepts/effects.md). |
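Once you've chosen a built-in definition from the index, assigning it works the same as for any other Azure Policy definition. A minimal sketch, with the definition name and scope as placeholders:

```azurecli
# Assign a built-in policy definition at resource group scope.
az policy assignment create \
  --name arc-k8s-policy-assignment \
  --policy "<built-in-definition-name-or-id>" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```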
azure-arc | Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md | - Title: Use private connectivity for Azure Arc-enabled Kubernetes clusters with private link (preview) Previously updated : 09/21/2022- -description: With Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to use a single private endpoint. ----# Use private connectivity for Arc-enabled Kubernetes clusters with private link (preview) --[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure services to your virtual network using private endpoints. This means you can connect your on-premises Kubernetes clusters with Azure Arc and send all traffic over an Azure ExpressRoute or site-to-site VPN connection instead of using public networks. In Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to communicate with their Azure Arc resources using a single private endpoint. --This document covers when to use and how to set up Azure Arc Private Link (preview). --> [!IMPORTANT] -> The Azure Arc Private Link feature is currently in PREVIEW in all regions where Azure Arc-enabled Kubernetes is present, except South East Asia. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Advantages --With Private Link you can: --* Connect privately to Azure Arc without opening up any public network access. -* Ensure data from the Arc-enabled Kubernetes cluster is only accessed through authorized private networks. -* Prevent data exfiltration from your private networks by defining specific Azure Arc-enabled Kubernetes clusters and other Azure resources, such as Azure Monitor, that connect through your private endpoint. -* Securely connect your private on-premises network to Azure Arc using ExpressRoute and Private Link. -* Keep all traffic inside the Microsoft Azure backbone network. --For more information, see [Key benefits of Azure Private Link](../../private-link/private-link-overview.md#key-benefits). --## How it works --Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled Kubernetes clusters. When you enable any of the supported cluster extensions, such as Azure Monitor, a connection to other Azure resources may be required. For example, in the case of Azure Monitor, the logs collected from the cluster are sent to a Log Analytics workspace. --Connectivity from an Arc-enabled Kubernetes cluster to these other Azure resources requires configuring Private Link for each service. For an example, see [Private Link for Azure Monitor](/azure/azure-monitor/logs/private-link-security). --## Current limitations --Consider these current limitations when planning your Private Link setup. --* You can associate at most one Azure Arc Private Link Scope with a virtual network. -* An Azure Arc-enabled Kubernetes cluster can only connect to one Azure Arc Private Link Scope. -* All on-premises Kubernetes clusters need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. 
For more information, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). The Azure Arc-enabled Kubernetes cluster, Azure Arc Private Link Scope, and virtual network must be in the same Azure region. The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled Kubernetes cluster. -* Traffic to Microsoft Entra ID, Azure Resource Manager and Microsoft Container Registry service tags must be allowed through your on-premises network firewall during the preview. -* Other Azure services that you use, such as Azure Monitor, require their own private endpoints in your virtual network. -- > [!NOTE] - > The [Cluster Connect](conceptual-cluster-connect.md) (and hence the [Custom location](custom-locations.md)) feature is not supported on Azure Arc-enabled Kubernetes clusters with private connectivity enabled. This is planned and will be added later. Network connectivity using private links is also not yet supported for Azure Arc services that rely on these features, such as Azure Arc-enabled data services and Azure Arc-enabled App services. Refer to the section below for a list of [cluster extensions or Azure Arc services that support network connectivity through private links](#cluster-extensions-that-support-network-connectivity-through-private-links). --## Cluster extensions that support network connectivity through private links --On Azure Arc-enabled Kubernetes clusters configured with private links, the following extensions support end-to-end connectivity through private links. Refer to the guidance linked to each cluster extension for additional configuration steps and details on support for private links. --* [Azure GitOps](conceptual-gitops-flux2.md) -* [Azure Monitor](/azure/azure-monitor/logs/private-link-security) --## Planning your Private Link setup --To connect your Kubernetes cluster to Azure Arc over a private link, you need to configure your network to accomplish the following: --1. Establish a connection between your on-premises network and an Azure virtual network using a [site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md) or [ExpressRoute](../../expressroute/expressroute-howto-linkvnet-arm.md) circuit. -1. Deploy an Azure Arc Private Link Scope, which controls which Kubernetes clusters can communicate with Azure Arc over private endpoints, and associate it with your Azure virtual network using a private endpoint. -1. Update the DNS configuration on your local network to resolve the private endpoint addresses. -1. Configure your local firewall to allow access to Microsoft Entra ID, Azure Resource Manager and Microsoft Container Registry. -1. Associate the Azure Arc-enabled Kubernetes clusters with the Azure Arc Private Link Scope. -1. Optionally, deploy private endpoints for other Azure services your Azure Arc-enabled Kubernetes cluster is managed by, such as Azure Monitor. -The rest of this document assumes you have already set up your ExpressRoute circuit or site-to-site VPN connection. --## Network configuration --Azure Arc-enabled Kubernetes integrates with several Azure services to bring cloud management and governance to your hybrid Kubernetes clusters. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Microsoft Entra ID and Azure Resource Manager over the internet until these services offer private endpoints. 
You also need to allow access to Microsoft Container Registry (and AzureFrontDoor.FirstParty as a precursor for Microsoft Container Registry) to pull images & Helm charts to enable services like Azure Monitor, as well as for initial setup of Azure Arc agents on the Kubernetes clusters. --There are two ways you can achieve this: --* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID, Azure Resource Manager, Azure Front Door and Microsoft Container Registry using [service tags](../../virtual-network/service-tags-overview.md). The NSG rules should look like the following: -- | Setting | Microsoft Entra ID rule | Azure Resource Manager rule | AzureFrontDoorFirstParty rule | Microsoft Container Registry rule | - |-|||| - | Source | Virtual Network | Virtual Network | Virtual Network | Virtual Network - | Source Port ranges | * | * | * | * - | Destination | Service Tag | Service Tag | Service Tag | Service Tag - | Destination service tag | AzureActiveDirectory | AzureResourceManager | AzureFrontDoor.FirstParty | MicrosoftContainerRegistry - | Destination port ranges | 443 | 443 | 443 | 443 - | Protocol | TCP | TCP | TCP | TCP - | Action | Allow | Allow | Allow (Both inbound and outbound) | Allow - | Priority | 150 (must be lower than any rules that block internet access) | 151 (must be lower than any rules that block internet access) | 152 (must be lower than any rules that block internet access) | 153 (must be lower than any rules that block internet access) | - | Name | AllowAADOutboundAccess | AllowAzOutboundAccess | AllowAzureFrontDoorFirstPartyAccess | AllowMCROutboundAccess --* Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID, Azure Resource Manager, and Microsoft Container Registry, and inbound & outbound access to AzureFrontDoor.FirstParty using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Microsoft Entra ID, Azure Resource Manager, AzureFrontDoor.FirstParty, and Microsoft Container Registry and is updated monthly to reflect any changes. Microsoft Entra service tag is AzureActiveDirectory, Azure Resource Manager's service tag is AzureResourceManager, Microsoft Container Registry's service tag is MicrosoftContainerRegistry, and Azure Front Door's service tag is AzureFrontDoor.FirstParty. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules. --## Create an Azure Arc Private Link Scope --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Go to **Create a resource** in the Azure portal, then search for Azure Arc Private Link Scope. Or you can go directly to the [Azure Arc Private Link Scope page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2FprivateLinkScopes) in the portal. --1. Select **Create**. -1. Select a subscription and resource group. During the preview, your virtual network and Azure Arc-enabled Kubernetes clusters must be in the same subscription as the Azure Arc Private Link Scope. -1. Give the Azure Arc Private Link Scope a name. -1. You can optionally require every Arc-enabled Kubernetes cluster associated with this Azure Arc Private Link Scope to send data to the service through the private endpoint. 
If you select **Enable public network access**, Kubernetes clusters associated with this Azure Arc Private Link Scope can communicate with the service over both private and public networks. You can change this setting after creating the scope as needed. -1. Select **Review + Create**. -- :::image type="content" source="media/private-link/create-private-link-scope.png" alt-text="Screenshot of the Azure Arc Private Link Scope creation screen in the Azure portal."::: --1. After the validation completes, select **Create**. --### Create a private endpoint --Once your Azure Arc Private Link Scope is created, you need to connect it with one or more virtual networks using a private endpoint. The private endpoint exposes access to the Azure Arc services on a private IP in your virtual network address space. --The Private Endpoint on your virtual network allows it to reach Azure Arc-enabled Kubernetes cluster endpoints through private IPs from your network's pool, instead of using the public IPs of these endpoints. That allows you to keep using your Azure Arc-enabled Kubernetes clusters without opening your VNet to unrequested outbound traffic. Traffic from the Private Endpoint to your resources will go through Microsoft Azure, and is not routed to public networks. --1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Add** to start the endpoint creation process. You can also approve connections that were started in the Private Link center by selecting them, then selecting **Approve**. -- :::image type="content" source="media/private-link/create-private-endpoint.png" alt-text="Screenshot of the Private Endpoint connections screen in the Azure portal."::: --1. Pick the subscription, resource group, and name of the endpoint, and the region you want to use. This must be the same region as your virtual network. -1. Select **Next: Resource**. -1. On the **Resource** page, perform the following: - 1. Select the subscription that contains your Azure Arc Private Link Scope resource. - 1. For **Resource type**, choose Microsoft.HybridCompute/privateLinkScopes. - 1. From the **Resource** drop-down, choose the Azure Arc Private Link Scope that you created earlier. - 1. Select **Next: Configuration**. -1. On the **Configuration** page, perform the following: - 1. Choose the virtual network and subnet from which you want to connect to Azure Arc-enabled Kubernetes clusters. - 1. For **Integrate with private DNS zone**, select **Yes**. A new Private DNS Zone will be created. The actual DNS zones may be different from what is shown in the screenshot below. -- :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal."::: -- > [!NOTE] - > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link, including this private endpoint and the Private Scope configuration. Next, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Arc-enabled Kubernetes clusters. - 1. Select **Review + create**. - 1. Let validation pass. - 1. Select **Create**. 
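The same private endpoint can also be created with the Azure CLI. A hedged sketch follows; the `--group-id` value is an assumption, so list the private-link resources on the scope first to confirm which group IDs it exposes:

```azurecli
SCOPE_ID=/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HybridCompute/privateLinkScopes/<scope-name>

# Confirm which group IDs the Private Link Scope exposes.
az network private-link-resource list --id $SCOPE_ID -o table

# Create the private endpoint in the same region as the virtual network.
az network private-endpoint create \
  --name arc-private-endpoint \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id $SCOPE_ID \
  --group-id hybridcompute \
  --connection-name arc-pls-connection
```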
--## Configure on-premises DNS forwarding --Your on-premises Kubernetes clusters need to be able to resolve the private link DNS records to the private endpoint IP addresses. How you configure this depends on whether you are using Azure private DNS zones to maintain DNS records or using your own DNS server on-premises, along with how many clusters you are configuring. --### DNS configuration using Azure-integrated private DNS zones --If you set up private DNS zones for Azure Arc-enabled Kubernetes clusters when creating the private endpoint, your on-premises Kubernetes clusters must be able to forward DNS queries to the built-in Azure DNS servers to resolve the private endpoint addresses correctly. You need a DNS forwarder in Azure (either a purpose-built VM or an Azure Firewall instance with DNS proxy enabled), after which you can configure your on-premises DNS server to forward queries to Azure to resolve private endpoint IP addresses. --The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder). --### Manual DNS server configuration --If you opted out of using Azure private DNS zones during private endpoint creation, you'll need to create the required DNS records in your on-premises DNS server. --1. Go to the Azure portal. -1. Navigate to the private endpoint resource associated with your virtual network and Azure Arc Private Link Scope. -1. From the left-hand pane, select **DNS configuration** to see a list of the DNS records and corresponding IP addresses you'll need to set up on your DNS server. The FQDNs and IP addresses will change based on the region you selected for your private endpoint and the available IP addresses in your subnet. -- :::image type="content" source="media/private-link/update-dns-configuration.png" alt-text="Screenshot showing manual DNS server configuration in the Azure portal."::: --1. Follow the guidance from your DNS server vendor to add the necessary DNS zones and A records to match the table in the portal. Ensure that you select a DNS server that is appropriately scoped for your network. Every Kubernetes cluster that uses this DNS server now resolves the private endpoint IP addresses and must be associated with the Azure Arc Private Link Scope, or the connection will be refused. --## Configure private links --> [!NOTE] -> Configuring private links for Azure Arc-enabled Kubernetes clusters is supported starting from version 1.3.0 of the `connectedk8s` CLI extension, but requires Azure CLI version greater than 2.3.0. If you use a version greater than 1.3.0 for the `connectedk8s` CLI extension, we have introduced validations to check and successfully connect the cluster to Azure Arc only if you're running Azure CLI version greater than 2.3.0. --You can configure private links for an existing Azure Arc-enabled Kubernetes cluster or when onboarding a Kubernetes cluster to Azure Arc for the first time using the command below: --```azurecli -az connectedk8s connect -g <resource-group-name> -n <connected-cluster-name> -l <location> --enable-private-link true --private-link-scope-resource-id <pls-arm-id> -``` --| Parameter name | Description | -| -- | -- | -| --enable-private-link |Property to enable/disable private links feature. Set it to "True" to enable connectivity with private links. | -| --private-link-scope-resource-id | ID of the private link scope resource created earlier. 
For example: /subscriptions//resourceGroups//providers/Microsoft.HybridCompute/privateLinkScopes/ | --For Azure Arc-enabled Kubernetes clusters that were set up prior to configuring the Azure Arc private link scope, you can configure private links through the Azure portal using the following steps: --1. In the Azure portal, navigate to your Azure Arc Private Link Scope resource. -1. From the left pane, select **Azure Arc resources** and then **+ Add**. -1. Select the Kubernetes clusters in the list that you want to associate with the Private Link Scope, and then choose **Select** to save your changes. -- > [!NOTE] - > The list only shows Azure Arc-enabled Kubernetes clusters that are within the same subscription and region as your Private Link Scope. -- :::image type="content" source="media/private-link/select-clusters.png" alt-text="Screenshot of the list of Kubernetes clusters for the Azure Arc Private Link Scope." lightbox="media/private-link/select-clusters.png"::: --## Troubleshooting --If you run into problems, the following suggestions may help: --* Check your on-premises DNS server(s) to verify it is either forwarding to Azure DNS or is configured with appropriate A records in your private link zone. These lookup commands should return private IP addresses in your Azure virtual network. If they resolve public IP addresses, double check your machine or server and network's DNS configuration. -- ```console - nslookup gbl.his.arc.azure.com - nslookup agentserviceapi.guestconfiguration.azure.com - nslookup dp.kubernetesconfiguration.azure.com - ``` --* If you are having trouble onboarding your Kubernetes cluster, confirm that you've added the Microsoft Entra ID, Azure Resource Manager, AzureFrontDoor.FirstParty and Microsoft Container Registry service tags to your local network firewall. --## Next steps --* Learn more about [Azure Private Endpoint](../../private-link/private-link-overview.md). -* Learn how to [troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md). -* Learn how to [configure Private Link for Azure Monitor](/azure/azure-monitor/logs/private-link-security). |
azure-arc | Quickstart Connect Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md | - Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" -description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. - Previously updated : 06/27/2023----# Quickstart: Connect an existing Kubernetes cluster to Azure Arc --Get started with Azure Arc-enabled Kubernetes by using Azure CLI or Azure PowerShell to connect an existing Kubernetes cluster to Azure Arc. --For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enabled Kubernetes agent overview](./conceptual-agent-overview.md). To try things out in a sample/practice experience, visit the [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_k8s). --## Prerequisites --### [Azure CLI](#tab/azure-cli) --> [!IMPORTANT] -> In addition to these prerequisites, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md). -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* A basic understanding of [Kubernetes core concepts](/azure/aks/concepts-clusters-workloads). -* An [identity (user or service principal)](system-requirements.md#azure-ad-identity-requirements) which can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli) and connect your cluster to Azure Arc. -* The latest version of [Azure CLI](/cli/azure/install-azure-cli). -* The latest version of **connectedk8s** Azure CLI extension, installed by running the following command: -- ```azurecli - az extension add --name connectedk8s - ``` --* An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options: - * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) - * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) - * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html) -- >[!NOTE] - > The cluster needs to have at least one node of operating system and architecture type `linux/amd64` and/or `linux/arm64`. See [Cluster requirements](system-requirements.md#cluster-requirements) for more about ARM64 scenarios. --* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU. -* A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster. To know more about what a kubeconfig file is and how to set context to point to your cluster, please refer to this [article](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). --### [Azure PowerShell](#tab/azure-powershell) --> [!IMPORTANT] -> In addition to these prerequisites, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md) -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* A basic understanding of [Kubernetes core concepts](/azure/aks/concepts-clusters-workloads). 
-* An [identity (user or service principal)](system-requirements.md#azure-ad-identity-requirements) which can be used to [log in to Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc. -* [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell) -* The **Az.ConnectedKubernetes** PowerShell module, installed by running the following command: -- ```azurepowershell-interactive - Install-Module -Name Az.ConnectedKubernetes - ``` --* An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options: - * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) - * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) - * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html) -- >[!NOTE] - > The cluster needs to have at least one node of operating system and architecture type `linux/amd64` and/or `linux/arm64`. See [Cluster requirements](system-requirements.md#cluster-requirements) for more about ARM64 scenarios. --* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU. -* A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster. ----## Register providers for Azure Arc-enabled Kubernetes --### [Azure CLI](#tab/azure-cli) --1. Enter the following commands: -- ```azurecli - az provider register --namespace Microsoft.Kubernetes - az provider register --namespace Microsoft.KubernetesConfiguration - az provider register --namespace Microsoft.ExtendedLocation - ``` --1. Monitor the registration process. Registration may take up to 10 minutes. -- ```azurecli - az provider show -n Microsoft.Kubernetes -o table - az provider show -n Microsoft.KubernetesConfiguration -o table - az provider show -n Microsoft.ExtendedLocation -o table - ``` -- Once registered, you should see the `RegistrationState` state for these namespaces change to `Registered`. --### [Azure PowerShell](#tab/azure-powershell) --1. Enter the following commands: -- ```azurepowershell - Register-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes - Register-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration - Register-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation - ``` --1. Monitor the registration process. Registration may take up to 10 minutes. -- ```azurepowershell - Get-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes - Get-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration - Get-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation - ``` -- Once registered, you should see the `RegistrationState` state for these namespaces change to `Registered`. 
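If you're scripting the onboarding, you may prefer to wait for registration to finish before moving on. Here's a minimal bash sketch (an assumption on my part, using the same three namespaces registered above) that polls until each provider reports `Registered`:

```bash
# Poll each resource provider until registrationState reports 'Registered' (can take up to ~10 minutes).
for ns in Microsoft.Kubernetes Microsoft.KubernetesConfiguration Microsoft.ExtendedLocation; do
  until [ "$(az provider show --namespace "$ns" --query registrationState -o tsv)" = "Registered" ]; do
    echo "Waiting for $ns to register..."
    sleep 30
  done
  echo "$ns is registered."
done
```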
----## Create a resource group --Run the following command: --### [Azure CLI](#tab/azure-cli) --```azurecli -az group create --name AzureArcTest --location EastUS --output table -``` --Output: --```output -Location Name -- -eastus AzureArcTest -``` --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -New-AzResourceGroup -Name AzureArcTest -Location EastUS -``` --Output: --```output -ResourceGroupName : AzureArcTest -Location : eastus -ProvisioningState : Succeeded -Tags : -ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureArcTest -``` ----## Connect an existing Kubernetes cluster --Run the following command to connect your cluster. This command deploys the Azure Arc agents to the cluster and installs Helm v. 3.6.3 to the `.azure` folder of the deployment machine. This Helm 3 installation is only used for Azure Arc, and it doesn't remove or change any previously installed versions of Helm on the machine. --In this example, the cluster's name is AzureArcTest1. --### [Azure CLI](#tab/azure-cli) --```azurecli -az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest -``` --Output: --```output -Helm release deployment succeeded -- { - "aadProfile": { - "clientAppId": "", - "serverAppId": "", - "tenantId": "" - }, - "agentPublicKeyCertificate": "xxxxxxxxxxxxxxxxxxx", - "agentVersion": null, - "connectivityStatus": "Connecting", - "distribution": "gke", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1", - "identity": { - "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "type": "SystemAssigned" - }, - "infrastructure": "gcp", - "kubernetesVersion": null, - "lastConnectivityTime": null, - "location": "eastus", - "managedIdentityCertificateExpirationTime": null, - "name": "AzureArcTest1", - "offering": null, - "provisioningState": "Succeeded", - "resourceGroup": "AzureArcTest", - "tags": {}, - "totalCoreCount": null, - "totalNodeCount": null, - "type": "Microsoft.Kubernetes/connectedClusters" - } -``` --> [!TIP] -> The above command without the location parameter specified creates the Azure Arc-enabled Kubernetes resource in the same location as the resource group. To create the Azure Arc-enabled Kubernetes resource in a different location, specify either `--location <region>` or `-l <region>` when running the `az connectedk8s connect` command. --> [!IMPORTANT] -> If deployment fails due to a timeout error, see our [troubleshooting guide](troubleshooting.md#helm-timeout-error) for details on how to resolve this issue. --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -New-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName AzureArcTest -Location eastus -``` --Output: --```output -Location Name Type - --eastus AzureArcTest1 microsoft.kubernetes/connectedclusters -``` ----## Connect using an outbound proxy server --If your cluster is behind an outbound proxy server, requests must be routed via the outbound proxy server. --### [Azure CLI](#tab/azure-cli) --1. On the deployment machine, set the environment variables needed for Azure CLI to use the outbound proxy server: -- ```bash - export HTTP_PROXY=<proxy-server-ip-address>:<port> - export HTTPS_PROXY=<proxy-server-ip-address>:<port> - export NO_PROXY=<cluster-apiserver-ip-address>:<port> - ``` --2. 
On the Kubernetes cluster, run the connect command with the `proxy-https` and `proxy-http` parameters specified. If your proxy server is set up with both HTTP and HTTPS, be sure to use `--proxy-http` for the HTTP proxy and `--proxy-https` for the HTTPS proxy. If your proxy server only uses HTTP, you can use that value for both parameters. -- ```azurecli - az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file> - ``` --> [!NOTE] -> -> * Some network requests, such as those involving in-cluster service-to-service communication, need to be separated from the traffic that is routed via the proxy server for outbound communication. The `--proxy-skip-range` parameter can be used to specify CIDR ranges and endpoints as a comma-separated list, so that communication from the agents to these endpoints doesn't go via the outbound proxy. At a minimum, the CIDR range of the services in the cluster should be specified as the value for this parameter. For example, let's say `kubectl get svc -A` returns a list of services where all the services have ClusterIP values in the range `10.0.0.0/16`. Then the value to specify for `--proxy-skip-range` is `10.0.0.0/16,kubernetes.default.svc,.svc.cluster.local,.svc` (see the sketch after this section). -> * `--proxy-http`, `--proxy-https`, and `--proxy-skip-range` are expected for most outbound proxy environments. `--proxy-cert` is *only* required if you need to inject trusted certificates expected by the proxy into the trusted certificate store of agent pods. -> * The outbound proxy has to be configured to allow websocket connections. --### [Azure PowerShell](#tab/azure-powershell) --1. On the deployment machine, set the environment variables needed for Azure PowerShell to use the outbound proxy server: -- ```powershell - $Env:HTTP_PROXY = "<proxy-server-ip-address>:<port>" - $Env:HTTPS_PROXY = "<proxy-server-ip-address>:<port>" - $Env:NO_PROXY = "<cluster-apiserver-ip-address>:<port>" - ``` --2. On the Kubernetes cluster, run the connect command with the proxy parameter specified: -- ```azurepowershell - New-AzConnectedKubernetes -ClusterName <cluster-name> -ResourceGroupName <resource-group> -Location eastus -Proxy 'https://<proxy-server-ip-address>:<port>' - ``` ----For outbound proxy servers where only a trusted certificate needs to be provided, without the proxy server endpoint inputs, `az connectedk8s connect` can be run with just the `--proxy-cert` input specified. If multiple trusted certificates are expected, provide the combined certificate chain in a single file using the `--proxy-cert` parameter. --> [!NOTE] -> -> * `--custom-ca-cert` is an alias for `--proxy-cert`. The two parameters are interchangeable. Passing both in the same command honors the one passed last. --### [Azure CLI](#tab/azure-cli) --Run the connect command with the `--proxy-cert` parameter specified: --```azurecli -az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file> -``` --### [Azure PowerShell](#tab/azure-powershell) --The ability to pass in the proxy certificate only, without the proxy server endpoint details, isn't currently supported via PowerShell.
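As a worked example of the `--proxy-skip-range` guidance above, the following bash sketch assembles the skip list and passes it to the connect command. The proxy address, cluster name, and service CIDR are hypothetical placeholders; substitute values from your own environment.

```bash
# Hypothetical values for illustration. Derive the service CIDR from your own cluster, for example
# by inspecting the ClusterIP values returned by `kubectl get svc -A`.
SERVICE_CIDR="10.0.0.0/16"
SKIP_RANGE="${SERVICE_CIDR},kubernetes.default.svc,.svc.cluster.local,.svc"

az connectedk8s connect --name my-arc-cluster --resource-group my-arc-rg \
  --proxy-https https://192.0.2.10:3128 \
  --proxy-http http://192.0.2.10:3128 \
  --proxy-skip-range "$SKIP_RANGE"
```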
----## Verify cluster connection --Run the following command: --### [Azure CLI](#tab/azure-cli) --```azurecli -az connectedk8s list --resource-group AzureArcTest --output table -``` --Output: --```output -Name Location ResourceGroup -- - -AzureArcTest1 eastus AzureArcTest -``` --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Get-AzConnectedKubernetes -ResourceGroupName AzureArcTest -``` --Output: --```output -Location Name Type - --eastus AzureArcTest1 microsoft.kubernetes/connectedclusters -``` ----> [!NOTE] -> After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc-enabled Kubernetes resource in Azure portal. --> [!TIP] -> For help troubleshooting problems while connecting your cluster, see [Diagnose connection issues for Azure Arc-enabled Kubernetes clusters](diagnose-connection-issues.md). --## View Azure Arc agents for Kubernetes --Azure Arc-enabled Kubernetes deploys several agents into the `azure-arc` namespace. --1. View these deployments and pods using: -- ```bash - kubectl get deployments,pods -n azure-arc - ``` --1. Verify all pods are in a `Running` state. -- Output: -- ```output - NAME READY UP-TO-DATE AVAILABLE AGE - deployment.apps/cluster-metadata-operator 1/1 1 1 13d - deployment.apps/clusterconnect-agent 1/1 1 1 13d - deployment.apps/clusteridentityoperator 1/1 1 1 13d - deployment.apps/config-agent 1/1 1 1 13d - deployment.apps/controller-manager 1/1 1 1 13d - deployment.apps/extension-manager 1/1 1 1 13d - deployment.apps/flux-logs-agent 1/1 1 1 13d - deployment.apps/kube-aad-proxy 1/1 1 1 13d - deployment.apps/metrics-agent 1/1 1 1 13d - deployment.apps/resource-sync-agent 1/1 1 1 13d -- NAME READY STATUS RESTARTS AGE - pod/cluster-metadata-operator-9568b899c-2stjn 2/2 Running 0 13d - pod/clusterconnect-agent-576758886d-vggmv 3/3 Running 0 13d - pod/clusteridentityoperator-6f59466c87-mm96j 2/2 Running 0 13d - pod/config-agent-7cbd6cb89f-9fdnt 2/2 Running 0 13d - pod/controller-manager-df6d56db5-kxmfj 2/2 Running 0 13d - pod/extension-manager-58c94c5b89-c6q72 2/2 Running 0 13d - pod/flux-logs-agent-6db9687fcb-rmxww 1/1 Running 0 13d - pod/kube-aad-proxy-67b87b9f55-bthqv 2/2 Running 0 13d - pod/metrics-agent-575c565fd9-k5j2t 2/2 Running 0 13d - pod/resource-sync-agent-6bbd8bcd86-x5bk5 2/2 Running 0 13d - ``` --For more information about these agents, see [Azure Arc-enabled Kubernetes agent overview](conceptual-agent-overview.md). --## Clean up resources --### [Azure CLI](#tab/azure-cli) --You can delete the Azure Arc-enabled Kubernetes resource, any associated configuration resources, *and* any agents running on the cluster using Azure CLI using the following command: --```azurecli -az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest -``` --If the deletion process fails, use the following command to force deletion (adding `-y` if you want to bypass the confirmation prompt): --```azurecli -az connectedk8s delete -n AzureArcTest1 -g AzureArcTest --force -``` --This command can also be used if you experience issues when creating a new cluster deployment (due to previously created resources not being completely removed). -->[!NOTE] -> Deleting the Azure Arc-enabled Kubernetes resource using the Azure portal removes any associated configuration resources, but *does not* remove any agents running on the cluster. 
Best practice is to delete the Azure Arc-enabled Kubernetes resource using `az connectedk8s delete` rather than deleting the resource in the Azure portal. --### [Azure PowerShell](#tab/azure-powershell) --You can delete the Azure Arc-enabled Kubernetes resource, any associated configuration resources, *and* any agents running on the cluster using Azure PowerShell using the following command: --```azurepowershell -Remove-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName AzureArcTest -``` -->[!NOTE] -> Deleting the Azure Arc-enabled Kubernetes resource using the Azure portal removes any associated configuration resources, but *does not* remove any agents running on the cluster. Best practice is to delete the Azure Arc-enabled Kubernetes resource using `Remove-AzConnectedKubernetes` rather than deleting the resource in the Azure portal. ----## Next steps --* Learn how to [deploy configurations using GitOps with Flux v2](tutorial-use-gitops-flux2.md). -* [Troubleshoot common Azure Arc-enabled Kubernetes issues](troubleshooting.md). -* Experience Azure Arc-enabled Kubernetes automated scenarios with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_k8s). |
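To double-check that cleanup also removed the agents from the cluster itself, one quick (illustrative) check is to look for the `azure-arc` namespace once the delete command finishes:

```bash
# After `az connectedk8s delete` (or Remove-AzConnectedKubernetes) completes, the agent namespace
# should be gone; a "not found" error here indicates a clean removal.
kubectl get namespace azure-arc
kubectl get pods -n azure-arc
```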
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/release-notes.md | - Title: "What's new with Azure Arc-enabled Kubernetes" Previously updated : 07/23/2024- -description: "Learn about the latest releases of Arc-enabled Kubernetes." ---# What's new with Azure Arc-enabled Kubernetes --Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about recent releases of the [Azure Arc-enabled Kubernetes agents](conceptual-agent-overview.md). --When any of the Arc-enabled Kubernetes agents are updated, all of the agents in the `azure-arc` namespace are incremented with a new version number, so that the version numbers are consistent across agents. When a new version is released, all of the agents are upgraded together to the newest version (whether or not there are functionality changes in a given agent), unless you have [disabled automatic upgrades](agent-upgrade.md) for the cluster. --We generally recommend using the most recent versions of the agents. The [version support policy](agent-upgrade.md#version-support-policy) covers the most recent version and the two previous versions (N-2). --## Version 1.18.x (July 2024) --- Fixed `logCollector` pod restarts-- Updated to Microsoft Go v1.22.5-- Other bug fixes--## Version 1.17.x (June 2024) --- Upgraded to use [Microsoft Go 1.22 to be FIPS compliant](https://github.com/microsoft/go/blob/microsoft/main/eng/doc/fips/README.md#tls-with-fips-compliant-settings)--## Version 1.16.x (May 2024) --- Migrated to use Microsoft Go w/ OpenSSL and fixed some vulnerabilities--## Version 1.15.3 (March 2024) --- Various enhancements and bug fixes--## Version 1.14.5 (December 2023) --- Migrated auto-upgrade to use latest Helm release--## Version 1.13.4 (October 2023) --- Various enhancements and bug fixes--## Version 1.13.1 (September 2023) --- Various enhancements and bug fixes--## Version 1.12.5 (July 2023) --- Alpine base image powering our Arc agent containers has been updated from 3.7.12 to 3.18.0--## Version 1.11.7 (May 2023) --- Updates to enable users that belong to more than 200 groups in cluster connect scenarios--## Version 1.11.3 (April 2023) --- Updates to base image of Arc-enabled Kubernetes agents to address security CVE--## Next steps --- Learn how to [enable or disable automatic agent upgrades](agent-upgrade.md).-- Learn how to [connect a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md). |
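For reference, disabling automatic upgrades (or manually upgrading the agents afterward) is handled by the `connectedk8s` CLI extension. The commands below are a hedged sketch with placeholder names; see the linked agent upgrade article for the authoritative syntax.

```bash
# Sketch: turn automatic agent upgrades off for a connected cluster.
az connectedk8s update --name <cluster-name> --resource-group <resource-group> --auto-upgrade false

# Sketch: manually upgrade the agents to a specific version while auto-upgrade is disabled.
az connectedk8s upgrade --name <cluster-name> --resource-group <resource-group> --agent-version <version>
```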
azure-arc | Resource Graph Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md | - Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes -description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 08/09/2023-----# Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes --This page is a collection of [Azure Resource Graph](../../governance/resource-graph/overview.md) sample queries for Azure Arc-enabled Kubernetes. --## Sample queries ---## Next steps --- Learn more about the [query language](../../governance/resource-graph/concepts/query-language.md).-- Learn more about how to [explore resources](../../governance/resource-graph/concepts/explore-resources.md). |
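Because the sample queries themselves come from an include file, here's one illustrative query (an assumption, not taken from the original article) that lists Arc-enabled Kubernetes clusters. It can be run with the Azure CLI `resource-graph` extension:

```bash
# Requires the Resource Graph CLI extension: az extension add --name resource-graph
az graph query -q "resources | where type =~ 'microsoft.kubernetes/connectedclusters' | project name, location, resourceGroup" --output table
```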
azure-arc | System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/system-requirements.md | - Title: "Azure Arc-enabled Kubernetes system requirements" Previously updated : 08/28/2023-- -description: Learn about the system requirements to connect Kubernetes clusters to Azure Arc. ---# Azure Arc-enabled Kubernetes system requirements --This article describes the basic requirements for [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md), along with system requirement information related to various Arc-enabled Kubernetes scenarios. --## Cluster requirements --Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. This includes clusters running on other public cloud providers (such as GCP or AWS) and clusters running on your on-premises data center (such as VMware vSphere or Azure Stack HCI). --You must also have a [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster. --The cluster must have at least one node with operating system and architecture type `linux/amd64` and/or `linux/arm64`. --> [!IMPORTANT] -> Many Arc-enabled Kubernetes features and scenarios are supported on ARM64 nodes, such as [cluster connect](cluster-connect.md) and [viewing Kubernetes resources in the Azure portal](kubernetes-resource-view.md). However, if using Azure CLI to enable these scenarios, [Azure CLI must be installed](/cli/azure/install-azure-cli) and run from an AMD64 machine. Azure RBAC on Arc-enabled Kubernetes is currently not supported on ARM64 nodes. Please use [Kubernetes RBAC](identity-access-overview.md#kubernetes-rbac-authorization) for ARM64 nodes. -> -> Currently, Azure Arc-enabled Kubernetes [cluster extensions](conceptual-extensions.md) aren't supported on ARM64-based clusters, except for [Flux (GitOps)](conceptual-gitops-flux2.md). To [install and use other cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`. -## Compute and memory requirements --The Arc agents deployed on the cluster require: --- At least 850 MB of free memory-- Capacity to use approximately 7% of a single CPU--For a multi-node Kubernetes cluster environment, pods can get scheduled on different nodes. --## Management tool requirements --To connect a cluster to Azure Arc, you'll need to use either Azure CLI or Azure PowerShell. --For Azure CLI: --- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.-- Install the latest version of **connectedk8s** Azure CLI extension:-- ```azurecli - az extension add --name connectedk8s - ``` --For Azure PowerShell: --- Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell).-- Install the **Az.ConnectedKubernetes** PowerShell module:-- ```azurepowershell-interactive - Install-Module -Name Az.ConnectedKubernetes - ``` --> [!NOTE] -> When you deploy the Azure Arc agents to a cluster, Helm v. 3.6.3 will be installed in the `.azure` folder of the deployment machine. This [Helm 3](https://helm.sh/docs/) installation is only used for Azure Arc, and it doesn't remove or change any previously installed versions of Helm on the machine. 
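One way to confirm the node requirement described earlier (at least one `linux/amd64` or `linux/arm64` node) is to print each node's operating system and architecture with `kubectl`; this is a convenience sketch rather than part of the original article.

```bash
# List each node's OS and CPU architecture as reported by the kubelet.
kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.operatingSystem,ARCH:.status.nodeInfo.architecture
```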
--<a name='azure-ad-identity-requirements'></a> --## Microsoft Entra identity requirements --To connect your cluster to Azure Arc, you must have a Microsoft Entra identity (user or service principal) which can be used to log in to [Azure CLI](/cli/azure/authenticate-azure-cli) or [Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc. --This identity must have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`). If connecting the cluster to an existing resource group (rather than a new one created by this identity), the identity must have 'Read' permission for that resource group. --The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) can be used for this identity. This role is useful for at-scale onboarding, as it has only the granular permissions required to connect clusters to Azure Arc, and doesn't have permission to update, delete, or modify any other clusters or other Azure resources. --## Azure resource provider requirements --To use Azure Arc-enabled Kubernetes, the following [Azure resource providers](../../azure-resource-manager/management/resource-providers-and-types.md) must be registered in your subscription: --- **Microsoft.Kubernetes**-- **Microsoft.KubernetesConfiguration**-- **Microsoft.ExtendedLocation**--You can register the resource providers using the following commands: --Azure PowerShell: --```azurepowershell-interactive -Connect-AzAccount -Set-AzContext -SubscriptionId [subscription you want to onboard] -Register-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes -Register-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration -Register-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation -``` --Azure CLI: --```azurecli-interactive -az account set --subscription "{Your Subscription Name}" -az provider register --namespace Microsoft.Kubernetes -az provider register --namespace Microsoft.KubernetesConfiguration -az provider register --namespace Microsoft.ExtendedLocation -``` --You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). --## Network requirements --Be sure that you have connectivity to the [required endpoints for Azure Arc-enabled Kubernetes](network-requirements.md). --## Next steps --- Review the [network requirements for using Arc-enabled Kubernetes](system-requirements.md).-- Use our [quickstart](quickstart-connect-cluster.md) to connect your cluster. |
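To illustrate the identity requirement described above, the following sketch grants the built-in onboarding role to a service principal at resource group scope. All identifiers are placeholders, and your organization's role assignment approach may differ.

```bash
# Sketch: give an existing service principal just enough rights to onboard clusters into one resource group.
az role assignment create \
  --assignee <service-principal-app-id> \
  --role "Kubernetes Cluster - Azure Arc Onboarding" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```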
azure-arc | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md | - Title: "Troubleshoot platform issues for Azure Arc-enabled Kubernetes clusters" Previously updated : 12/15/2023-- -description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." ---# Troubleshoot platform issues for Azure Arc-enabled Kubernetes clusters --This document provides troubleshooting guides for issues with Azure Arc-enabled Kubernetes connectivity, permissions, and agents. It also provides troubleshooting guides for Azure GitOps, which can be used in either Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. --For help troubleshooting issues related to extensions, such as GitOps (Flux v2), Azure Monitor Container Insights, Open Service Mesh, see [Troubleshoot extension issues for Azure Arc-enabled Kubernetes clusters](extensions-troubleshooting.md). --## Azure CLI --Before using `az connectedk8s` or `az k8s-configuration` CLI commands, ensure that Azure CLI is set to work against the correct Azure subscription. --```azurecli -az account set --subscription 'subscriptionId' -az account show -``` --## Azure Arc agents --All agents for Azure Arc-enabled Kubernetes are deployed as pods in the `azure-arc` namespace. All pods should be running and passing their health checks. --First, verify the Azure Arc Helm Chart release: --```console -$ helm --namespace default status azure-arc -NAME: azure-arc -LAST DEPLOYED: Fri Apr 3 11:13:10 2020 -NAMESPACE: default -STATUS: deployed -REVISION: 5 -TEST SUITE: None -``` --If the Helm Chart release isn't found or missing, try [connecting the cluster to Azure Arc](./quickstart-connect-cluster.md) again. --If the Helm Chart release is present with `STATUS: deployed`, check the status of the agents using `kubectl`: --```console -$ kubectl -n azure-arc get deployments,pods -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/cluster-metadata-operator 1/1 1 1 3d19h -deployment.apps/clusterconnect-agent 1/1 1 1 3d19h -deployment.apps/clusteridentityoperator 1/1 1 1 3d19h -deployment.apps/config-agent 1/1 1 1 3d19h -deployment.apps/controller-manager 1/1 1 1 3d19h -deployment.apps/extension-events-collector 1/1 1 1 3d19h -deployment.apps/extension-manager 1/1 1 1 3d19h -deployment.apps/flux-logs-agent 1/1 1 1 3d19h -deployment.apps/kube-aad-proxy 1/1 1 1 3d19h -deployment.apps/metrics-agent 1/1 1 1 3d19h -deployment.apps/resource-sync-agent 1/1 1 1 3d19h --NAME READY STATUS RESTARTS AGE -pod/cluster-metadata-operator-74747b975-9phtz 2/2 Running 0 3d19h -pod/clusterconnect-agent-cf4c7849c-88fmf 3/3 Running 0 3d19h -pod/clusteridentityoperator-79bdfd945f-pt2rv 2/2 Running 0 3d19h -pod/config-agent-67bcb94b7c-d67t8 1/2 Running 0 3d19h -pod/controller-manager-559dd48b64-v6rmk 2/2 Running 0 3d19h -pod/extension-events-collector-85f4fbff69-55zmt 2/2 Running 0 3d19h -pod/extension-manager-7c7668446b-69gps 3/3 Running 0 3d19h -pod/flux-logs-agent-fc7c6c959-vgqvm 1/1 Running 0 3d19h -pod/kube-aad-proxy-84d668c44b-j457m 2/2 Running 0 3d19h -pod/metrics-agent-58fb8554df-5ll67 2/2 Running 0 3d19h -pod/resource-sync-agent-dbf5db848-c9lg8 2/2 Running 0 3d19h -``` --All pods should show `STATUS` as `Running` with either `3/3` or `2/2` under the `READY` column. Fetch logs and describe the pods returning an `Error` or `CrashLoopBackOff`. If any pods are stuck in `Pending` state, there might be insufficient resources on cluster nodes. 
[Scaling up your cluster](https://kubernetes.io/docs/tasks/administer-cluster/) can get these pods to transition to `Running` state. --## Resource provisioning failed/Service timeout error --If you see these errors, check [Azure status](https://azure.status.microsoft/en-us/status) to see if there are any active events impacting the status of the Azure Arc-enabled Kubernetes service. If so, wait until the service event has been resolved, then try onboarding again after [deleting the existing connected cluster resource](quickstart-connect-cluster.md#clean-up-resources). If there are no service events, and you continue to face issues while onboarding, [open a support ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can investigate the problem. --## Overage claims error --If you receive an overage claim, make sure that your service principal isn't part of more than 200 Microsoft Entra groups. If this is the case, you must create and use another service principal that isn't a member of more than 200 groups, or remove the original service principal from some of its groups and try again. --An overage claim may also occur if you have configured an outbound proxy environment without allowing the endpoint `https://<region>.obo.arc.azure.com:8084/` for outbound traffic. --If neither of these apply, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can look into the issue. --## Issues when connecting Kubernetes clusters to Azure Arc --Connecting clusters to Azure Arc requires access to an Azure subscription and `cluster-admin` access to a target cluster. If you can't reach the cluster, or if you have insufficient permissions, connecting the cluster to Azure Arc will fail. Make sure you've met all of the [prerequisites to connect a cluster](quickstart-connect-cluster.md#prerequisites). --> [!TIP] -> For a visual guide to troubleshooting connection issues, see [Diagnose connection issues for Arc-enabled Kubernetes clusters](diagnose-connection-issues.md). --### DNS resolution issues --Visit [Debugging DNS Resolution](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) for help resolving issues with DNS resolution on your cluster. --### Outbound network connectivity issues --Issues with outbound network connectivity from the cluster may arise for different reasons. First make sure all of the [network requirements](network-requirements.md) have been met. --If you encounter connectivity issues, and your cluster is behind an outbound proxy server, make sure you've passed proxy parameters during the onboarding of your cluster and that the proxy is configured correctly. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). --You may see an error similar to the following: --`An exception has occurred while trying to execute the cluster diagnostic checks in the cluster. Exception: Unable to pull cluster-diagnostic-checks helm chart from the registry 'mcr.microsoft.com/azurearck8s/helmchart/stable/clusterdiagnosticchecks:0.1.2': Error: failed to do request: Head "https://mcr.microsoft.com/v2/azurearck8s/helmchart/stable/clusterdiagnosticchecks/manifests/0.1.2": dial tcp xx.xx.xx.219:443: i/o timeout` --This error occurs when the `https://k8connecthelm.azureedge.net` endpoint is blocked. 
Be sure that your network allows connectivity to this endpoint and meets all of the other [networking requirements](network-requirements.md). --### Unable to retrieve MSI certificate --Problems retrieving the MSI certificate are usually due to network issues. Check to make sure all of the [network requirements](network-requirements.md) have been met, then try again. --### Insufficient cluster permissions --If the provided kubeconfig file doesn't have sufficient permissions to install the Azure Arc agents, the Azure CLI command returns an error: `Error: list: failed to list: secrets is forbidden: User "myuser" cannot list resource "secrets" in API group "" at the cluster scope` --To resolve this issue, ensure that the user connecting the cluster to Azure Arc has the `cluster-admin` role assigned. --### Unable to connect OpenShift cluster to Azure Arc --If `az connectedk8s connect` is timing out and failing when connecting an OpenShift cluster to Azure Arc: --1. Ensure that the OpenShift cluster meets the version prerequisites: 4.5.41+ or 4.6.35+ or 4.7.18+. --1. Before you run `az connectedk8s connect`, run this command on the cluster: -- ```console - oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa - ``` --### Installation timeouts --Connecting a Kubernetes cluster to Azure Arc-enabled Kubernetes requires installation of Azure Arc agents on the cluster. If the cluster is running over a slow internet connection, the container image pull for the agents may take longer than the Azure CLI timeouts allow. --### Helm timeout error --You may see the error `Unable to install helm release: Error: UPGRADE Failed: time out waiting for the condition`. To resolve this issue, try the following steps: --1. Run the following command: -- ```console - kubectl get pods -n azure-arc - ``` --2. Check if the `clusterconnect-agent` or the `config-agent` pods are showing `crashloopbackoff`, or if not all containers are running: -- ```output - NAME READY STATUS RESTARTS AGE - cluster-metadata-operator-664bc5f4d-chgkl 2/2 Running 0 4m14s - clusterconnect-agent-7cb8b565c7-wklsh 2/3 CrashLoopBackOff 0 1m15s - clusteridentityoperator-76d645d8bf-5qx5c 2/2 Running 0 4m15s - config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s - ``` --3. If the `azure-identity-certificate` isn't present, the system-assigned managed identity hasn't been installed. -- ```console - kubectl get secret -n azure-arc -o yaml | grep name: - ``` -- ```output - name: azure-identity-certificate - ``` -- To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue persists, it could be a problem with your proxy settings. In that case, try [connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). Also verify that all of the [network prerequisites](network-requirements.md) have been met. --4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires permission to mount the host path. --5. If the `kube-aad-proxy` pod is stuck in `ContainerCreating` state, check whether the kube-aad-proxy certificate has been downloaded onto the cluster.
-- ```console - kubectl get secret -n azure-arc -o yaml | grep name: - ``` -- ```output - name: kube-aad-proxy-certificate - ``` -- If the certificate is missing, [delete the deployment](quickstart-connect-cluster.md#clean-up-resources) and try onboarding again, using a different name for the cluster. If the problem continues, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). --### CryptoHash module error --When attempting to onboard Kubernetes clusters to the Azure Arc platform, the local environment (for example, your client console) may return the following error message: --```output -Cannot load native module 'Crypto.Hash._MD5' -``` --Sometimes, dependent modules fail to download successfully when adding the extensions `connectedk8s` and `k8s-configuration` through Azure CLI or Azure PowerShell. To fix this problem, manually remove and then add the extensions in the local environment. --To remove the extensions, use: --```azurecli -az extension remove --name connectedk8s -az extension remove --name k8s-configuration -``` --To add the extensions, use: --```azurecli -az extension add --name connectedk8s -az extension add --name k8s-configuration -``` --## Cluster connect issues --If your cluster is behind an outbound proxy or firewall, verify that websocket connections are enabled for `*.servicebus.windows.net`, which is required specifically for the [Cluster Connect](cluster-connect.md) feature. Additionally, make sure you're using the latest version of the `connectedk8s` Azure CLI extension if you're experiencing problems using cluster connect. --If the `clusterconnect-agent` and `kube-aad-proxy` pods are missing, then the cluster connect feature is likely disabled on the cluster. If so, `az connectedk8s proxy` will fail to establish a session with the cluster, and you may see an error reading `Cannot connect to the hybrid connection because no agent is connected in the target arc resource.` --To resolve this error, enable the cluster connect feature on your cluster: --```azurecli -az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $RESOURCE_GROUP -``` --For more information, see [Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters](cluster-connect.md). --## Enable custom locations using service principal --When connecting your cluster to Azure Arc or enabling custom locations on an existing cluster, you may see the following warning: --```console -Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation. -``` --This warning occurs when you use a service principal to log into Azure, and the service principal doesn't have the necessary permissions. To avoid this error, follow these steps: --1. Sign in to Azure CLI using your user account. Retrieve the Object ID of the Microsoft Entra application used by the Azure Arc service: -- ```azurecli - az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv - ``` --1. Sign in to Azure CLI using the service principal.
Use the `<objectId>` value from the previous step to enable custom locations on the cluster: -- * To enable custom locations when connecting the cluster to Arc, run `az connectedk8s connect -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId>` - * To enable custom locations on an existing Azure Arc-enabled Kubernetes cluster, run `az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations` --## Next steps --* Get a visual walkthrough of [how to diagnose connection issues](diagnose-connection-issues.md). -* View [troubleshooting tips related to cluster extensions](extensions-troubleshooting.md). |
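Putting the custom locations steps above together, a minimal bash sketch (cluster and resource group names are placeholders) looks like this:

```bash
# Step 1: while signed in with a user account, capture the object ID of the custom locations app.
OID=$(az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv)

# Step 2: after signing in with the service principal, pass the captured object ID when connecting,
# or when enabling features on an existing cluster.
az connectedk8s connect -n <cluster-name> -g <resource-group-name> --custom-locations-oid "$OID"
```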
azure-arc | Tutorial Akv Secrets Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md | - Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters -description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster - Previously updated : 06/11/2024----# Use the Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters --The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a [CSI volume](https://kubernetes-csi.github.io/docs/). For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets. --Capabilities of the Azure Key Vault Secrets Provider extension include: --- Mounts secrets/keys/certs to pod using a CSI Inline volume-- Supports pod portability with the SecretProviderClass CRD-- Supports Linux and Windows containers-- Supports sync with Kubernetes Secrets-- Supports auto rotation of secrets-- Extension components are deployed to availability zones, making them zone redundant--## Prerequisites --- A cluster with a supported Kubernetes distribution that has already been [connected to Azure Arc](quickstart-connect-cluster.md). The following Kubernetes distributions are currently supported for this scenario:- - Cluster API Azure - - Azure Kubernetes Service (AKS) clusters on Azure Stack HCI - - AKS enabled by Azure Arc - - Google Kubernetes Engine - - OpenShift Kubernetes Distribution - - Canonical Kubernetes Distribution - - Elastic Kubernetes Service - - Tanzu Kubernetes Grid - - Azure Red Hat OpenShift -- Outbound connectivity to the following endpoints:- - `linuxgeneva-microsoft.azurecr.io` - - `upstreamarc.azurecr.io` - - `*.blob.core.windows.net` -- Ensure you've met the [general prerequisites for cluster extensions](extensions.md#prerequisites). You must use version 0.4.0 or newer of the `k8s-extension` Azure CLI extension.--## Install the Azure Key Vault Secrets Provider extension on an Arc-enabled Kubernetes cluster --You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying an ARM template. --Only one instance of the extension can be deployed on each Azure Arc-enabled Kubernetes cluster. --> [!TIP] -> If the cluster is behind an outbound proxy server, ensure that you connect it to Azure Arc using the [proxy configuration](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) option before installing the extension. --### Azure portal --1. In the [Azure portal](https://portal.azure.com/#home), navigate to **Kubernetes - Azure Arc** and select your cluster. -1. Select **Extensions** (under **Settings**), and then select **+ Add**. -- :::image type="content" source="media/tutorial-akv-secrets-provider/extension-install-add-button.png" lightbox="media/tutorial-akv-secrets-provider/extension-install-add-button.png" alt-text="Screenshot showing the Extensions pane for an Arc-enabled Kubernetes cluster in the Azure portal."::: --1. From the list of available extensions, select **Azure Key Vault Secrets Provider** to deploy the latest version of the extension. 
-- :::image type="content" source="media/tutorial-akv-secrets-provider/extension-install-new-resource.png" alt-text="Screenshot showing the Azure Key Vault Secrets Provider extension in the Azure portal."::: --1. Follow the prompts to deploy the extension. If needed, customize the installation by changing the default options on the **Configuration** tab. --### Azure CLI --1. Set the environment variables: -- ```azurecli-interactive - export CLUSTER_NAME=<arc-cluster-name> - export RESOURCE_GROUP=<resource-group-name> - ``` --2. Install the Secrets Store CSI Driver and the Azure Key Vault Secrets Provider extension by running the following command: -- ```azurecli-interactive - az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider - ``` --You should see output similar to this example. It may take several minutes before the secrets provider Helm chart is deployed to the cluster. --```json -{ - "aksAssignedIdentity": null, - "autoUpgradeMinorVersion": true, - "configurationProtectedSettings": {}, - "configurationSettings": {}, - "customLocationSettings": null, - "errorInfo": null, - "extensionType": "microsoft.azurekeyvaultsecretsprovider", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Kubernetes/connectedClusters/$CLUSTER_NAME/providers/Microsoft.KubernetesConfiguration/extensions/akvsecretsprovider", - "identity": { - "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "tenantId": null, - "type": "SystemAssigned" - }, - "location": null, - "name": "akvsecretsprovider", - "packageUri": null, - "provisioningState": "Succeeded", - "releaseTrain": "Stable", - "resourceGroup": "$RESOURCE_GROUP", - "scope": { - "cluster": { - "releaseNamespace": "kube-system" - }, - "namespace": null - }, - "statuses": [], - "systemData": { - "createdAt": "2022-05-12T18:35:56.552889+00:00", - "createdBy": null, - "createdByType": null, - "lastModifiedAt": "2022-05-12T18:35:56.552889+00:00", - "lastModifiedBy": null, - "lastModifiedByType": null - }, - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "1.1.3" -} -``` --### ARM template --1. Create a .json file using the following format. Be sure to update the \<cluster-name\> value to refer to your cluster. -- ```json - { - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "ConnectedClusterName": { - "defaultValue": "<cluster-name>", - "type": "String", - "metadata": { - "description": "The Connected Cluster name." - } - }, - "ExtensionInstanceName": { - "defaultValue": "akvsecretsprovider", - "type": "String", - "metadata": { - "description": "The extension instance name." - } - }, - "ExtensionVersion": { - "defaultValue": "", - "type": "String", - "metadata": { - "description": "The version of the extension type." - } - }, - "ExtensionType": { - "defaultValue": "Microsoft.AzureKeyVaultSecretsProvider", - "type": "String", - "metadata": { - "description": "The extension type." - } - }, - "ReleaseTrain": { - "defaultValue": "stable", - "type": "String", - "metadata": { - "description": "The release train." 
- } - } - }, - "functions": [], - "resources": [ - { - "type": "Microsoft.KubernetesConfiguration/extensions", - "apiVersion": "2021-09-01", - "name": "[parameters('ExtensionInstanceName')]", - "identity": { - "type": "SystemAssigned" - }, - "properties": { - "extensionType": "[parameters('ExtensionType')]", - "releaseTrain": "[parameters('ReleaseTrain')]", - "version": "[parameters('ExtensionVersion')]" - }, - "scope": "[concat('Microsoft.Kubernetes/connectedClusters/', parameters('ConnectedClusterName'))]" - } - ] - } - ``` --1. Now set the environment variables by using the following Azure CLI command: -- ```azurecli-interactive - export TEMPLATE_FILE_NAME=<template-file-path> - export DEPLOYMENT_NAME=<desired-deployment-name> - ``` --1. Finally, run this Azure CLI command to install the Azure Key Vault Secrets Provider extension: -- ```azurecli-interactive - az deployment group create --name $DEPLOYMENT_NAME --resource-group $RESOURCE_GROUP --template-file $TEMPLATE_FILE_NAME - ``` --You should now be able to view the secret provider resources and use the extension in your cluster. --## Validate the extension installation --To confirm successful installation of the Azure Key Vault Secrets Provider extension, run the following command. --```azurecli-interactive -az k8s-extension show --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name akvsecretsprovider -``` --You should see output similar to this example. --```json -{ - "aksAssignedIdentity": null, - "autoUpgradeMinorVersion": true, - "configurationProtectedSettings": {}, - "configurationSettings": {}, - "customLocationSettings": null, - "errorInfo": null, - "extensionType": "microsoft.azurekeyvaultsecretsprovider", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Kubernetes/connectedClusters/$CLUSTER_NAME/providers/Microsoft.KubernetesConfiguration/extensions/akvsecretsprovider", - "identity": { - "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "tenantId": null, - "type": "SystemAssigned" - }, - "location": null, - "name": "akvsecretsprovider", - "packageUri": null, - "provisioningState": "Succeeded", - "releaseTrain": "Stable", - "resourceGroup": "$RESOURCE_GROUP", - "scope": { - "cluster": { - "releaseNamespace": "kube-system" - }, - "namespace": null - }, - "statuses": [], - "systemData": { - "createdAt": "2022-05-12T18:35:56.552889+00:00", - "createdBy": null, - "createdByType": null, - "lastModifiedAt": "2022-05-12T18:35:56.552889+00:00", - "lastModifiedBy": null, - "lastModifiedByType": null - }, - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "1.1.3" -} -``` --## Create or select an Azure Key Vault --Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your key vault must be globally unique. --Set the following environment variables: --```azurecli-interactive -export AKV_RESOURCE_GROUP=<resource-group-name> -export AZUREKEYVAULT_NAME=<AKV-name> -export AZUREKEYVAULT_LOCATION=<AKV-location> -``` --Next, run the following command: --```azurecli -az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION -``` --Azure Key Vault can store keys, secrets, and certificates. 
For this example, you can set a plain text secret called `DemoSecret` by using the following command: --```azurecli -az keyvault secret set --vault-name $AZUREKEYVAULT_NAME -n DemoSecret --value MyExampleSecret -``` --Before you move on to the next section, take note of the following properties: --- Name of the secret object in Key Vault-- Object type (secret, key, or certificate)-- Name of your Key Vault resource-- The Azure Tenant ID for the subscription to which the Key Vault belongs--## Provide identity to access Azure Key Vault --Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed through a service principal. Follow these steps to provide an identity that can access your Key Vault. --1. Follow the steps [to create a service principal in Azure](/entra/identity-platform/howto-create-service-principal-portal). Take note of the Client ID and Client Secret generated in this step. -1. Next, [ensure Azure Key Vault has GET permission to the created service principal](/azure/key-vault/general/assign-access-policy#assign-an-access-policy). -1. Use the client ID and Client Secret from the first step to create a Kubernetes secret on the connected cluster: -- ```bash - kubectl create secret generic secrets-store-creds --from-literal clientid="<client-id>" --from-literal clientsecret="<client-secret>" - ``` --1. Label the created secret: -- ```bash - kubectl label secret secrets-store-creds secrets-store.csi.k8s.io/used=true - ``` --1. Create a `SecretProviderClass` with the following YAML, filling in your values for key vault name, tenant ID, and objects to retrieve from your AKV instance: -- ```yml - # This is a SecretProviderClass example using service principal to access Keyvault - apiVersion: secrets-store.csi.x-k8s.io/v1 - kind: SecretProviderClass - metadata: - name: akvprovider-demo - spec: - provider: azure - parameters: - usePodIdentity: "false" - keyvaultName: <key-vault-name> - cloudName: # Defaults to AzurePublicCloud - objects: | - array: - - | - objectName: DemoSecret - objectType: secret # object types: secret, key or cert - objectVersion: "" # [OPTIONAL] object versions, default to latest if empty - tenantId: <tenant-Id> # The tenant ID of the Azure Key Vault instance - ``` -- For use with national clouds, change `cloudName` to `AzureUSGovernmentCloud` for Azure Government, or to `AzureChinaCloud` for Microsoft Azure operated by 21Vianet. --1. Apply the SecretProviderClass to your cluster: -- ```bash - kubectl apply -f secretproviderclass.yaml - ``` --1. Create a pod with the following YAML, filling in the name of your identity: -- ```yml - # This is a sample pod definition for using SecretProviderClass and service principal to access Keyvault - kind: Pod - apiVersion: v1 - metadata: - name: busybox-secrets-store-inline - spec: - containers: - - name: busybox - image: k8s.gcr.io/e2e-test-images/busybox:1.29 - command: - - "/bin/sleep" - - "10000" - volumeMounts: - - name: secrets-store-inline - mountPath: "/mnt/secrets-store" - readOnly: true - volumes: - - name: secrets-store-inline - csi: - driver: secrets-store.csi.k8s.io - readOnly: true - volumeAttributes: - secretProviderClass: "akvprovider-demo" - nodePublishSecretRef: - name: secrets-store-creds - ``` --1. Apply the pod to your cluster: -- ```bash - kubectl apply -f pod.yaml - ``` --## Validate the secrets --After the pod starts, the mounted content at the volume path specified in your deployment YAML is available. 
--```bash -## show secrets held in secrets-store -kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/ --## print a test secret 'DemoSecret' held in secrets-store -kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/DemoSecret -``` --## Additional configuration options --The Azure Key Vault Secrets Provider extension supports [Helm chart configurations](https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/charts/csi-secrets-store-provider-azure/README.md#configuration). --The following configuration settings are frequently used with the Azure Key Vault Secrets Provider extension: --| Configuration Setting | Default | Description | -| | -- | -- | -| enableSecretRotation | false | Boolean type. If `true`, periodically updates the pod mount and Kubernetes Secret with the latest content from external secrets store | -| rotationPollInterval | 2 m | If `enableSecretRotation` is `true`, this setting specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. | -| syncSecret.enabled | false | Boolean input. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. If `true`, `SecretProviderClass` allows the `secretObjects` field to define the desired state of the synced Kubernetes Secret objects. | --These settings can be specified when the extension is installed by using the `az k8s-extension create` command: --```azurecli-interactive -az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true -``` --You can also change these settings after installation by using the `az k8s-extension update` command: --```azurecli-interactive -az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true -``` --You can use other configuration settings as needed for your deployment. For example, to change the kubelet root directory while creating a cluster, modify the `az k8s-extension create` command: --```azurecli-interactive -az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings linux.kubeletRootDir=/path/to/kubelet secrets-store-csi-driver.linux.kubeletRootDir=/path/to/kubelet -``` --## Uninstall the Azure Key Vault Secrets Provider extension --To uninstall the extension, run the following command: --```azurecli-interactive -az k8s-extension delete --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name akvsecretsprovider -``` --> [!NOTE] -> Uninstalling the extension doesn't delete the Custom Resource Definitions (CRDs) that were created when the extension was installed. 
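If you later want to find those leftover CRDs, a quick (illustrative) check is to filter the cluster's CRD list for the Secrets Store CSI driver API group:

```bash
# SecretProviderClass CRDs belong to the secrets-store.csi.x-k8s.io API group.
kubectl get crds | grep secrets-store.csi.x-k8s.io
```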
--To confirm that the extension instance has been deleted, run the following command: --```azurecli-interactive -az k8s-extension list --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP -``` --If the extension was successfully removed, you won't see the Azure Key Vault Secrets Provider extension listed in the output. If you don't have any other extensions installed on your cluster, you'll see an empty array. --If you no longer need it, be sure to delete the Kubernetes secret associated with the service principal by running the following command: --```bash -kubectl delete secret secrets-store-creds -``` --## Reconciliation and troubleshooting --The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component is reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-extension create` command again with the existing extension instance name. --For more information about resolving common issues, see the open source troubleshooting guides for [Azure Key Vault provider for Secrets Store CSI driver](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/) and [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html). --## Next steps --- Want to try things out? Get started quickly with an [Azure Arc Jumpstart scenario](https://aka.ms/arc-jumpstart-akv-secrets-provider) using Cluster API.-- Learn more about [Azure Key Vault](/azure/key-vault/general/overview). |
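Tying together the rotation settings and validation steps shown earlier in this article, the following sketch (example values; it assumes `enableSecretRotation` was set to `true` and that the example pod above is still running) updates the secret in Key Vault and re-reads the mounted file after the poll interval:

```bash
# Update the secret in Key Vault, wait at least the rotationPollInterval, then re-read the mount.
az keyvault secret set --vault-name $AZUREKEYVAULT_NAME -n DemoSecret --value MyRotatedSecret
sleep 180
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/DemoSecret
```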
azure-arc | Tutorial Arc Enabled Open Service Mesh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md | - Title: Azure Arc-enabled Open Service Mesh -description: Deploy the Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster - Previously updated : 01/11/2024----# Azure Arc-enabled Open Service Mesh --[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. --OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI](https://smi-spec.io/) APIs, and works by injecting an Envoy proxy as a sidecar container next to each instance of your application. [Read more](https://docs.openservicemesh.io/#features) on the service mesh scenarios enabled by Open Service Mesh. --All components of Azure Arc-enabled OSM are deployed on availability zones, making them zone redundant. --## Installation options and requirements --Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure CLI, an ARM template, or a built-in Azure policy. --### Prerequisites --- Ensure you have met all the common prerequisites for cluster extensions listed [here](extensions.md#prerequisites).-- Use `az k8s-extension` CLI extension version >= v1.0.4--### Current support limitations --- Only one instance of Open Service Mesh can be deployed on an Azure Arc-connected Kubernetes cluster.-- Support is available for the two most recently released minor versions of Arc-enabled Open Service Mesh. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases.-- The following Kubernetes distributions are currently supported:- - AKS (Azure Kubernetes Service) Engine - - AKS clusters on Azure Stack HCI - - AKS enabled by Azure Arc - - Cluster API Azure - - Google Kubernetes Engine - - Canonical Kubernetes Distribution - - Rancher Kubernetes Engine - - OpenShift Kubernetes Distribution - - Amazon Elastic Kubernetes Service - - VMware Tanzu Kubernetes Grid -- Azure Monitor integration with Azure Arc-enabled Open Service Mesh is available [in preview with limited support](#monitoring-application-using-azure-monitor-and-applications-insights-preview).--## Basic installation using Azure portal --To deploy using Azure portal, once you have an Arc connected cluster, go to the cluster's **Open Service Mesh** section. --[![Open Service Mesh located under Settings for Arc enabled Kubernetes cluster](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg#lightbox) --Select the **Install extension** button to deploy the latest version of the extension. --Alternatively, you can use the CLI experience captured here. For at-scale onboarding, read further in this article about deployment using [ARM template](#install-azure-arc-enabled-osm-using-arm-template) and using [Azure Policy](#install-azure-arc-enabled-osm-using-built-in-policy). --## Basic installation using Azure CLI --The following steps assume that you already have a cluster with a supported Kubernetes distribution connected to Azure Arc. Ensure that your KUBECONFIG environment variable points to the kubeconfig of the Arc-enabled Kubernetes cluster. 
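For example, a quick sanity check before you begin might look like the following. The kubeconfig path is a placeholder for your own file, and the `connectivityStatus` query is an assumption based on the connected cluster resource properties (it should report `Connected`):

```azurecli-interactive
# Point kubectl at the kubeconfig of the Arc-enabled cluster and confirm the active context
export KUBECONFIG=~/.kube/config
kubectl config current-context

# Confirm the cluster is connected to Azure Arc
az connectedk8s show --name <arc-cluster-name> --resource-group <resource-group-name> --query connectivityStatus
```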
--Set the environment variables: --```azurecli-interactive -export CLUSTER_NAME=<arc-cluster-name> -export RESOURCE_GROUP=<resource-group-name> -``` --If you're using an OpenShift cluster, skip to the [OpenShift installation steps](#install-osm-on-an-openshift-cluster). --Create the extension: --> [!NOTE] -> To pin a specific version of OSM, add the `--version x.y.z` flag to the `create` command. Note that this will set the value for `auto-upgrade-minor-version` to false. --```azurecli-interactive -az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm -``` --You should see output similar to this example. It may take 3-5 minutes for the actual OSM helm chart to get deployed to the cluster. Until this deployment happens, `installState` will remain `Pending`. --```json -{ - "autoUpgradeMinorVersion": true, - "configurationSettings": {}, - "creationTime": "2021-04-29T17:50:11.4116524+00:00", - "errorInfo": { - "code": null, - "message": null - }, - "extensionType": "microsoft.openservicemesh", - "id": "/subscriptions/<subscription-id>/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Kubernetes/connectedClusters/$CLUSTER_NAME/providers/Microsoft.KubernetesConfiguration/extensions/osm", - "identity": null, - "installState": "Pending", - "lastModifiedTime": "2021-04-29T17:50:11.4116525+00:00", - "lastStatusTime": null, - "location": null, - "name": "osm", - "releaseTrain": "stable", - "resourceGroup": "$RESOURCE_GROUP", - "scope": { - "cluster": { - "releaseNamespace": "arc-osm-system" - }, - "namespace": null - }, - "statuses": [], - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "x.y.z" -} -``` --Next, [validate your installation](#validate-installation). --## Custom installations --The following sections describe certain custom installations of Azure Arc-enabled OSM. Custom installations require setting OSM values in a JSON file and passing the file to the `az k8s-extension create` CLI command. --### Install OSM on an OpenShift cluster --1. Copy and save the following contents into a JSON file. If you have already created a configuration settings file, add the following line to the existing file to preserve your previous changes. -- ```json - { - "osm.osm.enablePrivilegedInitContainer": "true" - } - ``` --2. [Install OSM with custom values](#setting-values-during-osm-installation). --3. Add the privileged [security context constraint](https://docs.openshift.com/container-platform/4.7/authentication/managing-security-context-constraints.html) to each service account for the applications in the mesh. -- ```azurecli-interactive - oc adm policy add-scc-to-user privileged -z <service account name> -n <service account namespace> - ``` --It may take 3-5 minutes for the actual OSM helm chart to get deployed to the cluster. Until this deployment happens, `installState` will remain `Pending`. --To ensure that the privileged init container setting doesn't revert to the default, pass in the `"osm.osm.enablePrivilegedInitContainer" : "true"` configuration setting to all subsequent `az k8s-extension create` commands. --### Enable High Availability features on installation --OSM's control plane components are built with High Availability and Fault Tolerance in mind. This section describes how to -enable Horizontal Pod Autoscaling (HPA) and Pod Disruption Budget (PDB) during installation. 
Read more about the [design -considerations of High Availability on OSM](https://docs.openservicemesh.io/docs/guides/ha_scale/high_availability/). --#### Horizontal Pod Autoscaling (HPA) --HPA automatically scales up or down control plane pods based on the average target CPU utilization (%) and average target -memory utilization (%) defined by the user. To enable HPA and set applicable values on OSM control plane pods during installation, create or -append to your existing JSON settings file as shown here, repeating the key/value pairs for each control plane pod -(`osmController`, `injector`) that you want to enable HPA on. --```json -{ - "osm.osm.<control_plane_pod>.autoScale.enable" : "true", - "osm.osm.<control_plane_pod>.autoScale.minReplicas" : "<allowed values: 1-10>", - "osm.osm.<control_plane_pod>.autoScale.maxReplicas" : "<allowed values: 1-10>", - "osm.osm.<control_plane_pod>.autoScale.cpu.targetAverageUtilization" : "<allowed values 0-100>", - "osm.osm.<control_plane_pod>.autoScale.memory.targetAverageUtilization" : "<allowed values 0-100>" -} -``` --Now, [install OSM with custom values](#setting-values-during-osm-installation). --#### Pod Disruption Budget (PDB) --In order to prevent disruptions during planned outages, control plane pods `osm-controller` and `osm-injector` have a PDB -that ensures there's always at least one pod corresponding to each control plane application. --To enable PDB, create or append to your existing JSON settings file as follows for each desired control plane pod -(`osmController`, `injector`): --```json -{ - "osm.osm.<control_plane_pod>.enablePodDisruptionBudget" : "true" -} -``` --Now, [install OSM with custom values](#setting-values-during-osm-installation). --### Install OSM with cert-manager for certificate management --[cert-manager](https://cert-manager.io/) is a provider that can be used for issuing signed certificates to OSM without -the need for storing private keys in Kubernetes. Refer to OSM's [cert-manager documentation](https://docs.openservicemesh.io/docs/guides/certificates/) -and [demo](https://docs.openservicemesh.io/docs/demos/cert-manager_integration/) to learn more. --> [!NOTE] -> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace in commands or specify with flag `--osm-namespace arc-osm-system`. --To install OSM with cert-manager as the certificate provider, create or append to your existing JSON settings file the `certificateProvider.kind` -value set to cert-manager as shown here. To change from the default cert-manager values specified in OSM documentation, -also include and update the subsequent `certmanager.issuer` lines. --```json -{ - "osm.osm.certificateProvider.kind" : "cert-manager", - "osm.osm.certmanager.issuerName" : "<issuer name>", - "osm.osm.certmanager.issuerKind" : "<issuer kind>", - "osm.osm.certmanager.issuerGroup" : "<issuer group>" -} -``` --Now, [install OSM with custom values](#setting-values-during-osm-installation). --### Install OSM with Contour for ingress --OSM provides multiple options to expose mesh services externally using ingress. OSM can use [Contour](https://projectcontour.io/), which -works with the ingress controller installed outside the mesh and provisioned with a certificate to participate in the mesh. 
-Refer to [OSM's ingress documentation](https://docs.openservicemesh.io/docs/guides/traffic_management/ingress/#1-using-contour-ingress-controller-and-gateway) -and [demo](https://docs.openservicemesh.io/docs/demos/ingress_contour/) to learn more. --> [!NOTE] -> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace in commands or specify with flag `--osm-namespace arc-osm-system`. -To set required values for configuring Contour during OSM installation, append the following to your JSON settings file: --```json -{ - "osm.osm.osmNamespace" : "arc-osm-system", - "osm.contour.enabled" : "true", - "osm.contour.configInline.tls.envoy-client-certificate.name" : "osm-contour-envoy-client-cert", - "osm.contour.configInline.tls.envoy-client-certificate.namespace" : "arc-osm-system" -} -``` --### Setting values during OSM installation --Any values that need to be set during OSM installation need to be saved to a single JSON file and passed in through the Azure CLI -install command. --After you create a JSON file with applicable values as described in the custom installation sections, set the file path as an environment variable: --```azurecli-interactive -export SETTINGS_FILE=<json-file-path> -``` --Run the `az k8s-extension create` command to create the OSM extension, passing in the settings file using the `--configuration-settings-file` flag: --```azurecli-interactive -az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm --configuration-settings-file $SETTINGS_FILE -``` --## Install Azure Arc-enabled OSM using ARM template --After connecting your cluster to Azure Arc, create a JSON file with the following format, making sure to update the `<cluster-name>` and `<osm-arc-version>` values: --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "ConnectedClusterName": { - "defaultValue": "<cluster-name>", - "type": "String", - "metadata": { - "description": "The Connected Cluster name." - } - }, - "ExtensionInstanceName": { - "defaultValue": "osm", - "type": "String", - "metadata": { - "description": "The extension instance name." - } - }, - "ExtensionVersion": { - "defaultValue": "<osm-arc-version>", - "type": "String", - "metadata": { - "description": "The extension type version." - } - }, - "ExtensionType": { - "defaultValue": "Microsoft.openservicemesh", - "type": "String", - "metadata": { - "description": "The extension type." - } - }, - "ReleaseTrain": { - "defaultValue": "Stable", - "type": "String", - "metadata": { - "description": "The release train." 
- } - } - }, - "functions": [], - "resources": [ - { - "type": "Microsoft.KubernetesConfiguration/extensions", - "apiVersion": "2020-07-01-preview", - "name": "[parameters('ExtensionInstanceName')]", - "properties": { - "extensionType": "[parameters('ExtensionType')]", - "releaseTrain": "[parameters('ReleaseTrain')]", - "version": "[parameters('ExtensionVersion')]" - }, - "scope": "[concat('Microsoft.Kubernetes/connectedClusters/', parameters('ConnectedClusterName'))]" - } - ] -} -``` --Set the environment variables: --```azurecli-interactive -export TEMPLATE_FILE_NAME=<template-file-path> -export DEPLOYMENT_NAME=<desired-deployment-name> -``` --Run this command to install the OSM extension: --```azurecli-interactive -az deployment group create --name $DEPLOYMENT_NAME --resource-group $RESOURCE_GROUP --template-file $TEMPLATE_FILE_NAME -``` --You should now be able to view the OSM resources and use the OSM extension in your cluster. --## Install Azure Arc-enabled OSM using built-in policy --A built-in policy is available on Azure portal under the **Kubernetes** category: **Azure Arc-enabled Kubernetes clusters should have the Open Service Mesh extension installed**. This policy can be assigned at the scope of a subscription or a resource group. --The default action of this policy is **Deploy if not exists**. However, you can choose to audit the clusters for extension installations by changing the parameters during assignment. You're also prompted to specify the version you wish to install (v1.0.0-1 or higher) as a parameter. --## Validate installation --Run the following command. --```azurecli-interactive -az k8s-extension show --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name osm -``` --You should see a JSON output similar to: --```json -{ - "autoUpgradeMinorVersion": true, - "configurationSettings": {}, - "creationTime": "2021-04-29T19:22:00.7649729+00:00", - "errorInfo": { - "code": null, - "message": null - }, - "extensionType": "microsoft.openservicemesh", - "id": "/subscriptions/<subscription-id>/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Kubernetes/connectedClusters/$CLUSTER_NAME/providers/Microsoft.KubernetesConfiguration/extensions/osm", - "identity": null, - "installState": "Installed", - "lastModifiedTime": "2021-04-29T19:22:00.7649731+00:00", - "lastStatusTime": "2021-04-29T19:23:27.642+00:00", - "location": null, - "name": "osm", - "releaseTrain": "stable", - "resourceGroup": "$RESOURCE_GROUP", - "scope": { - "cluster": { - "releaseNamespace": "arc-osm-system" - }, - "namespace": null - }, - "statuses": [], - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": "x.y.z" -} -``` --For more commands that you can use to validate and troubleshoot the deployment of the Open Service Mesh (OSM) extension components on your cluster, see [our troubleshooting guide](extensions-troubleshooting.md#azure-arc-enabled-open-service-mesh) --## OSM controller configuration --OSM deploys a MeshConfig resource `osm-mesh-config` as a part of its control plane in `arc-osm-system` namespace. The purpose of this MeshConfig is to provide the mesh owner/operator the ability to update some of the mesh configurations based on their needs. To view the default values, use the following command. 
--```azurecli-interactive -kubectl describe meshconfig osm-mesh-config -n arc-osm-system -``` --The output shows the default values: --```azurecli-interactive - Certificate: - Cert Key Bit Size: 2048 - Service Cert Validity Duration: 24h - Feature Flags: - Enable Async Proxy Service Mapping: false - Enable Egress Policy: true - Enable Envoy Active Health Checks: false - Enable Ingress Backend Policy: true - Enable Multicluster Mode: false - Enable Retry Policy: false - Enable Snapshot Cache Mode: false - Enable WASM Stats: true - Observability: - Enable Debug Server: false - Osm Log Level: info - Tracing: - Enable: false - Sidecar: - Config Resync Interval: 0s - Enable Privileged Init Container: false - Log Level: error - Resources: - Traffic: - Enable Egress: false - Enable Permissive Traffic Policy Mode: true - Inbound External Authorization: - Enable: false - Failure Mode Allow: false - Stat Prefix: inboundExtAuthz - Timeout: 1s - Inbound Port Exclusion List: - Outbound IP Range Exclusion List: - Outbound Port Exclusion List: -``` --For more information, see the [Config API reference](https://docs.openservicemesh.io/docs/api_reference/config/v1alpha1/). Notice that `spec.traffic.enablePermissiveTrafficPolicyMode` is set to `true`. When OSM is in permissive traffic policy mode, [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services. --`osm-mesh-config` can also be viewed in the Azure portal by selecting **Edit configuration** in the cluster's Open Service Mesh section. --[![Edit configuration button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg#lightbox) --### Making changes to OSM controller configuration --> [!NOTE] -> Values in the MeshConfig `osm-mesh-config` are persisted across upgrades. --Changes to `osm-mesh-config` can be made using the `kubectl patch` command. In the following example, the permissive traffic policy mode is changed to false. --```azurecli-interactive -kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge -``` --If an incorrect value is used, validations on the MeshConfig CRD prevent the change with an error message explaining why the value is invalid. For example, this command shows what happens if we patch `enableEgress` to a non-boolean value: --```azurecli-interactive -kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enableEgress":"no"}}}' --type=merge -# Validations on the CRD will deny this change -The MeshConfig "osm-mesh-config" is invalid: spec.traffic.enableEgress: Invalid value: "string": spec.traffic.enableEgress in body must be of type boolean: "string" -``` --Alternatively, to edit `osm-mesh-config` in Azure portal, select **Edit configuration** in the cluster's Open Service Mesh section. --[![Edit configuration button in the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg#lightbox) --## Using Azure Arc-enabled OSM --To start using OSM capabilities, you need to first onboard the application namespaces to the service mesh. 
Download the OSM CLI from the [OSM GitHub releases page](https://github.com/openservicemesh/osm/releases/). Once the namespaces are added to the mesh, you can configure the SMI policies to achieve the desired OSM capability. --### Onboard namespaces to the service mesh --Add namespaces to the mesh by running the following command: --```azurecli-interactive -osm namespace add <namespace_name> -``` --Namespaces can be onboarded from Azure portal as well by selecting **+Add** in the cluster's Open Service Mesh section. --[![+Add button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg#lightbox) --For more information about onboarding services, see the [Open Service Mesh documentation](https://docs.openservicemesh.io/docs/guides/app_onboarding/#onboard-services). --### Configure OSM with Service Mesh Interface (SMI) policies --You can start with a [sample application](https://docs.openservicemesh.io/docs/getting_started/install_apps/) or use your test environment to try out SMI policies. --> [!NOTE] -> If you use sample applications, ensure that their versions match the version of the OSM extension installed on your cluster. For example, if you are using v1.0.0 of the OSM extension, use the bookstore manifest from release-v1.0 branch of OSM upstream repository. --### Configuring your own Jaeger, Prometheus and Grafana instances --The OSM extension doesn't install add-ons like [Jaeger](https://www.jaegertracing.io/docs/getting-started/), [Prometheus](https://prometheus.io/docs/prometheus/latest/installation/), [Grafana](https://grafana.com/docs/grafana/latest/installation/) and [Flagger](https://docs.flagger.app/). You can integrate OSM with your own running instances of those tools instead. To integrate with your own instances, see the following documentation: --- [BYO-Jaeger instance](https://docs.openservicemesh.io/docs/guides/observability/tracing/#byo-bring-your-own)-- [BYO-Prometheus instance](https://docs.openservicemesh.io/docs/guides/observability/metrics/#prometheus)-- [BYO-Grafana dashboard](https://docs.openservicemesh.io/docs/guides/observability/metrics/#grafana)-- [OSM Progressive Delivery with Flagger](https://docs.flagger.app/tutorials/osm-progressive-delivery)--> [!NOTE] -> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system` when making changes to `osm-mesh-config`. --## Monitoring application using Azure Monitor and Applications Insights (preview) --Both Azure Monitor and Azure Application Insights help you maximize the availability and performance of your applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Azure Arc-enabled Open Service Mesh has deep integrations into both of these Azure services. This integration provides a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. ---Follow these steps to allow Azure Monitor to scrape Prometheus endpoints for collecting application metrics. --1. Follow the guidance available [here](#onboard-namespaces-to-the-service-mesh) to ensure that the application namespaces that you wish to be monitored are onboarded to the mesh. --2. Expose the Prometheus endpoints for application namespaces. 
-- ```azurecli-interactive - osm metrics enable --namespace <namespace1> - osm metrics enable --namespace <namespace2> - ``` --3. Install the Azure Monitor extension using the guidance available [here](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?toc=/azure/azure-arc/kubernetes/toc.json). --4. Create a Configmap in the `kube-system` namespace that enables Azure Monitor to monitor your namespaces. For example, create a `container-azm-ms-osmconfig.yaml` with the following to monitor `<namespace1>` and `<namespace2>`: -- ```yaml - kind: ConfigMap - apiVersion: v1 - data: - schema-version: v1 - config-version: ver1 - osm-metric-collection-configuration: |- - # OSM metric collection settings - [osm_metric_collection_configuration] - [osm_metric_collection_configuration.settings] - # Namespaces to monitor - monitor_namespaces = ["<namespace1>", "<namespace2>"] - metadata: - name: container-azm-ms-osmconfig - namespace: kube-system - ``` --5. Run the following kubectl command -- ```azurecli-interactive - kubectl apply -f container-azm-ms-osmconfig.yaml - ``` --It may take up to 15 minutes for the metrics to show up in Log Analytics. You can try querying the InsightsMetrics table. --```azurecli-interactive -InsightsMetrics -| where Name contains "envoy" -| extend t=parse_json(Tags) -| where t.app == "namespace1" -``` --### Navigating the OSM dashboard --1. Access your Arc connected Kubernetes cluster using this [link](https://aka.ms/azmon/osmux). -2. Go to Azure Monitor and navigate to the **Reports** tab to access the OSM workbook. -3. Select the time-range & namespace to scope your services. --[![OSM workbook](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg#lightbox) --#### Requests tab --The **Requests** tab shows a summary of all the http requests sent via service to service in OSM. --- You can view all the services by selecting the service in the grid.-- You can view total requests, request error rate & P90 latency.-- You can drill down to destination and view trends for HTTP error/success code, success rate, pod resource utilization, and latencies at different percentiles.--#### Connections tab --The **Connections** tab shows a summary of all the connections between your services in Open Service Mesh. --- Outbound connections: total number of connections between Source and destination services.-- Outbound active connections: last count of active connections between source and destination in selected time range.-- Outbound failed connections: total number of failed connections between source and destination service.--## Upgrade to a specific version of OSM --There may be some downtime of the control plane during upgrades. The data plane is only affected during CRD upgrades. --### Supported upgrades --The OSM extension can be upgraded manually across minor and major versions. However, automatic upgrade (if enabled) only works across minor versions. --### Upgrade to a specific OSM version manually --The following command upgrades the OSM-Arc extension to a specific version: --```azurecli-interactive -az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --release-train stable --name osm --version x.y.z -``` --### Enable automatic upgrades --If automatic upgrades aren't enabled by default, the following command can be run to enable them. 
The current value of `--auto-upgrade-minor-version` can be verified by running the `az k8s-extension show` command as detailed in the [Validate installation](#validate-installation) step. --```azurecli-interactive -az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --release-train stable --name osm --auto-upgrade-minor-version true -``` --## Uninstall Azure Arc-enabled OSM --Use the following command: --```azurecli-interactive -az k8s-extension delete --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name osm -y -``` --Verify that the extension instance has been deleted: --```azurecli-interactive -az k8s-extension list --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP -``` --This output should not include OSM. If you do not have any other extensions installed on your cluster, it's just an empty array. --When you use the `az k8s-extension` command to delete the OSM extension, the `arc-osm-system` namespace is not removed, and the actual resources within the namespace (like mutating webhook configuration and osm-controller pod) take around 10 minutes to delete. --> [!NOTE] -> Use the az k8s-extension CLI to uninstall OSM components managed by Arc. Using the OSM CLI to uninstall is not supported by Arc and can result in undesirable behavior. --## Next steps --- Just want to try things out? Get started quickly with an [Azure Arc Jumpstart](https://aka.ms/arc-jumpstart-osm) scenario using Cluster API.-- Get [troubleshooting help for Azure Arc-enabled OSM](extensions-troubleshooting.md#azure-arc-enabled-open-service-mesh).- |
azure-arc | Tutorial Gitops Ci Cd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md | - Title: 'Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters' -description: This tutorial walks through setting up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. - Previously updated : 05/08/2023---# Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters --> [!IMPORTANT] -> This tutorial uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial that uses GitOps with Flux v2](./tutorial-gitops-flux2-ci-cd.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. -> -> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. --In this tutorial, you'll set up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. Using the sample Azure Vote app, you'll: --> [!div class="checklist"] -> * Create an Azure Arc-enabled Kubernetes cluster. -> * Connect your application and GitOps repos to Azure Repos. -> * Import CI/CD pipelines. -> * Connect your Azure Container Registry (ACR) to Azure DevOps and Kubernetes. -> * Create environment variable groups. -> * Deploy the `dev` and `stage` environments. -> * Test the application environments. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Before you begin --This tutorial assumes familiarity with Azure DevOps, Azure Repos and Pipelines, and Azure CLI. --* Sign into [Azure DevOps Services](https://dev.azure.com/). -* Complete the [previous tutorial](./tutorial-use-gitops-connected-cluster.md) to learn how to deploy GitOps for your CI/CD environment. -* Understand the [benefits and architecture](./conceptual-configurations.md) of this feature. -* Verify you have: - * A [connected Azure Arc-enabled Kubernetes cluster](./quickstart-connect-cluster.md#connect-an-existing-kubernetes-cluster) named **arc-cicd-cluster**. - * A connected Azure Container Registry (ACR) with either [AKS integration](/azure/aks/cluster-container-registry-integration) or [non-AKS cluster authentication](../../container-registry/container-registry-auth-kubernetes.md). - * "Build Admin" and "Project Admin" permissions for [Azure Repos](/azure/devops/repos/get-started/what-is-repos) and [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-get-started). -* Install the following Azure Arc-enabled Kubernetes CLI extensions of versions >= 1.0.0: -- ```azurecli - az extension add --name connectedk8s - az extension add --name k8s-configuration - ``` - * To update these extensions to the latest version, run the following commands: -- ```azurecli - az extension update --name connectedk8s - az extension update --name k8s-configuration - ``` --## Import application and GitOps repos into Azure Repos --Import an [application repo](./conceptual-gitops-ci-cd.md#application-repo) and a [GitOps repo](./conceptual-gitops-ci-cd.md#gitops-repo) into Azure Repos. 
For this tutorial, use the following example repos: --* **arc-cicd-demo-src** application repo - * URL: https://github.com/Azure/arc-cicd-demo-src - * Contains the example Azure Vote App that you will deploy using GitOps. -* **arc-cicd-demo-gitops** GitOps repo - * URL: https://github.com/Azure/arc-cicd-demo-gitops - * Works as a base for your cluster resources that house the Azure Vote App. --Learn more about [importing Git repos](/azure/devops/repos/git/import-git-repository). -->[!NOTE] -> Importing and using two separate repositories for application and GitOps repos can improve security and simplicity. The application and GitOps repositories' permissions and visibility can be tuned individually. -> For example, the cluster administrator may not find the changes in application code relevant to the desired state of the cluster. Conversely, an application developer doesn't need to know the specific parameters for each environment - a set of test values that provide coverage for the parameters may be sufficient. --## Connect the GitOps repo --To continuously deploy your app, connect the application repo to your cluster using GitOps. Your **arc-cicd-demo-gitops** GitOps repo contains the basic resources to get your app up and running on your **arc-cicd-cluster** cluster. --The initial GitOps repo contains only a [manifest](https://github.com/Azure/arc-cicd-demo-gitops/blob/master/arc-cicd-cluster/manifests/namespaces.yml) that creates the **dev** and **stage** namespaces corresponding to the deployment environments. --The GitOps connection that you create will automatically: -* Sync the manifests in the manifest directory. -* Update the cluster state. --The CI/CD workflow will populate the manifest directory with extra manifests to deploy the app. ---1. [Create a new GitOps connection](./tutorial-use-gitops-connected-cluster.md) to your newly imported **arc-cicd-demo-gitops** repo in Azure Repos. -- ```azurecli - az k8s-configuration create \ - --name cluster-config \ - --cluster-name arc-cicd-cluster \ - --resource-group myResourceGroup \ - --operator-instance-name cluster-config \ - --operator-namespace cluster-config \ - --repository-url https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \ - --https-user <Azure Repos username> \ - --https-key <Azure Repos PAT token> \ - --scope cluster \ - --cluster-type connectedClusters \ - --operator-params='--git-readonly --git-path=arc-cicd-cluster/manifests' - ``` --1. Ensure that Flux *only* uses the `arc-cicd-cluster/manifests` directory as the base path. Define the path by using the following operator parameter: -- `--git-path=arc-cicd-cluster/manifests` -- > [!NOTE] - > If you are using an HTTPS connection string and are having connection problems, ensure you omit the username prefix in the URL. For example, `https://alice@dev.azure.com/contoso/project/_git/arc-cicd-demo-gitops` must have `alice@` removed. The `--https-user` specifies the user instead, for example `--https-user alice`. --1. Check the state of the deployment in Azure portal. - * If successful, you'll see both `dev` and `stage` namespaces created in your cluster. --## Import the CI/CD pipelines --Now that you've synced a GitOps connection, you'll need to import the CI/CD pipelines that create the manifests. --The application repo contains a `.pipeline` folder with the pipelines you'll use for PRs, CI, and CD. 
Import and rename the three pipelines provided in the sample repo: --| Pipeline file name | Description | -| - | - | -| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** | -| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** | -| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** | ----## Connect your ACR -Both your pipelines and cluster will be utilizing ACR to store and retrieve Docker images. --### Connect ACR to Azure DevOps -During the CI process, you'll deploy your application containers to a registry. Start by creating an Azure service connection: --1. In Azure DevOps, open the **Service connections** page from the project settings page. In TFS, open the **Services** page from the **settings** icon in the top menu bar. -2. Choose **+ New service connection** and select the type of service connection you need. -3. Fill in the parameters for the service connection. For this tutorial: - * Name the service connection **arc-demo-acr**. - * Select **myResourceGroup** as the resource group. -4. Select the **Grant access permission to all pipelines**. - * This option authorizes YAML pipeline files for service connections. -5. Choose **OK** to create the connection. --### Connect ACR to Kubernetes -Enable your Kubernetes cluster to pull images from your ACR. If it's private, authentication will be required. --#### Connect ACR to existing AKS clusters --Integrate an existing ACR with existing AKS clusters using the following command: --```azurecli -az aks update -n arc-cicd-cluster -g myResourceGroup --attach-acr arc-demo-acr -``` --#### Create an image pull secret --To connect non-AKS and local clusters to your ACR, create an image pull secret. Kubernetes uses image pull secrets to store information needed to authenticate your registry. --Create an image pull secret with the following `kubectl` command. Repeat for both the `dev` and `stage` namespaces. -```console -kubectl create secret docker-registry <secret-name> \ - --namespace <namespace> \ - --docker-server=<container-registry-name>.azurecr.io \ - --docker-username=<service-principal-ID> \ - --docker-password=<service-principal-password> -``` --To avoid having to set an imagePullSecret for every Pod, consider adding the imagePullSecret to the Service account in the `dev` and `stage` namespaces. See the [Kubernetes tutorial](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for more information. --## Create environment variable groups --### App repo variable group -[Create a variable group](/azure/devops/pipelines/library/variable-groups) named **az-vote-app-dev**. Set the following values: --| Variable | Value | -| -- | -- | -| AZ_ACR_NAME | (your ACR instance, for example. 
azurearctest.azurecr.io) | -| AZURE_SUBSCRIPTION | (your Azure Service Connection, which should be **arc-demo-acr** from earlier in the tutorial) | -| AZURE_VOTE_IMAGE_REPO | The full path to the Azure Vote App repo, for example azurearctest.azurecr.io/azvote | -| ENVIRONMENT_NAME | Dev | -| MANIFESTS_BRANCH | `master` | -| MANIFESTS_FOLDER | `azure-vote-manifests` | -| MANIFESTS_REPO | `arc-cicd-demo-gitops` | -| ORGANIZATION_NAME | Name of Azure DevOps organization | -| PROJECT_NAME | Name of GitOps project in Azure DevOps | -| REPO_URL | Full URL for GitOps repo | -| SRC_FOLDER | `azure-vote` | -| TARGET_CLUSTER | `arc-cicd-cluster` | -| TARGET_NAMESPACE | `dev` | --### Stage environment variable group --1. Clone the **az-vote-app-dev** variable group. -1. Change the name to **az-vote-app-stage**. -1. Ensure the following values for the corresponding variables: --| Variable | Value | -| -- | -- | -| ENVIRONMENT_NAME | Stage | -| TARGET_NAMESPACE | `stage` | --You're now ready to deploy to the `dev` and `stage` environments. --## Give More Permissions to the Build Service -The CD pipeline uses the security token of the running build to authenticate to the GitOps repository. More permissions are needed for the pipeline to create a new branch, push changes, and create pull requests. --1. Go to `Project settings` from the Azure DevOps project main page. -1. Select `Repositories`. -1. Select `<GitOps Repo Name>`. -1. Select `Security`. -1. For the `<Project Name> Build Service (<Organization Name>)`, allow `Contribute`, `Contribute to pull requests`, and `Create branch`. --For more information, see: -- [Grant VC Permissions to the Build Service](/azure/devops/pipelines/scripts/git-commands?preserve-view=true&tabs=yaml&view=azure-devops#version-control)-- [Manage Build Service Account Permissions](/azure/devops/pipelines/process/access-tokens?preserve-view=true&tabs=yaml&view=azure-devops#manage-build-service-account-permissions)---## Deploy the dev environment for the first time -With the CI and CD pipelines created, run the CI pipeline to deploy the app for the first time. --### CI pipeline --During the initial CI pipeline run, you may get a resource authorization error in reading the service connection name. -1. Verify the variable being accessed is AZURE_SUBSCRIPTION. -1. Authorize the use. -1. Rerun the pipeline. --The CI pipeline: -* Ensures the application change passes all automated quality checks for deployment. -* Does any extra validation that couldn't be completed in the PR pipeline. - * Specific to GitOps, the pipeline also publishes the artifacts for the commit that will be deployed by the CD pipeline. -* Verifies the Docker image has changed and the new image is pushed. --### CD pipeline -During the initial CD pipeline run, you'll be asked to give the pipeline access to the GitOps repository. Select View when prompted that the pipeline needs permission to access a resource. Then, select Permit to grant permission to use the GitOps repository for the current and future runs of the pipeline. --The successful CI pipeline run triggers the CD pipeline to complete the deployment process. You'll deploy to each environment incrementally. --> [!TIP] -> If the CD pipeline does not automatically trigger: -> 1. Verify the name matches the branch trigger in [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) -> * It should be `arc-cicd-demo-src CI`. -> 1. Rerun the CI pipeline. 
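If you prefer the command line to the portal, you can also queue the CI pipeline with the Azure DevOps CLI extension. This is only a sketch; it assumes the `azure-devops` extension is installed and that your organization and project defaults are set with `az devops configure`:

```azurecli
# Queue another run of the application CI pipeline by its name
az pipelines run --name "arc-cicd-demo-src CI"
```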
--Once the template and manifest changes to the GitOps repo have been generated, the CD pipeline will create a commit, push it, and create a PR for approval. -1. Open the PR link given in the `Create PR` task output. -1. Verify the changes to the GitOps repo. You should see: - * High-level Helm template changes. - * Low-level Kubernetes manifests that show the underlying changes to the desired state. Flux deploys these manifests. -1. If everything looks good, approve and complete the PR. --1. After a few minutes, Flux picks up the change and starts the deployment. -1. Forward the port locally using `kubectl` and ensure the app works correctly using: -- `kubectl port-forward -n dev svc/azure-vote-front 8080:80` --1. View the Azure Vote app in your browser at `http://localhost:8080/`. --1. Vote for your favorites and get ready to make some changes to the app. --## Set up environment approvals -Upon app deployment, you can not only make changes to the code or templates, but you can also unintentionally put the cluster into a bad state. --If the dev environment reveals a break after deployment, keep it from going to later environments using environment approvals. --1. In your Azure DevOps project, go to the environment that needs to be protected. -1. Navigate to **Approvals and Checks** for the resource. -1. Select **Create**. -1. Provide the approvers and an optional message. -1. Select **Create** again to complete the addition of the manual approval check. --For more details, see the [Define approval and checks](/azure/devops/pipelines/process/approvals) tutorial. --Next time the CD pipeline runs, the pipeline will pause after the GitOps PR creation. Verify the change has been synced properly and passes basic functionality. Approve the check from the pipeline to let the change flow to the next environment. --## Make an application change --With this baseline set of templates and manifests representing the state on the cluster, you'll make a small change to the app. --1. In the **arc-cicd-demo-src** repo, edit [`azure-vote/src/azure-vote-front/config_file.cfg`](https://github.com/Azure/arc-cicd-demo-src/blob/master/azure-vote/src/azure-vote-front/config_file.cfg) file. --2. Since "Cats vs Dogs" isn't getting enough votes, change it to "Tabs vs Spaces" to drive up the vote count. --3. Commit the change in a new branch, push it, and create a pull request. - * This is the typical developer flow that will start the CI/CD lifecycle. --## PR validation pipeline --The PR pipeline is the first line of defense against a faulty change. Usual application code quality checks include linting and static analysis. From a GitOps perspective, you also need to assure the same quality for the resulting infrastructure to be deployed. --The application's Dockerfile and Helm charts can use linting in a similar way to the application. --Errors found during linting range from: -* Incorrectly formatted YAML files, to -* Best practice suggestions, such as setting CPU and Memory limits for your application. --> [!NOTE] -> To get the best coverage from Helm linting in a real application, you will need to substitute values that are reasonably similar to those used in a real environment. --Errors found during pipeline execution appear in the test results section of the run. From here, you can: -* Track the useful statistics on the error types. -* Find the first commit on which they were detected. -* Stack trace style links to the code sections that caused the error. 
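You can run the same kind of lint checks locally before opening a PR. The chart and values file paths below are hypothetical; substitute the paths used in your copy of the application repo:

```console
# Lint the Helm chart with values that are reasonably close to a real environment
helm lint ./charts/azure-vote --values ./charts/azure-vote/values-dev.yaml

# Lint the Dockerfile with hadolint, run through Docker so no local install is needed
docker run --rm -i hadolint/hadolint < ./azure-vote/Dockerfile
```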
--Once the pipeline run has finished, you have assured the quality of the application code and the template that will deploy it. You can now approve and complete the PR. The CI will run again, regenerating the templates and manifests, before triggering the CD pipeline. --> [!TIP] -> In a real environment, don't forget to set branch policies to ensure the PR passes your quality checks. For more information, see the [Set branch policies](/azure/devops/repos/git/branch-policies) article. --## CD process approvals --A successful CI pipeline run triggers the CD pipeline to complete the deployment process. Similar to the first time you ran the CD pipeline, you'll deploy to each environment incrementally. This time, the pipeline requires you to approve each deployment environment. --1. Approve the deployment to the `dev` environment. -1. Once the template and manifest changes to the GitOps repo have been generated, the CD pipeline will create a commit, push it, and create a PR for approval. -1. Open the PR link given in the task. -1. Verify the changes to the GitOps repo. You should see: - * High-level Helm template changes. - * Low-level Kubernetes manifests that show the underlying changes to the desired state. -1. If everything looks good, approve and complete the PR. -1. Wait for the deployment to complete. -1. As a basic smoke test, navigate to the application page and verify the voting app now displays Tabs vs Spaces. - * Forward the port locally using `kubectl` and ensure the app works correctly using: - `kubectl port-forward -n dev svc/azure-vote-front 8080:80` - * View the Azure Vote app in your browser at `http://localhost:8080/` and verify the voting choices have changed to Tabs vs Spaces. -1. Repeat steps 1-7 for the `stage` environment. --Your deployment is now complete. This ends the CI/CD workflow. --## Clean up resources --If you're not going to continue to use this application, delete any resources with the following steps: --1. Delete the Azure Arc GitOps configuration connection: - ```azurecli - az k8s-configuration delete \ - --name cluster-config \ - --cluster-name arc-cicd-cluster \ - --resource-group myResourceGroup \ - --cluster-type connectedClusters - ``` --2. Remove the `dev` namespace: - * `kubectl delete namespace dev` --3. Remove the `stage` namespace: - * `kubectl delete namespace stage` --## Next steps --In this tutorial, you have set up a full CI/CD workflow that implements DevOps from application development through deployment. Changes to the app automatically trigger validation and deployment, gated by manual approvals. --Advance to our conceptual article to learn more about GitOps and configurations with Azure Arc-enabled Kubernetes. --> [!div class="nextstepaction"] -> [CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes](./conceptual-gitops-ci-cd.md) |
azure-arc | Tutorial Gitops Flux2 Ci Cd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md | - Title: "Tutorial: Implement CI/CD with GitOps (Flux v2)" -description: "This tutorial walks through setting up a CI/CD solution using GitOps (Flux v2) in Azure Arc-enabled Kubernetes or Azure Kubernetes Service clusters." --- Previously updated : 03/03/2023---# Tutorial: Implement CI/CD with GitOps (Flux v2) --In this tutorial, you'll set up a CI/CD solution using GitOps with Flux v2 and Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. Using the sample Azure Vote app, you'll: --> [!div class="checklist"] -> * Create an Azure Arc-enabled Kubernetes or AKS cluster. -> * Connect your application and GitOps repositories to Azure Repos or GitHub. -> * Implement CI/CD flow with either Azure Pipelines or GitHub. -> * Connect your Azure Container Registry to Azure DevOps and Kubernetes. -> * Create environment variable groups or secrets. -> * Deploy the `dev` and `stage` environments. -> * Test the application environments. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Prerequisites --* Complete the [previous tutorial](./tutorial-use-gitops-flux2.md) to learn how to deploy GitOps for your CI/CD environment. -* Understand the [benefits and architecture](./conceptual-gitops-flux2.md) of this feature. -* Verify you have: - * A [connected Azure Arc-enabled Kubernetes cluster](./quickstart-connect-cluster.md#connect-an-existing-kubernetes-cluster) named **arc-cicd-cluster**. - * A connected Azure Container Registry with either [AKS integration](/azure/aks/cluster-container-registry-integration) or [non-AKS cluster authentication](../../container-registry/container-registry-auth-kubernetes.md). -* Install the latest versions of these Azure Arc-enabled Kubernetes and Kubernetes Configuration CLI extensions: -- ```azurecli - az extension add --name connectedk8s - az extension add --name k8s-configuration - ``` -- * To update these extensions to the latest version, run the following commands: -- ```azurecli - az extension update --name connectedk8s - az extension update --name k8s-configuration - ``` --### Connect Azure Container Registry to Kubernetes --Enable your Kubernetes cluster to pull images from your Azure Container Registry. If it's private, authentication is required. --#### Connect Azure Container Registry to existing AKS clusters --Integrate an existing Azure Container Registry with existing AKS clusters using the following command: --```azurecli -az aks update -n arc-cicd-cluster -g myResourceGroup --attach-acr arc-demo-acr -``` --#### Create an image pull secret --To connect non-AKS and local clusters to your Azure Container Registry, create an image pull secret. Kubernetes uses image pull secrets to store information needed to authenticate your registry. --Create an image pull secret with the following `kubectl` command. Repeat for both the `dev` and `stage` namespaces. --```console -kubectl create secret docker-registry <secret-name> \ - --namespace <namespace> \ - --docker-server=<container-registry-name>.azurecr.io \ - --docker-username=<service-principal-ID> \ - --docker-password=<service-principal-password> -``` --To avoid having to set an imagePullSecret for every Pod, consider adding the imagePullSecret to the Service account in the `dev` and `stage` namespaces. 
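As a rough sketch, you can patch the `default` service account in each namespace so that Pods using it automatically reference the image pull secret (replace `<secret-name>` with the secret you created above):

```console
# Pods that use the default service account in dev and stage will then pull with this secret
kubectl patch serviceaccount default --namespace dev --patch '{"imagePullSecrets": [{"name": "<secret-name>"}]}'
kubectl patch serviceaccount default --namespace stage --patch '{"imagePullSecrets": [{"name": "<secret-name>"}]}'
```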
For more information, see the [Kubernetes tutorial](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account). --Depending on the CI/CD orchestrator you prefer, you can proceed with instructions either for Azure DevOps or for GitHub. --## Implement CI/CD with Azure DevOps --This tutorial assumes familiarity with Azure DevOps, Azure Repos and Pipelines, and Azure CLI. --Make sure to complete the following steps first: --* Sign into [Azure DevOps Services](https://dev.azure.com/). -* Verify you have "Build Admin" and "Project Admin" permissions for [Azure Repos](/azure/devops/repos/get-started/what-is-repos) and [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-get-started). --### Import application and GitOps repositories into Azure Repos --Import an [application repository](./conceptual-gitops-ci-cd.md#application-repo) and a [GitOps repository](./conceptual-gitops-ci-cd.md#gitops-repo) into Azure Repos. For this tutorial, use the following example repositories: --* **arc-cicd-demo-src** application repository - * URL: https://github.com/Azure/arc-cicd-demo-src - * Contains the example Azure Vote App that you'll deploy using GitOps. - * Import the repository with name `arc-cicd-demo-src` --* **arc-cicd-demo-gitops** GitOps repository - * URL: https://github.com/Azure/arc-cicd-demo-gitops - * Works as a base for your cluster resources that house the Azure Vote App. - * Import the repository with name `arc-cicd-demo-gitops` --Learn more about [importing Git repositories](/azure/devops/repos/git/import-git-repository). -->[!NOTE] -> Importing and using two separate repositories for application and GitOps repositories can improve security and simplicity. The application and GitOps repositories' permissions and visibility can be tuned individually. -> For example, the cluster administrator may not find the changes in application code relevant to the desired state of the cluster. Conversely, an application developer doesn't need to know the specific parameters for each environment - a set of test values that provide coverage for the parameters may be sufficient. --### Connect the GitOps repository --To continuously deploy your app, connect the application repository to your cluster using GitOps. Your **arc-cicd-demo-gitops** GitOps repository contains the basic resources to get your app up and running on your **arc-cicd-cluster** cluster. --The initial GitOps repository contains only a [manifest](https://github.com/Azure/arc-cicd-demo-gitops/blob/master/arc-cicd-cluster/manifests/namespaces.yml) that creates the **dev** and **stage** namespaces corresponding to the deployment environments. --The GitOps connection that you create will automatically: --* Sync the manifests in the manifest directory. -* Update the cluster state. --The CI/CD workflow populates the manifest directory with extra manifests to deploy the app. --1. [Create a new GitOps connection](./tutorial-use-gitops-flux2.md) to your newly imported **arc-cicd-demo-gitops** repository in Azure Repos. 
-- ```azurecli - az k8s-configuration flux create \ - --name cluster-config \ - --cluster-name arc-cicd-cluster \ - --namespace flux-system \ - --resource-group myResourceGroup \ - -u https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \ - --https-user <Azure Repos username> \ - --https-key <Azure Repos PAT token> \ - --scope cluster \ - --cluster-type connectedClusters \ - --branch master \ - --kustomization name=cluster-config prune=true path=arc-cicd-cluster/manifests - ``` -- > [!TIP] - > For an AKS cluster (rather than an Arc-enabled cluster), use `--cluster-type managedClusters`. --1. Check the state of the deployment in Azure portal. - * If successful, you'll see both `dev` and `stage` namespaces created in your cluster. - * You can also confirm that on the Azure portal page of your cluster, a configuration `cluster-config` is created on the `GitOps` tab. --### Import the CI/CD pipelines --Now that you've synced a GitOps connection, you need to import the CI/CD pipelines that create the manifests. --The application repository contains a `.pipeline` folder with pipelines used for PRs, CI, and CD. Import and rename the three pipelines provided in the sample repository: --| Pipeline file name | Description | -| - | - | -| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** | -| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** | -| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** | --### Connect Azure Container Registry to Azure DevOps --During the CI process, you deploy your application containers to a registry. Start by creating an Azure service connection: --1. In Azure DevOps, open the **Service connections** page from the project settings page. In TFS, open the **Services** page from the **settings** icon in the top menu bar. -2. Choose **+ New service connection** and select the type of service connection you need. -3. Fill in the parameters for the service connection. For this tutorial: - * Name the service connection **arc-demo-acr**. - * Select **myResourceGroup** as the resource group. -4. Select the **Grant access permission to all pipelines**. - * This option authorizes YAML pipeline files for service connections. -5. Choose **OK** to create the connection. --### Configure PR service connection --The CD pipeline manipulates PRs in the GitOps repository, so it needs a service connection to do this. To configure the connection: --1. In Azure DevOps, open the **Service connections** page from the project settings page. In TFS, open the **Services** page from the **settings** icon in the top menu bar. -2. Choose **+ New service connection** and select the `Generic` type. -3. Fill in the parameters for the service connection. For this tutorial: - * Server URL `https://dev.azure.com/<Your organization>/<Your project>/_apis/git/repositories/arc-cicd-demo-gitops` - * Leave Username and Password blank. - * Name the service connection **azdo-pr-connection**. -4. Select the **Grant access permission to all pipelines**. - * This option authorizes YAML pipeline files for service connections. -5. Choose **OK** to create the connection. 
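Optionally, confirm that both service connections exist from the command line. This sketch assumes the `azure-devops` CLI extension is installed and your organization and project defaults are set with `az devops configure`:

```azurecli
# Both arc-demo-acr and azdo-pr-connection should appear in the list
az devops service-endpoint list --query "[].name" --output table
```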
--### Install GitOps Connector --1. Add GitOps Connector repository to Helm repositories: -- ```console - helm repo add gitops-connector https://azure.github.io/gitops-connector/ - ``` --1. Install the connector to the cluster: -- ```console - helm upgrade -i gitops-connector gitops-connector/gitops-connector \ - --namespace flux-system \ - --set gitRepositoryType=AZDO \ - --set ciCdOrchestratorType=AZDO \ - --set gitOpsOperatorType=FLUX \ - --set azdoGitOpsRepoName=arc-cicd-demo-gitops \ - --set azdoOrgUrl=https://dev.azure.com/<Your organization>/<Your project> \ - --set gitOpsAppURL=https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \ - --set orchestratorPAT=<Azure Repos PAT token> - ``` -- > [!NOTE] - > `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Full` permissions. --1. Configure Flux to send notifications to GitOps connector: -- ```console - cat <<EOF | kubectl apply -f - - apiVersion: notification.toolkit.fluxcd.io/v1beta1 - kind: Alert - metadata: - name: gitops-connector - namespace: flux-system - spec: - eventSeverity: info - eventSources: - - kind: GitRepository - name: cluster-config - - kind: Kustomization - name: cluster-config-cluster-config - providerRef: - name: gitops-connector - - apiVersion: notification.toolkit.fluxcd.io/v1beta1 - kind: Provider - metadata: - name: gitops-connector - namespace: flux-system - spec: - type: generic - address: http://gitops-connector:8080/gitopsphase - EOF - ``` --For the details on installation, refer to the [GitOps Connector](https://github.com/microsoft/gitops-connector#installation) repository. --### Create environment variable groups --#### App repository variable group --[Create a variable group](/azure/devops/pipelines/library/variable-groups) named **az-vote-app-dev**. Set the following values: --| Variable | Value | -| -- | -- | -| AZURE_SUBSCRIPTION | (your Azure Service Connection, which should be **arc-demo-acr** from earlier in the tutorial) | -| AZ_ACR_NAME | Azure ACR name, for example arc-demo-acr | -| ENVIRONMENT_NAME | Dev | -| MANIFESTS_BRANCH | `master` | -| MANIFESTS_REPO | `arc-cicd-demo-gitops` | -| ORGANIZATION_NAME | Name of Azure DevOps organization | -| PROJECT_NAME | Name of GitOps project in Azure DevOps | -| REPO_URL | Full URL for GitOps repository | -| SRC_FOLDER | `azure-vote` | -| TARGET_CLUSTER | `arc-cicd-cluster` | -| TARGET_NAMESPACE | `dev` | -| VOTE_APP_TITLE | Voting Application | -| AKS_RESOURCE_GROUP | AKS Resource group. Needed for automated testing. | -| AKS_NAME | AKS Name. Needed for automated testing. | --#### Stage environment variable group --1. Clone the **az-vote-app-dev** variable group. -1. Change the name to **az-vote-app-stage**. -1. Ensure the following values for the corresponding variables: --| Variable | Value | -| -- | -- | -| ENVIRONMENT_NAME | Stage | -| TARGET_NAMESPACE | `stage` | --You're now ready to deploy to the `dev` and `stage` environments. --#### Create environments --In your Azure DevOps project, create `Dev` and `Stage` environments. For details, see [Create and target an environment](/azure/devops/pipelines/process/environments). --### Give more permissions to the build service --The CD pipeline uses the security token of the running build to authenticate to the GitOps repository. More permissions are needed for the pipeline to create a new branch, push changes, and create pull requests. --1. Go to `Project settings` from the Azure DevOps project main page. -1. Select `Repos/Repositories`. -1. 
Select `Security`. -1. For the `<Project Name> Build Service (<Organization Name>)` and for the `Project Collection Build Service (<Organization Name>)` (type in the search field, if it doesn't show up), allow `Contribute`, `Contribute to pull requests`, and `Create branch`. -1. Go to `Pipelines/Settings` -1. Switch off `Protect access to repositories in YAML pipelines` option --For more information, see: --* [Grant VC Permissions to the Build Service](/azure/devops/pipelines/scripts/git-commands?preserve-view=true&tabs=yaml&view=azure-devops#version-control) -* [Manage Build Service Account Permissions](/azure/devops/pipelines/process/access-tokens?preserve-view=true&tabs=yaml&view=azure-devops#manage-build-service-account-permissions) --### Deploy the dev environment for the first time --With the CI and CD pipelines created, run the CI pipeline to deploy the app for the first time. --#### CI pipeline --During the initial CI pipeline run, you may get a resource authorization error in reading the service connection name. --1. Verify the variable being accessed is AZURE_SUBSCRIPTION. -1. Authorize the use. -1. Rerun the pipeline. --The CI pipeline: --* Ensures the application change passes all automated quality checks for deployment. -* Does any extra validation that couldn't be completed in the PR pipeline. - * Specific to GitOps, the pipeline also publishes the artifacts for the commit that will be deployed by the CD pipeline. -* Verifies the Docker image has changed and the new image is pushed. --#### CD pipeline --During the initial CD pipeline run, you need to give the pipeline access to the GitOps repository. Select **View** when prompted that the pipeline needs permission to access a resource. Then, select **Permit** to grant permission to use the GitOps repository for the current and future runs of the pipeline. --The successful CI pipeline run triggers the CD pipeline to complete the deployment process. You'll deploy to each environment incrementally. --> [!TIP] -> If the CD pipeline does not automatically trigger: -> -> 1. Verify the name matches the branch trigger in [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) -> * It should be `arc-cicd-demo-src CI`. -> 1. Rerun the CI pipeline. --Once the template and manifest changes to the GitOps repository have been generated, the CD pipeline creates a commit, pushes it, and creates a PR for approval. --1. Find the PR created by the pipeline to the GitOps repository. -1. Verify the changes to the GitOps repository. You should see: - * High-level Helm template changes. - * Low-level Kubernetes manifests that show the underlying changes to the desired state. Flux deploys these manifests. -1. If everything looks good, approve and complete the PR. --1. After a few minutes, Flux picks up the change and starts the deployment. -1. Monitor the Git Commit status on the Commit history tab. Once it is `succeeded`, the CD pipeline starts automated testing. -1. Forward the port locally using `kubectl` and ensure the app works correctly using: -- ```console - kubectl port-forward -n dev svc/azure-vote-front 8080:80 - ``` --1. View the Azure Vote app in your browser at `http://localhost:8080/`. --1. Vote for your favorites and get ready to make some changes to the app. --### Set up environment approvals --Upon app deployment, you can not only make changes to the code or templates, but you can also unintentionally put the cluster into a bad state. 
--If the dev environment reveals a break after deployment, keep it from going to later environments using environment approvals. --1. In your Azure DevOps project, go to the environment that needs to be protected. -1. Navigate to **Approvals and Checks** for the resource. -1. Select **Create**. -1. Provide the approvers and an optional message. -1. Select **Create** again to complete the addition of the manual approval check. --For more details, see the [Define approval and checks](/azure/devops/pipelines/process/approvals) tutorial. --Next time the CD pipeline runs, the pipeline will pause after the GitOps PR creation. Verify the change has been synced properly and passes basic functionality. Approve the check from the pipeline to let the change flow to the next environment. --### Make an application change --With this baseline set of templates and manifests representing the state on the cluster, you'll make a small change to the app. --1. In the **arc-cicd-demo-src** repository, edit [`azure-vote/src/azure-vote-front/config_file.cfg`](https://github.com/Azure/arc-cicd-demo-src/blob/master/azure-vote/src/azure-vote-front/config_file.cfg) file. --2. Since "Cats vs Dogs" isn't getting enough votes, change it to "Tabs vs Spaces" to drive up the vote count. --3. Commit the change in a new branch, push it, and create a pull request. This sequence of steps is the typical developer flow that starts the CI/CD lifecycle. --### PR validation pipeline --The PR pipeline is the first line of defense against a faulty change. Usual application code quality checks include linting and static analysis. From a GitOps perspective, you also need to assure the same quality for the resulting infrastructure to be deployed. --The application's Dockerfile and Helm charts can use linting in a similar way to the application. --Errors found during linting range from incorrectly formatted YAML files, to best practice suggestions, such as setting CPU and Memory limits for your application. --> [!NOTE] -> To get the best coverage from Helm linting in a real application, you will need to substitute values that are reasonably similar to those used in a real environment. --Errors found during pipeline execution appear in the test results section of the run. From here, you can: --* Track the useful statistics on the error types. -* Find the first commit on which they were detected. -* Stack trace style links to the code sections that caused the error. --Once the pipeline run has finished, you have assured the quality of the application code and the template that deploys it. You can now approve and complete the PR. The CI will run again, regenerating the templates and manifests, before triggering the CD pipeline. --> [!TIP] -> In a real environment, don't forget to set branch policies to ensure the PR passes your quality checks. For more information, see [Set branch policies](/azure/devops/repos/git/branch-policies). --### CD process approvals --A successful CI pipeline run triggers the CD pipeline to complete the deployment process. This time, the pipeline requires you to approve each deployment environment. --1. Approve the deployment to the `dev` environment. -1. Once the template and manifest changes to the GitOps repository have been generated, the CD pipeline creates a commit, pushes it, and creates a PR for approval. -1. Verify the changes to the GitOps repository. You should see: - * High-level Helm template changes. - * Low-level Kubernetes manifests that show the underlying changes to the desired state. -1. 
If everything looks good, approve and complete the PR. -1. Wait for the deployment to complete. -1. As a basic smoke test, navigate to the application page and verify the voting app now displays Tabs vs Spaces. - * Forward the port locally using `kubectl` and ensure the app works correctly using: - `kubectl port-forward -n dev svc/azure-vote-front 8080:80` - * View the Azure Vote app in your browser at `http://localhost:8080/` and verify the voting choices have changed to Tabs vs Spaces. -1. Repeat steps 1-7 for the `stage` environment. --The deployment is now complete. --For a detailed overview of all the steps and techniques implemented in the CI/CD workflows used in this tutorial, see the [Azure DevOps GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops.md). ---## Implement CI/CD with GitHub --This tutorial assumes familiarity with GitHub and GitHub Actions. --### Fork application and GitOps repositories --Fork an [application repository](./conceptual-gitops-ci-cd.md#application-repo) and a [GitOps repository](./conceptual-gitops-ci-cd.md#gitops-repo). For this tutorial, use the following example repositories: --* **arc-cicd-demo-src** application repository - * URL: https://github.com/Azure/arc-cicd-demo-src - * Contains the example Azure Vote App that you will deploy using GitOps. --* **arc-cicd-demo-gitops** GitOps repository - * URL: https://github.com/Azure/arc-cicd-demo-gitops - * Works as a base for your cluster resources that house the Azure Vote App. --### Connect the GitOps repository --To continuously deploy your app, connect the GitOps repository to your cluster. Your **arc-cicd-demo-gitops** GitOps repository contains the basic resources to get your app up and running on your **arc-cicd-cluster** cluster. --The initial GitOps repository contains only a [manifest](https://github.com/Azure/arc-cicd-demo-gitops/blob/master/arc-cicd-cluster/manifests/namespaces.yml) that creates the **dev** and **stage** namespaces corresponding to the deployment environments. --The GitOps connection that you create will automatically: --* Sync the manifests in the manifest directory. -* Update the cluster state. --The CI/CD workflow populates the manifest directory with extra manifests to deploy the app. --1. [Create a new GitOps connection](./tutorial-use-gitops-flux2.md) to your newly forked **arc-cicd-demo-gitops** repository in GitHub. -- ```azurecli - az k8s-configuration flux create \ - --name cluster-config \ - --cluster-name arc-cicd-cluster \ - --namespace cluster-config \ - --resource-group myResourceGroup \ - -u https://github.com/<Your organization>/arc-cicd-demo-gitops.git \ - --https-user <GitHub username> \ - --https-key <GitHub PAT token> \ - --scope cluster \ - --cluster-type connectedClusters \ - --branch master \ - --kustomization name=cluster-config prune=true path=arc-cicd-cluster/manifests - ``` --1. Check the state of the deployment in the Azure portal. - * If successful, you'll see both `dev` and `stage` namespaces created in your cluster. --### Install GitOps Connector --1. Add the GitOps Connector repository to your Helm repositories: -- ```console - helm repo add gitops-connector https://azure.github.io/gitops-connector/ - ``` --1. 
Install the connector to the cluster: -- ```console - helm upgrade -i gitops-connector gitops-connector/gitops-connector \ - --namespace flux-system \ - --set gitRepositoryType=GITHUB \ - --set ciCdOrchestratorType=GITHUB \ - --set gitOpsOperatorType=FLUX \ - --set gitHubGitOpsRepoName=arc-cicd-demo-src \ - --set gitHubGitOpsManifestsRepoName=arc-cicd-demo-gitops \ - --set gitHubOrgUrl=https://api.github.com/repos/<Your organization> \ - --set gitOpsAppURL=https://github.com/<Your organization>/arc-cicd-demo-gitops/commit \ - --set orchestratorPAT=<GitHub PAT token> - ``` --1. Configure Flux to send notifications to GitOps connector: -- ```console - cat <<EOF | kubectl apply -f - - apiVersion: notification.toolkit.fluxcd.io/v1beta1 - kind: Alert - metadata: - name: gitops-connector - namespace: flux-system - spec: - eventSeverity: info - eventSources: - - kind: GitRepository - name: cluster-config - - kind: Kustomization - name: cluster-config-cluster-config - providerRef: - name: gitops-connector - - apiVersion: notification.toolkit.fluxcd.io/v1beta1 - kind: Provider - metadata: - name: gitops-connector - namespace: flux-system - spec: - type: generic - address: http://gitops-connector:8080/gitopsphase - EOF - ``` --For the details on installation, refer to the [GitOps Connector](https://github.com/microsoft/gitops-connector#installation) repository. --### Create GitHub secrets --#### Create GitHub repository secrets --| Secret | Value | -| -- | -- | -| AZURE_CREDENTIALS | Credentials for Azure in the following format {"clientId":"GUID","clientSecret":"GUID","subscriptionId":"GUID","tenantId":"GUID"} | -| AZ_ACR_NAME | Azure ACR name, for example arc-demo-acr | -| MANIFESTS_BRANCH | `master` | -| MANIFESTS_FOLDER | `arc-cicd-cluster` | -| MANIFESTS_REPO | `https://github.com/your-organization/arc-cicd-demo-gitops` | -| VOTE_APP_TITLE | Voting Application | -| AKS_RESOURCE_GROUP | AKS Resource group. Needed for automated testing. | -| AKS_NAME | AKS Name. Needed for automated testing. | -| PAT | GitHub PAT token with the permission to PR to the GitOps repository | --#### Create GitHub environment secrets --1. Create `az-vote-app-dev` environment with the following secrets: --| Secret | Value | -| -- | -- | -| ENVIRONMENT_NAME | Dev | -| TARGET_NAMESPACE | `dev` | --1. Create `az-vote-app-stage` environment with the following secrets: --| Secret | Value | -| -- | -- | -| ENVIRONMENT_NAME | Stage | -| TARGET_NAMESPACE | `stage` | --You're now ready to deploy to the `dev` and `stage` environments. --#### CI/CD Dev workflow --To start the CI/CD Dev workflow, change the source code. In the application repository, update values in `.azure-vote/src/azure-vote-front/config_file.cfg` file and push the changes to the repository. --The CI/CD Dev workflow: --* Ensures the application change passes all automated quality checks for deployment. -* Does any extra validation that couldn't be completed in the PR pipeline. -* Verifies the Docker image has changed and the new image is pushed. -* Publishes the artifacts (Docker image tags, Manifest templates, Utils) that will be used by the following CD stages. -* Deploys the application to Dev environment. - * Generates manifests to the GitOps repository. - * Creates a PR to the GitOps repository for approval. --1. Find the PR created by the pipeline to the GitOps repository. -1. Verify the changes to the GitOps repository. You should see: - * High-level Helm template changes. 
- * Low-level Kubernetes manifests that show the underlying changes to the desired state. Flux deploys these manifests. -1. If everything looks good, approve and complete the PR. -1. After a few minutes, Flux picks up the change and starts the deployment. -1. Monitor the Git Commit status on the Commit history tab. Once it is `succeeded`, the `CD Stage` workflow will start. -1. Forward the port locally using `kubectl` and ensure the app works correctly using: -- ```console - kubectl port-forward -n dev svc/azure-vote-front 8080:80 - ``` --1. View the Azure Vote app in your browser at `http://localhost:8080/`. -1. Vote for your favorites and get ready to make some changes to the app. --#### CD Stage workflow --The CD Stage workflow starts automatically once Flux successfully deploys the application to dev environment and notifies GitHub actions via GitOps Connector. --The CD Stage workflow: --* Runs application smoke tests against Dev environment -* Deploys the application to Stage environment. - * Generates manifests to the GitOps repository - * Creates a PR to the GitOps repository for approval --Once the manifests PR to the Stage environment is merged and Flux successfully applies all the changes, the Git commit status is updated in the GitOps repository. The deployment is now complete. --For a detailed overview of all the steps and techniques implemented in the CI/CD workflows used in this tutorial, see the [GitHub GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops-githubfluxv2.md). --## Clean up resources --If you're not going to continue to use this application, delete any resources with the following steps: --1. Delete the Azure Arc GitOps configuration connection: -- ```azurecli - az k8s-configuration flux delete \ - --name cluster-config \ - --cluster-name arc-cicd-cluster \ - --resource-group myResourceGroup \ - -t connectedClusters --yes - ``` --1. Delete GitOps Connector: -- ```console - helm uninstall gitops-connector -n flux-system - kubectl delete alerts.notification.toolkit.fluxcd.io gitops-connector -n flux-system - kubectl delete providers.notification.toolkit.fluxcd.io gitops-connector -n flux-system - ``` --## Next steps --In this tutorial, you have set up a full CI/CD workflow that implements DevOps from application development through deployment. Changes to the app automatically trigger validation and deployment, gated by manual approvals. --Advance to our conceptual article to learn more about GitOps and configurations with Azure Arc-enabled Kubernetes. --> [!div class="nextstepaction"] -> [Concept: CD process with GitOps](https://github.com/microsoft/kalypso/blob/main/docs/cd-concept.md) -> [Sample implementation: Explore CI/CD flow with GitOps](https://github.com/microsoft/kalypso/blob/main/cicd/tutorial/cicd-tutorial.md) |
azure-arc | Tutorial Use Gitops Connected Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md | - Title: 'Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster' -description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster. - Previously updated : 05/08/2023----# Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster --> [!IMPORTANT] -> This tutorial is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. -> -> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. --In this tutorial, you will apply configurations using GitOps on an Azure Arc-enabled Kubernetes cluster. You'll learn how to: --> [!div class="checklist"] -> * Create a configuration on an Azure Arc-enabled Kubernetes cluster using an example Git repository. -> * Validate that the configuration was successfully created. -> * Apply configuration from a private Git repository. -> * Validate the Kubernetes configuration. --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing Azure Arc-enabled Kubernetes connected cluster.- - If you haven't connected a cluster yet, walk through our [Connect an Azure Arc-enabled Kubernetes cluster quickstart](quickstart-connect-cluster.md). -- An understanding of the benefits and architecture of this feature. Read more in [Configurations and GitOps - Azure Arc-enabled Kubernetes article](conceptual-configurations.md).-- Install the `k8s-configuration` Azure CLI extension of version >= 1.0.0:- - ```azurecli - az extension add --name k8s-configuration - ``` -- >[!TIP] - > If the `k8s-configuration` extension is already installed, you can update it to the latest version using the following command - `az extension update --name k8s-configuration` --## Create a configuration --The [example repository](https://github.com/Azure/arc-k8s-demo) used in this article is structured around the persona of a cluster operator. The manifests in this repository provision a few namespaces, deploy workloads, and provide some team-specific configuration. Using this repository with GitOps creates the following resources on your cluster: --* Namespaces: `cluster-config`, `team-a`, `team-b` -* Deployment: `arc-k8s-demo` -* ConfigMap: `team-a/endpoints` --The `config-agent` polls Azure for new or updated configurations. This task will take up to 5 minutes. --If you are associating a private repository with the configuration, complete the steps below in [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository). --## Use Azure CLI -Use the Azure CLI extension for `k8s-configuration` to link a connected cluster to the [example Git repository](https://github.com/Azure/arc-k8s-demo). -1. Name this configuration `cluster-config`. -1. 
Instruct the agent to deploy the operator in the `cluster-config` namespace. -1. Give the operator `cluster-admin` permissions. -- ```azurecli - az k8s-configuration create --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --operator-instance-name cluster-config --operator-namespace cluster-config --repository-url https://github.com/Azure/arc-k8s-demo --scope cluster --cluster-type connectedClusters - ``` -- ```output - { - "complianceStatus": { - "complianceState": "Pending", - "lastConfigApplied": "0001-01-01T00:00:00", - "message": "{\"OperatorMessage\":null,\"ClusterState\":null}", - "messageLevel": "3" - }, - "configurationProtectedSettings": {}, - "enableHelmOperator": false, - "helmOperatorProperties": null, - "id": "/subscriptions/<sub id>/resourceGroups/<group name>/providers/Microsoft.Kubernetes/connectedClusters/<cluster name>/providers/Microsoft.KubernetesConfiguration/sourceControlConfigurations/cluster-config", - "name": "cluster-config", - "operatorInstanceName": "cluster-config", - "operatorNamespace": "cluster-config", - "operatorParams": "--git-readonly", - "operatorScope": "cluster", - "operatorType": "Flux", - "provisioningState": "Succeeded", - "repositoryPublicKey": "", - "repositoryUrl": "https://github.com/Azure/arc-k8s-demo", - "resourceGroup": "MyRG", - "sshKnownHostsContents": "", - "systemData": { - "createdAt": "2020-11-24T21:22:01.542801+00:00", - "createdBy": null, - "createdByType": null, - "lastModifiedAt": "2020-11-24T21:22:01.542801+00:00", - "lastModifiedBy": null, - "lastModifiedByType": null - }, - "type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations" - } - ``` --### Use a public Git repository --| Parameter | Format | -| - | - | -| `--repository-url` | http[s]://server/repo[.git] --### Use a private Git repository with SSH and Flux-created keys --Add the public key generated by Flux to the user account in your Git service provider. If the key is added to the repository instead of the user account, use `git@` in place of `user@` in the URL. --Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details. ---| Parameter | Format | Notes -| - | - | - | -| `--repository-url` | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may replace `user@` --### Use a private Git repository with SSH and user-provided keys --Provide your own private key directly or in a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with newline (\n). --Add the associated public key to the user account in your Git service provider. If the key is added to the repository instead of the user account, use `git@` in place of `user@`. --Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details. --| Parameter | Format | Notes | -| - | - | - | -| `--repository-url` | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may replace `user@` | -| `--ssh-private-key` | base64-encoded key in [PEM format](https://aka.ms/PEMformat) | Provide key directly | -| `--ssh-private-key-file` | full path to local file | Provide full path to local file that contains the PEM-format key --### Use a private Git host with SSH and user-provided known hosts --The Flux operator maintains a list of common Git hosts in its known hosts file to authenticate the Git repository before establishing the SSH connection. 
If you are using an *uncommon* Git repository or your own Git host, you can supply the host key so that Flux can identify your repo. --Just like private keys, you can provide your known_hosts content directly or in a file. When providing your own content, use the [known_hosts content format specifications](https://aka.ms/KnownHostsFormat), along with either of the SSH key scenarios above. --| Parameter | Format | Notes | -| - | - | - | -| `--repository-url` | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may replace `user@` | -| `--ssh-known-hosts` | base64-encoded | Provide known hosts content directly | -| `--ssh-known-hosts-file` | full path to local file | Provide known hosts content in a local file | --### Use a private Git repository with HTTPS --| Parameter | Format | Notes | -| - | - | - | -| `--repository-url` | https://server/repo[.git] | HTTPS with basic auth | -| `--https-user` | raw or base64-encoded | HTTPS username | -| `--https-key` | raw or base64-encoded | HTTPS personal access token or password -->[!NOTE] ->* Helm operator chart version 1.2.0+ supports the HTTPS Helm release private auth. ->* HTTPS Helm release is not supported for AKS managed clusters. ->* If you need Flux to access the Git repository through your proxy, you will need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). ---## Additional Parameters --Customize the configuration with the following optional parameters: --| Parameter | Description | -| - | - | -| `--enable-helm-operator`| Switch to enable support for Helm chart deployments. | -| `--helm-operator-params` | Chart values for Helm operator (if enabled). For example, `--set helm.versions=v3`. | -| `--helm-operator-chart-version` | Chart version for Helm operator (if enabled). Use version 1.2.0+. Default: '1.2.0'. | -| `--operator-namespace` | Name for the operator namespace. Default: 'default'. Max: 23 characters. | -| `--operator-params` | Parameters for operator. Must be given within single quotes. For example, ```--operator-params='--git-readonly --sync-garbage-collection --git-branch=main'``` --### Options supported in `--operator-params`: --| Option | Description | -| - | - | -| `--git-branch` | Branch of Git repository to use for Kubernetes manifests. Default is 'master'. Newer repositories have root branch named `main`, in which case you need to set `--git-branch=main`. | -| `--git-path` | Relative path within the Git repository for Flux to locate Kubernetes manifests. | -| `--git-readonly` | Git repository will be considered read-only. Flux will not attempt to write to it. | -| `--manifest-generation` | If enabled, Flux will look for .flux.yaml and run Kustomize or other manifest generators. | -| `--git-poll-interval` | Period at which to poll Git repository for new commits. Default is `5m` (5 minutes). | -| `--sync-garbage-collection` | If enabled, Flux will delete resources that it created, but are no longer present in Git. | -| `--git-label` | Label to keep track of sync progress. Used to tag the Git branch. Default is `flux-sync`. | -| `--git-user` | Username for Git commit. | -| `--git-email` | Email to use for Git commit. --If you don't want Flux to write to the repository and `--git-user` or `--git-email` aren't set, then `--git-readonly` will automatically be set. --For more information, see the [Flux documentation](https://aka.ms/FluxcdReadme). 
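For example, combining several of these parameters, a configuration that tracks a `main` branch, limits Flux to a subfolder, enables garbage collection, and turns on the Helm operator might look like the following sketch. The cluster, resource group, repository URL, and `releases` path are illustrative placeholders; substitute your own values.

```azurecli
az k8s-configuration create \
  --name cluster-config \
  --cluster-name AzureArcTest1 \
  --resource-group AzureArcTest \
  --cluster-type connectedClusters \
  --scope cluster \
  --operator-instance-name cluster-config \
  --operator-namespace cluster-config \
  --repository-url https://github.com/<your-org>/<your-repo> \
  --operator-params='--git-branch=main --git-path=releases --git-readonly --sync-garbage-collection' \
  --enable-helm-operator \
  --helm-operator-chart-version '1.2.0' \
  --helm-operator-params '--set helm.versions=v3'
```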
-->[!NOTE] -> Flux defaults to sync from the `master` branch of the git repo. However, newer git repositories have the root branch named `main`, in which case you need to set `--git-branch=main` in the --operator-params. --> [!TIP] -> You can create a configuration in the Azure portal in the **GitOps** tab of the Azure Arc-enabled Kubernetes resource. --## Validate the configuration --Use the Azure CLI to validate that the configuration was successfully created. --```azurecli -az k8s-configuration show --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters -``` --The configuration resource will be updated with compliance status, messages, and debugging information. --```output -{ - "complianceStatus": { - "complianceState": "Installed", - "lastConfigApplied": "2020-12-10T18:26:52.801000+00:00", - "message": "...", - "messageLevel": "Information" - }, - "configurationProtectedSettings": {}, - "enableHelmOperator": false, - "helmOperatorProperties": { - "chartValues": "", - "chartVersion": "" - }, - "id": "/subscriptions/<sub id>/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1/providers/Microsoft.KubernetesConfiguration/sourceControlConfigurations/cluster-config", - "name": "cluster-config", - "operatorInstanceName": "cluster-config", - "operatorNamespace": "cluster-config", - "operatorParams": "--git-readonly", - "operatorScope": "cluster", - "operatorType": "Flux", - "provisioningState": "Succeeded", - "repositoryPublicKey": "...", - "repositoryUrl": "git://github.com/Azure/arc-k8s-demo.git", - "resourceGroup": "AzureArcTest", - "sshKnownHostsContents": null, - "systemData": { - "createdAt": "2020-12-01T03:58:56.175674+00:00", - "createdBy": null, - "createdByType": null, - "lastModifiedAt": "2020-12-10T18:30:56.881219+00:00", - "lastModifiedBy": null, - "lastModifiedByType": null -}, - "type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations" -} -``` --When a configuration is created or updated, a few things happen: --1. The Azure Arc `config-agent` monitors Azure Resource Manager for new or updated configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations`) and notices the new `Pending` configuration. -1. The `config-agent` reads the configuration properties and creates the destination namespace. -1. The Azure Arc `controller-manager` creates a Kubernetes service account and maps it to [ClusterRoleBinding or RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for the appropriate permissions (`cluster` or `namespace` scope). It then deploys an instance of `flux`. -1. If using the option of SSH with Flux-generated keys, `flux` generates an SSH key and logs the public key. -1. The `config-agent` reports status back to the configuration resource in Azure. --While the provisioning process happens, the configuration resource will move through a few state changes. Monitor progress with the `az k8s-configuration show ...` command above: --| Stage change | Description | -| - | - | -| `complianceStatus`-> `Pending` | Represents the initial and in-progress states. | -| `complianceStatus` -> `Installed` | `config-agent` successfully configured the cluster and deployed `flux` without error. | -| `complianceStatus` -> `Failed` | `config-agent` ran into an error deploying `flux`. Details are provided in `complianceStatus.message` response body. 
| --## Apply configuration from a private Git repository --If you are using a private Git repository, you need to configure the SSH public key in your repository. Either you provide or Flux generates the SSH public key. You can configure the public key either on the specific Git repository or on the Git user that has access to the repository. --### Get your own public key --If you generated your own SSH keys, then you already have the private and public keys. --#### Get the public key using Azure CLI --Use the following in Azure CLI if Flux is generating the keys. --```azurecli -az k8s-configuration show --resource-group <resource group name> --cluster-name <connected cluster name> --name <configuration name> --cluster-type connectedClusters --query 'repositoryPublicKey' -"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAREDACTED" -``` --#### Get the public key from the Azure portal --Walk through the following in Azure portal if Flux is generating the keys. --1. In the Azure portal, navigate to the connected cluster resource. -2. In the resource page, select "GitOps" and see the list of configurations for this cluster. -3. Select the configuration that uses the private Git repository. -4. In the context window that opens, at the bottom of the window, copy the **Repository public key**. --#### Add public key using GitHub --Use one of the following options: --* Option 1: Add the public key to your user account (applies to all repositories in your account): - 1. Open GitHub and click on your profile icon at the top-right corner of the page. - 2. Click on **Settings**. - 3. Click on **SSH and GPG keys**. - 4. Click on **New SSH key**. - 5. Supply a Title. - 6. Paste the public key without any surrounding quotes. - 7. Click on **Add SSH key**. --* Option 2: Add the public key as a deploy key to the Git repository (applies to only this repository): - 1. Open GitHub and navigate to your repository. - 1. Click on **Settings**. - 1. Click on **Deploy keys**. - 1. Click on **Add deploy key**. - 1. Supply a Title. - 1. Check **Allow write access**. - 1. Paste the public key without any surrounding quotes. - 1. Click on **Add key**. --#### Add public key using an Azure DevOps repository --Use the following steps to add the key to your SSH keys: --1. Under **User Settings** in the top right (next to the profile image), click **SSH public keys**. -1. Select **+ New Key**. -1. Supply a name. -1. Paste the public key without any surrounding quotes. -1. Click **Add**. --## Validate the Kubernetes configuration --After `config-agent` has installed the `flux` instance, resources held in the Git repository should begin to flow to the cluster. Check to see that the namespaces, deployments, and resources have been created with the following command: --```console -kubectl get ns --show-labels -``` --```output -NAME STATUS AGE LABELS -azure-arc Active 24h <none> -cluster-config Active 177m <none> -default Active 29h <none> -itops Active 177m fluxcd.io/sync-gc-mark=sha256.9oYk8yEsRwWkR09n8eJCRNafckASgghAsUWgXWEQ9es,name=itops -kube-node-lease Active 29h <none> -kube-public Active 29h <none> -kube-system Active 29h <none> -team-a Active 177m fluxcd.io/sync-gc-mark=sha256.CS5boSi8kg_vyxfAeu7Das5harSy1i0gc2fodD7YDqA,name=team-a -team-b Active 177m fluxcd.io/sync-gc-mark=sha256.vF36thDIFnDDI2VEttBp5jgdxvEuaLmm7yT_cuA2UEw,name=team-b -``` --We can see that `team-a`, `team-b`, `itops`, and `cluster-config` namespaces have been created. 
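The example repository also creates the `arc-k8s-demo` deployment and the `team-a/endpoints` ConfigMap, so you can spot-check those as well. This is just a quick sketch; exact output depends on your cluster.

```console
# Locate the arc-k8s-demo deployment created from the repository manifests.
kubectl get deployments --all-namespaces | grep arc-k8s-demo

# Confirm the ConfigMap in the team-a namespace.
kubectl get configmap endpoints -n team-a
```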
--The `flux` operator has been deployed to `cluster-config` namespace, as directed by the configuration resource: --```console -kubectl -n cluster-config get deploy -o wide -``` --```output -NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR -cluster-config 1/1 1 1 3h flux docker.io/fluxcd/flux:1.16.0 instanceName=cluster-config,name=flux -memcached 1/1 1 1 3h memcached memcached:1.5.15 name=memcached -``` --## Further exploration --You can explore the other resources deployed as part of the configuration repository using: --```console -kubectl -n team-a get cm -o yaml -kubectl -n itops get all -``` -## Clean up resources --Delete a configuration using the Azure CLI or Azure portal. After you run the delete command, the configuration resource will be deleted immediately in Azure. Full deletion of the associated objects from the cluster should happen within 10 minutes. If the configuration is in a failed state when removed, the full deletion of associated objects can take up to an hour. --When a configuration with `namespace` scope is deleted, the namespace is not deleted by Azure Arc to avoid breaking existing workloads. If needed, you can delete this namespace manually using `kubectl`. --```azurecli -az k8s-configuration delete --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters -``` --> [!NOTE] -> Any changes to the cluster that were the result of deployments from the tracked Git repository are not deleted when the configuration is deleted. --## Next steps --Advance to the next tutorial to learn how to implement CI/CD with GitOps. -> [!div class="nextstepaction"] -> [Implement CI/CD with GitOps](./tutorial-gitops-ci-cd.md) |
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | - Title: "Tutorial: Deploy applications using GitOps with Flux v2" -description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 08/01/2024-----# Tutorial: Deploy applications using GitOps with Flux v2 --This tutorial describes how to use GitOps in a Kubernetes cluster. GitOps with Flux v2 is enabled as a [cluster extension](conceptual-extensions.md) in Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment. --In this tutorial, we use an example GitOps configuration with two [kustomizations](gitops-flux2-parameters.md#kustomization), so that you can see how one kustomization can have a dependency on another. You can add more kustomizations and dependencies as needed, depending on your scenario. --Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md). --> [!TIP] -> While the source in this tutorial is a Git repository, Flux also provides support for other common file sources such as Helm repositories, Buckets, and Azure Blob Storage. -> -> You can also create Flux configurations by using Bicep, ARM templates, or Terraform AzAPI provider. For more information, see [Microsoft.KubernetesConfiguration fluxConfigurations](/azure/templates/microsoft.kubernetesconfiguration/fluxconfigurations). --> [!IMPORTANT] -> The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](conceptual-gitops-flux2.md#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the [latest version](extensions-release.md#flux-gitops) manually using the Azure CLI: `az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>` (use `-t connectedClusters` for Arc clusters and `-t managedClusters` for AKS clusters). --## Prerequisites --To deploy applications using GitOps with Flux v2, you need: --### [Azure CLI](#tab/azure-cli) --#### For Azure Arc-enabled Kubernetes clusters --* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops). - - [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server). --* Read and write permissions on the `Microsoft.Kubernetes/connectedClusters` resource type. --#### For Azure Kubernetes Service clusters --* An MSI-based AKS cluster that's up and running. -- > [!IMPORTANT] - > Ensure that the AKS cluster is created with MSI (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters. 
- > For new AKS clusters created with `az aks create`, the cluster is MSI-based by default. For already created SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, see [Use a managed identity in AKS](/azure/aks/use-managed-identity). --* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. --#### Common to both cluster types --* Read and write permissions on these resource types: -- * `Microsoft.KubernetesConfiguration/extensions` - * `Microsoft.KubernetesConfiguration/fluxConfigurations` --* Azure CLI version 2.15 or later. [Install the Azure CLI](/cli/azure/install-azure-cli) or use the following commands to update to the latest version: -- ```azurecli - az version - az upgrade - ``` --* The Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/reference/kubectl/). `kubectl` is already installed if you use Azure Cloud Shell. -- Install `kubectl` locally using the [`az aks install-cli`](/cli/azure/aks#az-aks-install-cli) command: -- ```azurecli - az aks install-cli - ``` --* Registration of the following Azure resource providers: -- ```azurecli - az provider register --namespace Microsoft.Kubernetes - az provider register --namespace Microsoft.ContainerService - az provider register --namespace Microsoft.KubernetesConfiguration - ``` -- Registration is an asynchronous process and should finish within 10 minutes. To monitor the registration process, use the following command: -- ```azurecli - az provider show -n Microsoft.KubernetesConfiguration -o table -- Namespace RegistrationPolicy RegistrationState - -- - - Microsoft.KubernetesConfiguration RegistrationRequired Registered - ``` --#### Version and region support --GitOps is currently supported in [all regions that Azure Arc-enabled Kubernetes supports](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). GitOps is currently supported in a subset of the regions that AKS supports. The GitOps service is adding new supported regions on a regular cadence. --The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. --#### Network requirements --The GitOps agents require outbound (egress) TCP to the repo source on either port 22 (SSH) or port 443 (HTTPS) to function. The agents also require access to the following outbound URLs: --| Endpoint (DNS) | Description | -| | | -| `https://management.azure.com` | Required for the agent to communicate with the Kubernetes Configuration service. | -| `https://<region>.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information. Depends on `<region>` (the supported regions mentioned earlier). | -| `https://login.microsoftonline.com` | Required to fetch and update Azure Resource Manager tokens. | -| `https://mcr.microsoft.com` | Required to pull container images for Flux controllers. 
| --### Enable CLI extensions --Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages: --```azurecli -az extension add -n k8s-configuration -az extension add -n k8s-extension -``` --To update these packages to the latest versions: --```azurecli -az extension update -n k8s-configuration -az extension update -n k8s-extension -``` --To see a list of all installed Azure CLI extensions and their versions, use the following command: --```azurecli -az extension list -o table --Experimental ExtensionType Name Path Preview Version -- -- -- -- -- ---False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.2.7 -False whl k8s-configuration C:\Users\somename\.azure\cliextensions\k8s-configuration False 1.5.0 -False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.1.0 -``` --> [!TIP] -> For help resolving any errors, see the GitOps (Flux v2) section of [Troubleshoot extension issues for Azure Arc-enabled Kubernetes clusters](extensions-troubleshooting.md). --### [Azure portal](#tab/azure-portal) --#### For Azure Arc-enabled Kubernetes clusters --* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops). - - [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server). --* Read and write permissions on the `Microsoft.Kubernetes/connectedClusters` resource type. --#### For Azure Kubernetes Service clusters --* An MSI-based AKS cluster that's up and running. -- > [!IMPORTANT] - > Ensure that the AKS cluster is created with MSI (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters. - > For new AKS clusters created with `az aks create`, the cluster is MSI-based by default. For already created SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, see [Use a managed identity in AKS](/azure/aks/use-managed-identity). --* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. --#### Common to both cluster types --* Read and write permissions on these resource types: -- * `Microsoft.KubernetesConfiguration/extensions` - * `Microsoft.KubernetesConfiguration/fluxConfigurations` --* [Registration](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) of the following Azure resource providers: -- * Microsoft.ContainerService - * Microsoft.Kubernetes - * Microsoft.KubernetesConfiguration --#### Version and region support --GitOps is currently supported in [all regions that Azure Arc-enabled Kubernetes supports](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). GitOps is currently supported in a subset of the regions that AKS supports. The GitOps service is adding new supported regions on a regular cadence. --The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. 
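If you're not sure which version of the Flux extension is installed on a given cluster (and therefore whether it falls inside the supported N-2 window), you can query the extension resource from the CLI even if you otherwise manage configurations in the portal. A minimal sketch for an Arc-enabled cluster, assuming the extension instance is named `flux`:

```azurecli
az k8s-extension show \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type connectedClusters \
  --name flux \
  --query version -o tsv
```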
--#### Network requirements --The GitOps agents require outbound (egress) TCP to the repo source on either port 22 (SSH) or port 443 (HTTPS) to function. The agents also require access to the following outbound URLs: --| Endpoint (DNS) | Description | -| | | -| `https://management.azure.com` | Required for the agent to communicate with the Kubernetes Configuration service. | -| `https://<region>.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information. Depends on `<region>` (the supported regions mentioned earlier). | -| `https://login.microsoftonline.com` | Required to fetch and update Azure Resource Manager tokens. | -| `https://mcr.microsoft.com` | Required to pull container images for Flux controllers. | ----## Apply a Flux configuration --Use the `k8s-configuration` Azure CLI extension or the Azure portal to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [gitops-flux2-kustomize-helm-mt](https://github.com/Azure/gitops-flux2-kustomize-helm-mt) repository. --> [!IMPORTANT] -> The demonstration repo is designed to simplify your use of this tutorial and illustrate some key principles. To keep up to date, the repo can get breaking changes occasionally from version upgrades. These changes won't affect your new application of this tutorial, only previous tutorial applications that have not been deleted. To learn how to handle these changes please see the [breaking change disclaimer](https://github.com/Azure/gitops-flux2-kustomize-helm-mt#breaking-change-disclaimer-%EF%B8%8F). --### [Azure CLI](#tab/azure-cli) --The following example uses the `az k8s-configuration create` command to apply a Flux configuration to a cluster, using the following values and settings: --* The resource group that contains the cluster is `flux-demo-rg`. -* The name of the Azure Arc cluster is `flux-demo-arc`. -* The cluster type is Azure Arc (`-t connectedClusters`), but this example also works with AKS (`-t managedClusters`). -* The name of the Flux configuration is `cluster-config`. -* The namespace for configuration installation is `cluster-config`. -* The URL for the public Git repository is `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`. -* The Git repository branch is `main`. -* The scope of the configuration is `cluster`. This scope gives the operators permissions to make changes throughout cluster. To use `namespace` scope with this tutorial, [see the changes needed](conceptual-gitops-flux2.md#multi-tenancy). -* Two kustomizations are specified with names `infra` and `apps`. Each is associated with a path in the repository. -* The `apps` kustomization depends on the `infra` kustomization. (The `infra` kustomization must finish before the `apps` kustomization runs.) -* Set `prune=true` on both kustomizations. This setting ensures that the objects that Flux deployed to the cluster are cleaned up if they're removed from the repository, or if the Flux configuration or kustomizations are deleted. 
```azurecli -az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n cluster-config --namespace cluster-config -t connectedClusters --scope cluster -u https://github.com/Azure/gitops-flux2-kustomize-helm-mt --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=\["infra"\] -``` --The `microsoft.flux` extension is installed on the cluster (if it wasn't already installed in a previous GitOps deployment). --> [!TIP] -> The `az k8s-configuration create` command deploys the `microsoft.flux` extension to the cluster and creates the configuration. In some scenarios, you may want to create the flux extension instance separately before you create your configuration resources. To do so, use the `az k8s-extension create` command to [create an instance of the extension on your cluster](extensions.md#create-extension-instance). --When the flux configuration is first installed, the initial compliance state may be `Pending` or `Non-compliant` because reconciliation is still ongoing. After a minute or so, query the configuration again to see the final compliance state and confirm that the deployment was successful: --```azurecli -az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters -``` --With a successful deployment, the following namespaces are created: --* `flux-system`: Holds the Flux extension controllers. -* `cluster-config`: Holds the Flux configuration objects. -* `nginx`, `podinfo`, `redis`: Namespaces for workloads described in manifests in the Git repository. --To confirm the namespaces, run the following command: --```azurecli -kubectl get namespaces -``` --The `flux-system` namespace contains the Flux extension objects: --* Azure Flux controllers: `fluxconfig-agent`, `fluxconfig-controller` -* OSS Flux controllers: `source-controller`, `kustomize-controller`, `helm-controller`, `notification-controller` --The Flux agent and controller pods should be in a running state. Confirm this using the following command: --```azurecli -kubectl get pods -n flux-system --NAME READY STATUS RESTARTS AGE -fluxconfig-agent-9554ffb65-jqm8g 2/2 Running 0 21m -fluxconfig-controller-9d99c54c8-nztg8 2/2 Running 0 21m -helm-controller-59cc74dbc5-77772 1/1 Running 0 21m -kustomize-controller-5fb7d7b9d5-cjdhx 1/1 Running 0 21m -notification-controller-7d45678bc-fvlvr 1/1 Running 0 21m -source-controller-df7dc97cd-4drh2 1/1 Running 0 21m -``` --The namespace `cluster-config` has the Flux configuration objects.
--```azurecli -kubectl get crds --NAME CREATED AT -alerts.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z -arccertificates.clusterconfig.azure.com 2022-03-28T21:45:19Z -azureclusteridentityrequests.clusterconfig.azure.com 2022-03-28T21:45:19Z -azureextensionidentities.clusterconfig.azure.com 2022-03-28T21:45:19Z -buckets.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z -connectedclusters.arc.azure.com 2022-03-28T21:45:19Z -customlocationsettings.clusterconfig.azure.com 2022-03-28T21:45:19Z -extensionconfigs.clusterconfig.azure.com 2022-03-28T21:45:19Z -fluxconfigs.clusterconfig.azure.com 2022-04-06T17:15:48Z -gitconfigs.clusterconfig.azure.com 2022-03-28T21:45:19Z -gitrepositories.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z -helmcharts.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z -helmreleases.helm.toolkit.fluxcd.io 2022-04-06T17:15:48Z -helmrepositories.source.toolkit.fluxcd.io 2022-04-06T17:15:48Z -imagepolicies.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z -imagerepositories.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z -imageupdateautomations.image.toolkit.fluxcd.io 2022-04-06T17:15:48Z -kustomizations.kustomize.toolkit.fluxcd.io 2022-04-06T17:15:48Z -providers.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z -receivers.notification.toolkit.fluxcd.io 2022-04-06T17:15:48Z -volumesnapshotclasses.snapshot.storage.k8s.io 2022-03-28T21:06:12Z -volumesnapshotcontents.snapshot.storage.k8s.io 2022-03-28T21:06:12Z -volumesnapshots.snapshot.storage.k8s.io 2022-03-28T21:06:12Z -websites.extensions.example.com 2022-03-30T23:42:32Z -``` --Confirm other details of the configuration by using the following commands. --```azurecli -kubectl get fluxconfigs -A --NAMESPACE NAME SCOPE URL PROVISION AGE -cluster-config cluster-config cluster https://github.com/Azure/gitops-flux2-kustomize-helm-mt Succeeded 44m -``` --```azurecli -kubectl get gitrepositories -A --NAMESPACE NAME URL READY STATUS AGE -cluster-config cluster-config https://github.com/Azure/gitops-flux2-kustomize-helm-mt True Fetched revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 45m -``` --```azurecli -kubectl get helmreleases -A --NAMESPACE NAME READY STATUS AGE -cluster-config nginx True Release reconciliation succeeded 66m -cluster-config podinfo True Release reconciliation succeeded 66m -cluster-config redis True Release reconciliation succeeded 66m -``` --```azurecli -kubectl get kustomizations -A ---NAMESPACE NAME READY STATUS AGE -cluster-config cluster-config-apps True Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 65m -cluster-config cluster-config-infra True Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf 65m -``` --Workloads are deployed from manifests in the Git repository. --```azurecli -kubectl get deploy -n nginx --NAME READY UP-TO-DATE AVAILABLE AGE -nginx-ingress-controller 1/1 1 1 67m -nginx-ingress-controller-default-backend 1/1 1 1 67m --kubectl get deploy -n podinfo --NAME READY UP-TO-DATE AVAILABLE AGE -podinfo 1/1 1 1 68m --kubectl get all -n redis --NAME READY STATUS RESTARTS AGE -pod/redis-master-0 1/1 Running 0 68m --NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/redis-headless ClusterIP None <none> 6379/TCP 68m -service/redis-master ClusterIP 10.0.13.182 <none> 6379/TCP 68m --NAME READY AGE -statefulset.apps/redis-master 1/1 68m -``` --#### Control which controllers are deployed with the Flux cluster extension --For some scenarios, you may wish to change which Flux controllers are installed with the Flux cluster extension. 
--The `source`, `helm`, `kustomize`, and `notification` Flux controllers are installed by default. The [`image-automation` and `image-reflector` controllers](https://fluxcd.io/docs/components/image/), used to update a Git repository when new container images are available, must be enabled explicitly. --You can use the `k8s-extension` command to change the default options: --* `--config source-controller.enabled=<true/false>` (default `true`) -* `--config helm-controller.enabled=<true/false>` (default `true`) -* `--config kustomize-controller.enabled=<true/false>` (default `true`) -* `--config notification-controller.enabled=<true/false>` (default `true`) -* `--config image-automation-controller.enabled=<true/false>` (default `false`) -* `--config image-reflector-controller.enabled=<true/false>` (default `false`) --For instance, to disable notifications, you can set `notification-controller.enabled` to `false`. --This example command installs the `image-reflector` and `image-automation` controllers. If the Flux extension was created automatically when a Flux configuration was first created, the extension name is `flux`. --```azurecli -az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connectedClusters or managedClusters or provisionedClusters> --name flux --extension-type microsoft.flux --config image-automation-controller.enabled=true image-reflector-controller.enabled=true -``` --#### Using Kubelet identity as authentication method for AKS clusters --For AKS clusters, one of the authentication options to use is kubelet identity. By default, AKS creates its own kubelet identity in the managed resource group. If you prefer, you can use a [precreated kubelet managed identity](/azure/aks/use-managed-identity#use-a-pre-created-kubelet-managed-identity). To do so, add the parameter `--config useKubeletIdentity=true` at the time of Flux extension installation. --```azurecli -az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true -``` --#### Red Hat OpenShift onboarding guidance --Flux controllers require a **nonroot** [Security Context Constraint](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.2/html/authentication/managing-pod-security-policies) to properly provision pods on the cluster. These constraints must be added to the cluster before deploying the `microsoft.flux` extension. --```console -NS="flux-system" -oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:kustomize-controller -oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:helm-controller -oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:source-controller -oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:notification-controller -oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:image-automation-controller -oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:image-reflector-controller -``` --For more information on OpenShift guidance for onboarding Flux, see the [Flux documentation](https://fluxcd.io/docs/use-cases/openshift/#openshift-setup). --### [Azure portal](#tab/azure-portal) --The Azure portal is useful for managing GitOps configurations and the Flux extension in Azure Arc-enabled Kubernetes or AKS clusters. 
In the Azure portal, you can see all of the Flux configurations associated with each cluster and get detailed information, including the overall compliance state of each cluster. --> [!NOTE] -> Some options are not currently supported in the Azure portal. See the Azure CLI steps for additional options, including suspending continuous reconciliation, controlling which controllers are deployed with the Flux cluster extension, and using Kubelet identity as authentication method for AKS clusters. --Follow these steps to apply a sample Flux configuration to a cluster. As part of this process, Azure installs the `microsoft.flux` extension on the cluster, if it wasn't already installed in a previous deployment. --1. Navigate to your cluster in the Azure portal. -1. From the service menu, under **Settings**, select **GitOps** > **Create**. -1. In the **Basics** section: -- 1. Enter a name for the configuration. - 1. Enter the namespace within which the Flux custom resources will be installed. This can be an existing namespace or a new one that will be created when the configuration is deployed. - 1. Under **Scope**, select **Cluster** so that the Flux operator has access to apply the configuration to all namespaces in the cluster. To use `namespace` scope with this tutorial, [see the changes needed](conceptual-gitops-flux2.md#multi-tenancy). - 1. Select **Next** to continue to the **Source** section. -- :::image type="content" source="media/tutorial-use-gitops-flux2/portal-configuration-basics.png" alt-text="Screenshot showing the Basics options for a GitOps configuration in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-configuration-basics.png"::: --1. In the **Source** section: -- 1. In **Source type**, select **Git Repository.** - 1. Enter the URL for the repository where the Kubernetes manifests are located: `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`. - 1. For reference type, select **Branch**. Leave **Branch** set to **main**. - 1. For **Repository type**, select **Public**. - 1. Leave the other options set to the default, then select **Next**. -- :::image type="content" source="media/tutorial-use-gitops-flux2/portal-configuration-source.png" alt-text="Screenshow showing the Source options for a GitOps configuration in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-configuration-source.png"::: --1. In the **Kustomizations** section, create two [kustomizations](gitops-flux2-parameters.md#kustomization): `infrastructure` and `staging`. These kustomizations are Flux resources, each associated with a path in the repository, that represent the set of manifests that Flux should reconcile to the cluster. -- 1. Select **Create**. - 1. In the **Create a Kustomization** screen: -- 1. For **Instance name**, enter **infrastructure**. - 1. For **Path**, enter **./infrastructure**. - 1. Check the box for **Prune**. This setting ensures that the objects that Flux deployed to the cluster are cleaned up if they're removed from the repository, or if the Flux configuration or kustomizations are deleted. - 1. Leave the other options as is, then select **Save** to create the `infrastructure` kustomization. -- :::image type="content" source="media/tutorial-use-gitops-flux2/portal-kustomization-infrastructure.png" alt-text="Screenshot showing the options to create the infrastructure kustomization in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-kustomization-infrastructure.png"::: -- 1. 
You'll see the `infrastructure` kustomization in the **Kustomizations** section. To create the next kustomization, select **Create**. - 1. In the **Create a Kustomization** screen: -- 1. For **Instance name**, enter **staging**. - 1. For **Path**, enter **./apps/staging**. - 1. Check the box for **Prune**. - 1. In the **Depends on** box, select **infrastructure**. - 1. Leave the other options as is, then select **Save** to create the `staging` kustomization. -- :::image type="content" source="media/tutorial-use-gitops-flux2/portal-kustomization-staging.png" alt-text="Screenshot showing the options to create the staging kustomization in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-kustomization-staging.png"::: -- 1. You now should see both kustomizations shown in the **Kustomizations** section. Select **Next** to continue. --1. Review the options you selected in the previous steps. Then select **Create** to finish creating your GitOps configuration. --#### View configurations and objects --To view all of the configurations for a cluster, navigate to the cluster and select **GitOps** from the service menu. --Select the name of a configuration to view more details such as the configuration's status, properties, and source. You can then select **Configuration objects** to view all of the objects that were created to enable the GitOps configuration. This lets you quickly see the compliance state and other details about each object. ---To see other Kubernetes resources deployed on the cluster, return to the cluster overview page and select **Kubernetes resources** from the service menu. --To view detailed conditions for a configuration object, select its name. ---For more information, see [Monitor GitOps (Flux v2) status and activity](monitor-gitops-flux-2.md). ----## Work with parameters --Flux supports many parameters to enable various scenarios. For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation. --For information about available parameters and how to use them, see [GitOps (Flux v2) supported parameters](gitops-flux2-parameters.md). --### Work with local secret authentication reference --To use a local secret authentication reference, the secret must exist within the same namespace where the `fluxConfiguration` will be deployed. The secret must also contain all of the authentication parameters needed for the source. --For information on creating secrets for various `fluxConfiguration` sources, see [Local secret for authentication with source](gitops-flux2-parameters.md#local-secret-for-authentication-with-source). --## Manage cluster configuration by using the Flux Kustomize controller --The [Flux Kustomize controller](https://fluxcd.io/docs/components/kustomize/) is installed as part of the `microsoft.flux` cluster extension. It allows the declarative management of cluster configuration and application deployment by using Kubernetes manifests synced from a Git repository. These Kubernetes manifests can optionally include a *kustomize.yaml* file. 
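Kustomize looks for a file named `kustomization.yaml` in each path that Flux reconciles. As a minimal sketch (the resource file names below are placeholders, not files from the sample repository), such a file simply lists the manifests to apply:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
```

If no kustomization file is present in the path, Flux generates one on the fly that includes all of the manifests it finds there.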
--For usage details, see the following resources: --* [Flux Kustomize controller](https://fluxcd.io/docs/components/kustomize/) -* [Kustomize reference documents](https://kubectl.docs.kubernetes.io/references/kustomize/) -* [The kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/) -* [Kustomize project](https://kubectl.docs.kubernetes.io/references/kustomize/) -* [Kustomize guides](https://kubectl.docs.kubernetes.io/guides/config_management/) --## Manage Helm chart releases by using the Flux Helm controller --The Flux Helm controller is installed as part of the `microsoft.flux` cluster extension. It allows you to declaratively manage Helm chart releases with Kubernetes manifests that you maintain in your Git repository. --For usage details, see the following resources: --* [Flux for Helm users](https://fluxcd.io/docs/use-cases/helm/) -* [Manage Helm releases](https://fluxcd.io/docs/guides/helmreleases/) -* [Migrate to Flux v2 Helm from Flux v1 Helm](https://fluxcd.io/docs/migration/helm-operator-migration/) -* [Flux Helm controller](https://fluxcd.io/docs/components/helm/) --> [!TIP] -> Because of how Helm handles index files, processing Helm charts is an expensive operation and can have very high memory footprint. As a result, reconciling a large number of Helm charts at once can cause memory spikes and `OOMKilled` errors. By default, the controller sets its memory limit at 1Gi and its memory requests at 64Mi. To increase this limit and requests due to a high number of large Helm chart reconciliations, run the following command after installing the microsoft.flux extension: -> -> `az k8s-extension update -g <resource-group> -c <cluster-name> -n flux -t connectedClusters --config source-controller.resources.limits.memory=2Gi source-controller.resources.requests.memory=300Mi` --### Use the GitRepository source for Helm charts --If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can indicate that the configured source should be used as the source of the Helm charts by adding `clusterconfig.azure.com/use-managed-source: "true"` to your HelmRelease.yaml file, as shown in the following example: --```yaml --apiVersion: helm.toolkit.fluxcd.io/v2beta1 -kind: HelmRelease -metadata: - name: somename - namespace: somenamespace - annotations: - clusterconfig.azure.com/use-managed-source: "true" -spec: - ... -``` --When you use this annotation, the deployed HelmRelease is patched with the reference to the configured source. Currently, only `GitRepository` source is supported. --### Helm drift detection --[Drift detection for Helm releases](https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection) isn't enabled by default. Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm drift detection by running the following command: --```azurecli -az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.detectDrift=true -``` --### Helm OOM watch --Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm OOM watch. For more information, see [Enable Helm near OOM detection](https://fluxcd.io/flux/cheatsheets/bootstrap/#enable-helm-near-oom-detection). 
Be sure to review potential [remediation strategies](https://fluxcd.io/flux/components/helm/helmreleases/#configuring-failure-remediation) and apply them as needed when enabling this feature.

To enable OOM watch, run the following command:

```azurecli
az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.outOfMemoryWatch.enabled=true helm-controller.outOfMemoryWatch.memoryThreshold=70 helm-controller.outOfMemoryWatch.interval=700ms
```

If you don't specify values for `memoryThreshold` and `interval`, the default memory threshold is set to 95%, with the interval at which to check the memory utilization set to 500 ms.

## Configurable log-level parameters

By default, the `log-level` for Flux controllers is set to `info`. Starting with `microsoft.flux` v1.8.3, you can modify these default settings using the `k8s-extension` command as follows:

```azurecli
--config helm-controller.log-level=<info/error/debug>
--config source-controller.log-level=<info/error/debug>
--config kustomize-controller.log-level=<info/error/debug>
--config notification-controller.log-level=<info/error/debug>
--config image-automation-controller.log-level=<info/error/debug>
--config image-reflector-controller.log-level=<info/error/debug>
```

Valid values are `debug`, `info`, or `error`. For instance, to change the `log-level` for the `source-controller` and `kustomize-controller`, use the following command:

```azurecli
az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type <cluster-type> --name flux --config source-controller.log-level=error kustomize-controller.log-level=error
```

Starting with [`microsoft.flux` v1.9.1](extensions-release.md#flux-gitops), `fluxconfig-agent` and `fluxconfig-controller` support `info` and `error` log levels (but not `debug`). These can be modified by using the `k8s-extension` command as follows:

```azurecli
--config fluxconfig-agent.log-level=<info/error>
--config fluxconfig-controller.log-level=<info/error>
```

For example, the following command changes `log-level` to `error`:

```azurecli
az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type <cluster-type> --name flux --config fluxconfig-agent.log-level=error fluxconfig-controller.log-level=error
```

### Azure DevOps SSH-RSA deprecation

Azure DevOps [announced the deprecation of SSH-RSA](https://aka.ms/ado-ssh-rsa-deprecation) as a supported encryption method for connecting to Azure repositories using SSH. If you use SSH keys to connect to Azure repositories in Flux configurations, we recommend moving to more secure RSA-SHA2-256 or RSA-SHA2-512 keys.

When reconciling Flux configurations, you might see an error message indicating that ssh-rsa is about to be deprecated or is unsupported. If so, update the host key algorithm used to establish SSH connections to Azure DevOps repositories from the Flux `source-controller` and `image-automation-controller` (if enabled) by using the `az k8s-extension update` command.
For example: --```azurecli -az k8s-extension update --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type <cluster-type> --name flux --config source-controller.ssh-host-key-args="--ssh-hostkey-algos=rsa-sha2-512,rsa-sha2-256" --az k8s-extension update --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type <cluster-type> --name flux --config image-automation-controller.ssh-host-key-args="--ssh-hostkey-algos=rsa-sha2-512,rsa-sha2-256" -``` --For more information on Azure DevOps SSH-RSA deprecation, see [End of SSH-RSA support for Azure Repos](https://aka.ms/ado-ssh-rsa-deprecation). --### Configure annotation on Flux extension pods --When configuring a solution other than Azure Firewall, [network and FQDN/application rules](/azure/aks/outbound-rules-control-egress#required-outbound-network-rules-and-fqdns-for-aks-clusters) are required for an AKS cluster. Starting with [`microsoft.flux` v1.11.1](extensions-release.md#flux-gitops), Flux controller pods can now set the annotation `kubernetes.azure.com/set-kube-service-host-fqdn` in their pod specifications. This allows traffic to the API Server's domain name even when a Layer 7 firewall is present, facilitating deployments during extension installation. To configure this annotation when using the Flux extension, use the following commands. --```azurecli -# Create flux extension with annotation --az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type <cluster-type> --name flux --extension-type microsoft.flux --config setKubeServiceHostFqdn=true - -# Update flux extension with annotation --az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type <cluster-type> --name flux --config setKubeServiceHostFqdn=true -``` --### Workload identity in AKS clusters --Starting with [`microsoft.flux` v1.8.0](extensions-release.md#flux-gitops), you can create Flux configurations in [AKS clusters with workload identity enabled](/azure/aks/workload-identity-deploy-cluster). To do so, modify the flux extension as shown in the following steps. --1. Retrieve the [OIDC issuer URL](/azure/aks/workload-identity-deploy-cluster#retrieve-the-oidc-issuer-url) for your cluster. -1. Create a [managed identity](/azure/aks/workload-identity-deploy-cluster#create-a-managed-identity) and note its client ID. -1. Create the flux extension on the cluster, using the following command: -- ```azurecli - az k8s-extension create --resource-group <resource_group_name> --cluster-name <aks_cluster_name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config workloadIdentity.enable=true workloadIdentity.azureClientId=<user_assigned_client_id> - ``` --1. Establish a [federated identity credential](/azure/aks/workload-identity-deploy-cluster#establish-federated-identity-credential). 
For example:

   ```azurecli
   # For source-controller
   az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"flux-system":"source-controller" --audience api://AzureADTokenExchange

   # For image-reflector-controller, if you plan to enable it during extension creation (it isn't deployed by default)
   az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"flux-system":"image-reflector-controller" --audience api://AzureADTokenExchange

   # For kustomize-controller
   az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"flux-system":"kustomize-controller" --audience api://AzureADTokenExchange
   ```

1. Make sure the custom resource that needs to use workload identity sets the `.spec.provider` value to `azure` in the manifest. For example:

   ```yaml
   apiVersion: source.toolkit.fluxcd.io/v1beta2
   kind: HelmRepository
   metadata:
     name: acrrepo
   spec:
     interval: 10m0s
     type: <helm_repository_type>
     url: <helm_repository_link>
     provider: azure
   ```

1. Be sure to provide proper permissions for workload identity for the resource that you want source-controller or image-reflector controller to pull. For example, if using Azure Container Registry, `AcrPull` permissions are required.

## Delete the Flux configuration and extension

Use the following commands to delete your Flux configurations and, if desired, the Flux extension itself.

### [Azure CLI](#tab/azure-cli)

#### Delete the Flux configurations

The following command deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository are removed when the Flux configuration is removed. However, this command doesn't remove the Flux extension itself.

```azurecli
az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters --yes
```

#### Delete the Flux cluster extension

When you delete the Flux extension, both the `microsoft.flux` extension resource in Azure and the Flux extension objects in the cluster are removed.

> [!IMPORTANT]
> Be sure to delete all Flux configurations in the cluster before you delete the Flux extension. Deleting the extension without first deleting the Flux configurations may leave your cluster in an unstable condition.

If the Flux extension was created automatically when the Flux configuration was first created, the extension name is `flux`.

```azurecli
az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes
```

> [!TIP]
> These commands use `-t connectedClusters`, which is appropriate for an Azure Arc-enabled Kubernetes cluster. For an AKS cluster, use `-t managedClusters` instead.
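For example, to remove the same sample configuration and extension from an AKS cluster instead, the commands would look like this (a sketch; substitute your own resource group and cluster name):

```azurecli
az k8s-configuration flux delete -g <resource-group> -c <aks-cluster-name> -n cluster-config -t managedClusters --yes
az k8s-extension delete -g <resource-group> -c <aks-cluster-name> -n flux -t managedClusters --yes
```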
--### [Azure portal](#tab/azure-portal) --#### Delete the Flux configuration --To delete a Flux configuration, navigate to the cluster where the configuration was created and select **GitOps** from the service menu. Select the configuration you want to delete. From the top of the page, select **Delete**, then select **Delete** again when prompted to confirm. --When you delete a Flux configuration, all of the Flux configuration objects in the cluster are deleted. However, this action doesn't delete the `microsoft.flux` extension itself. --#### Delete the Flux cluster extension --When you delete the Flux extension, both the `microsoft.flux` extension resource in Azure and the Flux extension objects in the cluster are removed. --> [!IMPORTANT] -> Be sure to delete all Flux configurations in the cluster before you delete the Flux extension. Deleting the extension without first deleting the Flux configurations may leave your cluster in an unstable condition. --For an Azure Arc-enabled Kubernetes cluster, navigate to the cluster and select **Extensions**. Select the `flux` extension and select **Uninstall**, then confirm the deletion. --For AKS clusters, you can't use the Azure portal to delete the extension. Instead, use the following Azure CLI command: --```azurecli -az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managedClusters --yes -``` ----## Next steps --* Read more about [configurations and GitOps](conceptual-gitops-flux2.md). -* Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md). -* Learn about [monitoring GitOps (Flux v2) status and activity](monitor-gitops-flux-2.md). |
azure-arc | Use Azure Policy Flux 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy-flux-2.md | - Title: "Deploy applications consistently at scale using Flux v2 configurations and Azure Policy" Previously updated : 12/13/2023- -description: "Use Azure Policy to apply Flux v2 configurations at scale on Azure Arc-enabled Kubernetes or AKS clusters." ---# Deploy applications consistently at scale using Flux v2 configurations and Azure Policy --You can use Azure Policy to apply Flux v2 configurations (`Microsoft.KubernetesConfiguration/fluxConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes (`Microsoft.Kubernetes/connectedClusters`) or AKS (`Microsoft.ContainerService/managedClusters`) clusters. To use Azure Policy, you select a built-in policy definition and create a policy assignment. --Before you assign the policy that creates Flux configurations, you must ensure that the Flux extension is deployed to your clusters. You can do this by first assigning a policy that deploys the extension to all clusters in the selected scope (all resource groups in a subscription or management group, or to specific resource groups). Then, when creating the policy assignment to deploy configurations, you set parameters for the Flux configuration that will be applied to the clusters in that scope. --To enable separation of concerns, you can create multiple policy assignments, each with a different Flux v2 configuration pointing to a different source. For example, one Git repository can be used by cluster admins while other repositories can be used by application teams. --## Built-in policy definitions --The following [built-in policy definitions](policy-reference.md) provide support for these scenarios: --|Description |Policy | -||| -|Flux extension install (required for all scenarios) | `Configure installation of Flux extension on Kubernetes cluster` | -|Flux configuration using public Git repository (generally a test scenario) | `Configure Kubernetes clusters with Flux v2 configuration using public Git repository` | -|Flux configuration using private Git repository with SSH auth | `Configure Kubernetes clusters with Flux v2 configuration using Git repository and SSH secrets` | -|Flux configuration using private Git repository with HTTPS auth | `Configure Kubernetes clusters with Flux v2 configuration using Git repository and HTTPS secrets` | -|Flux configuration using private Git repository with HTTPS CA cert auth | `Configure Kubernetes clusters with Flux v2 configuration using Git repository and HTTPS CA Certificate` | -|Flux configuration using private Git repository with local K8s secret | `Configure Kubernetes clusters with Flux v2 configuration using Git repository and local secrets` | -|Flux configuration using private Bucket source and KeyVault secrets | `Configure Kubernetes clusters with Flux v2 configuration using Bucket source and secrets in KeyVault` | -|Flux configuration using private Bucket source and local K8s secret | `Configure Kubernetes clusters with specified Flux v2 Bucket source using local secrets` | --To find all of the Flux v2 policy definitions, search for **flux**. For more information, see [Azure policy built-in definitions for Azure Arc-enabled Kubernetes](policy-reference.md). --## Prerequisites --* One or more Arc-enabled Kubernetes clusters and/or AKS clusters. 
-* `Microsoft.Authorization/policyAssignments/write` permissions on the scope (subscription or resource group) where you'll create the policy assignments. --## Create a policy assignment to install the Flux extension --In order for a policy to apply Flux v2 configurations to a cluster, the Flux extension must first be installed on the cluster. To ensure that the extension is installed to each of your clusters, assign the **Configure installation of Flux extension on Kubernetes cluster** policy definition to the desired scope. --1. In the Azure portal, navigate to **Policy**. -1. In the **Authoring** section of the sidebar, select **Definitions**. -1. In the "Kubernetes" category, select the **Configure installation of Flux extension on Kubernetes cluster** built-in policy definition. -1. Select **Assign**. -1. Set the **Scope** to the management group, subscription, or resource group to which the policy assignment will apply. - * If you want to exclude any resources from the policy assignment scope, set **Exclusions**. -1. Give the policy assignment an easily identifiable **Assignment name** and **Description**. -1. Ensure **Policy enforcement** is set to **Enabled**. -1. Select **Review + create**, then select **Create**. --## Create a policy assignment to apply Flux configurations --Next, return to the **Definitions** list (in the **Authoring** section of **Policy**) to apply the configuration policy definition to the same scope. --1. In the "Kubernetes" category, select the **Configure Kubernetes clusters with Flux v2 configuration using public Git repository** -built-in policy definition, or one of the other policy definitions to apply Flux configurations. -1. Select **Assign**. -1. Set the **Scope** to the same scope that you selected when assigning the first policy, including any exclusions. -1. Give the policy assignment an easily identifiable **Assignment name** and **Description**. -1. Ensure **Policy enforcement** is set to **Enabled**. -1. Select **Next**, then select **Next** again to open the **Parameters** tab. -1. Set the parameter values to be used. - * For more information about parameters, see the [tutorial on deploying Flux v2 configurations](./tutorial-use-gitops-flux2.md). - * When creating Flux configurations, you must provide a value for one (and only one) of these parameters: `repositoryRefBranch`, `repositoryRefTag`, `repositoryRefSemver`, `repositoryRefCommit`. -1. Select **Next** to open the **Remediation** task. -1. Enable **Create a remediation task**. -1. Verify that **Create a Managed Identity** is checked, and that the identity has **Contributor** permissions. For more information, see [Quickstart: Create a policy assignment to identify non-compliant resources](../../governance/policy/assign-policy-portal.md) and [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md). --1. Select **Review + create**, then select **Create**. --The configuration is then applied to new Azure Arc-enabled Kubernetes or AKS clusters created within the scope of policy assignment. --For existing clusters, you might need to manually run a remediation task. This task typically takes 10 to 20 minutes for the policy assignment to take effect. --## Verify the policy assignment --1. In the Azure portal, navigate to one of your Azure Arc-enabled Kubernetes or AKS clusters. -1. In the **Settings** section of the sidebar, select **GitOps**. -- In the configurations list, you should see the configuration created by the policy assignment. 
--1. In the **Kubernetes resources** section of the sidebar, select **Namespaces** and **Workloads**. -- You should see the namespace and artifacts that were created by the Flux configuration. You should also see the objects described by the manifests in the Git repo deployed on the cluster. --## Customize a policy --The built-in policies cover the main scenarios for using GitOps with Flux v2 in your Kubernetes clusters. However, due to limitations on the number of parameters allowed in Azure Policy assignments (max of 20), not all parameters are present in the built-in policies. Also, to fit within the 20-parameter limit, only a single kustomization can be created with the built-in policies. --If you have a scenario that differs from the built-in policies, you can overcome the limitations by creating [custom policies](../../governance/policy/tutorials/create-custom-policy-definition.md) using the built-in policies as templates. You can create custom policies that contain only the parameters you need, and hard-code the rest, therefore working around the 20-parameter limit. --## Next steps --* [Set up Azure Monitor for Containers with Azure Arc-enabled Kubernetes clusters](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters). -* Learn more about [deploying applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md). |
azure-arc | Use Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy.md | - Title: "Apply Flux v1 configurations at-scale using Azure Policy" Previously updated : 05/08/2023- -description: "Apply Flux v1 configurations at-scale using Azure Policy" ---# Apply Flux v1 configurations at-scale using Azure Policy --You can use Azure Policy to apply Flux v1 configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes clusters (`Microsoft.Kubernetes/connectedclusters`). --> [!IMPORTANT] -> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; learn about [using Azure Policy with Flux v2](./use-azure-policy-flux-2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. -> -> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. --To use Azure Policy, select a built-in GitOps policy definition and create a policy assignment. When creating the policy assignment: -1. Set the scope for the assignment. - * The scope will be all resource groups in a subscription or management group or specific resource groups. -2. Set the parameters for the GitOps configuration that will be created. --Once the assignment is created, the Azure Policy engine identifies all Azure Arc-enabled Kubernetes clusters located within the scope and applies the GitOps configuration to each cluster. --To enable separation of concerns, you can create multiple policy assignments, each with a different GitOps configuration pointing to a different Git repo. For example, one repo may be used by cluster admins and other repositories may be used by application teams. --> [!TIP] -> There are built-in policy definitions for these scenarios: -> * Public repo or private repo with SSH keys created by Flux: `Configure Kubernetes clusters with specified GitOps configuration using no secrets` -> * Private repo with user-provided SSH keys: `Configure Kubernetes clusters with specified GitOps configuration using SSH secrets` -> * Private repo with user-provided HTTPS keys: `Configure Kubernetes clusters with specified GitOps configuration using HTTPS secrets` --## Prerequisite --Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on the scope (subscription or resource group) where you'll create this policy assignment. --## Create a policy assignment --1. In the Azure portal, navigate to **Policy**. -1. In the **Authoring** section of the sidebar, select **Definitions**. -1. In the "Kubernetes" category, choose the "Configure Kubernetes clusters with specified GitOps configuration using no secrets" built-in policy definition. -1. Select **Assign**. -1. Set the **Scope** to the management group, subscription, or resource group to which the policy assignment will apply. - * If you want to exclude any resources from the policy assignment scope, set **Exclusions**. -1. Give the policy assignment an easily identifiable **Name** and **Description**. -1. Ensure **Policy enforcement** is set to **Enabled**. -1. Select **Next**. -1. 
Set the parameter values to be used while creating the `sourceControlConfigurations` resource. - * For more information about parameters, see the [tutorial on deploying GitOps configurations](./tutorial-use-gitops-connected-cluster.md). -1. Select **Next**. -1. Enable **Create a remediation task**. -1. Verify **Create a managed identity** is checked, and that the identity will have **Contributor** permissions. - * For more information, see the [Create a policy assignment quickstart](../../governance/policy/assign-policy-portal.md) and the [Remediate non-compliant resources with Azure Policy article](../../governance/policy/how-to/remediate-resources.md). -1. Select **Review + create**. --After creating the policy assignment, the configuration is applied to new Azure Arc-enabled Kubernetes clusters created within the scope of policy assignment. --For existing clusters, you may need to manually run a remediation task. This task typically takes 10 to 20 minutes for the policy assignment to take effect. --## Verify a policy assignment --1. In the Azure portal, navigate to one of your Azure Arc-enabled Kubernetes clusters. -1. In the **Settings** section of the sidebar, select **Policies**. - * In the list, you should see the policy assignment that you created earlier with the **Compliance state** set as *Compliant*. -1. In the **Settings** section of the sidebar, select **GitOps**. - * In the configurations list, you should see the configuration created by the policy assignment. -1. In the **Kubernetes resources** section of the sidebar, select **Namespaces** and **Workloads**. - * You should see the namespace and artifacts that were created by the Flux configuration. - * You should see the objects described by the manifests in the Git repo deployed on the cluster. --## Next steps --[Set up Azure Monitor for Containers with Azure Arc-enabled Kubernetes clusters](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters). |
azure-arc | Use Gitops With Helm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md | - Title: "Deploy Helm Charts using GitOps on Azure Arc-enabled Kubernetes cluster" Previously updated : 05/08/2023- -description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuration" ---# Deploy Helm Charts using GitOps on an Azure Arc-enabled Kubernetes cluster --> [!IMPORTANT] -> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. -> -> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources. -Helm is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like APT and Yum, Helm is used to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources. --This article shows you how to configure and use Helm with Azure Arc-enabled Kubernetes. --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing Azure Arc-enabled Kubernetes connected cluster.- - If you haven't connected a cluster yet, walk through our [Connect an Azure Arc-enabled Kubernetes cluster quickstart](quickstart-connect-cluster.md). -- An understanding of the benefits and architecture of this feature. Read more in [Configurations and GitOps - Azure Arc-enabled Kubernetes article](conceptual-configurations.md).-- Install the `k8s-configuration` Azure CLI extension of version >= 1.0.0:- - ```azurecli - az extension add --name k8s-configuration - ``` --## Overview of using GitOps and Helm with Azure Arc-enabled Kubernetes -- The Helm operator provides an extension to Flux that automates Helm Chart releases. A Helm Chart release is described via a Kubernetes custom resource named HelmRelease. Flux synchronizes these resources from Git to the cluster, while the Helm operator makes sure Helm Charts are released as specified in the resources. -- The [example repository](https://github.com/Azure/arc-helm-demo) used in this article is structured in the following way: --```console -├── charts -│   └── azure-arc-sample -│   ├── Chart.yaml -│   ├── templates -│   │   ├── NOTES.txt -│   │   ├── deployment.yaml -│   │   └── service.yaml -│   └── values.yaml -└── releases - └── app.yaml -``` --In the Git repo we have two directories: one containing a Helm Chart and one containing the releases config. 
In the `releases` directory, the `app.yaml` contains the HelmRelease config, shown below: --```yaml -apiVersion: helm.fluxcd.io/v1 -kind: HelmRelease -metadata: - name: azure-arc-sample - namespace: arc-k8s-demo -spec: - releaseName: arc-k8s-demo - chart: - git: https://github.com/Azure/arc-helm-demo - ref: master - path: charts/azure-arc-sample - values: - serviceName: arc-k8s-demo -``` --The Helm release config contains the following fields: --| Field | Description | -| - | - | -| `metadata.name` | Mandatory field. Needs to follow Kubernetes naming conventions. | -| `metadata.namespace` | Optional field. Determines where the release is created. | -| `spec.releaseName` | Optional field. If not provided the release name will be `$namespace-$name`. | -| `spec.chart.path` | The directory containing the chart (relative to the repository root). | -| `spec.values` | User customizations of default parameter values from the Chart itself. | --The options specified in the HelmRelease `spec.values` will override the options specified in `values.yaml` from the Chart source. --You can learn more about the HelmRelease in the official [Helm Operator documentation](https://docs.fluxcd.io/projects/helm-operator/en/stable/). --## Create a configuration --Using the Azure CLI extension for `k8s-configuration`, link your connected cluster to the example Git repository. Give this configuration the name `azure-arc-sample` and deploy the Flux operator in the `arc-k8s-demo` namespace. --```azurecli -az k8s-configuration create --name azure-arc-sample --cluster-name AzureArcTest1 --resource-group AzureArcTest --operator-instance-name flux --operator-namespace arc-k8s-demo --operator-params='--git-readonly --git-path=releases' --enable-helm-operator --helm-operator-chart-version='1.2.0' --helm-operator-params='--set helm.versions=v3' --repository-url https://github.com/Azure/arc-helm-demo.git --scope namespace --cluster-type connectedClusters -``` --### Configuration parameters --To customize the creation of the configuration, [learn about additional parameters](./tutorial-use-gitops-connected-cluster.md#additional-parameters). --## Validate the configuration --Using the Azure CLI, verify that the configuration was successfully created. --```azurecli -az k8s-configuration show --name azure-arc-sample --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters -``` --The configuration resource is updated with compliance status, messages, and debugging information. 
--```output -{ - "complianceStatus": { - "complianceState": "Installed", - "lastConfigApplied": "2019-12-05T05:34:41.481000", - "message": "{\"OperatorMessage\":null,\"ClusterState\":null}", - "messageLevel": "3" - }, - "enableHelmOperator": "True", - "helmOperatorProperties": { - "chartValues": "--set helm.versions=v3", - "chartVersion": "1.2.0" - }, - "id": "/subscriptions/57ac26cf-a9f0-4908-b300-9a4e9a0fb205/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1/providers/Microsoft.KubernetesConfiguration/sourceControlConfigurations/azure-arc-sample", - "name": "azure-arc-sample", - "operatorInstanceName": "flux", - "operatorNamespace": "arc-k8s-demo", - "operatorParams": "--git-readonly --git-path=releases", - "operatorScope": "namespace", - "operatorType": "Flux", - "provisioningState": "Succeeded", - "repositoryPublicKey": "", - "repositoryUrl": "https://github.com/Azure/arc-helm-demo.git", - "resourceGroup": "AzureArcTest", - "type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations" -} -``` --## Validate application --Run the following command and navigate to `localhost:8080` on your browser to verify that application is running. --```console -kubectl port-forward -n arc-k8s-demo svc/arc-k8s-demo 8080:8080 -``` --## Next steps --Apply cluster configurations at scale using [Azure Policy](./use-azure-policy.md). |
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md | - Title: "Azure Arc-enabled Kubernetes validation" Previously updated : 10/26/2023- -description: "Describes Arc validation program for Kubernetes distributions" ---# Azure Arc-enabled Kubernetes validation --The Azure Arc team works with key industry Kubernetes offering providers to validate Azure Arc-enabled Kubernetes with their Kubernetes distributions. Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc-enabled Kubernetes. --> [!IMPORTANT] -> Azure Arc-enabled Kubernetes works with any Kubernetes clusters that are certified by the Cloud Native Computing Foundation (CNCF), even if they haven't been validated through conformance tests and are not listed on this page. --## Validated distributions --The following Microsoft-provided Kubernetes distributions and infrastructure providers have successfully passed the conformance tests for Azure Arc-enabled Kubernetes: --| Distribution and infrastructure provider | Version | -| - | - | -| Cluster API Provider on Azure | Release version: [0.4.12](https://github.com/kubernetes-sigs/cluster-api-provider-azure/releases/tag/v0.4.12); Kubernetes version: [1.18.2](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.2) | -| AKS on Azure Stack HCI | Release version: [December 2020 Update](https://github.com/Azure/aks-hci/releases/tag/AKS-HCI-2012); Kubernetes version: [1.18.8](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.8) | -| K8s on Azure Stack Edge | Release version: Azure Stack Edge 2207 (2.2.2037.5375); Kubernetes version: [1.22.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.22.6) | -| AKS Edge Essentials | Release version [1.0.406.0]( https://github.com/Azure/AKS-Edge/releases/tag/1.0.406.0); Kubernetes version [1.24.3](https://github.com/kubernetes/kubernetes/releases/tag/v1.24.3) | --The following providers and their corresponding Kubernetes distributions have successfully passed the conformance tests for Azure Arc-enabled Kubernetes: --| Provider name | Distribution name | Version | -| | -- | - | -| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) |[4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6, [4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html), [4.15.0](https://docs.openshift.com/container-platform/4.15/release_notes/ocp-4-15-release-notes.html)| -| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGs 2.2; upstream K8s 1.25.7+vmware.3<br>TKGm 2.3; upstream K8s v1.26.5+vmware.2<br>TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1| -| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes)|[1.24](https://ubuntu.com/kubernetes/docs/1.24/components), [1.28](https://ubuntu.com/kubernetes/docs/1.28/components) | -| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 | -| SUSE Rancher | [K3s](https://rancher.com/products/k3s/) | [v1.27.4+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.27.4%2Bk3s1), [v1.26.7+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.26.7%2Bk3s1), 
[v1.25.12+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.25.12%2Bk3s1) | -| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 | -| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution |[Kublr 1.26.0](https://docs.kublr.com/releasenotes/1.26/release-1.26.0/); Upstream K8s Versions: 1.21.3, 1.22.10, 1.22.17, 1.23.17, 1.24.13, 1.25.6, 1.26.4 | -| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.6.0](https://docs.mirantis.com/mke/3.6/release-notes/3-6-0.html) <br> MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) | -| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) |Wind River Cloud Platform 22.12; Upstream K8s version: 1.24.4 <br>Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 | --The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers: --| Public cloud provider name | Distribution name | Version | -| -- | -- | - | -| Amazon Web Services | Elastic Kubernetes Service (EKS) | v1.18.9 | -| Google Cloud Platform | Google Kubernetes Engine (GKE) | v1.17.15 | --## Scenarios validated --The conformance tests run as part of the Azure Arc-enabled Kubernetes validation cover the following scenarios: --1. Connect Kubernetes clusters to Azure Arc: - * Deploy Azure Arc-enabled Kubernetes agent Helm chart on cluster. - * Agents send cluster metadata to Azure. --2. Configuration: - * Create configuration on top of Azure Arc-enabled Kubernetes resource. - * [Flux](https://docs.fluxcd.io/), needed for setting up [GitOps workflow](tutorial-use-gitops-flux2.md), is deployed on the cluster. - * Flux pulls manifests and Helm charts from demo Git repo and deploys to cluster. --## Next steps --* [Learn how to connect an existing Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md) -* Learn about the [Azure Arc agents](conceptual-agent-overview.md) deployed on Kubernetes clusters when connecting them to Azure Arc. ----- |
azure-arc | Workload Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/workload-management.md | - Title: 'Explore workload management in a multi-cluster environment with GitOps' -description: Explore typical use-cases that Platform and Application teams face on a daily basis working with Kubernetes workloads in a multi-cluster environment. -keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, ci/cd, devops" --- Previously updated : 03/29/2023---# Explore workload management in a multi-cluster environment with GitOps --Enterprise organizations, developing cloud native applications, face challenges to deploy, configure and promote a great variety of applications and services across multiple Kubernetes clusters at scale. This environment may include Azure Kubernetes Service (AKS) clusters, clusters running on other public cloud providers, or clusters in on-premises data centers that are connected to Azure through the Azure Arc. Refer to the [conceptual article](conceptual-workload-management.md) exploring the business process, challenges and solution architecture. --This article walks you through an example scenario of the workload deployment and configuration in a multi-cluster Kubernetes environment. First, you deploy a sample infrastructure with a few GitHub repositories and AKS clusters. Next, you work through a set of use cases where you act as different personas working in the same environment: the Platform Team and the Application Team. --## Prerequisites --In order to successfully deploy the sample, you need: --- An Azure subscription. If you don't already have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- [Azure CLI](/cli/azure/install-azure-cli)-- [GitHub CLI](https://cli.github.com)-- [Helm](https://helm.sh/docs/helm/helm_install/)-- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)-- [jq](https://stedolan.github.io/jq/download/)-- GitHub token with the following scopes: `repo`, `workflow`, `write:packages`, `delete:packages`, `read:org`, `delete_repo`.--## 1 - Deploy the sample --To deploy the sample, run the following script: --```bash -mkdir kalypso && cd kalypso -curl -fsSL -o deploy.sh https://raw.githubusercontent.com/microsoft/kalypso/main/deploy/deploy.sh -chmod 700 deploy.sh -./deploy.sh -c -p <prefix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2> -``` --This script may take 10-15 minutes to complete. After it's done, it reports the execution result in the output like this: --```output -Deployment is complete! --Created repositories: - - https://github.com/eedorenko/kalypso-control-plane - - https://github.com/eedorenko/kalypso-gitops - - https://github.com/eedorenko/kalypso-app-src - - https://github.com/eedorenko/kalypso-app-gitops --Created AKS clusters in kalypso-rg resource group: - - control-plane - - drone (Flux based workload cluster) - - large (Flux based workload cluster) - -``` --> [!NOTE] -> If something goes wrong with the deployment, you can delete the created resources with the following command: -> -> ```bash -> ./deploy.sh -d -p <preix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. 
westus2> -> ``` --### Sample overview --This deployment script created an infrastructure, shown in the following diagram: ---There are a few Platform Team repositories: --- [Control Plane](https://github.com/microsoft/kalypso-control-plane): Contains a platform model defined with high level abstractions such as environments, cluster types, applications and services, mapping rules and configurations, and promotion workflows.-- [Platform GitOps](https://github.com/microsoft/kalypso-gitops): Contains final manifests that represent the topology of the multi-cluster environment, such as which cluster types are available in each environment, what workloads are scheduled on them, and what platform configuration values are set.-- [Services Source](https://github.com/microsoft/kalypso-svc-src): Contains high-level manifest templates of sample dial-tone platform services.-- [Services GitOps](https://github.com/microsoft/kalypso-svc-gitops): Contains final manifests of sample dial-tone platform services to be deployed across the clusters.--The infrastructure also includes a couple of the Application Team repositories: --- [Application Source](https://github.com/microsoft/kalypso-app-src): Contains a sample application source code, including Docker files, manifest templates and CI/CD workflows.-- [Application GitOps](https://github.com/microsoft/kalypso-app-gitops): Contains final sample application manifests to be deployed to the deployment targets.--The script created the following Azure Kubernetes Service (AKS) clusters: --- `control-plane` - This cluster is a management cluster that doesn't run any workloads. The `control-plane` cluster hosts [Kalypso Scheduler](https://github.com/microsoft/kalypso-scheduler) operator that transforms high level abstractions from the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repository to the raw Kubernetes manifests in the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.-- `drone` - A sample workload cluster. This cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed and it uses `Flux` to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository. For this sample, the `drone` cluster can represent an Azure Arc-enabled cluster or an AKS cluster with the Flux/GitOps extension.-- `large` - A sample workload cluster. This cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed and it uses `Flux` to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.--### Explore Control Plane --The `control plane` repository contains three branches: `main`, `dev` and `stage`. The `dev` and `stage` branches contain configurations that are specific for `Dev` and `Stage` environments. On the other hand, the `main` branch doesn't represent any specific environment. The content of the `main` branch is common and used by all environments. Any change to the `main` branch is a subject to be promoted across environments. For example, a new application or a new template can be promoted to the `Stage` environment only after successful testing on the `Dev` environment. 
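If you want to inspect these branches locally, you can clone the generated control plane repository and list them (a sketch; `<org>` and `<prefix>` are the values you passed to `deploy.sh`):

```bash
git clone https://github.com/<org>/<prefix>-control-plane
cd <prefix>-control-plane
git branch -r    # expect origin/main, origin/dev, and origin/stage
```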
--The `main` branch: --|Folder|Description| -||--| -|.github/workflows| Contains GitHub workflows that implement the promotional flow.| -|.environments| Contains a list of environments with pointers to the branches with the environment configurations.| -|templates| Contains manifest templates for various reconcilers and a template for the workload namespace.| -|workloads| Contains a list of onboarded applications and services with pointers to the corresponding GitOps repositories.| --The `dev` and `stage` branches: --|Item|Description| -|-|--| -|cluster-types| Contains a list of available cluster types in the environment. The cluster types are grouped in custom subfolders. Each cluster type is marked with a set of labels. It specifies a reconciler type that it uses to fetch the manifests from GitOps repositories. The subfolders also contain a number of config maps with the platform configuration values available on the cluster types.| -|configs/dev-config.yaml| Contains config maps with the platform configuration values applicable for all cluster types in the environment.| -|scheduling| Contains scheduling policies that map workload deployment targets to the cluster types in the environment.| -|base-repo.yaml| A pointer to the place in the `Control Plane` repository (`main`) from where the scheduler should take templates and workload registrations.| -|gitops-repo.yaml| A pointer to the place in the `Platform GitOps` repository to where the scheduler should PR generated manifests.| --> [!TIP] -> The folder structure in the `Control Plane` repository doesn't really matter. This example provides one way of organizing files in the repository, but feel free to do it in your own preferred way. The scheduler is interested in the content of the files, rather than where the files are located. --## 2 - Platform Team: Onboard a new application --The Application Team runs their software development lifecycle. They build their application and promote it across environments. They're not aware of what cluster types are available and where their application will be deployed. But they do know that they want to deploy their application in `Dev` environment for functional and performance testing and in `Stage` environment for UAT testing. --The Application Team describes this intention in the [workload](https://github.com/microsoft/kalypso-app-src/blob/main/workload/workload.yaml) file in the [Application Source](https://github.com/microsoft/kalypso-app-src) repository: --```yaml -apiVersion: scheduler.kalypso.io/v1alpha1 -kind: Workload -metadata: - name: hello-world-app - labels: - type: application - family: force -spec: - deploymentTargets: - - name: functional-test - labels: - purpose: functional-test - edge: "true" - environment: dev - manifests: - repo: https://github.com/microsoft/kalypso-app-gitops - branch: dev - path: ./functional-test - - name: performance-test - labels: - purpose: performance-test - edge: "false" - environment: dev - manifests: - repo: https://github.com/microsoft/kalypso-app-gitops - branch: dev - path: ./performance-test - - name: uat-test - labels: - purpose: uat-test - environment: stage - manifests: - repo: https://github.com/microsoft/kalypso-app-gitops - branch: stage - path: ./uat-test -``` --This file contains a list of three deployment targets. 
These targets are marked with custom labels and point to folders in the [Application GitOps](https://github.com/microsoft/kalypso-app-gitops) repository where the Application Team generates application manifests for each deployment target. --With this file, the Application Team requests Kubernetes compute resources from the Platform Team. In response, the Platform Team must register the application in the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repo. - -To register the application, open a terminal and use the following script: --```bash -export org=<GitHub org> -export prefix=<prefix> --# clone the control-plane repo -git clone https://github.com/$org/$prefix-control-plane control-plane -cd control-plane --# create workload registration file --cat <<EOF >workloads/hello-world-app.yaml -apiVersion: scheduler.kalypso.io/v1alpha1 -kind: WorkloadRegistration -metadata: - name: hello-world-app - labels: - type: application -spec: - workload: - repo: https://github.com/$org/$prefix-app-src - branch: main - path: workload/ - workspace: kaizen-app-team -EOF --git add . -git commit -m 'workload registration' -git push -``` --> [!NOTE] -> For simplicity, this example pushes changes directly to `main`. In practice, you'd create a pull request to submit the changes. --With that in place, the application is onboarded in the control plane. But the control plane still doesn't know how to map the application deployment targets to all of the cluster types. --### Define application scheduling policy on Dev --The Platform Team must define how the application deployment targets will be scheduled on cluster types in the `Dev` environment. To do this, submit scheduling policies for the `functional-test` and `performance-test` deployment targets with the following script: --```bash -# Switch to dev branch (representing Dev environment) in the control-plane folder -git checkout dev -mkdir -p scheduling/kaizen --# Create a scheduling policy for the functional-test deployment target -cat <<EOF >scheduling/kaizen/functional-test-policy.yaml -apiVersion: scheduler.kalypso.io/v1alpha1 -kind: SchedulingPolicy -metadata: - name: functional-test-policy -spec: - deploymentTargetSelector: - workspace: kaizen-app-team - labelSelector: - matchLabels: - purpose: functional-test - edge: "true" - clusterTypeSelector: - labelSelector: - matchLabels: - restricted: "true" - edge: "true" -EOF --# Create a scheduling policy for the performance-test deployment target -cat <<EOF >scheduling/kaizen/performance-test-policy.yaml -apiVersion: scheduler.kalypso.io/v1alpha1 -kind: SchedulingPolicy -metadata: - name: performance-test-policy -spec: - deploymentTargetSelector: - workspace: kaizen-app-team - labelSelector: - matchLabels: - purpose: performance-test - edge: "false" - clusterTypeSelector: - labelSelector: - matchLabels: - size: large -EOF --git add . -git commit -m 'application scheduling policies' -git config pull.rebase false -git pull --no-edit -git push -``` --The first policy states that all deployment targets from the `kaizen-app-team` workspace, marked with the labels `purpose: functional-test` and `edge: "true"`, should be scheduled on all environment cluster types that are marked with the label `restricted: "true"`. You can treat a workspace as a group of applications produced by an application team. 
--The second policy states that all deployment targets from the `kaizen-app-team` workspace, marked with labels `purpose: performance-test` and `edge: "false"` should be scheduled on all environment cluster types that are marked with label `size: "large"`. --This push to the `dev` branch triggers the scheduling process and creates a PR to the `dev` branch in the `Platform GitOps` repository: ---Besides `Promoted_Commit_id`, which is just tracking information for the promotion CD flow, the PR contains assignment manifests. The `functional-test` deployment target is assigned to the `drone` cluster type, and the `performance-test` deployment target is assigned to the `large` cluster type. Those manifests will land in `drone` and `large` folders that contain all assignments to these cluster types in the `Dev` environment. - -The `Dev` environment also includes `command-center` and `small` cluster types: -- :::image type="content" source="media/workload-management/dev-cluster-types.png" alt-text="Screenshot showing cluster types in the Dev environment."::: --However, only the `drone` and `large` cluster types were selected by the scheduling policies that you defined. --### Understand deployment target assignment manifests --Before you continue, take a closer look at the generated assignment manifests for the `functional-test` deployment target. There are `namespace.yaml`, `platform-config.yaml` and `reconciler.yaml` manifest files. --`namespace.yaml` defines a namespace that will be created on any `drone` cluster where the `hello-world` application runs. - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: "dev-drone-hello-world-app-functional-test" - labels: - environment: "dev" - workspace: "kaizen-app-team" - workload: "hello-world-app" - deploymentTarget: "hello-world-app-functional-test" - someLabel: some-value -``` --`platform-config.yaml` contains all platform configuration values available on any `drone` cluster that the application can use in the `Dev` environment. - -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: platform-config - namespace: dev-drone-hello-world-app-functional-test -data: - CLUSTER_NAME: Drone - DATABASE_URL: mysql://restricted-host:3306/mysqlrty123 - ENVIRONMENT: Dev - REGION: East US - SOME_COMMON_ENVIRONMENT_VARIABLE: "false" -``` --`reconciler.yaml` contains Flux resources that a `drone` cluster uses to fetch application manifests, prepared by the Application Team for the `functional-test` deployment target. - -```yaml -apiVersion: source.toolkit.fluxcd.io/v1beta2 -kind: GitRepository -metadata: - name: "hello-world-app-functional-test" - namespace: flux-system -spec: - interval: 15s - url: "https://github.com/eedorenko/kalypso-tut-test-app-gitops" - ref: - branch: "dev" - secretRef: - name: repo-secret --apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 -kind: Kustomization -metadata: - name: "hello-world-app-functional-test" - namespace: flux-system -spec: - interval: 30s - targetNamespace: "dev-drone-hello-world-app-functional-test" - sourceRef: - kind: GitRepository - name: "hello-world-app-functional-test" - path: "./functional-test" - prune: true -``` --> [!NOTE] -> The `control plane` defines that the `drone` cluster type uses `Flux` to reconcile manifests from the application GitOps repositories. Therefore `reconciler.yaml` file contains `GitRepository` and `Kustomization` resources. 
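If you want to spot-check the result later, once the PR is merged and reconciled, you can query the Flux objects directly on the `drone` cluster. This is an optional verification step that isn't part of the sample's scripts; it assumes your kubeconfig contains the `drone` context created by the deployment script:

```bash
# Optional check (not part of the sample scripts): list the Flux objects and the
# generated namespace on the drone cluster once the Platform GitOps PR is merged.
kubectl get gitrepositories,kustomizations -n flux-system --context=drone
kubectl get namespace dev-drone-hello-world-app-functional-test --context=drone
```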
--### Promote application to Stage --Once you approve and merge the PR to the `Platform GitOps` repository, the `drone` and `large` AKS clusters that represent the corresponding cluster types start fetching the assignment manifests. The `drone` cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed, pointing to the `Platform GitOps` repository. It reports its `compliance` status to Azure Resource Graph: ---The PR merging event starts a GitHub workflow `checkpromote` in the `control plane` repository. This workflow waits until all clusters with the [GitOps extension](conceptual-gitops-flux2.md) installed that are looking at the `dev` branch in the `Platform GitOps` repository are compliant with the PR commit. In this example, the only such cluster is `drone`. ---Once the `checkpromote` workflow succeeds, it starts the `cd` workflow that promotes the change (application registration) to the `Stage` environment. For better visibility, it also updates the git commit status in the `control plane` repository: ---> [!NOTE] -> If the `drone` cluster fails to reconcile the assignment manifests for any reason, the promotion flow will fail. The commit status will be marked as failed, and the application registration will not be promoted to the `Stage` environment. --Next, configure a scheduling policy for the `uat-test` deployment target in the `Stage` environment: --```bash -# Switch to stage branch (representing Stage environment) in the control-plane folder -git checkout stage -mkdir -p scheduling/kaizen --# Create a scheduling policy for the uat-test deployment target -cat <<EOF >scheduling/kaizen/uat-test-policy.yaml -apiVersion: scheduler.kalypso.io/v1alpha1 -kind: SchedulingPolicy -metadata: - name: uat-test-policy -spec: - deploymentTargetSelector: - workspace: kaizen-app-team - labelSelector: - matchLabels: - purpose: uat-test - clusterTypeSelector: - labelSelector: {} -EOF --git add . -git commit -m 'application scheduling policies' -git config pull.rebase false -git pull --no-edit -git push -``` --The policy states that all deployment targets from the `kaizen-app-team` workspace marked with the label `purpose: uat-test` should be scheduled on all cluster types defined in the environment. --Pushing this policy to the `stage` branch triggers the scheduling process, which creates a PR with the assignment manifests to the `Platform GitOps` repository, similar to those for the `Dev` environment. --As with the `Dev` environment, after reviewing and merging the PR to the `Platform GitOps` repository, the `checkpromote` workflow in the `control plane` repository waits until clusters with the [GitOps extension](conceptual-gitops-flux2.md) (`drone`) reconcile the assignment manifests. -- :::image type="content" source="media/workload-management/check-promote-to-stage.png" alt-text="Screenshot showing promotion to stage."::: --On successful execution, the commit status is updated. ---## 3 - Application Dev Team: Build and deploy application --The Application Team regularly submits pull requests to the `main` branch in the `Application Source` repository. Once a PR is merged to `main`, it starts a CI/CD workflow. Here, the workflow will be started manually. -- Go to the `Application Source` repository in GitHub. On the `Actions` tab, select `Run workflow`. ---The workflow performs the following actions: --- Builds the application Docker image and pushes it to the GitHub repository package.- Generates manifests for the `functional-test` and `performance-test` deployment targets. 
It uses configuration values from the `dev-configs` branch. The generated manifests are added to a pull request and auto-merged into the `dev` branch.- Generates manifests for the `uat-test` deployment target. It uses configuration values from the `stage-configs` branch. ---The generated manifests are added to a pull request to the `stage` branch that waits for approval: ---To test the application manually in the `Dev` environment before approving the PR to the `Stage` environment, first verify how the `functional-test` application instance works on the `drone` cluster: --```bash -kubectl port-forward svc/hello-world-service -n dev-drone-hello-world-app-functional-test 9090:9090 --context=drone --# output: -# Forwarding from 127.0.0.1:9090 -> 9090 -# Forwarding from [::1]:9090 -> 9090 --``` --While this command is running, open `localhost:9090` in your browser. You'll see the following greeting page: ---The next step is to check how the `performance-test` instance works on the `large` cluster: --```bash -kubectl port-forward svc/hello-world-service -n dev-large-hello-world-app-performance-test 8080:8080 --context=large --# output: -# Forwarding from 127.0.0.1:8080 -> 8080 -# Forwarding from [::1]:8080 -> 8080 --``` --This time, use port `8080` and open `localhost:8080` in your browser. --Once you're satisfied with the `Dev` environment, approve and merge the PR to the `Stage` environment. After that, test the `uat-test` application instance in the `Stage` environment on both clusters. --Run the following command for the `drone` cluster and open `localhost:8001` in your browser: - -```bash -kubectl port-forward svc/hello-world-service -n stage-drone-hello-world-app-uat-test 8001:8000 --context=drone -``` --Run the following command for the `large` cluster and open `localhost:8002` in your browser: --```bash -kubectl port-forward svc/hello-world-service -n stage-large-hello-world-app-uat-test 8002:8000 --context=large -``` --The application instance on the `large` cluster shows the following greeting page: -- :::image type="content" source="media/workload-management/stage-greeting-page.png" alt-text="Screenshot showing the greeting page on stage."::: --## 4 - Platform Team: Provide platform configurations --Applications in the clusters currently use the same database in both the `Dev` and `Stage` environments. Let's change that and configure the `west-us` clusters to provide a different database URL for applications running in the `Stage` environment: --```bash -# Switch to stage branch (representing Stage environment) in the control-plane folder -git checkout stage --# Update a config map with the configurations for west-us clusters -cat <<EOF >cluster-types/west-us/west-us-config.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: west-us-config - labels: - platform-config: "true" - region: west-us -data: - REGION: West US - DATABASE_URL: mysql://west-stage:8806/mysql2 -EOF --git add . -git commit -m 'database url configuration' -git config pull.rebase false -git pull --no-edit -git push -``` --The scheduler scans all config maps in the environment and collects values for each cluster type based on label matching. Then, it puts a `platform-config` config map in every deployment target folder in the `Platform GitOps` repository. The `platform-config` config map contains all of the platform configuration values that the workload can use on this cluster type in this environment. 
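How the application consumes these values is up to the workload itself. As an illustration only (the actual application manifests live in the `Application GitOps` repository and aren't reproduced here), a deployment could surface the whole `platform-config` config map as environment variables with `envFrom`:

```yaml
# Illustrative sketch only: expose the scheduler-generated platform-config
# config map to a container as environment variables. The image reference is a
# placeholder; the deployment and namespace names follow the sample's naming.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  namespace: stage-large-hello-world-app-uat-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: ghcr.io/<your-org>/hello-world:latest   # placeholder image
          envFrom:
            - configMapRef:
                name: platform-config   # generated by the scheduler for this deployment target
```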
--In a few seconds, a new PR to the `stage` branch in the `Platform GitOps` repository appears: ---Approve the PR and merge it. --Once the new configuration has arrived on the `large` cluster, check the `uat-test` application instance at `localhost:8002` after running the following commands: --```bash -kubectl rollout restart deployment hello-world-deployment -n stage-large-hello-world-app-uat-test --context=large -kubectl port-forward svc/hello-world-service -n stage-large-hello-world-app-uat-test 8002:8000 --context=large -``` --You'll see the updated database URL: ---## 5 - Platform Team: Add cluster type to environment --Currently, only the `drone` and `large` cluster types are included in the `Stage` environment. Let's add the `small` cluster type to `Stage` as well. Even though there's no physical cluster representing this cluster type, you can see how the scheduler reacts to this change. --```bash -# Switch to stage branch (representing Stage environment) in the control-plane folder -git checkout stage --# Add "small" cluster type in west-us region -mkdir -p cluster-types/west-us/small -cat <<EOF >cluster-types/west-us/small/small-cluster-type.yaml -apiVersion: scheduler.kalypso.io/v1alpha1 -kind: ClusterType -metadata: - name: small - labels: - region: west-us - size: small -spec: - reconciler: arc-flux - namespaceService: default - configType: configmap -EOF --git add . -git commit -m 'add new cluster type' -git config pull.rebase false -git pull --no-edit -git push -``` --In a few seconds, the scheduler submits a PR to the `Platform GitOps` repository. According to the `uat-test-policy` that you created, it assigns the `uat-test` deployment target to the new cluster type, as it's supposed to work on all available cluster types in the environment. ---## Clean up resources -When no longer needed, delete the resources that you created. To do so, run the following command: --```bash -# In kalypso folder -./deploy.sh -d -p <prefix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2> -``` --## Next steps --You have performed tasks for a few common workload management scenarios in a multi-cluster Kubernetes environment. There are many other scenarios you may want to explore. Continue to use the sample and see how you can implement the use cases that are most common in your daily activities. --To understand the underlying concepts and mechanics more deeply, refer to the following resources: --> [!div class="nextstepaction"] -> - [Concept: Workload Management in Multi-cluster environment with GitOps](conceptual-workload-management.md) -> - [Sample implementation: Workload Management in Multi-cluster environment with GitOps](https://github.com/microsoft/kalypso) -> - [Concept: CD process with GitOps](https://github.com/microsoft/kalypso/blob/main/docs/cd-concept.md) -> - [Sample implementation: Explore CI/CD flow with GitOps](https://github.com/microsoft/kalypso/blob/main/cicd/tutorial/cicd-tutorial.md) - |
azure-arc | Connect To Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/multicloud-connector/connect-to-aws.md | - Title: "Connect to AWS with the multicloud connector in the Azure portal" -description: "Learn how to add an AWS cloud by using the multicloud connector enabled by Azure Arc." - Previously updated : 06/11/2024---# Connect to AWS with the multicloud connector in the Azure portal --The multicloud connector enabled by Azure Arc lets you connect non-Azure public cloud resources to Azure by using the Azure portal. Currently, AWS public cloud environments are supported. --As part of connecting an AWS account to Azure, you deploy a CloudFormation template to the AWS account. This template creates all of the required resources for the connection. --> [!IMPORTANT] -> Multicloud connector enabled by Azure Arc is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Prerequisites --To use the multicloud connector, you need the appropriate permissions in both AWS and Azure. --### AWS prerequisites --To create the connector and to use multicloud inventory, you need the following permissions in AWS: --- **AmazonS3FullAccess**-- **AWSCloudFormationFullAccess**-- **IAMFullAccess**--For Arc onboarding, there are [additional prerequisites that must be met](onboard-multicloud-vms-arc.md#prerequisites). --When you upload your CloudFormation template, additional permissions will be requested, based on the solutions that you selected: --- For **Inventory**, we request **Global Read** permission to your account.-- For **Arc Onboarding**, our service requires **EC2 Write** access in order to install the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview).--### Azure prerequisites --To use the multicloud connector in an Azure subscription, you need the **Contributor** built-in role. --If this is the first time you're using the service, you need to [register these resource providers](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider), which requires **Contributor** access on the subscription: --- Microsoft.HybridCompute-- Microsoft.HybridConnectivity-- Microsoft.AwsConnector--> [!NOTE] -> The multicloud connector can work side-by-side with the [AWS connector in Defender for Cloud](/azure/defender-for-cloud/quickstart-onboard-aws). If you choose, you can use both of these connectors. --## Add your public cloud in the Azure portal --To add your AWS public cloud to Azure, use the Azure portal to enter details and generate a CloudFormation template. --1. In the Azure portal, navigate to **Azure Arc**. -1. Under **Management**, select **Multicloud connectors (preview)**. -1. In the **Connectors** pane, select **Create**. -1. On the **Basics** page: -- 1. Select the subscription and resource group in which to create your connector resource. - 1. Enter a unique name for the connector and select a [supported region](overview.md#supported-regions). - 1. Provide the ID for the AWS account that you want to connect, and indicate whether it's a single account or an organization account. - 1. Select **Next**. --1. On the **Solutions** page, select which solutions you'd like to use with this connector and configure them. 
Select **Add** to enable **[Inventory](view-multicloud-inventory.md)**, **[Arc onboarding](onboard-multicloud-vms-arc.md)**, or both. -- :::image type="content" source="media/add-aws-connector-solutions.png" alt-text="Screenshot showing the Solutions for the AWS connector in the Azure portal."::: -- - For **Inventory**, you can modify the following options: -- 1. Choose the **AWS Services** for which you want to scan and import resources. By default, all available services are selected. - 1. Choose whether or not to enable periodic sync. By default, this is enabled so that the connector will scan your AWS account regularly. If you uncheck the box, your AWS account will only be scanned once. - 1. If **Enable periodic sync** is checked, confirm or change the **Recur every** selection to specify how often your AWS account will be scanned. - 1. Choose which regions to scan for resources in your AWS account. By default, all available regions are selected. - 1. When you have finished making selections, select **Save** to return to the **Solutions** page. -- - For **Arc onboarding**: -- 1. Select a **Connectivity method** to determine whether the Connected Machine agent should connect to the internet via a public endpoint or by proxy server. If you select **Proxy server**, provide a **Proxy server URL** to which the EC2 instance can connect. - 1. Choose whether or not to enable periodic sync. By default, this is enabled so that the connector will scan your AWS account regularly. If you uncheck the box, your AWS account will only be scanned once. - 1. If **Enable periodic sync** is checked, confirm or change the **Recur every** selection to specify how often your AWS account will be scanned. - 1. Choose which regions to scan for EC2 instances in your AWS account. By default, all available regions are selected. --1. On the **Authentication template** page, download the CloudFormation template that you'll upload to AWS. This template is created based on the information you provided in **Basics** and the solutions you selected. You can [upload the template](#upload-cloudformation-template-to-aws) right away, or wait until you finish adding your public cloud. --1. On the **Tags** page, enter any tags you'd like to use. -1. On the **Review and create** page, confirm your information and then select **Create**. --If you didn't upload your template during this process, follow the steps in the next section to do so. --## Upload CloudFormation template to AWS --After you've saved the CloudFormation template generated in the previous section, you need to upload it to your AWS public cloud. If you upload the template before you finish connecting your AWS cloud in the Azure portal, your AWS resources will be scanned immediately. If you complete the **Add public cloud** process in the Azure portal before uploading the template, it will take a bit longer to scan your AWS resources and make them available in Azure. --### Create stack --Follow these steps to create a stack and upload your template: --1. Open the AWS CloudFormation console and select **Create stack**. -1. Select **Template is ready**, then select **Upload a template file**. Select **Choose file** and browse to select your template. Then select **Next**. -1. In **Specify stack details**, enter a stack name. Leave the other options set to their default settings and select **Next**. -1. In **Configure stack options**, leave the options set to their default settings and select **Next**. -1. 
In **Review and create**, confirm that the information is correct, select the acknowledgment checkbox, and then select **Submit**. --### Create StackSet --If your AWS account is an organization account, you also need to create a StackSet and upload your template again. To do so: --1. Open the AWS CloudFormation console and select **StackSets**, then select **Create StackSet**. -1. Select **Template is ready**, then select **Upload a template file**. Select **Choose file** and browse to select your template. Then select **Next**. -1. In **Specify stack details**, enter `AzureArcMultiCloudStackset` as the StackSet name, then select **Next**. -1. In **Configure stack options**, leave the options set to their default settings and select **Next**. -1. In **Set deployment options**, enter the ID for the AWS account where the StackSet will be deployed, and select any AWS region to deploy the stack. Leave the other options set to their default settings and select **Next**. -1. In **Review**, confirm that the information is correct, select the acknowledgment checkbox, and then select **Submit**. --## Confirm deployment --After you complete the **Add public cloud** option in Azure, and you upload your template to AWS, your connector and selected solutions will be created. On average, it takes about one hour for your AWS resources to become available in Azure. If you upload the template after creating the public cloud in Azure, it may take a bit more time before you see the AWS resources. --AWS resources are stored in a resource group using the naming convention `aws_yourAwsAccountId`. Scans will run regularly to update these resources, based on your **Enable periodic sync** selections. --## Next steps --- Query your inventory with [the multicloud connector **Inventory** solution](view-multicloud-inventory.md).-- Learn how to [use the multicloud connector **Arc onboarding** solution](onboard-multicloud-vms-arc.md).-- |
azure-arc | Onboard Multicloud Vms Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/multicloud-connector/onboard-multicloud-vms-arc.md | - Title: Onboard VMs to Azure Arc through the multicloud connector -description: Learn how to enable the Arc onboarding solution with the multicloud connector enabled by Azure Arc. - Previously updated : 06/11/2024---# Onboard VMs to Azure Arc through the multicloud connector --The **Arc onboarding** solution of the multicloud connector auto-discovers VMs in a [connected public cloud](connect-to-aws.md), then installs the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) to onboard the VMs to Azure Arc. Currently, EC2 instances in AWS public cloud environments are supported. --This simplified experience lets you use Azure management services, such as Azure Monitor, providing a centralized way to manage Azure and AWS VMs together. --> [!IMPORTANT] -> Multicloud connector enabled by Azure Arc is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --You can enable the **Arc onboarding** solution when you [connect your public cloud to Azure](connect-to-aws.md). --## Prerequisites --In addition to the [general prerequisites](connect-to-aws.md#prerequisites) for connecting a public cloud, be sure to meet the requirements for the **Arc onboarding** solution. This includes requirements for each EC2 instance that will be onboarded to Azure Arc. --- You must have **AmazonEC2FullAccess** permissions in your public cloud.- EC2 instances must meet the [general prerequisites for installing the Connected Machine agent](../servers/prerequisites.md).- EC2 instances must have the SSM agent installed. Most EC2 instances have this preconfigured.- EC2 instances must have a tag with the key `arc` and any value. This tag can be assigned manually or via a policy.- The **ArcForServerSSMRole** IAM role must be [attached to each EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#attach-iam-role). Attach this role after you upload your CloudFormation template during the connector creation steps.--## AWS resource representation in Azure --After you connect your AWS cloud and enable the **Arc onboarding** solution, the multicloud connector creates a new resource group with the naming convention `aws_yourAwsAccountId`. --When EC2 instances are connected to Azure Arc, representations of these machines appear in this resource group. These resources are placed in Azure regions, using a [standard mapping scheme](resource-representation.md#region-mapping). You can filter which regions you would like to scan. By default, all regions are scanned, but you can choose to exclude certain regions when you [configure the solution](connect-to-aws.md#add-your-public-cloud-in-the-azure-portal). --## Connectivity method --When creating the [**Arc onboarding** solution](connect-to-aws.md#add-your-public-cloud-in-the-azure-portal), you select whether the Connected Machine agent should connect to the internet via a public endpoint or through a proxy server. If you select **Proxy server**, you must provide a **Proxy server URL** to which the EC2 instance can connect. 
--For more information, see [Connected machine agent network requirements](../servers/network-requirements.md?tabs=azure-cloud). --## Periodic sync options --The periodic sync time that you select when configuring the **Arc onboarding** solution determines how often your AWS account is scanned and synced to Azure. By enabling periodic sync, any time there is a newly discovered EC2 instance that meets the prerequisites, the Arc agent will be installed automatically. --If you prefer, you can turn periodic sync off when configuring this solution. If you do so, new EC2 instances won't be automatically onboarded to Azure Arc, as Azure won't be able to scan for new instances. --## Next steps --- Learn more about [managing connected servers through Azure Arc](../servers/overview.md).-- Learn about the [Multicloud Connector **Inventory** solution](view-multicloud-inventory.md). |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/multicloud-connector/overview.md | - Title: "What is Multicloud connector enabled by Azure Arc (preview)?" -description: "The multicloud connector lets you connect non-Azure public cloud resources to Azure, providing a centralized source for management and governance." - Previously updated : 06/11/2024----# What is Multicloud connector enabled by Azure Arc (preview)? --Multicloud connector enabled by Azure Arc lets you connect non-Azure public cloud resources to Azure, providing a centralized source for management and governance. Currently, AWS public cloud environments are supported. --The Multicloud connector supports these solutions: --- **Inventory**: Allows you to see an up-to-date view of your resources from other public clouds in Azure, providing you with a single place to see all of your cloud resources. You can query all your cloud resources through Azure Resource Graph. When assets are represented in Azure, metadata from the source cloud is also included. For instance, if you need to query all of your Azure and AWS resources with a certain tag, you can do so. The **Inventory** solution will scan your source cloud on a periodic basis to ensure a complete, correct view is represented in Azure. You can also apply Azure tags or Azure policies on these resources.-- **Arc onboarding**: Auto-discovers EC2 instances running in your AWS environment and installs the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) on the VMs so that they're onboarded to Azure Arc. This simplified experience lets you use Azure management services such as Azure Monitor on these VMs, providing a centralized way to manage Azure and AWS resources together.--For more information about how the multicloud connector works, including Azure and AWS prerequisites, see [Add a public cloud with the multicloud connector in the Azure portal](connect-to-aws.md). --The multicloud connector can work side-by-side with the [AWS connector in Defender for Cloud](/azure/defender-for-cloud/quickstart-onboard-aws). If you choose, you can use both of these connectors. --> [!IMPORTANT] -> Multicloud connector enabled by Azure Arc is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Supported regions --In Azure, the following regions are supported for the multicloud connector: --- East US, West Central US, Canada Central, West Europe--The multicloud connector isn't available in national clouds (Azure Government, Microsoft Azure operated by 21Vianet). --In AWS, we scan for resources in the following regions: --- us-east-1, us-east-2, us-west-1, us-west-2, ca-central-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-3, eu-west-1, eu-west-2, eu-central-1, eu-north-1, sa-east-1--Scanned AWS resources are automatically [mapped to corresponding Azure regions](resource-representation.md#region-mapping). --## Pricing --The multicloud connector is free to use, but it integrates with other Azure services that have their own pricing models. Any Azure service that is used with the Multicloud Connector, such as Azure Monitor, will be charged as per the pricing for that service. For more information, see the [Azure pricing page](https://azure.microsoft.com/pricing/). 
--After you connect your AWS cloud, the multicloud connector queries the AWS resource APIs several times a day. These read-only API calls incur no charges in AWS, but they *are* registered in CloudTrail if you've enabled a trail for read events. ---## Next steps --- Learn how to [connect a public cloud in the Azure portal](connect-to-aws.md).-- Learn how to [use the multicloud connector **Inventory** solution](view-multicloud-inventory.md).-- Learn how to [use the multicloud connector **Arc onboarding** solution](onboard-multicloud-vms-arc.md). |
azure-arc | Resource Representation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/multicloud-connector/resource-representation.md | - Title: Multicloud connector resource representation in Azure -description: Understand how AWS resources are represented in Azure after they're added through the multicloud connector enabled by Azure Arc. - Previously updated : 06/11/2024---# Multicloud connector resource representation in Azure --The multicloud connector enabled by Azure Arc lets you connect non-Azure public cloud resources to Azure, providing a centralized source for management and governance. Currently, AWS public cloud environments are supported. --This article describes how AWS resources from a connected public cloud are represented in your Azure environment. --> [!IMPORTANT] -> Multicloud connector enabled by Azure Arc is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Resource group name --After you [connect your AWS public cloud to Azure](connect-to-aws.md), the multicloud connector creates a new resource group with the following naming convention: --`aws_yourAwsAccountId` --> [!NOTE] -> Tags are not created when the Resource Group is created with the connector. Be sure to disable any policies for Tags being required when creating a Resource Group, otherwise the Resource Group creation process will fail due to the Tags being missing. --For every AWS resource discovered through the **[Inventory](view-multicloud-inventory.md)** solution, an Azure representation is created in the `aws_yourAwsAccountId` resource group. Each resource has the [`AwsConnector` namespace value associated with its AWS service](view-multicloud-inventory.md#supported-aws-services). --EC2 instances connected to Azure Arc through the **[Arc onboarding](onboard-multicloud-vms-arc.md)** solution are also represented as Arc-enabled server resources under `Microsoft.HybridCompute/machines` in the `aws_yourAwsAccountId` resource group. If you previously onboarded an EC2 machine to Azure Arc, you won't see that machine in this resource group, because it already has a representation in Azure. --## Region mapping --Resources that are discovered in AWS and projected in Azure are placed in Azure regions, using the following mapping scheme: --|AWS region |Mapped Azure region | -|--|--| -|us-east-1 | EastUS | -|us-east-2 | EastUS | -|us-west-1 | EastUS | -|us-west-2 | EastUS | -|ca-central-1 | EastUS | -|ap-southeast-1 | SoutheastAsia | -|ap-northeast-1 | SoutheastAsia | -|ap-northeast-3 | SoutheastAsia | -|ap-southeast-2 | AU East | -|eu-west-1 | West Europe | -|eu-central-1 | West Europe | -|eu-north-1 | West Europe | -|eu-west-2 | UK South | -|sa-east-1 | Brazil South | --## Removing resources --If you remove the connected cloud, or disable a solution, periodic syncs will stop for that solution, and resources will no longer be updated in Azure. However, the resources will remain in your Azure account unless you delete them. To avoid confusion, we recommend removing these AWS resource representations from Azure when you remove an AWS public cloud. --To remove all of the AWS resource representations from Azure, navigate to the `aws_yourAwsAccountId` resource group, then delete it. 
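For example, assuming an AWS account ID of `123456789012`, the resource group and every projected resource in it could be removed with the Azure CLI:

```bash
# Example only: delete the resource group that holds the projected AWS resources.
# Replace 123456789012 with your own AWS account ID.
az group delete --name aws_123456789012 --yes --no-wait
```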
--If you delete the connector, you should delete the Cloud Formation template on AWS. If you delete a solution, you'll also need to update your Cloud Formation template to remove the required access for the deleted solution. You can find the updated template for the connector in the Azure portal under **Settings > Authentication template**. |
azure-arc | View Multicloud Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/multicloud-connector/view-multicloud-inventory.md | - Title: View multicloud inventory with the multicloud connector enabled by Azure Arc -description: View multicloud inventory with the multicloud connector enabled by Azure Arc - Previously updated : 06/11/2024---# View multicloud inventory with the multicloud connector enabled by Azure Arc --The **Inventory** solution of the multicloud connector shows an up-to-date view of your resources from other public clouds in Azure, providing you with a single place to see all your cloud resources. Currently, AWS public cloud environments are supported. --> [!IMPORTANT] -> Multicloud connector enabled by Azure Arc is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --After you enable the **Inventory** solution, metadata from the assets in the source cloud is included with the asset representations in Azure. You can also apply Azure tags or Azure policies to these resources. This solution allows you to query for all your cloud resources through Azure Resource Graph, such as querying to find all Azure and AWS resources with a specific tag. --The **Inventory** solution scans your source cloud regularly to update the view represented in Azure. You can specify the interval to query when you [connect your public cloud](connect-to-aws.md) and configure the **Inventory** solution. --> [!TIP] -> At this time, we recommend that you don't use the multicloud connector **Inventory** solution with EC2 instances that have already been [connected to Azure Arc](../servers/deployment-options.md) and reside in a different subscription than your connector resource. Doing so will create a duplicate record of the EC2 instance in Azure. --## Supported AWS services --Today, resources associated with the following AWS services are scanned and represented in Azure. When you [create the **Inventory** solution](connect-to-aws.md#add-your-public-cloud-in-the-azure-portal), all available services are selected by default, but you can optionally include any services. --The following table shows the AWS services that are scanned, the resource types associated with each service, and the Azure namespace that corresponds to each resource type. 
--| AWS service | AWS resource type | Azure namespace | -|--|--|--| -| API Gateway | `apiGatewayRestApis` | `Microsoft.AwsConnector/apiGatewayRestApis`| -| API Gateway | `apiGatewayStages` | `Microsoft.AwsConnector/apiGatewayStages`| -| Cloud Formation | `cloudFormationStacks` | `Microsoft.AwsConnector/cloudFormationStacks`| -| Cloud Formation | `cloudFormationStackSets` | `Microsoft.AwsConnector/cloudFormationStackSets`| -| Cloud Trail | `cloudTrailTrails` | `Microsoft.AwsConnector/cloudTrailTrails`| -| Cloud Watch | `cloudWatchAlarms` | `Microsoft.AwsConnector/cloudWatchAlarms`| -| Dynamo DB | `dynamoDBTables` | `Microsoft.AwsConnector/dynamoDBTables`| -| EC2 | `ec2Instances` | `Microsoft.HybridCompute/machines/EC2InstanceId`, `Microsoft.AwsConnector/Ec2Instances`| -| EC2 | `ec2KeyPairs` | `Microsoft.AwsConnector/ec2KeyPairs`| -| EC2 | `ec2Subnets` | `Microsoft.AwsConnector/ec2Subnets`| -| EC2 | `ec2Volumes` | `Microsoft.AwsConnector/ec2Volumes`| -| EC2 | `ec2VPCs` | `Microsoft.AwsConnector/ec2VPCs`| -| EC2 | `ec2NetworkAcls` | `Microsoft.AwsConnector/ec2NetworkAcls`| -| EC2 | `ec2NetworkInterfaces`| `Microsoft.AwsConnector/ec2NetworkInterfaces`| -| EC2 | `ec2RouteTables` | `Microsoft.AwsConnector/ec2RouteTables`| -| EC2 | `ec2VPCEndpoints` | `Microsoft.AwsConnector/ec2VPCEndpoints`| -| EC2 | `ec2VPCPeeringConnections` | `Microsoft.AwsConnector/ec2VPCPeeringConnections`| -| EC2 | `ec2InstanceStatuses` | `Microsoft.AwsConnector/ec2InstanceStatuses`| -| EC2 | `ec2SecurityGroups` | `Microsoft.AwsConnector/ec2SecurityGroups`| -| ECR | `ecrRepositories` | `Microsoft.AwsConnector/ecrRepositories`| -| ECS | `ecsClusters` | `Microsoft.AwsConnector/ecsClusters`| -| ECS | `ecsServices` | `Microsoft.AwsConnector/ecsServices`| -| ECS | `ecsTaskDefinitions` | `Microsoft.AwsConnector/ecsTaskDefinitions`| -| EFS | `efsFileSystems` | `Microsoft.AwsConnector/efsFileSystems`| -| EFS | `efsMountTargets` | `Microsoft.AwsConnector/efsMountTargets`| -| Elastic Beanstalk | `elasticBeanstalkEnvironments` | `Microsoft.AwsConnector/elasticBeanstalkEnvironments`| -| Elastic Load Balancer V2 | `elasticLoadBalancingV2LoadBalancers`| `Microsoft.AwsConnector/elasticLoadBalancingV2LoadBalancers`| -| Elastic Load Balancer V2 | `elasticLoadBalancingV2Listeners`| `Microsoft.AwsConnector/elasticLoadBalancingV2Listeners`| -| Elastic Load Balancer V2 | `elasticLoadBalancingV2TargetGroups`| `Microsoft.AwsConnector/elasticLoadBalancingV2TargetGroups`| -| Elastic Search | `elasticsearchDomains` | `Microsoft.AwsConnector/elasticsearchDomains`| -| GuardDuty | `guardDutyDetectors` | `Microsoft.AwsConnector/guardDutyDetectors`| -| IAM | `iamGroups` | `Microsoft.AwsConnector/iamGroups`| -| IAM | `iamManagedPolicies` | `Microsoft.AwsConnector/iamManagedPolicies`| -| IAM | `iamServerCertificates` | `Microsoft.AwsConnector/iamServerCertificates`| -| IAM | `iamUserPolicies` | `Microsoft.AwsConnector/iamUserPolicies`| -| IAM | `iamVirtualMFADevices` | `Microsoft.AwsConnector/iamVirtualMFADevices`| -| KMS | `kmsKeys` | `Microsoft.AwsConnector/kmsKeys`| -| Lambda | `lambdaFunctions` | `Microsoft.AwsConnector/lambdaFunctions`| -| Lightsail | `lightsailInstances` | `Microsoft.AwsConnector/lightsailInstances`| -| Lightsail | `lightsailBuckets`| `Microsoft.AwsConnector/lightsailBuckets`| -| Logs | `logsLogGroups` | `Microsoft.AwsConnector/logsLogGroups`| -| Logs | `logsLogStreams` | `Microsoft.AwsConnector/logsLogStreams`| -| Logs | `logsMetricFilters` | `Microsoft.AwsConnector/logsMetricFilters`| -| Logs | `logsSubscriptionFilters` | 
`Microsoft.AwsConnector/logsSubscriptionFilters`| -| Macie | `macieAllowLists` | `Microsoft.AwsConnector/macieAllowLists`| -| Network Firewalls | `networkFirewallFirewalls` | `Microsoft.AwsConnector/networkFirewallFirewalls`| -| Network Firewalls | `networkFirewallFirewallPolicies` | `Microsoft.AwsConnector/networkFirewallFirewallPolicies`| -| Network Firewalls | `networkFirewallRuleGroups` | `Microsoft.AwsConnector/networkFirewallRuleGroups`| -| Organization | `organizationsAccounts` | `Microsoft.AwsConnector/organizationsAccounts`| -| Organization | `organizationsOrganizations` | `Microsoft.AwsConnector/organizationsOrganizations`| -| RDS | `rdsDBInstances` | `Microsoft.AwsConnector/rdsDBInstances`| -| RDS | `rdsDBClusters` | `Microsoft.AwsConnector/rdsDBClusters`| -| RDS | `rdsEventSubscriptions` | `Microsoft.AwsConnector/rdsEventSubscriptions`| -| Redshift | `redshiftClusters` | `Microsoft.AwsConnector/redshiftClusters`| -| Redshift | `redshiftClusterParameterGroups` | `Microsoft.AwsConnector/redshiftClusterParameterGroups`| -| Route 53 | `route53HostedZones` | `Microsoft.AwsConnector/route53HostedZones`| -| SageMaker | `sageMakerApps` | `Microsoft.AwsConnector/sageMakerApps`| -| SageMaker | `sageMakerDevices` | `Microsoft.AwsConnector/sageMakerDevices`| -| SageMaker | `sageMakerImages` | `Microsoft.AwsConnector/sageMakerImages`| -| S3 | `s3Buckets` | `Microsoft.AwsConnector/s3Buckets`| -| S3 | `s3BucketPolicies` | `Microsoft.AwsConnector/s3BucketPolicies`| -| S3 | `s3AccessPoints` | `Microsoft.AwsConnector/s3AccessPoints`| -| SNS | `snsTopics` | `Microsoft.AwsConnector/snsTopics`| -| SQS | `sqsQueues` | `Microsoft.AwsConnector/sqsQueues`| --## AWS resource representation in Azure --After you connect your AWS cloud and enable the **Inventory** solution, the multicloud connector creates a new resource group using the naming convention `aws_yourAwsAccountId`. Azure representations of your AWS resources are created in this resource group, using the `AwsConnector` namespace values described in the previous section. You can apply Azure tags and policies to these resources. --Resources that are discovered in AWS and projected in Azure are placed in Azure regions, using a [standard mapping scheme](resource-representation.md#region-mapping). --## Periodic sync options --The periodic sync time that you select when configuring the **Inventory** solution determines how often your AWS account is scanned and synced to Azure. By enabling periodic sync, changes to your AWS resources are reflected in Azure. For instance, if a resource is deleted in AWS, that resource is also deleted in Azure. --If you prefer, you can turn periodic sync off when configuring this solution. If you do so, your Azure representation may become out of sync with your AWS resources, as Azure won't be able to rescan and detect any changes. --## Querying for resources in Azure Resource Graph --[Azure Resource Graph](/azure/governance/resource-graph/overview) is an Azure service designed to extend Azure Resource Management by providing efficient and performant resource exploration. Running queries at scale across a given set of subscriptions helps you effectively govern your environment. --You can run queries using [Resource Graph Explorer](/azure/governance/resource-graph/first-query-portal) in the Azure portal. Some example queries for common scenarios are shown here. 
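You can also run the same KQL from the Azure CLI if you prefer the command line. A minimal sketch, assuming the `resource-graph` CLI extension is installed:

```bash
# Example only: run a Resource Graph query from the Azure CLI.
# Install the extension first if needed: az extension add --name resource-graph
az graph query -q "resources | where id contains 'microsoft.awsconnector' | project name, type, resourceGroup" --first 100
```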
--### Query all onboarded multicloud asset inventories --```kusto -resources -| where subscriptionId == "<subscription ID>" -| where id contains "microsoft.awsconnector" -| union (awsresources | where type == "microsoft.awsconnector/ec2instances" and subscriptionId =="<subscription ID>") -| extend awsTags= properties.awsTags, azureTags = ['tags'] -| project subscriptionId, resourceGroup, type, id, awsTags, azureTags, properties -``` --### Query for all resources under a specific connector --```kusto -resources -| extend connectorId = tolower(tostring(properties.publicCloudConnectorsResourceId)), resourcesId=tolower(id) -| join kind=leftouter ( - awsresources - | extend pccId = tolower(tostring(properties.publicCloudConnectorsResourceId)), awsresourcesId=tolower(id) - | extend parentId = substring(awsresourcesId, 0, strlen(awsresourcesId) - strlen("/providers/microsoft.awsconnector/ec2instances/default")) -) on $left.resourcesId == $right.parentId -| where connectorId =~ "yourConnectorId" or pccId =~ "yourConnectorId" -| extend resourceType = tostring(split(iif (type =~ "microsoft.hybridcompute/machines", type1, type), "/")[1]) -``` --### Query for all virtual machines in Azure and AWS, along with their instance size --```kusto -resources -| where (['type'] == "microsoft.compute/virtualmachines") -| union (awsresources | where type == "microsoft.awsconnector/ec2instances") -| extend cloud=iff(type contains "ec2", "AWS", "Azure") -| extend awsTags=iff(type contains "microsoft.awsconnector", properties.awsTags, ""), azureTags=tags -| extend size=iff(type contains "microsoft.compute", properties.hardwareProfile.vmSize, properties.awsProperties.instanceType.value) -| project subscriptionId, cloud, resourceGroup, id, size, azureTags, awsTags, properties -``` --### Query for all functions across Azure and AWS --```kusto -resources -| where (type == 'microsoft.web/sites' and ['kind'] contains 'functionapp') or type == "microsoft.awsconnector/lambdafunctionconfigurations" -| extend cloud=iff(type contains "awsconnector", "AWS", "Azure") -| extend functionName=iff(cloud=="Azure", properties.name,properties.awsProperties.functionName), state=iff(cloud=="Azure", properties.state, properties.awsProperties.state), lastModifiedTime=iff(cloud=="Azure", properties.lastModifiedTimeUtc,properties.awsProperties.lastModified), location=iff(cloud=="Azure", location,properties.awsRegion), tags=iff(cloud=="Azure", tags, properties.awsTags) -| project cloud, functionName, lastModifiedTime, location, tags -``` --### Query for all resources with a certain tag --```kusto -resources -| extend awsTags=iff(type contains "microsoft.awsconnector", properties.awsTags, ""), azureTags=tags -| where awsTags contains "<yourTagValue>" or azureTags contains "<yourTagValue>" -| project subscriptionId, resourceGroup, name, azureTags, awsTags -``` --## Next steps --- Learn about the [multicloud connector **Arc Onboarding** solution](onboard-multicloud-vms-arc.md).-- Learn more about [Azure Resource Graph](/azure/governance/resource-graph/overview).- |
azure-arc | Network Requirements Consolidated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md | - Title: Azure Arc network requirements -description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 06/25/2024----# Azure Arc network requirements --This article lists the endpoints, ports, and protocols required for Azure Arc-enabled services and features. ---## Azure Arc-enabled Kubernetes endpoints --Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernetes-based Arc offerings, including: --- Azure Arc-enabled Kubernetes-- Azure Arc-enabled App services-- Azure Arc-enabled Machine Learning-- Azure Arc-enabled data services (direct connectivity mode only)--For more information, see [Azure Arc-enabled Kubernetes network requirements](kubernetes/network-requirements.md). --## Azure Arc-enabled data services --This section describes requirements specific to Azure Arc-enabled data services, in addition to the Arc-enabled Kubernetes endpoints listed above. ---For more information, see [Connectivity modes and requirements](dat). --## Azure Arc-enabled servers --Connectivity to Arc-enabled server endpoints is required for: --- SQL Server enabled by Azure Arc-- Azure Arc-enabled VMware vSphere <sup>*</sup>-- Azure Arc-enabled System Center Virtual Machine Manager <sup>*</sup>-- Azure Arc-enabled Azure Stack (HCI) <sup>*</sup>-- <sup>*</sup>Only required for guest management enabled. ---### Subset of endpoints for ESU only ---For more information, see [Connected Machine agent network requirements](servers/network-requirements.md). --## Azure Arc resource bridge --This section describes additional networking requirements specific to deploying Azure Arc resource bridge in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere and Azure Arc-enabled System Center Virtual Machine Manager. ---For more information, see [Azure Arc resource bridge network requirements](resource-bridge/network-requirements.md). --## Azure Arc-enabled VMware vSphere --Azure Arc-enabled VMware vSphere also requires: ---For more information, see [Support matrix for Azure Arc-enabled VMware vSphere](vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md). --## Azure Arc-enabled System Center Virtual Machine Manager --Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) also requires: --| **Service** | **Port** | **URL** | **Direction** | **Notes**| -| | | | | | -| SCVMM management Server | 443 | URL of the SCVMM management server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. | --For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md). --## Additional endpoints --Depending on your scenario, you might need connectivity to other URLs, such as those used by the Azure portal, management tools, or other Azure services. In particular, review these lists to ensure that you allow connectivity to any necessary endpoints: --- [Azure portal URLs](../azure-portal/azure-portal-safelist-urls.md)-- [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints) |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md | - Title: Azure Arc overview -description: Learn about what Azure Arc is and how it helps customers enable management and governance of their hybrid resources with other Azure services and features. Previously updated : 11/03/2023----# Azure Arc overview --Today, companies struggle to control and govern increasingly complex environments that extend across data centers, multiple clouds, and edge. Each environment and cloud possesses its own set of management tools, and new DevOps and ITOps operational models can be hard to implement across resources. --Azure Arc simplifies governance and management by delivering a consistent multicloud and on-premises management platform. --Azure Arc provides a centralized, unified way to: --* Manage your entire environment together by projecting your existing non-Azure and/or on-premises resources into Azure Resource Manager. -* Manage virtual machines, Kubernetes clusters, and databases as if they are running in Azure. -* Use familiar Azure services and management capabilities, regardless of where your resources live. -* Continue using traditional ITOps while introducing DevOps practices to support new cloud native patterns in your environment. -* Configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions. ---Currently, Azure Arc allows you to manage the following resource types hosted outside of Azure: --* [Servers](servers/overview.md): Manage Windows and Linux physical servers and virtual machines hosted outside of Azure. -* [Kubernetes clusters](kubernetes/overview.md): Attach and configure Kubernetes clusters running anywhere, with multiple supported distributions. -* [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance -and PostgreSQL (preview) services are currently available. -* [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure. -* Virtual machines: Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) and enable VM self-service through role-based access. --> [!NOTE] -> For more information regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](/azure/azure-arc/choose-service). --## Key features and benefits --Some of the key scenarios that Azure Arc supports are: --* Implement consistent inventory, management, governance, and security for servers across your environment. --* Configure [Azure VM extensions](./servers/manage-vm-extensions.md) to use Azure management services to monitor, secure, and update your servers. --* Manage and govern Kubernetes clusters at scale. --* [Use GitOps to deploy configurations](kubernetes/conceptual-gitops-flux2.md) across one or more clusters from Git repositories. --* Zero-touch compliance and configuration for Kubernetes clusters using Azure Policy. --* Run [Azure data services](../azure-arc/kubernetes/custom-locations.md) on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL server, with benefits such as upgrades, updates, security, and monitoring). 
Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure. --* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled data services](./dat). --* Perform virtual machine lifecycle and management operations for [VMware vSphere](./vmware-vsphere/overview.md) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) environments. --* A unified experience viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API. --## Pricing --Below is pricing information for the features available today with Azure Arc. --### Azure Arc-enabled servers --The following Azure Arc control plane functionality is offered at no extra cost: --* Resource organization through Azure management groups and tags -* Searching and indexing through Azure Resource Graph -* Access and security through Azure Role-based access control (RBAC) -* Environments and automation through templates and extensions --Any Azure service that is used on Azure Arc-enabled servers, such as Microsoft Defender for Cloud or Azure Monitor, will be charged as per the pricing for that service. For more information, see the [Azure pricing page](https://azure.microsoft.com/pricing/). --### Azure Arc-enabled Kubernetes --Any Azure service that is used on Azure Arc-enabled Kubernetes, such as Microsoft Defender for Cloud or Azure Monitor, will be charged as per the pricing for that service. --For more information on pricing for configurations on top of Azure Arc-enabled Kubernetes, see the [Azure pricing page](https://azure.microsoft.com/pricing/). --### Azure Arc-enabled data services --For information, see the [Azure pricing page](https://azure.microsoft.com/pricing/). --## Next steps --* Learn about [Azure Arc-enabled servers](./servers/overview.md). -* Learn about [Azure Arc-enabled Kubernetes](./kubernetes/overview.md). -* Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/). -* Learn about [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview). -* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md). -* Learn about [Azure Arc-enabled VM Management on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview). -* Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md). -* Experience Azure Arc by exploring the [Azure Arc Jumpstart](https://aka.ms/AzureArcJumpstart). -* Learn about best practices and design patterns through the [Azure Arc Landing Zone Accelerators](https://aka.ms/ArcLZAcceleratorReady). -* Understand [network requirements for Azure Arc](network-requirements-consolidated.md). |
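As a small illustration of the "manage as if running in Azure" model described in this overview, connected machines and clusters appear as ordinary Azure resources that the standard tooling can query. A minimal sketch, assuming the `connectedmachine` and `connectedk8s` CLI extensions are installed and using a placeholder resource group:

```azurecli
# Install the relevant CLI extensions (one-time step).
az extension add --name connectedmachine
az extension add --name connectedk8s

# List Arc-enabled servers and Arc-enabled Kubernetes clusters like any other Azure resource.
az connectedmachine list --resource-group <resource-group> --output table
az connectedk8s list --resource-group <resource-group> --output table
```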
azure-arc | Conceptual Custom Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/platform/conceptual-custom-locations.md | - Title: "Overview of custom locations with Azure Arc"-- Previously updated : 01/08/2024- -description: "This article provides a conceptual overview of the custom locations capability of Azure Arc." ---# Custom locations --As an extension of the Azure location construct, a *custom location* provides a reference as a deployment target that administrators can set up when creating an Azure resource. The custom location feature abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization. These users can then reference the custom location without having to be aware of these details. --Custom locations can be used to enable [Azure Arc-enabled Kubernetes clusters](../kubernetes/overview.md) as target locations for deploying Azure services instances. Azure offerings that can be deployed on top of custom locations include databases, such as [SQL Managed Instance enabled by Azure Arc](/azure/azure-arc/data/managed-instance-overview) and [Azure Arc-enabled PostgreSQL server](/azure/azure-arc/data/what-is-azure-arc-enabled-postgresql). --On Arc-enabled Kubernetes clusters, a custom location represents an abstraction of a namespace within the Azure Arc-enabled Kubernetes cluster. Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster. --## Custom location permissions --Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md), an administrator or operator can determine which users have access to create resource instances on: --* A namespace within a Kubernetes cluster to target deployment of SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL server. -* The compute, storage, networking, and other vCenter or Azure Stack HCI resources to deploy and manage VMs. --For example, a cluster operator could create a custom location **Contoso-Michigan-Healthcare-App** representing a namespace on a Kubernetes cluster in your organization's Michigan Data Center. The operator can assign Azure RBAC permissions to application developers on this custom location so that they can deploy healthcare-related web applications. The developers can then deploy these applications to **Contoso-Michigan-Healthcare-App** without having to know details of the namespace and Kubernetes cluster. --## Architecture for Arc-enabled Kubernetes --When an administrator enables the custom locations feature on a cluster, a ClusterRoleBinding is created, authorizing the Microsoft Entra application used by the Custom Locations Resource Provider (RP). Once authorized, the Custom Locations RP can create ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determine the list of RPs to authorize. --[ ![Diagram showing custom locations architecture on Arc-enabled Kubernetes.](../kubernetes/media/conceptual-custom-locations-usage.png) ](../kubernetes/media/conceptual-custom-locations-usage.png#lightbox) --When the user creates a data service instance on the cluster: --1. 
The **PUT** request is sent to Azure Resource Manager. -1. The **PUT** request is forwarded to the Azure Arc-enabled Data Services RP. -1. The RP fetches the `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster, on which the custom location exists. - * The custom location is referenced as `extendedLocation` in the original PUT request. -1. The Azure Arc-enabled Data Services RP uses the `kubeconfig` to communicate with the cluster to create a custom resource of the Azure Arc-enabled Data Services type on the namespace mapped to the custom location. - * The Azure Arc-enabled Data Services operator was deployed via cluster extension creation before the custom location existed. -1. The Azure Arc-enabled Data Services operator reads the new custom resource created on the cluster and creates the data controller, realizing the desired state on the cluster. --The sequence of steps to create a SQL managed instance or PostgreSQL instance is identical to the sequence described above. --## Next steps --* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](../kubernetes/quickstart-connect-cluster.md). -* Learn how to [create a custom location](../kubernetes/custom-locations.md) on your Azure Arc-enabled Kubernetes cluster. |
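The flow above assumes the custom locations feature is enabled on the connected cluster and that a custom location resource already exists. A minimal sketch of that one-time setup, assuming the `connectedk8s` and `customlocation` CLI extensions are installed; the namespace and cluster extension resource ID are illustrative placeholders:

```azurecli
# Enable the custom locations feature on the Arc-enabled Kubernetes cluster (administrator action).
az connectedk8s enable-features --name <cluster-name> --resource-group <resource-group> \
  --features custom-locations

# Create a custom location that maps to a namespace on the cluster.
az customlocation create --name Contoso-Michigan-Healthcare-App \
  --resource-group <resource-group> \
  --namespace healthcare-app \
  --host-resource-id $(az connectedk8s show --name <cluster-name> --resource-group <resource-group> --query id --output tsv) \
  --cluster-extension-ids <cluster-extension-resource-id>
```

Once the custom location exists, the PUT request described in step 1 can reference it as `extendedLocation`.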
azure-arc | Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md | - Title: Azure Arc resource bridge deployment command overview -description: Learn about the Azure CLI commands that can be used to manage your Azure Arc resource bridge deployment. Previously updated : 02/09/2024-----# Azure Arc resource bridge deployment command overview --[Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge. When deploying Arc resource bridge with a corresponding partner product, the Azure CLI commands may be combined into an automation script, along with additional provider-specific commands. To learn about installing Arc resource bridge with a corresponding partner product, see: --- [Connect VMware vCenter Server to Azure with Arc resource bridge](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md)-- [Connect System Center Virtual Machine Manager (SCVMM) to Azure with Arc resource bridge](../system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md#download-the-onboarding-script)-- [Azure Stack HCI VM Management through Arc resource bridge](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites)--This topic provides an overview of the [Azure CLI commands](/cli/azure/arcappliance) that are used to manage Arc resource bridge deployment, in the order in which they are typically used for deployment. --## `az arcappliance createconfig` --This command creates the configuration files used by Arc resource bridge. Credentials that are provided during `createconfig`, such as vCenter credentials for VMware vSphere, are stored in a configuration file and locally within Arc resource bridge. These credentials should be a separate user account used only by Arc resource bridge, with permission to view, create, delete, and manage on-premises resources. If the credentials change, then the credentials on the resource bridge should be updated. --The `createconfig` command features two modes: interactive and non-interactive. Interactive mode provides helpful prompts that explain the parameter and what to pass. To initiate interactive mode, pass only the three required parameters. Non-interactive mode allows you to pass all the parameters needed to create the configuration files without being prompted, which saves time and is useful for automation scripts. --Three configuration files are generated: resource.yaml, appliance.yaml and infra.yaml. These files should be kept and stored in a secure location, as they're required for maintenance of Arc resource bridge. --This command also calls the `validate` command to check the configuration files. --> [!NOTE] -> Azure Stack HCI uses different commands to create the Arc resource bridge configuration files. --## `az arcappliance validate` --The `validate` command checks the configuration files for a valid schema, cloud and core validations (such as management machine connectivity to [required URLs](network-requirements.md)), network settings, and proxy settings. It also performs tests on identity privileges and role assignments, network configuration, load balancer configuration and content delivery network connectivity. --## `az arcappliance prepare` --This command downloads the OS images from Microsoft that are used to deploy the on-premises appliance VM. Once downloaded, the images are then uploaded to the local cloud image gallery to prepare for the creation of the appliance VM. 
--This command takes about 10-30+ minutes to complete, depending on the network speed. Allow the command to complete before continuing with the deployment. --## `az arcappliance deploy` --The `deploy` command deploys an on-premises instance of Arc resource bridge as an appliance VM, bootstrapped to be a Kubernetes management cluster. This command gets all necessary pods and agents within the Kubernetes cluster into a running state. Once the appliance VM is up, the kubeconfig file is generated. --## `az arcappliance create` --This command creates Arc resource bridge in Azure as an ARM resource, then establishes the connection between the ARM resource and on-premises appliance VM. --Once the `create` command initiates the connection, it will return in the terminal, even though the connection between the ARM resource and on-premises appliance VM is not yet complete. The resource bridge needs about 5 minutes to establish the connection between the ARM resource and the on-premises VM. --## `az arcappliance show` --The `show` command gets the status of the Arc resource bridge and ARM resource information. It can be used to check the progress of the connection between the ARM resource and on-premises appliance VM. --While the Arc resource bridge is connecting the ARM resource to the on-premises VM, the resource bridge progresses through the following stages: --`ProvisioningState` may be `Creating`, `Created`, `Failed`, `Deleting`, or `Succeeded`. --`Status` transitions between `WaitingForHeartbeat` -> `Validating` -> `Connecting` -> `Connected` -> `Running`. --- `WaitingForHeartbeat`: Azure is waiting to receive a signal from the appliance VM.--- `Validating`: Appliance VM is checking Azure services for connectivity and serviceability.--- `Connecting`: Appliance VM is syncing on-premises resources to Azure.--- `Connected`: Appliance VM completed sync of on-premises resources to Azure.--- `Running`: Appliance VM and Azure have completed hybrid sync and Arc resource bridge is now operational.--Successful Arc resource bridge creation results in `ProvisioningState = Succeeded` and `Status = Running`. --## `az arcappliance delete` --This command deletes the appliance VM and Azure resources. It doesn't clean up the OS image, which remains in the on-premises cloud gallery. --If a deployment fails, run this command to clean up the environment before you attempt to deploy again. --## Next steps --- Explore the full list of [Azure CLI commands and required parameters](/cli/azure/arcappliance) for Arc resource bridge.-- Get [troubleshooting tips for Arc resource bridge](troubleshoot-resource-bridge.md). |
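Taken together, the commands described in this overview typically run in the following order for a VMware deployment. This is a minimal sketch only: it uses interactive `createconfig` mode (so only the three required parameters are passed), placeholder names, and the `<name>-appliance.yaml` file naming convention; partner onboarding scripts may wrap or reorder these steps.

```azurecli
# 1. Create the configuration files (interactive mode prompts for the remaining values).
az arcappliance createconfig vmware --resource-group <resource-group> --name <bridge-name> --location <region>

# 2. Validate the configuration, download and upload the VM images, then deploy the appliance VM.
az arcappliance validate vmware --config-file <bridge-name>-appliance.yaml
az arcappliance prepare vmware --config-file <bridge-name>-appliance.yaml
az arcappliance deploy vmware --config-file <bridge-name>-appliance.yaml

# 3. Create the Azure resource and connect it to the on-premises appliance VM.
az arcappliance create vmware --config-file <bridge-name>-appliance.yaml --kubeconfig kubeconfig
```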
azure-arc | Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/maintenance.md | - Title: Azure Arc resource bridge maintenance operations -description: Learn how to manage Azure Arc resource bridge so that it remains online and operational. - Previously updated : 11/03/2023---# Azure Arc resource bridge maintenance operations --To keep your Azure Arc resource bridge deployment online and operational, you need to perform maintenance operations such as updating credentials, monitoring upgrades and ensuring the appliance VM is online. --## Prerequisites --To maintain the on-premises appliance VM, the [appliance configuration files generated during deployment](deploy-cli.md#az-arcappliance-createconfig) need to be saved in a secure location and made available on the management machine. --The management machine used to perform maintenance operations must meet all of [the Arc resource bridge requirements](system-requirements.md). --The following sections describe the maintenance tasks for Arc resource bridge. --## Update credentials in the appliance VM --Arc resource bridge consists of an on-premises appliance VM. The appliance VM [stores credentials](system-requirements.md#user-account-and-credentials) (for example, a user account for VMware vCenter) used to access the control center of the on-premises infrastructure to view and manage on-premises resources. The credentials used by Arc resource bridge are the same ones provided during deployment of the resource bridge. This allows the resource bridge visibility to on-premises resources for guest management in Azure. --If the credentials change, the credentials stored in the Arc resource bridge need to be updated with the [`update-infracredentials` command](/cli/azure/arcappliance/update-infracredentials). This command must be run from the management machine, and it requires a [kubeconfig file](system-requirements.md#kubeconfig). --Reference: [Arc-enabled VMware - Update the credentials stored in Arc resource bridge](../vmware-vsphere/administer-arc-vmware.md#updating-the-vsphere-account-credentials-using-a-new-password-or-a-new-vsphere-account-after-onboarding) --## Troubleshoot Arc resource bridge --If you experience problems with the appliance VM, the appliance configuration files can help with troubleshooting. You can include these files when you [open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). --You might want to [collect logs](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware), which requires you to pass credentials to the on-premises control center: --- For VMWare vSphere, use the username and password provided to Arc resource bridge at deployment.-- For Azure Stack HCI, use the cloud service IP and HCI login configuration file path.--## Delete Arc resource bridge --You might need to delete Arc resource bridge due to deployment failures or when no longer needed. To do so, you need the appliance configuration files. The [delete command](deploy-cli.md#az-arcappliance-delete) is the recommended way to delete the bridge. This command deletes the on-premises appliance VM along with the Azure resource and underlying components across the two environments. 
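For example, the credential rotation and cleanup operations described above might look like the following sketch for Arc-enabled VMware, run from the management machine with the kubeconfig and configuration files available locally (names and paths are placeholders):

```azurecli
# Update the vCenter credentials stored in the Arc resource bridge appliance VM.
az arcappliance update-infracredentials vmware --kubeconfig ./kubeconfig

# Delete the appliance VM and its Azure resource, for example after a failed deployment.
az arcappliance delete vmware --config-file <bridge-name>-appliance.yaml
```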
--## Next steps --- Learn about [upgrading Arc resource bridge](upgrade.md).-- Review the [Azure Arc resource bridge overview](overview.md) to understand more about requirements and technical details.-- Learn about [system requirements for Azure Arc resource bridge](system-requirements.md). |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | - Title: Azure Arc resource bridge network requirements -description: Learn about network requirements for Azure Arc resource bridge including URLs that must be allowlisted. - Previously updated : 06/04/2024---# Azure Arc resource bridge network requirements --This article describes the networking requirements for deploying Azure Arc resource bridge in your enterprise. --## General network requirements --Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol. ----> [!NOTE] -> The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md#azure-arc-enabled-vmware-vsphere). -> --## SSL proxy configuration --> [!IMPORTANT] -> Arc Resource Bridge supports only direct (explicit) proxies, including unauthenticated proxies, proxies with basic authentication, SSL terminating proxies, and SSL passthrough proxies. -> --If using a proxy, the Arc Resource Bridge must be configured to use the proxy in order to connect to Azure services. --- To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files.--- The format of the certificate file is *Base-64 encoded X.509 (.CER)*.--- Only pass the single proxy certificate. If a certificate bundle is passed, the deployment will fail.--- The proxy server endpoint can't be a `.local` domain.--- The proxy server has to be reachable from all IPs within the IP address prefix, including the control plane and appliance VM IPs.--There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: --- SSL certificate for your SSL proxy (so that the management machine and appliance VM trust your proxy FQDN and can establish an SSL connection to it)--- SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.--In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, you may not be able to download the required images (~3.5 GB) within the allotted time (90 min). ---## Exclusion list for no proxy --If a proxy server is being used, the following table contains the list of addresses that should be excluded from proxy by configuring the `noProxy` settings. --| **IP Address** | **Reason for exclusion** | -| -- | | -| localhost, 127.0.0.1 | Localhost traffic | -| .svc | Internal Kubernetes service traffic (.svc) where *.svc* represents a wildcard name. This is similar to saying \*.svc, but none is used in this schema. 
| -| 10.0.0.0/8 | Private network address space | -| 172.16.0.0/12 | Private network address space - Kubernetes Service CIDR | -| 192.168.0.0/16 | Private network address space - Kubernetes Pod CIDR | -| .contoso.com | You may want to exempt your enterprise namespace (.contoso.com) from being directed through the proxy. To exclude all addresses in a domain, you must add the domain to the `noProxy` list. Use a leading period rather than a wildcard (\*) character. In the sample, the address `.contoso.com` excludes the addresses `prefix1.contoso.com`, `prefix2.contoso.com`, and so on. | --The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16`. While these default values will work for many networks, you may need to add more subnet ranges and/or names to the exemption list. For example, you may want to exempt your enterprise namespace (.contoso.com) from being directed through the proxy. You can achieve that by specifying the values in the `noProxy` list. --> [!IMPORTANT] -> When listing multiple addresses for the `noProxy` settings, don't add a space after each comma to separate the addresses. The addresses must immediately follow the commas. -> --## Internal Port Listening --Be aware that the appliance VM is configured to listen on the following ports. These ports are used exclusively for internal processes and do not require external access: --- 8443 – Endpoint for Microsoft Entra Authentication Webhook--- 10257 – Endpoint for Arc resource bridge metrics--- 10250 – Endpoint for Arc resource bridge metrics--- 2382 – Endpoint for Arc resource bridge metrics---## Next steps --- Review the [Azure Arc resource bridge overview](overview.md) to understand more about requirements and technical details.-- Learn about [security configuration and considerations for Azure Arc resource bridge](security-overview.md).-- View [troubleshooting tips for networking issues](troubleshoot-resource-bridge.md#networking-issues).- |
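As a concrete illustration of the guidance above, a `noProxy` value that keeps the defaults and adds the article's example enterprise domain is written as a single comma-separated string with no spaces after the commas and a leading period on the domain. The variable name below is only illustrative; the value itself is supplied when the Arc resource bridge configuration files are created:

```bash
# Default noProxy entries plus an enterprise domain exemption (.contoso.com).
NO_PROXY="localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.contoso.com"
```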
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md | - Title: Azure Arc resource bridge overview -description: Learn how to use Azure Arc resource bridge to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 08/26/2024-----# What is Azure Arc resource bridge? --Azure Arc resource bridge is a Microsoft-managed product that is part of the core Azure Arc platform. It is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on Azure Stack HCI ([Azure Arc VM management](/azure-stack/hci/manage/azure-arc-vm-management-overview)), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/overview.md)), and System Center Virtual Machine Manager ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/overview.md)). --Azure Arc resource bridge is a Kubernetes management cluster installed on the customer's on-premises infrastructure as an appliance VM (also known as the Arc appliance). The resource bridge is provided with credentials to the infrastructure control plane, which allows it to apply guest management services to the on-premises resources. Arc resource bridge enables projection of on-premises resources as ARM resources and management from ARM as "Arc-enabled" Azure resources. --Arc resource bridge delivers the following benefits: --* Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster. -* Fully supported by Microsoft, including updates to core components. -* Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command-Line Interface (CLI). --## Overview --Azure Arc resource bridge hosts other components such as [custom locations](..\platform\conceptual-custom-locations.md), cluster extensions, and other Azure Arc agents in order to deliver its functionality for the private cloud infrastructures it supports. This complex system is composed of three layers: --* The base layer that represents the resource bridge and the Arc agents. -* The platform layer that includes the custom location and cluster extension. -* The solution layer for each service supported by Arc resource bridge (that is, the different types of VMs). ---Azure Arc resource bridge can host other Azure services or solutions running on-premises. There are two objects hosted on the Arc resource bridge: --* Cluster extension: The Azure service deployed to run on-premises. Currently, it supports three services: -- * Azure Arc VM management on Azure Stack HCI - * Azure Arc-enabled VMware - * Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) --* Custom locations: A deployment target where you can create Azure resources. It maps to a different resource for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Azure Arc VM management on Azure Stack HCI, it maps to an HCI cluster instance. --Custom locations and cluster extensions are both Azure resources, which are linked to the Azure Arc resource bridge resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes the *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM. --Some resources are unique to the infrastructure.
For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network and template to create a VM. --To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource is not healthy, it can impact the health of the related resources that are projected in Azure. For example, if the resource bridge is deleted by accident, all the resources projected in Azure by the resource bridge are impacted. The on-premises VMs in your on-premises private cloud aren't impacted, as they are running on vCenter but you won't be able to start or stop the VMs from Azure. It is not recommended to directly manage or modify the resource bridge using any on-premises applications. --## Benefits of Azure Arc resource bridge --Through Azure Arc resource bridge, you can accomplish the following tasks for each private cloud infrastructure from Azure: --### Azure Stack HCI --You can provision and manage on-premises Windows and Linux virtual machines (VMs) running on Azure Stack HCI clusters. --### VMware vSphere --By registering resource pools, networks, and VM templates, you can represent a subset of your vCenter resources in Azure to enable self-service. Integration with Azure allows you to manage access to your vCenter resources in Azure to maintain a secure environment. You can also perform various operations on the VMware virtual machines that are enabled by Arc-enabled VMware vSphere: --* Start, stop, and restart a virtual machine -* Control access and add Azure tags -* Add, remove, and update network interfaces -* Add, remove, and update disks and update VM size (CPU cores and memory) -* Enable guest management -* Install extensions --### System Center Virtual Machine Manager (SCVMM) --You can connect an SCVMM management server to Azure by deploying Azure Arc resource bridge in the VMM environment. Azure Arc resource bridge enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and perform various operations on them: --* Start, stop, and restart a virtual machine -* Control access and add Azure tags -* Add, remove, and update network interfaces -* Add, remove, and update disks and update VM size (CPU cores and memory) -* Enable guest management -* Install extensions --## Example scenarios --The following are just two examples of the many scenarios that can be enabled by using Arc resource bridge in a hybrid environment. --### Apply Azure Policy and other Azure services to on-premises VMware VMs --A customer deploys Arc Resource Bridge onto their on-premises VMware environment. They sign into the Azure portal and select the VMware VMs that they'd like to connect to Azure. Now they can manage these on-premises VMware VMs in Azure Resource Manager (ARM) as Arc-enabled machines, alongside their native Azure machines, achieving a single pane of glass to view their resources in a VMware/Azure hybrid environment. This includes deploying Azure services, such as Defender for Cloud and Azure Policy, to keep updated on the security and compliance posture of their on-premises VMware VMs in Azure. ---### Create physical HCI VMs on-premises from Azure --A customer has multiple datacenter locations in Canada and New York. They install an Arc resource bridge in each datacenter and connect their Azure Stack HCI VMs to Azure in each location. 
They can then sign into Azure portal and see all their Arc-enabled VMs from the two physical locations together in one central cloud location. From the portal, the customer can choose to create a new VM; that VM is also created on-premises at the selected datacenter, allowing the customer to manage VMs in different physical locations centrally through Azure. ---## Version and region support --### Supported regions --In order to use Arc resource bridge in a region, Arc resource bridge and the Arc-enabled feature for a private cloud must be supported in the region. For example, to use Arc resource bridge with Azure Stack HCI in East US, Arc resource bridge and the Arc VM management feature for Azure Stack HCI must be supported in East US. To confirm feature availability across regions for each private cloud provider, review their deployment guide and other documentation. There could be instances where Arc resource bridge is available in a region where the private cloud feature is not yet available. --Arc resource bridge supports the following Azure regions: --* East US -* East US 2 -* West US 2 -* West US 3 -* Central US -* North Central US -* South Central US -* Canada Central -* Australia East -* Australia SouthEast --* West Europe -* North Europe -* UK South -* UK West -* Sweden Central -* Japan East -* Southeast Asia -* East Asia -* Central India --### Regional resiliency --While Azure has redundancy features at every level of failure, if a service impacting event occurs, Azure Arc resource bridge currently does not support cross-region failover or other resiliency capabilities. In the event of the service becoming unavailable, the on-premises VMs continue to operate unaffected. Management from Azure is unavailable during that service outage. --### Private cloud environments --The following private cloud environments and their versions are officially supported for Arc resource bridge: --* VMware vSphere version 7.0, 8.0 -* Azure Stack HCI -* SCVMM --### Supported versions --Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.18, then the typical n-3 supported versions are: --* Current version: 1.0.18 -* n-1 version: 1.0.17 -* n-2 version: 1.0.16 -* n-3 version: 1.0.15 --There could be instances where supported versions are not sequential. For example, version 1.0.18 is released and later found to contain a bug; a hot fix is released in version 1.0.19 and version 1.0.18 is removed. In this scenario, n-3 supported versions become 1.0.19, 1.0.17, 1.0.16, 1.0.15. --Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays might occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](release-notes.md). To learn more about upgrade options, visit [Upgrade Arc resource bridge](upgrade.md). --### Private Link Support --Arc resource bridge does not currently support private link. --## Next steps --* Learn how [Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md). 
-* Learn how [Azure Arc-enabled SCVMM extends Azure's governance and management capabilities to System Center managed infrastructure](../system-center-virtual-machine-manager/overview.md). -* Learn about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-vm-management-overview). -* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge. - |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/release-notes.md | - Title: "What's new with Azure Arc resource bridge" Previously updated : 08/26/2024- -description: "Learn about the latest releases of Azure Arc resource bridge." ---# What's new with Azure Arc resource bridge --Azure Arc resource bridge is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about recent releases. --We generally recommend using the most recent versions of the agents. The [version support policy](overview.md#supported-versions) generally covers the most recent version and the three previous versions (n-3). --## Version 1.2.0 (July 2024) --- Appliance: 1.2.0-- CLI extension: 1.2.0-- SFS release: 0.1.32.10710-- Kubernetes: 1.28.5-- Mariner: 2.0.20240609--### Arc-enabled SCVMM --- `CreateConfig`: Improve prompt messages and reorder networking prompts for the custom IP range scenario-- `CreateConfig`: Validate Gateway IP input against specified IP range for the custom IP range scenario-- `CreateConfig`: Add validation to check infra configuration capability for HA VM deployment. If HA isn't supported, reprompt users to proceed with standalone VM deployment--### Arc-enabled VMware vSphere --- Improve prompt messages in createconfig for VMware-- Validate proxy scheme and check for required `no_proxy` entries--### Features --- Reject double commas (`,,`) in `no_proxy` string-- Add default folder to createconfig list-- Add conditional Fairfax URLs for US Gov Virginia support-- Add new error codes--### Bug fixes --- Fix for openSSH [CVE-2024-63870](https://github.com/advisories/GHSA-2x8c-95vh-gfv4)--## Version 1.1.1 (April 2024) --- Appliance: 1.1.1-- CLI extension: 1.1.1-- SFS release: 0.1.26.10327-- Kubernetes: 1.27.3-- Mariner: 2.0.20240301--### Arc-enabled SCVMM --- Add quotes for resource names--### Azure Stack HCI --- HCI auto rotation logic on upgrade--### Features --- Updated log collection with describe nodes-- Error message enhancement for failure to reach Arc resource bridge VM-- Improve troubleshoot command error handling with scoped access key-- Longer timeout for individual pod pulls-- Updated `execute` command to allow passing in a kubeconfig-- Catch `<>` in no_proxy string-- Add validation to see if connections from the client machine are proxied-- Diagnostic checker enhancement - Add default gateway and dns servers check to telemetry mode-- Log collection enhancement--### Bug fixes --- HCI MOC image client fix to set storage container on catalog--## Version 1.1.0 (April 2024) --- Appliance: 1.1.0-- CLI extension: 1.1.0-- SFS release: 0.1.25.10229-- Kubernetes: 1.27.3-- Mariner: 2.0.20240223--### Arc-enabled SCVMM --- Use same `vmnetwork` key for HG and Cloud (`vmnetworkid`)-- SCVMM - Add fallback for VMM IP pool with support for IP range in appliance network, add `--vlanid` parameter to accept `vlanid`-- Non-interactive mode for SCVMM `troubleshoot` and `logs` commands-- `Createconfig` command uses styled text to warn about saving config files instead of standard logger-- Improved handling and error reporting for timeouts while provisioning/deprovisioning images from the cloud fabric-- Verify template and snapshot health after provisioning an image, and clean up files associated to the template on image deprovision failures-- Missing VHD state handing in SCVMM-- SCVMM `validate` and `createconfig` fixes--### Arc-enabled VMware vSphere --- SSD storage 
validations added to VMware vSphere in telemetry mode to check if the ESXi host backing the resource pool has any SSD-backed storage-- Improve missing privilege error message, show some privileges in error message-- Validate host ESXi version and provide a concrete error message for placement profile-- Improve message for no datacenters found, display default folder-- Surface VMware error when finder fails during validate-- Verify template health and fix it during image provision--### Features --- `deploy` command - diagnostic checker enhancements that add retries with exponential backoff to proxy client calls-- `deploy` command - diagnostic checker enhancement: adds storage performance checker in telemetry mode to evaluate the storage performance of the VM used to deploy the appliance-- `deploy` command - Add Timeout for SSH connection: New error message: "Error: Timeout occurred due to management machine being unable to reach the appliance VM IP, 192.168.0.11. Ensure that the requirements are met: `https://aka.ms/arb-machine-reqs: dial tcp 192.168.0.11:22: connect: connection timed out`-- `validate` command - The appliance deployment now fails if Proxy Connectivity and No Proxy checks report any errors--### Bug fixes --- SCVMM ValueError fix - fallback option for VMM IP Pools with support for Custom IP Range based Appliance Network--## Version 1.0.18 (February 2024) --- Appliance: 1.0.18-- CLI extension: 1.0.3-- SFS release: 0.1.24.10201-- Kubernetes: 1.26.6-- Mariner: 2.0.20240123--### Fabric/Private cloud provider --- SCVMM `createconfig` command improvements - retry until valid Port and FQDN provided-- SCVMM and VMware - Validate control plane IP address; add reprompts-- SCVMM and VMware - extend `deploy` command timeout from 30 to 120 minutes--### Features --- `deploy` command - diagnostic checker enhancement: proxy checks in telemetry mode--### Product --- Reduction in CPU requests-- ETCD preflight check enhancements for upgrade--### Bug fixes --- Fix for clusters impacted by the `node-ip` being set as `kube-vip` IP issue-- Fix for SCVMM cred rotation with the same credentials--## Version 1.0.17 (December 2023) --- Appliance: 1.0.17-- CLI extension: 1.0.2-- SFS release: 0.1.22.11107-- Kubernetes: 1.26.6-- Mariner: 2.0.20231106--### Fabric/Private cloud provider --- SCVMM `createconfig` command improvements-- Azure Stack HCI - extend `deploy` command timeout from 30 to 120 minutes-- All private clouds - enable provider credential parameters to be passed in each command-- All private clouds - basic validations for select `createconfig` command inputs-- VMware - basic reprompts for select `createconfig` command inputs--### Features --- `deploy` command - diagnostic checker enhancement - improve `context` error messages--### Bug fixes --- Fix for `context` error always being returned as `Deploying`--### Known bugs --- Arc resource bridge upgrade shows appliance version as upgraded, but status shows upgrade failed--## Version 1.0.16 (November 2023) --- Appliance: 1.0.16-- CLI extension: 1.0.1-- SFS release: 0.1.21.11013-- Kubernetes: 1.25.7-- Mariner: 2.0.20231004--### Fabric/Private cloud provider --- SCVMM image provisioning and upgrade fixes-- VMware vSphere - use full inventory path for networks-- VMware vSphere error improvement for denied permission-- Azure Stack HCI - enable default storage container--### Features --- `deploy` command - diagnostic checker enhancement - add `azurearcfork8s.azurecr.io` URL--### Bug fixes --- vSphere credential issue-- Don't set storage container for 
non-`arc-appliance` catalog image provision requests-- Monitoring agent not installed issue--## Version 1.0.15 (September 2023) --- Appliance: 1.0.15-- CLI extension: 1.0.0-- SFS release: 0.1.20.10830-- Kubernetes: 1.25.7-- Mariner: 2.0.20230823--### Fabric/Infrastructure --- `az arcappliance` CLI commands now only support static IP deployments for VMware and SCVMM-- For test purposes only, Arc resource bridge on Azure Stack HCI may be deployed with DHCP configuration-- Support for using canonical region names-- Removal of VMware vSphere 6.7 fabric support (vSphere 7 and 8 are both supported)--### Features --- (new) `get-upgrades` command- fetches the new upgrade edge available for a current appliance cluster-- (new) `upgrade` command - upgrades the appliance to the next available version (not available for SCVMM)-- (update) `deploy` command - In addition to `deploy`, this command now also calls `create` command. `Create` command is now optional.-- (new) `get-credentials` command - now allows fetching of SSH keys and kubeconfig, which are needed to run the `logs` command from a different machine than the one used to deploy Arc resource bridge-- Allowing usage of `config-file` parameter for `get-credentials` command-(new) Troubleshoot command - help debug live-site issues by running allowed actions directly on the appliance using a JIT access key --### Bug fix --- IPClaim premature deletion issue vSphere static IP--## Next steps --- Learn more about [Arc resource bridge](overview.md).-- Learn how to [upgrade Arc resource bridge](upgrade.md). |
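The `get-upgrades` and `upgrade` commands introduced in version 1.0.15 can be used to check for and apply these releases. A minimal sketch with placeholder names (VMware shown; per the notes above, `upgrade` isn't available for SCVMM in that release):

```azurecli
# Check which appliance versions are available for an existing Arc resource bridge.
az arcappliance get-upgrades --resource-group <resource-group> --name <bridge-name>

# Upgrade the appliance to the next available version (VMware example).
az arcappliance upgrade vmware --config-file <bridge-name>-appliance.yaml
```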
azure-arc | Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md | - Title: Azure Arc resource bridge security overview -description: Understand security configuration and considerations for Azure Arc resource bridge. - Previously updated : 11/03/2023---# Azure Arc resource bridge security overview --This article describes the security configuration and considerations you should evaluate before deploying Azure Arc resource bridge in your enterprise. --## Using a managed identity --By default, a Microsoft Entra system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) is created and assigned to the Azure Arc resource bridge. Azure Arc resource bridge currently supports only a system-assigned identity. The `clusteridentityoperator` identity initiates the first outbound communication and fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure. --## Identity and access control --Azure Arc resource bridge is represented as a resource in a resource group inside an Azure subscription. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.yml) page in the Azure portal, you can verify who has access to your Azure Arc resource bridge. --Users and applications who are granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or Administrator role to the resource group can make changes to the resource bridge, including deploying or deleting cluster extensions. --## Data residency --Azure Arc resource bridge follows data residency regulations specific to each region. If applicable, data is backed up in a secondary pair region in accordance with data residency regulations. Otherwise, data resides only in that specific region. Data isn't stored or processed across different geographies. --## Data encryption at rest --Azure Arc resource bridge stores resource information in Azure Cosmos DB. As described in [Encryption at rest in Azure Cosmos DB](/azure/cosmos-db/database-encryption-at-rest), all the data is encrypted at rest. --## Security audit logs --The [activity log](/azure/azure-monitor/essentials/activity-log-insights) is an Azure platform log that provides insight into subscription-level events. This includes tracking when the Azure Arc resource bridge is modified, deleted, or added. You can [view the activity log](/azure/azure-monitor/essentials/activity-log-insights#view-the-activity-log) in the Azure portal or retrieve entries with PowerShell and Azure CLI. By default, activity log events are [retained for 90 days](/azure/azure-monitor/essentials/activity-log-insights#retention-period) and then deleted. --## Next steps --- Understand [system requirements](system-requirements.md) and [network requirements](network-requirements.md) for Azure Arc resource bridge.-- Review the [Azure Arc resource bridge overview](overview.md) to understand more about features and benefits.-- Learn more about [Azure Arc](../overview.md). |
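For the audit scenario described above, activity log entries for the resource group that contains the resource bridge can be pulled directly from the Azure CLI. A minimal sketch using a placeholder resource group and a seven-day window:

```azurecli
# Retrieve recent activity log entries for the resource group that contains the Arc resource bridge.
az monitor activity-log list --resource-group <resource-group> --offset 7d --output table
```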
azure-arc | System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md | - Title: Azure Arc resource bridge system requirements -description: Learn about system requirements for Azure Arc resource bridge. - Previously updated : 05/22/2024---# Azure Arc resource bridge system requirements --This article describes the system requirements for deploying Azure Arc resource bridge. --Arc resource bridge is used with other partner products, such as [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), [Arc-enabled VMware vSphere](../vmware-vsphere/index.yml), and [Arc-enabled System Center Virtual Machine Manager (SCVMM)](../system-center-virtual-machine-manager/index.yml). These products may have additional requirements. --## Required Azure permissions --- To onboard Arc resource bridge, you must have the [Contributor](/azure/role-based-access-control/built-in-roles) role for the resource group.--- To read, modify, and delete Arc resource bridge, you must have the [Contributor](/azure/role-based-access-control/built-in-roles) role for the resource group.--## Management tool requirements --[Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge on supported private cloud environments. --If deploying Arc resource bridge on VMware, Azure CLI 64-bit is required to be installed on the management machine to run the deployment commands. --If deploying on Azure Stack HCI, then Azure CLI 32-bit should be installed on the management machine. --Arc Appliance CLI extension, `arcappliance`, needs to be installed on the CLI. This can be done by running: `az extension add --name arcappliance` --## Minimum resource requirements --Arc resource bridge has the following minimum resource requirements: --- 200 GB disk space-- 4 vCPUs-- 8 GB memory-- supported storage configuration - hybrid storage (flash and HDD) or all-flash storage (SSDs or NVMe)--These minimum requirements enable most scenarios for products that use Arc resource bridge. Review the product's documentation for specific resource requirements. Failure to provide sufficient resources may cause errors during deployment or upgrade. --## IP address prefix (subnet) requirements --The IP address prefix (subnet) where Arc resource bridge will be deployed requires a minimum prefix of /29. The IP address prefix must have enough available IP addresses for the gateway IP, control plane IP, appliance VM IP, and reserved appliance VM IP. Arc resource bridge only uses the IP addresses assigned to the IP pool range (Start IP, End IP) and the Control Plane IP. We recommend that the End IP immediately follow the Start IP. Ex: Start IP = 192.168.0.2, End IP = 192.168.0.3. Please work with your network engineer to ensure that there is an available subnet with the required available IP addresses and IP address prefix for Arc resource bridge. --The IP address prefix is the subnet's IP address range for the virtual network and subnet mask (IP Mask) in CIDR notation, for example `192.168.7.1/29`. You provide the IP address prefix (in CIDR notation) during the creation of the configuration files for Arc resource bridge. --Consult your network engineer to obtain the IP address prefix in CIDR notation. An IP Subnet CIDR calculator may be used to obtain this value. --## Static IP configuration --If deploying Arc resource bridge to a production environment, static configuration must be used when deploying Arc resource bridge. 
Static IP configuration is used to assign three static IPs (that are in the same subnet) to the Arc resource bridge control plane, appliance VM, and reserved appliance VM. --DHCP is only supported in a test environment for testing purposes only for VM management on Azure Stack HCI. It should not be used in a production environment. DHCP isn't supported on any other Arc-enabled private cloud, including Arc-enabled VMware, Arc for AVS, or Arc-enabled SCVMM. --If using DHCP, you must reserve the IP addresses used by the control plane and appliance VM. In addition, these IPs must be outside of the assignable DHCP range of IPs. Ex: The control plane IP should be treated as a reserved/static IP that no other machine on the network will use or receive from DHCP. If the control plane IP or appliance VM IP changes, this impacts the resource bridge availability and functionality. --## Management machine requirements --The machine used to run the commands to deploy and maintain Arc resource bridge is called the *management machine*. --Management machine requirements: --- [Azure CLI x64](/cli/azure/install-azure-cli-windows?tabs=azure-cli) installed-- Communication to Control Plane IP (SSH TCP port 22, Kubernetes API port 6443)--- Communication to Appliance VM IPs (SSH TCP port 22, Kubernetes API port 6443)--- Communication to the reserved Appliance VM IPs (SSH TCP port 22, Kubernetes API port 6443)--- communication over port 443 to the private cloud management console (ex: VMware vCenter machine)--- Internal and external DNS resolution. The DNS server must resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses that are [required URLs](network-requirements.md#outbound-connectivity-requirements) for deployment.-- Internet access- -## Appliance VM IP address requirements --Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for projection into Azure Resource Manager (ARM). The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command. It may be referred to in partner products as Start Range IP, RB IP Start or VM IP 1. The appliance VM IP is the starting IP address for the appliance VM IP pool range; therefore, when you first deploy Arc resource bridge, this is the IP that's initially assigned to your appliance VM. The VM IP pool range requires a minimum of 2 IP addresses. --Appliance VM IP address requirements: --- Communication with the management machine (SSH TCP port 22, Kubernetes API port 6443)--- Communication with the private cloud management endpoint via Port 443 (such as VMware vCenter).--- Internet connectivity to [required URLs](network-requirements.md#outbound-connectivity-requirements) enabled in proxy/firewall.-- Static IP assigned and within the IP address prefix.--- Internal and external DNS resolution.-- If using a proxy, the proxy server has to be reachable from this IP and all IPs within the VM IP pool.--## Reserved appliance VM IP requirements --Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade. The reserved appliance VM IP is assigned an IP address via the `k8snodeippoolend` parameter in the `az arcappliance createconfig` command. This IP address may be referred to as End Range IP, RB IP End, or VM IP 2. 
The reserved appliance VM IP is the ending IP address for the appliance VM IP pool range. When your appliance VM is upgraded for the first time, this is the IP assigned to your appliance VM post-upgrade and the initial appliance VM IP is returned to the IP pool to be used for a future upgrade. If specifying an IP pool range larger than two IP addresses, the additional IPs are reserved. --Reserved appliance VM IP requirements: --- Communication with the management machine (SSH TCP port 22, Kubernetes API port 6443)--- Communication with the private cloud management endpoint via Port 443 (such as VMware vCenter).--- Internet connectivity to [required URLs](network-requirements.md#outbound-connectivity-requirements) enabled in proxy/firewall.--- Static IP assigned and within the IP address prefix.--- Internal and external DNS resolution.--- If using a proxy, the proxy server has to be reachable from this IP and all IPs within the VM IP pool.--## Control plane IP requirements --The appliance VM hosts a management Kubernetes cluster with a control plane that requires a single, static IP address. This IP is assigned from the `controlplaneendpoint` parameter in the `createconfig` command or equivalent configuration files creation command. --Control plane IP requirements: --- Communication with the management machine (SSH TCP port 22, Kubernetes API port 6443).--- Static IP address assigned and within the IP address prefix.--- If using a proxy, the proxy server has to be reachable from IPs within the IP address prefix, including the reserved appliance VM IP.--## DNS server --DNS server(s) must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three IPs must be able to reach the required URLs for deployment. --## Gateway --The gateway IP is the IP of the gateway for the network where Arc resource bridge is deployed. The gateway IP should be an IP from within the subnet designated in the IP address prefix. --## Example minimum configuration for static IP deployment --The following example shows valid configuration values that can be passed during configuration file creation for Arc resource bridge. --Notice that the IP addresses for the gateway, control plane, appliance VM and DNS server (for internal resolution) are within the IP address prefix. The VM IP Pool Start/End are sequential. This key detail helps ensure successful deployment of the appliance VM. -- IP Address Prefix (CIDR format): 192.168.0.0/29 -- Gateway IP: 192.168.0.1 -- VM IP Pool Start (IP format): 192.168.0.2 -- VM IP Pool End (IP format): 192.168.0.3 -- Control Plane IP: 192.168.0.4 -- DNS servers (IP list format): 192.168.0.1, 10.0.0.5, 10.0.0.6 --## User account and credentials --Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (such as Arc-enabled VMware vSphere). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM. --> [!WARNING] -> Arc resource bridge can only use a user account that does not have multifactor authentication enabled. If the user account is set to periodically change passwords, [the credentials must be immediately updated on the resource bridge](maintenance.md#update-credentials-in-the-appliance-vm). 
This user account can also be set with a lockout policy to protect the on-premises infrastructure, in case the credentials aren't updated and the resource bridge makes multiple attempts to use expired credentials to access the on-premises control center. --For example, with Arc-enabled VMware, Arc resource bridge needs a separate user account for vCenter with the necessary roles. If the [credentials for the user account change](troubleshoot-resource-bridge.md#insufficient-privileges), then the credentials stored in Arc resource bridge must be immediately updated by running `az arcappliance update-infracredentials` from the [management machine](#management-machine-requirements). Otherwise, the appliance will make repeated attempts to use the expired credentials to access vCenter, which will result in a lockout of the account. --## Configuration files --Arc resource bridge consists of an appliance VM that is deployed in the on-premises infrastructure. To maintain the appliance VM, the configuration files generated during deployment must be saved in a secure location and made available on the management machine. --There are several different types of configuration files, based on the on-premises infrastructure. --### Appliance configuration files --Three configuration files are created when deploying the Arc resource bridge: `<appliance-name>-resource.yaml`, `<appliance-name>-appliance.yaml` and `<appliance-name>-infra.yaml`. --By default, these files are generated in the current CLI directory of where the deployment commands are run. These files should be saved on the management machine because they're required for maintaining the appliance VM. The configuration files reference each other and should be stored in the same location. --### Kubeconfig --The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-privilege Kubernetes configuration file that is used to maintain the appliance VM. By default, it's generated in the current CLI directory when the `deploy` command completes. The kubeconfig should be saved in a secure location on the management machine, because it's required for maintaining the appliance VM. If the kubeconfig is lost, it can be retrieved by running the `az arcappliance get-credentials` command. --> [!IMPORTANT] -> Once the Arc resource bridge VM is created, the configuration settings can't be modified or updated. Also, the appliance VM must stay in the location where it was initially deployed. Capabilities to allow appliance VM configuration and location changes post-deployment will be available in a future release. However, the Arc resource bridge VM name is a unique GUID that can't be renamed as it's an identifier used for cloud-managed upgrade. -## Next steps --- Understand [network requirements for Azure Arc resource bridge](network-requirements.md).-- Review the [Azure Arc resource bridge overview](overview.md) to understand more about features and benefits.-- Learn about [security configuration and considerations for Azure Arc resource bridge](security-overview.md). |
azure-arc | Troubleshoot Resource Bridge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md | - Title: Troubleshoot Azure Arc resource bridge issues -description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge when trying to deploy or connect to the service. Previously updated : 11/03/2023----# Troubleshoot Azure Arc resource bridge issues --This article provides information on troubleshooting and resolving issues that could occur while attempting to deploy, use, or remove the Azure Arc resource bridge. The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster. For general information, see [Azure Arc resource bridge overview](./overview.md). --## General issues --### Logs collection --For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the management machine used to deploy the Arc resource bridge. If you're using a different machine, the machine must meet the [management machine requirements](system-requirements.md#management-machine-requirements). --If there's a problem collecting logs, most likely the management machine is unable to reach the Appliance VM. Contact your network administrator to allow SSH communication from the management machine to the Appliance VM on TCP port 22. --You can collect the Arc resource bridge logs by passing either the appliance VM IP or the kubeconfig in the logs command. --To collect Arc resource bridge logs on VMware using the appliance VM IP address: -- ```azurecli - az arcappliance logs vmware --ip <appliance VM IP> --username <vSphere username> --password <vSphere password> --address <vCenter address> --out-dir <path to output directory> - ``` --To collect Arc resource bridge logs for Azure Stack HCI using the appliance VM IP address: -- ```azurecli - az arcappliance logs hci --ip <appliance VM IP> --cloudagent <cloud agent service IP/FQDN> --loginconfigfile <file path of kvatoken.tok> - ``` --If you're unsure of your appliance VM IP, there's also the option to use the kubeconfig. You can retrieve the kubeconfig by running the [get-credentials command](/cli/azure/arcappliance) then run the logs command. --To retrieve the kubeconfig and log key then collect logs for Arc-enabled VMware from a different machine than the one used to deploy Arc resource bridge for Arc-enabled VMware: -- ```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <Arc resource bridge name> -g <resource group name> -az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory> - ``` --### Download/upload connectivity was not successful -If your network speed is slow you may be unable to successfully download the Arc resource bridge VM image and this error may occur: `ErrorCode: ValidateKvaError, Error: Pre-deployment validation of your download/upload connectivity was not successful. Timeout error occurred during download and preparation of appliance image to the on-premises fabric storage. 
Common causes of this timeout error are slow network download/upload speeds, a proxy limiting the network speed or slow storage performance.` --If the error is due to slow network speed impacting upload, a workaround is to create a VM directly on the on-premises private cloud and then run the Arc resource bridge deployment script from that VM. This workaround ensures a faster upload of the image to the datastore. ---### Context timed out during phase `ApplyingKvaImageOperator` -You may receive the following error while deploying Arc resource bridge: `Deployment of the Arc resource bridge appliance VM timed out. Collect logs with _az arcappliance logs_ and create a support ticket for help. To troubleshoot the error, refer to aka.ms/arc-rb-error { _errorCode_: _ContextError_, _errorResponse_: _{\n\_message\_: \_Context timed out during phase _ApplyingKvaImageOperator_\_\n}_ }` --This error typically occurs when trying to download the `KVAIO` image (400 MB compressed) over a network that is slow or experiencing intermittent connectivity. The `KVAIO` controller manager is waiting for the image download to complete and times out. You may want to check that your network speed between the Arc resource bridge VM and Microsoft Container Registry (`mcr.microsoft.com`) is stable and at least 2 Mbps. If your network connectivity and speed are stable and you're still getting this error, wait at least 30 minutes before you re-try as Microsoft Container Registry may be receiving a high volume of traffic. --### Context timed out during phase `WaitingForAPIServer` -When deploying Arc resource bridge, you may receive the error: `Deployment of the Arc resource bridge appliance VM timed out. Collect logs with _az arcappliance logs_ and create a support ticket for help. To troubleshoot the error, refer to aka.ms/arc-rb-error { _errorCode_: _ContextError_, _errorResponse_: _{\n\_message\_: \_Context timed out during phase _WaitingForAPIServer` --This error indicates that the deployment machine is unable to contact the control plane IP for Arc resource bridge within the time limit. Common causes of the error are often networking related, such as communication between the deployment machine and control plane IP being routed through a proxy. Traffic from the deployment machine to the control plane and the appliance VM IPs must not pass through proxy. If traffic is being proxied, then configure the proxy settings on your network or deployment machine to not proxy traffic between the deployment machine to the control plane IP and appliance VM IPs. Another cause for this error is if a firewall is closing access to port 6443 and port 22 between the deployment machine and control plane IP or the deployment machine and appliance VM IPs. --### `UploadError` 403 Forbidden or 404 Site Not Found -When deploying Arc resource bridge, you may receive the error: `{ _errorCode_: _UploadError_, _errorResponse_: _{\n\_message\_: \_Pre-deployment validation of your download/upload connectivity was not successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_403 Forbidden` or `{ _errorCode_: _UploadError_, _errorResponse_: _{\n\_message\_: \_Pre-deployment validation of your download/upload connectivity was not successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_404 Site Not Found` --This error occurs in the deployment process when images need to be downloaded from Microsoft registries to the deployment machine and the download is being blocked by a proxy or firewall. 
Review the [network requirements](network-requirements.md#general-network-requirements) and verify that all required URLs are reachable. You may need to update your no proxy settings to ensure that traffic from your deployment machine to Microsoft required URLs aren't going through a proxy. --### SSH folder access denied --The CLI requires permission to access the SSH folder during deployment or operations that involve accessing files within the folder. This folder contains essential files such as the kubeconfig and logs key for the appliance VM. For instance, the CLI needs to access the logs key stored in the SSH folder to collect logs from the appliance VM. --If you encounter an error stating: `Access to the file in the SSH folder was denied. This may occur if the CLI doesn't have permission to the SSH folder or if another CLI instance is using the file`, there are two common causes for this issue: --1. Insufficient permissions: The CLI lacks the necessary permissions to access the SSH folder. Ensure that the user account running the CLI has appropriate permissions to access the SSH folder. --1. Concurrent file access: Another instance of the CLI might be using the file in the SSH folder. This often happens on workstations with shared profiles. Ensure that any other CLI instance completes or terminates its operation before you proceed. --### Arc resource bridge is offline --If the resource bridge is offline, this is typically due to a networking change in the infrastructure, environment or cluster that stops the appliance VM from being able to communicate with its counterpart Azure resource. If you're unable to determine what changed, you can reboot the appliance VM, collect logs and submit a support ticket for further investigation. --### Remote PowerShell isn't supported --If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you might experience various problems. For instance, you might see an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure) or another type of error. Using `az arcappliance` commands from remote PowerShell isn't currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session. --### Resource bridge configurations can't be updated --In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it again. For example, if you specified the wrong location, or subscription during deployment, later the resource creation fails. If you only try to recreate the resource without redeploying the resource bridge VM, you'll see the status stuck at `WaitForHeartBeat`. To resolve this issue, delete the appliance and update the appliance YAML file. Then redeploy and create the resource bridge. --### Appliance Network Unavailable --If Arc resource bridge is experiencing a network problem, you may see an `Appliance Network Unavailable` error. In general, any network or infrastructure connectivity issue to the appliance VM may cause this error. This error can also surface as `Error while dialing dial tcp xx.xx.xxx.xx:55000: connect: no route to host`. The problem could be that communication from the host to the Arc resource bridge VM needs to be opened over TCP port 22 with the help of your network administrator. A temporary network issue may not allow the host to reach the Arc resource bridge VM. 
Once the network issue is resolved, you can retry the operation. You can also check that the appliance VM for Arc resource bridge isn't stopped or offline. In the case of Azure Stack HCI, the host storage may be full and must be addressed. --### Token refresh error --When you run the Azure CLI commands, the following error might be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign in to Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again by using the `az login` command. --### Default host resource pools are unavailable for deployment --When using the `az arcappliance createconfig` or `az arcappliance run` command, there's an interactive experience that shows the list of the VMware entities where you can select to deploy the virtual appliance. This list shows all user-created resource pools, along with default cluster resource pools, but the default host resource pools aren't listed. When the appliance is deployed to a host resource pool, there's no high availability if the host hardware fails. We recommend that you don't deploy the appliance in a host resource pool. --### Resource bridge status `Offline` and `provisioningState` `Failed` --When deploying Arc resource bridge, the bridge might appear to be successfully deployed, because no errors were encountered when running `az arcappliance deploy` or `az arcappliance create`. However, when viewing the bridge in the Azure portal, you might see the status show as `Offline`, and `az arcappliance show` might show the `provisioningState` as `Failed`. This happens when required providers aren't registered before the bridge is deployed. --To resolve this problem, delete the resource bridge, register the providers, then redeploy the resource bridge. --1. Delete the resource bridge: -- ```azurecli - az arcappliance delete <fabric> --config-file <path to appliance.yaml> - ``` --1. Register the providers: -- ```azurecli - az provider register --namespace Microsoft.ExtendedLocation --wait - az provider register --namespace Microsoft.ResourceConnector --wait - ``` --1. Redeploy the resource bridge. --> [!NOTE] -> Partner products (such as Arc-enabled VMware vSphere) might have their own required providers to register. To see additional providers that must be registered, see the product's documentation. --### Expired credentials in the appliance VM --Arc resource bridge consists of an appliance VM that is deployed to the on-premises infrastructure. The appliance VM maintains a connection to the management endpoint of the on-premises infrastructure using locally stored credentials. If these credentials aren't updated, the resource bridge is no longer able to communicate with the management endpoint. This can cause problems when trying to upgrade the resource bridge or manage VMs through Azure. To fix this, the credentials in the appliance VM need to be updated. For more information, see [Update credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm). --### Private link is unsupported --Arc resource bridge doesn't support private link. All calls coming from the appliance VM shouldn't be going through your private link setup. The Private Link IPs may conflict with the appliance IP pool range, which isn't configurable on the resource bridge.
Arc resource bridge reaches out to [required URLs](network-requirements.md#firewallproxy-url-allowlist) that shouldn't go through a private link connection. You must deploy Arc resource bridge on a separate network segment unrelated to the private link setup. ---## Networking issues --### Back-off pulling image error --When trying to deploy Arc resource bridge, you might see an error that contains `back-off pulling image \\\"url"\\\: FailFastPodCondition`. This error is caused when the appliance VM can't reach the URL specified in the error. To resolve this issue, make sure the appliance VM meets system requirements, including internet access connectivity to [required allowlist URLs](network-requirements.md). --### Management machine unable to reach appliance --When trying to deploy Arc resource bridge, you might receive an error message similar to: --`{ _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_Timeout occurred due to management machine being unable to reach the appliance VM IP, 10.2.196.170. Ensure that the requirements are met: https://aka.ms/arb-machine-reqs: dial tcp 10.2.196.170:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.\_\n}_, _errorMetadata_: { _errorCategory_: __ } ` --This occurs when the management machine is trying to reach the ARB VM IP by SSH (Port 22) or API Server (Port 6443) and is unable to. This error may also occur if the Arc resource bridge API server is being proxied - the Arc resource bridge API server needs to be added to the noproxy settings. For more information, see [Azure Arc resource bridge network requirements](network-requirements.md#inbound-connectivity-requirements). --### Not able to connect to URL --If you receive an error that contains `Not able to connect to https://example.url.com`, check with your network administrator to ensure your network allows all of the required firewall and proxy URLs to deploy Arc resource bridge. For more information, see [Azure Arc resource bridge network requirements](network-requirements.md). --### Not able to connect - network and internet connectivity validation failed --When deploying Arc resource bridge, you may receive an error with `errorCode` as `PostOperationsError`, `errorResponse` as code `GuestInternetConnectivityError` with a URL specifying port 53 (DNS). This may be due to the appliance VM IPs being unable to reach DNS servers, so they can't resolve the endpoint specified in the error. --Error example: --`{ _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n  \\\_code\\\_:\\\_GuestInternetConnectivityError\\\_,\\n\\\_message\\\_:\\\_Not able to connect to http://aszhcitest01.company.org:55000. Error returned: action failed after 5 attempts: Get \\\\\\\_http://aszhcitest01.company.org:55000\\\\\\\_: dial tcp: lookup aszhcitest01.company.org on 127.0.0.53:53: read udp 127.0.0.1:32975-\\u003e127.0.0.53:53: i/o timeout. Arc Resource Bridge network and internet connectivity validation failed: cloud-agent-connectivity-test. 1. check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM.   2. 
Check firewall/proxy settings\\\_\\n }\_\n}_ }` --Error example: --`{ _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n  \\\_code\\\_: \\\_GuestInternetConnectivityError\\\_,\\n  \\\_message\\\_: \\\_Not able to connect to https://linuxgeneva-microsoft.azurecr.io. Error returned: action failed after 5 attempts: Get \\\\\\\_https://linuxgeneva-microsoft.azurecr.io\\\\\\\_: dial tcp: lookup linuxgeneva-microsoft.azurecr.io on 127.0.0.53:53: server misbehaving. Arc Resource Bridge network and internet connectivity validation failed: http-connectivity-test-arc. 1. Please check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM.   2. Check firewall/proxy settings\\\_\\n }\_\n}_ }` --To resolve the error, work with your network administrator to allow the appliance VM IPs to reach the DNS servers. For more information, see [Azure Arc resource bridge network requirements](network-requirements.md). --### Http2 server sent GOAWAY --When trying to deploy Arc resource bridge, you might receive an error message similar to: --`"errorResponse": "{\n\"message\": \"Post \\\"https://region.dp.kubernetesconfiguration.azure.com/azure-arc-appliance-k8sagents/GetLatestHelmPackagePath?api-version=2019-11-01-preview\\u0026releaseTrain=stable\\\": http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=\\\"\\\"\"\n}"` --This occurs when a firewall or proxy has SSL/TLS inspection enabled and blocks http2 calls from the machine used to deploy the resource bridge. To confirm this is the problem, run the following PowerShell cmdlet to invoke the web request with http2 (requires PowerShell version 7 or above), replacing the region in the URL and api-version (ex:2019-11-01) with values from the error: --`Invoke-WebRequest -HttpVersion 2.0 -UseBasicParsing -Uri https://region.dp.kubernetesconfiguration.azure.com/azure-arc-appliance-k8sagents/GetLatestHelmPackagePath?api-version=2019-11-01-preview"&"releaseTrain=stable -Method Post -Verbose` --If the result is `The response ended prematurely while waiting for the next frame from the server`, then the http2 call is being blocked and needs to be allowed. Work with your network administrator to disable the SSL/TLS inspection to allow http2 calls from the machine used to deploy the bridge. --### No such host - .local not supported -When trying to set the configuration for Arc resource bridge, you might receive an error message similar to: --`"message": "Post \"https://esx.lab.local/52c-acac707ce02c/disk-0.vmdk\": dial tcp: lookup esx.lab.local: no such host"` --This occurs when a `.local` path is provided for a configuration setting, such as proxy, dns, datastore or management endpoint (such as vCenter). Arc resource bridge appliance VM uses Azure Linux OS, which doesn't support `.local` by default. A workaround could be to provide the IP address where applicable. --### Azure Arc resource bridge is unreachable --Azure Arc resource bridge runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if it's not reserved. Rebooting the Azure Arc resource bridge or VM can trigger an IP address change and result in failing services. --Arc resource bridge may intermittently lose the reserved IP configuration. 
This loss is due to the behavior described in [loss of VIPs when `systemd-networkd` is restarted](https://github.com/acassen/keepalived/issues/1385). When the IP address isn't assigned to the Azure Arc resource bridge VM, any call to the resource bridge API server fails. Core operations, such as creating a new resource, connecting to your private cloud from Azure, or creating a custom location, won't function as expected. To resolve this issue, reboot the resource bridge VM, and it should recover its IP address. If the address is assigned from a DHCP server, reserve the IP address associated with the resource bridge. --The Arc resource bridge may also be unreachable due to slow disk access. Azure Arc resource bridge uses etcd, the key-value store used by its management Kubernetes cluster, which requires [latency of 10ms or less](https://docs.openshift.com/container-platform/4.6/scalability_and_performance/recommended-host-practices.html#recommended-etcd-practices_). If the underlying disk has low performance, operations are impacted and failures may occur. --### SSL proxy configuration issues --Be sure that the proxy server on your management machine trusts both the SSL certificate for your SSL proxy and the SSL certificate of the Microsoft download servers. For more information, see [SSL proxy configuration](network-requirements.md#ssl-proxy-configuration). --### No such host - dp.kubernetesconfiguration.azure.com --An error that contains `dial tcp: lookup westeurope.dp.kubernetesconfiguration.azure.com: no such host` while deploying Arc resource bridge means that the configuration dataplane is currently unavailable in the specified region. The service may be temporarily unavailable. Wait for the service to be available and then retry the deployment. --### Proxy connect tcp - No such host for Arc resource bridge required URL --An error that contains an Arc resource bridge required URL with the message `proxyconnect tcp: dial tcp: lookup http: no such host` indicates that DNS is not able to resolve the URL. The error may look similar to the example below, where the required URL is `https://msk8s.api.cdp.microsoft.com`: --`Error: { _errorCode_: _InvalidEntityError_, _errorResponse_: _{\n\_message\_: \_Post \\\_https://msk8s.api.cdp.microsoft.com/api/v1.1/contents/default/namespaces/default/names/arc-appliance-stable-catalogs-ext/versions/latest?action=select\\\_: POST https://msk8s.api.cdp.microsoft.com/api/v1.1/contents/default/namespaces/default/names/arc-appliance-stable-catalogs-ext/versions/latest?action=select giving up after 6 attempt(s): Post \\\_https://msk8s.api.cdp.microsoft.com/api/v1.1/contents/default/namespaces/default/names/arc-appliance-stable-catalogs-ext/versions/latest?action=select\\\_: proxyconnect tcp: dial tcp: lookup http: no such host\_\n}_ }` --This error can occur if the DNS settings provided during deployment aren't correct or there's a problem with the DNS server(s). You can check whether your DNS server is able to resolve the URL by running the following command from the management machine or a machine that has access to the DNS server(s): --``` -nslookup -> set debug -> <hostname> <DNS server IP> -``` --To resolve the error, your DNS server(s) must be configured to resolve all Arc resource bridge required URLs, and the DNS server(s) must be correctly provided during deployment of Arc resource bridge.
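As a quick non-interactive variant of the check above, you can also pass the name and server directly to `nslookup` (here using the required URL hostname from the error example; the DNS server IP is a placeholder):

```
nslookup msk8s.api.cdp.microsoft.com <DNS server IP>
```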
--### KVA timeout error --The KVA timeout error is a generic error that can be the result of a variety of network misconfigurations that involve the management machine, Appliance VM, or Control Plane IP not having communication with each other, to the internet, or required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access. --For clarity, management machine refers to the machine where deployment CLI commands are being run. Appliance VM is the VM that hosts Arc resource bridge. Control Plane IP is the IP of the control plane for the Kubernetes management cluster in the Appliance VM. --#### Top causes of the KVA timeout error  --- Management machine is unable to communicate with Control Plane IP and Appliance VM IP.-- Appliance VM is unable to communicate with the management machine, vCenter endpoint (for VMware), or MOC cloud agent endpoint (for Azure Stack HCI).  -- Appliance VM doesn't have internet access.-- Appliance VM has internet access, but connectivity to one or more required URLs is being blocked, possibly due to a proxy or firewall.-- Appliance VM is unable to reach a DNS server that can resolve internal names, such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses and container registry names.  -- Proxy server configuration on the management machine or Arc resource bridge configuration files is incorrect. This can impact both the management machine and the Appliance VM. When the `az arcappliance prepare` command is run, the management machine won't be able to connect and download OS images if the host proxy isn't correctly configured. Internet access on the Appliance VM might be broken by incorrect or missing proxy configuration, which impacts the VM’s ability to pull container images.  --#### Troubleshoot KVA timeout error --To resolve the error, one or more network misconfigurations might need to be addressed. Follow the steps below to address the most common reasons for this error. --1. When there's a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig could be empty if the deploy command didn't complete). Problems collecting logs are most likely due to the management machine being unable to reach the Appliance VM. -- Once logs are collected, extract the folder and open kva.log. Review the kva.log for more information on the failure to help pinpoint the cause of the KVA timeout error. --1. The management machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the management machine and verify there's a response from both IPs. -- If a request times out, the management machine can't communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the management machine to the Control Plane IP and Appliance VM IP. --1. Appliance VM IP and Control Plane IP must be able to communicate with the management machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. 
This might require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge. --1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#not-able-to-connect-to-url). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs. --1. In a non-proxy environment, the management machine must have external and internal DNS resolution. The management machine must be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to [resolve external addresses](#not-able-to-connect-to-url), such as Azure URLs and OS image download URLs. Work with your system administrator to ensure that the management machine has internal and external DNS resolution. In a proxy environment, the DNS resolution on the proxy server should resolve internal endpoints and [required external addresses](#not-able-to-connect-to-url). -- To test DNS resolution to an internal address from the management machine in a non-proxy scenario, open command prompt and run `nslookup <vCenter endpoint or HCI MOC cloud agent IP>`. You should receive an answer if the management machine has internal DNS resolution in a non-proxy scenario.  --1. Appliance VM needs to be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to resolve external/internal addresses, such as Azure service addresses and container registry names for download of the Arc resource bridge container images from the cloud. -- Verify that the DNS server IP used to create the configuration files has internal and external address resolution. If not, [delete the appliance](/cli/azure/arcappliance/delete), recreate the Arc resource bridge configuration files with the correct DNS server settings, and then deploy Arc resource bridge using the new configuration files. - -### Move Arc resource bridge location --Resource move of Arc resource bridge isn't currently supported. You'll need to delete the Arc resource bridge, then re-deploy it to the desired location. --## Azure Arc-enabled VMs on Azure Stack HCI issues --For general help resolving issues related to Azure Arc-enabled VMs on Azure Stack HCI, see [Troubleshoot Azure Arc-enabled virtual machines](/azure-stack/hci/manage/troubleshoot-arc-enabled-vms). --### Action failed - no such host --When deploying Arc resource bridge, if you receive an error with `errorCode` as `PostOperationsError`, `errorResponse` as code `GuestInternetConnectivityError` and `no such host`, then the error may be caused by the appliance VM IPs not having reachability to the endpoint specified in the error. --Error example: --`{ _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n  \\\_code\\\_: \\\_GuestInternetConnectivityError\\\_,\\n  \\\_message\\\_: \\\_Not able to connect to http://aszhcitest01.company.org:55000. 
Error returned: action failed after 5 attempts: Get \\\\\\\_http://aszhcitest01.company.org:55000\\\\\\\_: dial tcp: lookup aszhcitest01.company.org: on 127.0.0.53:53: no such host. Arc Resource Bridge network and internet connectivity validation failed: cloud-agent-connectivity-test. 1. check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM.   2. Check firewall/proxy settings` --In the example, the appliance VM IPs are not able to access `http://aszhcitest01.company.org:55000`, which is the MOC endpoint. Work with your network administrator to make sure that the DNS server is able to resolve the required URLs. --To test connectivity to the DNS server: --```ping <dns-server.com>``` --To check whether the DNS server is able to resolve an address, run the following from a machine that can reach the DNS servers: --```Resolve-DnsName -Name "aszhcitest01.company.org" -Server "<dns-server.com>"``` --### Authentication handshake failure --When running an `az arcappliance` command, you might see a connection error: `authentication handshake failed: x509: certificate signed by unknown authority` --This error usually occurs when you try to run commands from remote PowerShell, which isn't supported by Azure Arc resource bridge. --To install Azure Arc resource bridge on an Azure Stack HCI cluster, `az arcappliance` commands must be run locally on a node in the cluster. Sign in to the node through Remote Desktop Protocol (RDP) or use a console session to run these commands. --## Azure Arc-enabled VMware vCenter issues --### errorResponse: error getting the vsphere sdk client --Errors with errorCode `CreateConfigKvaCustomerError` and errorResponse `error getting the vsphere sdk client` occur when your deployment machine tries to establish a TCP connection to your vCenter address but encounters a problem. You receive this errorCode and errorResponse if your vCenter address is incorrect (403 or 404 error) or if there's a network/proxy/firewall configuration blocking it (connection attempt failed). If you enter your vCenter address as a hostname and receive the error `no such host`, then your deployment machine isn't able to resolve the vCenter hostname via the client DNS. You may receive an error if the deployment machine is able to resolve the vCenter hostname but can't reach the IP address it received from DNS. You may receive an error if the endpoint returned by DNS isn't your vCenter address, or if the traffic was intercepted by a proxy. Finally, you may get an error if your deployment machine is able to communicate with your vCenter address, but the username or password is incorrect. --### vSphere SDK client - Connection attempt failed --If you receive an error during deployment that states: `errorCode_: _CreateConfigKvaCustomerError_, _errorResponse_: _error getting the vsphere sdk client: Post \_https://ip.address/sdk\_: dial tcp ip.address:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond._ }` then your management machine is not able to communicate with your vCenter server. Ensure that your management machine meets the [management machine requirements](system-requirements.md#management-machine-requirements) and that there isn't a firewall or proxy blocking communication.
--### vSphere SDK client - 403 Forbidden or 404 not found --If you receive an error that contains `errorCode_: _CreateConfigKvaCustomerError_, _errorResponse_: _error getting the vsphere sdk client: POST \_/sdk\_: 403 Forbidden` or `404 not found` while deploying Arc resource bridge, this is most likely due to an incorrect vCenter address being provided during configuration file creation when you're prompted to enter the vCenter address as either a hostname or IP address. There are different ways to find your vCenter address. One option is to access the vSphere client via its web interface. The vCenter hostname or IP address is typically what you use in the browser to access the vSphere client. If you're already logged in, you can look at the browser's address bar; the URL you use to access vSphere is your vCenter server's hostname or IP address. Verify your vCenter address and then re-try the deployment. --### vSphere SDK client - no such host --If you encounter the error `{ _errorCode_: _CreateConfigKvaCustomerError_, _errorResponse_: _error getting the vsphere sdk client: Post \_https://your.vcenter.hostname/sdk\_: dial tcp: lookup your.vcenter.hostname: no such host_ }` during deployment, then the deployment machine can't resolve the vCenter hostname to an IP address. This issue arises because the deployment process is attempting to establish a TCP connection from your deployment machine to the vCenter hostname but fails due to DNS resolution problems. To address this, ensure the DNS configuration on your deployment machine is correct, verify that the DNS server is online, and check for a missing DNS entry for the vCenter hostname. You can test the DNS resolution by running `nslookup your.vcenter.hostname` or `ping your.vcenter.hostname` from the deployment machine. If you've specified your vCenter address as a hostname, consider using the IP address directly instead. --### Pre-deployment validation errors --If you're receiving a variety of `pre-deployment validation of your download\upload connectivity wasn't successful` errors, such as: --`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: Service Unavailable` --`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: dial tcp 172.16.60.10:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.` --`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: use of closed network connection.` --`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: dial tcp: lookup hostname.domain: no such host` --A combination of these errors usually indicates that the management machine has lost connection to the datastore or there's a networking issue causing the datastore to be unreachable. 
This connection is needed in order to upload the OVA from the management machine used to build the appliance VM in vCenter. The connection between the management machine and datastore needs to be reestablished, then retry deployment of Arc resource bridge. --### x509 certificate has expired or isn't yet valid --When you deploy Arc resource bridge, you may encounter the error: --`Error: { _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n \\\_code\\\_: \\\_GuestInternetConnectivityError\\\_,\\n \\\_message\\\_: \\\_Not able to connect to https://msk8s.api.cdp.microsoft.com. Error returned: action failed after 3 attempts: Get \\\\\\\_https://msk8s.api.cdp.microsoft.com\\\\\\\_: x509: certificate has expired or isn't yet valid: current time 2022-01-18T11:35:56Z is before 2023-09-07T19:13:21Z. Arc Resource Bridge network and internet connectivity validation failed: http-connectivity-test-arc. 1. check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM. 2. Check firewall/proxy settings` --This error is caused when there's a clock/time difference between ESXi host(s) and the management machine where the deployment commands for Arc resource bridge are being executed. To resolve this issue, turn on NTP time sync on the ESXi host(s) and confirm that the management machine is also synced to NTP, then try the deployment again. --### Resolves to multiple networks --When deploying or upgrading Arc resource bridge, you may encounter an error similar to: --`{ "ErrorCode": "PreflightcheckErrorOnPrem", -"ErrorDetails": "Upgrade Operation Failed with error: \"{\\n \\\"code\\\": \\\"PreflightcheckError\\\",\\n \\\"message\\\": \\\"{\\\\n \\\\\\\"code\\\\\\\": \\\\\\\"InvalidEntityError\\\\\\\",\\\\n \\\\\\\"message\\\\\\\": \\\\\\\"Cannot retrieve vSphere Network 'vmware-azure-arc-01': path 'vmware-azure-arc-01' resolves to multiple networks\\\\\\\",\\\\n \\\\\\\"category\\\\\\\": \\\\\\\"\\\\\\\"\\\\n }\\\",\\n \\\"category\\\": \\\"\\\"\\n }\"" }` --This error occurs when the vSphere network segment resolves to multiple networks due to multiple vSphere network segments having the same name that is specified in the error. To fix this error, you can change the duplicate network name in vCenter (not the network with the appliance VM) or deploy Arc resource bridge on a different network. --### Arc resource bridge status is disconnected --When running the initial Arc-enabled VMware onboarding script, you were prompted to provide a vSphere account. This account is stored locally within the Arc resource bridge as an encrypted Kubernetes secret. The account is used to allow the Arc resource bridge to interact with vCenter. If your Arc resource bridge status is disconnected, this may be due to the vSphere account stored locally within the resource bridge being expired. You must update the credentials within Arc resource bridge and for Arc-enabled VMware by [following the updating vSphere account credentials instructions](/azure/azure-arc/vmware-vsphere/administer-arc-vmware#updating-the-vsphere-account-credentials-using-a-new-password-or-a-new-vsphere-account-after-onboarding). --### Error during host configuration --If you have been using the same template to deploy and delete the Arc resource bridge multiple times, you might encounter the following error: --`Appliance cluster deployment failed with error: Error: An error occurred during host configuration` --To resolve this issue, manually delete the existing template. 
Then run [`az arcappliance prepare`](/cli/azure/arcappliance/prepare) to download a new template for deployment. --### Unable to find folders --When deploying Arc resource bridge on VMware, you specify the folder in which the template and VM are created. The selected folder must be a VM and template folder type. Other types of folder, such as storage folders, network folders, or host and cluster folders, can't be used for the resource bridge deployment. --### Cannot retrieve resource - not found or does not exist --When Arc resource bridge is deployed, you specify where the appliance VM will be deployed. The appliance VM can't be moved from that location path. If the appliance VM moved location, you may hit an error similar to the ones below when upgrading: --` -{\n \"code\": \"PreflightcheckError\",\n \"message\": \"{\\n \\\"code\\\": \\\"InvalidEntityError\\\",\\n \\\"message\\\": \\\"Cannot retrieve <resource> 'resource-name': <resource> 'resource-name' not found\\\"\\n }\"\n }" -` --` -{\n \"code\": \"PreflightcheckError\",\n \"message\": \"{\\n \\\"code\\\": \\\"InvalidEntityError\\\",\\n \\\"message\\\": \\\"The specified vSphere Datacenter '/VxRail-Datacenter' does not exist\\\"\\n }\"\n }" -` --These are the options to address either error: --- Move the appliance VM back to its original location and ensure RBAC credentials are updated for the location change.-- Create a resource with the same name, move Arc resource bridge to that new resource.-- If you're using Arc-enabled VMware, [run the Arc-enabled VMware disaster recovery script](../vmware-vsphere/disaster-recovery.md). The script will delete the appliance, deploy a new appliance and reconnect the appliance with the previously deployed custom location, cluster extension and Arc-enabled VMs.-- Delete and [redeploy the Arc resource bridge](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md).--### Insufficient privileges --When deploying or upgrading the resource bridge on VMware vCenter, you might get an error similar to: --`{ ""code"": ""PreflightcheckError"", ""message"": ""{\n \""code\"": \""InsufficientPrivilegesError\"",\n \""message\"": \""The provided vCenter account is missing required vSphere privileges on the resource 'root folder (MoRefId: Folder:group-d1)'. Missing privileges: [Sessions.ValidateSession]. add the privileges to the vCenter account and try again. To review the full list of required privileges, go to https://aka.ms/ARB-vsphere-privilege.\""\n }` --When deploying Arc resource bridge, you are asked to provide vCenter credentials. The Arc resource bridge locally stores the vCenter credentials to interact with vCenter. 
To resolve the missing privileges issue, the vCenter account used by the resource bridge needs the following privileges in VMware vCenter: --**Datastore**  --- Allocate space-- Browse datastore-- Low level file operations--**Folder**  --- Create folder--**vSphere Tagging** --- Assign or Unassign vSphere Tag--**Network**  --- Assign network--**Resource** --- Assign virtual machine to resource pool-- Migrate powered off virtual machine-- Migrate powered on virtual machine--**Sessions** --- Validate session--**vApp** --- Assign resource pool-- Import --**Virtual machine** --- Change Configuration- - Acquire disk lease - - Add existing disk - - Add new disk - - Add or remove device - - Advanced configuration - - Change CPU count - - Change Memory - - Change Settings - - Change resource - - Configure managedBy - - Display connection settings - - Extend virtual disk - - Modify device settings - - Query Fault Tolerance compatibility - - Query unowned files - - Reload from path - - Remove disk - - Rename - - Reset guest information - - Set annotation - - Toggle disk change tracking - - Toggle fork parent - - Upgrade virtual machine compatibility -- Edit Inventory- - Create from existing - - Create new - - Register - - Remove - - Unregister -- Guest operations- - Guest operation alias modification - - Guest operation modifications - - Guest operation program execution - - Guest operation queries -- Interaction- - Connect devices - - Console interaction - - Guest operating system management by VIX API - - Install VMware Tools - - Power off - - Power on - - Reset - - Suspend -- Provisioning- - Allow disk access - - Allow file access - - Allow read-only disk access - - Allow virtual machine download - - Allow virtual machine files upload - - Clone virtual machine - - Deploy template - - Mark as template - - Mark as virtual machine - - Customize guest -- Snapshot management- - Create snapshot - - Remove snapshot - - Revert to snapshot --## Next steps --[Understand recovery operations for resource bridge in Azure Arc-enabled VMware vSphere disaster scenarios](../vmware-vsphere/disaster-recovery.md) --If you don't see your problem here or you can't resolve your issue, try one of the following channels for support: --- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).-- Connect with [@AzureSupport](https://x.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.-- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).- |
azure-arc | Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md | - Title: Upgrade Arc resource bridge -description: Learn how to upgrade Arc resource bridge using either cloud-managed upgrade or manual upgrade. Previously updated : 08/26/2024----# Upgrade Arc resource bridge --This article describes how Arc resource bridge is upgraded, and the two ways upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades. --## Private cloud providers -Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider. --For **Arc-enabled VMware vSphere**, manual upgrade and cloud-managed upgrade are available. Appliances on version 1.0.15 and higher are automatically opted in to cloud-managed upgrade. Cloud-managed upgrade helps keep the appliance VM within the supported n-3 versions, but not necessarily on the latest version. If you want to be on the latest version, you need to perform a manual upgrade. In order for either upgrade option to work, [the upgrade prerequisites](#prerequisites) must be met. Microsoft may attempt to perform a cloud-managed upgrade of your Arc resource bridge at any time if your appliance will soon be out of support. While Microsoft offers cloud-managed upgrade, you’re still responsible for ensuring that your Arc resource bridge is within the supported n-3 versions. Disruptions could cause cloud-managed upgrade to fail, and you may need to manually upgrade the Arc resource bridge. If your Arc resource bridge is close to being out of support, we recommend a manual upgrade to make sure you maintain a supported version, rather than waiting for cloud-managed upgrade. --For **Azure Arc VM management (preview) on Azure Stack HCI**, appliance version 1.0.15 or higher is available only on Azure Stack HCI build 23H2. In HCI 23H2, the LCM tool manages upgrades across all HCI, Arc resource bridge, and extension components as a "validated recipe" package. Any preview version of Arc resource bridge must be removed before updating from 22H2 to 23H2. Attempting to upgrade Arc resource bridge independent of other HCI environment components may cause problems in your environment that could result in a disaster recovery scenario. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). --For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual upgrade feature is available for appliance version 1.0.15 and higher. Appliances running a version lower than 1.0.15 must use the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery). This deploys a new resource bridge and reconnects pre-existing Azure resources. --## Prerequisites --Before upgrading an Arc resource bridge, the following prerequisites must be met: --- The appliance VM must be on a General Availability version (1.0.15 or higher). If not, the Arc resource bridge VM needs to be redeployed. If you're using Arc-enabled VMware/AVS, you can [perform disaster recovery](../vmware-vsphere/recover-from-resource-bridge-deletion.md). 
If you're using Arc-enabled SCVMM, follow this [disaster recovery guide](../system-center-virtual-machine-manager/disaster-recovery.md).--- The appliance VM must be online and healthy, with a status of "Running". You can check the Azure resource of your Arc resource bridge to verify.--- The [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be up to date. To test that the credentials within the Arc resource bridge VM are valid, perform an operation on an Arc-enabled VM from Azure. You can also [update the credentials](/azure/azure-arc/resource-bridge/maintenance) to be certain.--- There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images.- -- For Arc-enabled VMware, upgrading the resource bridge requires 200 GB of free space on the datastore. A new template is also created.--- The outbound connection from the Appliance VM IPs (`k8snodeippoolstart/end`, VM IP 1/2) to `msk8s.sb.tlu.dl.delivery.mp.microsoft.com`, port 443 must be enabled. Be sure the full list of [required endpoints for Arc resource bridge](network-requirements.md) is also enabled.--- When performing a manual upgrade, run the upgrade command from the management machine used to initially deploy the Arc resource bridge, which should still contain the [appliance configuration files](system-requirements.md#configuration-files). You can also run the upgrade command from a different machine that meets the [management machine requirements](system-requirements.md#management-machine-requirements) and also contains the appliance configuration files.--- An Arc resource bridge configured with DHCP can't be upgraded and isn't supported in a production environment. Instead, a new Arc resource bridge should be deployed using [static IP configuration](system-requirements.md#static-ip-configuration). --## Overview --The upgrade process deploys a new resource bridge using the reserved appliance VM IP (`k8snodeippoolend` IP, VM IP 2). Once the new resource bridge is up, it becomes the active resource bridge. The old resource bridge is deleted, and its appliance VM IP (`k8snodeippoolstart`, VM IP 1) becomes the new reserved appliance VM IP that will be used in the next upgrade. --Deploying a new resource bridge is a process consisting of several steps: downloading the appliance image (~3.5 GB) from the cloud, using the image to deploy a new appliance VM, verifying the new resource bridge is running, connecting it to Azure, deleting the old appliance VM, and reserving the old IP to be used for a future upgrade. --Overall, the upgrade generally takes at least 30 minutes, depending on network speeds. A short intermittent downtime might happen during the handoff from the old Arc resource bridge to the new Arc resource bridge. Additional downtime can occur if prerequisites aren't met, or if a change in the network (DNS, firewall, proxy, etc.) impacts the Arc resource bridge's network connectivity. --There are two ways to upgrade Arc resource bridge: cloud-managed upgrades managed by Microsoft, or manual upgrades where Azure CLI commands are performed by an admin. --## Cloud-managed upgrade --Arc resource bridges on a supported [private cloud provider](#private-cloud-providers) with an appliance version 1.0.15 or higher are automatically opted into cloud-managed upgrade. With cloud-managed upgrade, Microsoft may attempt to upgrade your Arc resource bridge at any time if it is on an appliance version that will soon be out of support. 
The upgrade prerequisites must be met for cloud-managed upgrade to work. While Microsoft offers cloud-managed upgrade, you’re still responsible for checking that your resource bridge is healthy, online, in a "Running" status, and within the supported n-3 versions. Disruptions could cause cloud-managed upgrades to fail. If your Arc resource bridge is close to being out of support, we recommend a manual upgrade to make sure you maintain a supported version, rather than waiting for cloud-managed upgrade. --To check your resource bridge status and the appliance version, run the `az arcappliance show` command from your management machine or check the Azure resource of your Arc resource bridge. If your appliance VM isn't in a healthy, Running state, cloud-managed upgrade might fail. --Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status might switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`. --To check the status of a cloud-managed upgrade, check the Azure resource in ARM, or run the following Azure CLI command from the management machine: --```azurecli -az arcappliance show --resource-group [REQUIRED] --name [REQUIRED] -``` --## Manual upgrade --Arc resource bridge can be manually upgraded from the management machine. You must meet all upgrade prerequisites before attempting to upgrade. The management machine must have the kubeconfig and [appliance configuration files](system-requirements.md#configuration-files) stored locally, or you won't be able to run the upgrade. --Manual upgrade generally takes between 30-90 minutes, depending on network speeds. The upgrade command takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach a [supported version](#supported-versions). You can check your appliance version by checking the Azure resource of your Arc resource bridge. --Before upgrading, you need the latest Azure CLI extension for `arcappliance`: --```azurecli -az extension add --upgrade --name arcappliance -``` --To manually upgrade your resource bridge, use the following command: --```azurecli -az arcappliance upgrade <private cloud> --config-file <file path to ARBname-appliance.yaml> -``` --For example, to upgrade a resource bridge on VMware, run: `az arcappliance upgrade vmware --config-file c:\contosoARB01-appliance.yaml` --To upgrade a resource bridge on SCVMM, run: `az arcappliance upgrade scvmm --config-file c:\contosoARB01-appliance.yaml` --To upgrade a resource bridge on Azure Stack HCI, transition to 23H2 and use the built-in upgrade management tool. For more information, see [About updates for Azure Stack HCI, version 23H2](/azure-stack/hci/update/about-updates-23h2). --## Version releases --The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there's a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month or early in the month. For detailed release info, see the [Arc resource bridge release notes](release-notes.md). 
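Since the appliance version is tied to the `arcappliance` CLI extension, one quick way to see which extension version is installed on the management machine is the standard Azure CLI extension command (this shows the extension version on the management machine, not the appliance version itself):

```azurecli
az extension show --name arcappliance --query version --output tsv
```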
--## Supported versions --Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. An Arc resource bridge on an unsupported version must be upgraded or redeployed to be in a production support window. --For example, if the current version is 1.0.18, then the typical n-3 supported versions are: --- Current version: 1.0.18-- n-1 version: 1.0.17-- n-2 version: 1.0.16-- n-3 version: 1.0.15--There might be instances where supported versions aren't sequential. For example, version 1.0.18 is released and later found to contain a bug. A hot fix is released in version 1.0.19 and version 1.0.18 is removed. In this scenario, n-3 supported versions become 1.0.19, 1.0.17, 1.0.16, 1.0.15. --Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month, although it's possible that delays could push the release date further out. Regardless of when a new release comes out, if you're within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](release-notes.md). --If a resource bridge isn't upgraded to one of the supported versions (n-3), it falls outside the support window and will be unsupported. It might not always be possible to upgrade an unsupported resource bridge to a newer version, as component services used by Arc resource bridge may no longer be compatible. In addition, the unsupported resource bridge might not be able to provide reliable monitoring and health metrics. --If an Arc resource bridge can't be upgraded to a supported version, you must delete it and deploy a new resource bridge. Depending on which private cloud product you're using, there might be other steps required to reconnect the resource bridge to existing resources. For details, check the partner product's Arc resource bridge recovery documentation. --## Notification and upgrade availability --If your Arc resource bridge is at version n-3, you might receive an email notification letting you know that your resource bridge will be out of support once the next version is released. If you receive this notification, upgrade the resource bridge as soon as possible to allow debug time for any issues with manual upgrade, or submit a support ticket if cloud-managed upgrade was unable to upgrade your resource bridge. --To check if your Arc resource bridge has an upgrade available, run the command: --```azurecli -az arcappliance get-upgrades --resource-group [REQUIRED] --name [REQUIRED] -``` --To see the current version of an Arc resource bridge appliance, run `az arcappliance show` or check the Azure resource of your Arc resource bridge. --## Next steps --- Learn about [Arc resource bridge maintenance operations](maintenance.md).-- Learn about [troubleshooting Arc resource bridge](troubleshoot-resource-bridge.md).- |
azure-arc | Resource Graph Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-graph-samples.md | - Title: Azure Resource Graph sample queries for Azure Arc -description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 01/08/2024-----# Azure Resource Graph sample queries for Azure Arc --This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Azure Arc. --## Sample queries ---## Next steps --- Learn more about the [query language](../governance/resource-graph/concepts/query-language.md).-- Learn more about how to [explore resources](../governance/resource-graph/concepts/explore-resources.md). |
azure-arc | Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md | - Title: Overview of the Azure Connected Machine agent -description: This article provides a detailed overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 08/07/2024----# Overview of Azure Connected Machine agent --The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. --> [!WARNING] -> Only Connected Machine agent versions within the last 1 year are officially supported by the product group. Customers should update to an agent version within this window. -> --## Agent components ---The Azure Connected Machine agent package contains several logical components bundled together: --* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity. --* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance. -- Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine: -- * An Azure Policy assignment that targets disconnected machines is unaffected. - * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied. - * Assignments are deleted after 14 days, and aren't reassigned to the machine after the 14-day period. --* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`. -->[!NOTE] -> The [Azure Monitor agent (AMA)](/azure/azure-monitor/agents/azure-monitor-agent-overview) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. --### Azure Arc Proxy --The Azure Arc Proxy service is responsible for aggregating network traffic from the Azure Connected Machine agent services and any extensions you’ve installed and deciding where to route that data. If you’re using the [Azure Arc gateway (Limited preview)](arc-gateway.md) to simplify your network endpoints, the Azure Arc Proxy service is the local component that forwards network requests via the Azure Arc gateway instead of the default route. The Azure Arc Proxy runs as a Network Service on Windows and a standard user account (arcproxy) on Linux. It's disabled by default until you configure the agent to use the Azure Arc gateway (Limited preview). --## Agent resources --The following information describes the directories and user accounts used by the Azure Connected Machine agent. --### Windows agent installation details --The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). 
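Regardless of platform, you can sanity-check the agent from a local terminal after installation. Here's a minimal sketch using the `azcmagent` CLI that ships with the agent; output fields vary by agent version, and the location value is a placeholder:

```bash
# Show the agent's connection status, resource ID, and version
azcmagent show

# Run an on-demand network connectivity check
# (the region is a placeholder; on a machine that's already connected you can typically omit it)
azcmagent check --location "eastus"
```

If `azcmagent show` reports the machine as connected, the folders, services, and accounts described below should all be in place.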
-Installing the Connected Machine agent for Windows applies the following system-wide configuration changes: --* The installation process creates the following folders during setup. -- | Directory | Description | - |--|-| - | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.| - | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.| - | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.| - | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| - | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| - | %SYSTEMDRIVE%\packages | Extension package executables | --* Installing the agent creates the following Windows services on the target machine. -- | Service name | Display name | Process name | Description | - |--|--|--|-| - | himds | Azure Hybrid Instance Metadata Service | `himds.exe` | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens | - | GCArcService | Guest configuration Arc Service | `gc_arc_service.exe` (gc_service.exe prior to version 1.36) | Audits and enforces Azure guest configuration policies on the machine. | - | ExtensionService | Guest configuration Extension Service | `gc_extension_service.exe` (gc_service.exe prior to version 1.36) | Installs, updates, and manages extensions on the machine. | --* Agent installation creates the following virtual service account. -- | Virtual Account | Description | - ||-| - | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. | -- > [!TIP] - > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function. --* Agent installation creates the following local security group. -- | Security group name | Description | - ||-| - | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity | --* Agent installation creates the following environment variables. -- | Name | Default value | Description | - |||| - | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | - | IMDS_ENDPOINT | `http://localhost:40342` | --* There are several log files available for troubleshooting, described in the following table. -- | Log | Description | - |--|-| - | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. | - | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. | - | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. | - | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). 
| - | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. | --* The process creates the local security group **Hybrid agent extension applications**. --* After uninstalling the agent, the following artifacts remain. -- * %ProgramData%\AzureConnectedMachineAgent\Log - * %ProgramData%\AzureConnectedMachineAgent - * %ProgramData%\GuestConfig - * %SystemDrive%\packages --### Linux agent installation details --The preferred package format for the distribution (`.rpm` or `.deb`) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/) provides the Connected Machine agent for Linux. The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent. --Installing, upgrading, and removing the Connected Machine agent isn't required after server restart. --Installing the Connected Machine agent for Linux applies the following system-wide configuration changes. --* Setup creates the following installation folders. -- | Directory | Description | - |--|-| - | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. | - | /opt/GC_Ext/ | Extension service executables. | - | /opt/GC_Service/ | Guest configuration (policy) service executables. | - | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| - | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| --* Installing the agent creates the following daemons. -- | Service name | Display name | Process name | Description | - |--|--|--|-| - | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.| - | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. | - | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. | --* There are several log files available for troubleshooting, described in the following table. -- | Log | Description | - |--|-| - | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. | - | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. | - | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. | - | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). | - | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. | --* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`. -- | Name | Default value | Description | - |||-| - | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | - | IMDS_ENDPOINT | `http://localhost:40342` | --* After uninstalling the agent, the following artifacts remain. -- * /var/opt/azcmagent - * /var/lib/GuestConfig --## Agent resource governance --The Azure Connected Machine agent is designed to manage agent and system resource consumption. 
The agent approaches resource governance under the following conditions: --* The Machine Configuration (formerly Guest Configuration) service can use up to 5% of the CPU to evaluate policies. -* The Extension service can use up to 5% of the CPU on Windows machines and 30% of the CPU on Linux machines to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply: -- | Extension type | Operating system | CPU limit | - | -- | - | | - | AzureMonitorLinuxAgent | Linux | 60% | - | AzureMonitorWindowsAgent | Windows | 100% | - | LinuxOsUpdateExtension | Linux | 60% | - | MDE.Linux | Linux | 60% | - | MicrosoftDnsAgent | Windows | 100% | - | MicrosoftMonitoringAgent | Windows | 60% | - | OmsAgentForLinux | Linux | 60%| --During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources: --| | Windows | Linux | -| | - | -- | -| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% | -| **Memory usage** | 57 MB | 42 MB | --The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. Actual agent performance and resource consumption will vary based on the hardware and software configuration of your servers. --### Custom resource limits --The default resource governance limits are the best choice for most servers. However, small virtual machines and servers with limited CPU resources might encounter timeouts when managing extensions or evaluating policies because there aren't enough CPU resources to complete the tasks. Starting with agent version 1.39, you can customize the CPU limits applied to the extension manager and Machine Configuration services to help the agent complete these tasks faster. --To see the current resource limits for the extension manager and Machine Configuration services, run the following command. --```bash -azcmagent config list -``` --In the output, you'll see two fields, `guestconfiguration.agent.cpulimit` and `extensions.agent.cpulimit` with the current resource limit specified as a percentage. On a fresh install of the agent, both will show `5` because the default limit is 5% of the CPU. --To change the resource limit for the extension manager to 80%, run the following command: --```bash -azcmagent config set extensions.agent.cpulimit 80 -``` --## Instance metadata --Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. 
Specifically: --* Operating system name, edition, type, and version -* Computer name -* Computer manufacturer and model -* Computer fully qualified domain name (FQDN) -* Domain name (if joined to an Active Directory domain) -* Active Directory and DNS fully qualified domain name (FQDN) -* UUID (BIOS ID) -* Connected Machine agent heartbeat -* Connected Machine agent version -* Public key for managed identity -* Policy compliance status and details (if using guest configuration policies) -* SQL Server installed (Boolean value) -* Cluster resource ID (for Azure Stack HCI nodes) -* Hardware manufacturer -* Hardware model -* CPU family, socket, physical core and logical core counts -* Total physical memory -* Serial number -* SMBIOS asset tag -* Network interface information - * IP address - * Subnet -* Windows licensing information - * OS license status - * OS license channel - * Extended Security Updates eligibility - * Extended Security Updates license status - * Extended Security Updates license channel -* Cloud provider -* Amazon Web Services (AWS) metadata, when running in AWS: - * Account ID - * Instance ID - * Region -* Google Cloud Platform (GCP) metadata, when running in GCP: - * Instance ID - * Image - * Machine type - * Project ID - * Project number - * Service accounts - * Zone -* Oracle Cloud Infrastructure metadata, when running in OCI: - * Display name --The agent requests the following metadata information from Azure: --* Resource location (region) -* Virtual machine ID -* Tags -* Microsoft Entra managed identity certificate -* Guest configuration policy assignments -* Extension requests - install, update, and delete. --> [!NOTE] -> Azure Arc-enabled servers doesn't store/process customer data outside the region the customer deploys the service instance in. --## Deployment options and requirements --Agent deployment and machine connection require certain [prerequisites](prerequisites.md). There are also [networking requirements](network-requirements.md) to be aware of. --We provide several options for deploying the agent. For more information, see [Plan for deployment](plan-at-scale-deployment.md) and [Deployment options](deployment-options.md). --## Disaster Recovery --There are no customer-enabled disaster recovery options for Arc-enabled servers. In the event of an outage in an Azure region, the system will failover to another region in the same [Azure geography](https://azure.microsoft.com/explore/global-infrastructure/geographies/) (if one exists). While this failover procedure is automatic, it does take some time. The Connected Machine agent will be disconnected during this period and will show a status of **Disconnected** until the failover is complete. The system will failback to its original region once the outage has been restored. --An outage of Azure Arc won't affect the customer workload itself; only management of the applicable servers via Arc will be impaired. --## Next steps --* To begin evaluating Azure Arc-enabled servers, see [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md). -* Before you deploy the Azure Connected Machine agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md). -* Review troubleshooting information in the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md). |
azure-arc | Agent Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md | - Title: Archive for What's new with Azure Connected Machine agent -description: Release notes for Azure Connected Machine agent versions older than six months - Previously updated : 12/06/2023----# Archive for What's new with Azure Connected Machine agent --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --The primary [What's new in Azure Connected Machine agent?](agent-release-notes.md) article contains updates for the last six months, while this article contains all the older information. --The Azure Connected Machine agent receives improvements on an ongoing basis. This article provides you with information about: --- Previous releases-- Known issues-- Bug fixes--## Version 1.37 - December 2023 --Download for [Windows](https://download.microsoft.com/download/f/6/4/f64c574f-d3d5-4128-8308-ed6a7097a93d/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Rocky Linux 9 is now a [supported operating system](prerequisites.md#supported-environments)-- Added Oracle Cloud Infrastructure display name as a [detected property](agent-overview.md#instance-metadata)--### Fixed --- Restored access to servers with Windows Admin Center in Azure-- Improved detection logic for Microsoft SQL Server-- Agents connected to sovereign clouds should now see the correct cloud and portal URL in [azcmagent show](azcmagent-show.md)-- The installation script for Linux now automatically approves the request to import the packages.microsoft.com signing key to ensure a silent installation experience-- Agent installation and upgrades apply more restrictive permissions to the agent's data directories on Windows-- Improved reliability when detecting Azure Stack HCI as a cloud provider-- Removed the log zipping feature introduced in version 1.37 for extension manager and machine configuration agent logs. Log files are still rotated automatically.-- Removed the scheduled tasks for automatic agent upgrades (introduced in agent version 1.30). We'll reintroduce this functionality when the automatic upgrade mechanism is available.-- Resolved [Azure Connected Machine Agent Elevation of Privilege Vulnerability](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35624)--## Version 1.36 - November 2023 --Download for [Windows](https://download.microsoft.com/download/5/e/9/5e9081ed-2ee2-4b3a-afca-a8d81425bcce/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --The Windows Admin Center in Azure feature is incompatible with Azure Connected Machine agent version 1.36. Upgrade to version 1.37 or later to use this feature. --### New features --- [azcmagent show](azcmagent-show.md) now reports extended security license status on Windows Server 2012 server machines.-- Introduced a new [proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) option, `ArcData`, that covers the SQL Server enabled by Azure Arc endpoints. 
This enables you to use a private endpoint with Azure Arc-enabled servers with the public endpoints for SQL Server enabled by Azure Arc.-- The [CPU limit for extension operations](agent-overview.md#agent-resource-governance) on Linux is now 30%. This increase helps improve reliability of extension install, upgrade, and uninstall operations.-- Older extension manager and machine configuration agent logs are automatically zipped to reduce disk space requirements.-- New executable names for the extension manager (`gc_extension_service`) and machine configuration (`gc_arc_service`) agents on Windows to help you distinguish the two services. For more information, see [Windows agent installation details](./agent-overview.md#windows-agent-installation-details).--### Bug fixes --- [azcmagent connect](azcmagent-connect.md) now uses the latest API version when creating the Azure Arc-enabled server resource to ensure Azure policies targeting new properties can take effect.-- Upgraded the OpenSSL library and PowerShell runtime shipped with the agent to include the latest security fixes.-- Fixed an issue that could prevent the agent from reporting the correct product type on Windows machines.-- Improved handling of upgrades when the previously installed extension version wasn't in a successful state.--## Version 1.35 - October 2023 --Download for [Windows](https://download.microsoft.com/download/e/7/0/e70b1753-646e-4aea-bac4-40187b5128b0/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --The Windows Admin Center in Azure feature is incompatible with Azure Connected Machine agent version 1.35. Upgrade to version 1.37 or later to use this feature. --### New features --- The Linux installation script now downloads supporting assets with either wget or curl, depending on which tool is available on the system-- [azcmagent connect](azcmagent-connect.md) and [azcmagent disconnect](azcmagent-disconnect.md) now accept the `--user-tenant-id` parameter to enable Lighthouse users to use a credential from their tenant and onboard a server to a different tenant.-- You can configure the extension manager to run, without allowing any extensions to be installed, by configuring the allowlist to `Allow/None`. This supports Windows Server 2012 ESU scenarios where the extension manager is required for billing purposes but doesn't need to allow any extensions to be installed. 
Learn more about [local security controls](security-extensions.md#local-agent-security-controls).--### Fixed --- Improved reliability when installing Microsoft Defender for Endpoint on Linux by increasing [available system resources](agent-overview.md#agent-resource-governance) and extending the timeout-- Better error handling when a user specifies an invalid location name to [azcmagent connect](azcmagent-connect.md)-- Fixed a bug where clearing the `incomingconnections.enabled` [configuration setting](azcmagent-config.md) would show `<nil>` as the previous value-- Security fix for the extension allowlist and blocklist feature to address an issue where an invalid extension name could impact enforcement of the lists.--## Version 1.34 - September 2023 --Download for [Windows](https://download.microsoft.com/download/b/3/2/b3220316-13db-4f1f-babf-b1aab33b364f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- [Extended Security Updates for Windows Server 2012 and 2012 R2](prepare-extended-security-updates.md) can be purchased and enabled through Azure Arc. If your server is already running the Azure Connected Machine agent, [upgrade to agent version 1.34](manage-agent.md#upgrade-the-agent) or later to take advantage of this new capability.-- New system metadata is collected to enhance your device inventory in Azure:- - Total physical memory - - More processor information - - Serial number - - SMBIOS asset tag -- Network requests to Microsoft Entra ID (formerly Azure Active Directory) now use `login.microsoftonline.com` instead of `login.windows.net`--### Fixed --- Better handling of disconnected agent scenarios in the extension manager and policy engine.--## Version 1.33 - August 2023 --Download for [Windows](https://download.microsoft.com/download/0/c/7/0c7a484b-e29e-42f9-b3e9-db431df2e904/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Security fix --Agent version 1.33 contains a fix for [CVE-2023-38176](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2023-38176), a local elevation of privilege vulnerability. Microsoft recommends upgrading all agents to version 1.33 or later to mitigate this vulnerability. Azure Advisor can help you [identify servers that need to be upgraded](https://portal.azure.com/#view/Microsoft_Azure_Expert/RecommendationListBlade/recommendationTypeId/9d5717d2-4708-4e3f-bdda-93b3e6f1715b/recommendationStatus). Learn more about CVE-2023-38176 in the [Security Update Guide](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2023-38176). --### Known issue --[azcmagent check](azcmagent-check.md) validates a new endpoint in this release: `<geography>-ats.his.arc.azure.com`. This endpoint is reserved for future use and not required for the Azure Connected Machine agent to operate successfully. However, if you're using a private endpoint, this endpoint will fail the network connectivity check. You can safely ignore this endpoint in the results and should instead confirm that all other endpoints are reachable. --This endpoint will be removed from `azcmagent check` in a future release. --### Fixed --- Fixed an issue that could cause a VM extension to disappear in Azure Resource Manager if it's installed with the same settings twice. 
After upgrading to agent version 1.33 or later, reinstall any missing extensions to restore the information in Azure Resource Manager.-- You can now set the [agent mode](security-extensions.md#agent-modes) before connecting the agent to Azure.-- The agent now responds to instance metadata service (IMDS) requests even when the connection to Azure is temporarily unavailable.--## Version 1.32 - July 2023 --Download for [Windows](https://download.microsoft.com/download/7/e/5/7e51205f-a02e-4fbe-94fe-f36219be048c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Added support for the Debian 12 operating system-- [azcmagent show](azcmagent-show.md) now reflects the "Expired" status when a machine has been disconnected long enough for the managed identity to expire. Previously, the agent only showed "Disconnected" while the Azure portal and API showed the correct state, "Expired."--### Fixed --- Fixed an issue that could result in high CPU usage if the agent was unable to send telemetry to Azure.-- Improved local logging when there are network communication errors--## Version 1.31 - June 2023 --Download for [Windows](https://download.microsoft.com/download/2/6/e/26e2b001-1364-41ed-90b0-1340a44ba409/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issue --The first release of agent version 1.31 had a known issue affecting customers using proxy servers. The issue displays as `AZCM0026: Network Error` and a message about "no IP addresses found" when connecting a server to Azure Arc using a proxy server. A newer version of agent 1.31 was released on June 14, 2023 that addresses this issue. --To check if you're running the latest version of the Azure connected machine agent, navigate to the server in the Azure portal or run `azcmagent show` from a terminal on the server itself and look for the "Agent version." The table below shows the version numbers for the first and patched releases of agent 1.31. 
--| Package type | Version number with proxy issue | Version number of patched agent | -| | - | - | -| Windows | 1.31.02347.1069 | 1.31.02356.1083 | -| RPM-based Linux | 1.31.02347.957 | 1.31.02356.970 | -| DEB-based Linux | 1.31.02347.939 | 1.31.02356.952 | --### New features --- Added support for Amazon Linux 2023-- [azcmagent show](azcmagent-show.md) no longer requires administrator privileges-- You can now filter the output of [azcmagent show](azcmagent-show.md) by specifying the properties you wish to output--### Fixed --- Added an error message when a pending reboot on the machine affects extension operations-- The scheduled task that checks for agent updates no longer outputs a file-- Improved formatting for clock skew calculations-- Improved reliability when upgrading extensions by explicitly asking extensions to stop before trying to upgrade.-- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Update Manager extension for Linux, Microsoft Defender Endpoint for Linux, and Azure Security Agent for Linux to prevent timeouts during installation-- [azcmagent disconnect](azcmagent-disconnect.md) now closes any active SSH or Windows Admin Center connections-- Improved output of the [azcmagent check](azcmagent-check.md) command-- Better handling of spaces in the `--location` parameter of [azcmagent connect](azcmagent-connect.md)--## Version 1.30 - May 2023 --Download for [Windows](https://download.microsoft.com/download/7/7/9/779eae73-a12b-4170-8c5e-abec71bc14cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Introduced a scheduled task that checks for agent updates on a daily basis. Currently, the update mechanism is inactive and no changes are made to your server even if a newer agent version is available. In the future, you'll be able to schedule updates of the Azure Connected Machine agent from Azure. For more information, see [Automatic agent upgrades](manage-agent.md#automatic-agent-upgrades).--### Fixed --- Resolved an issue that could cause the agent to go offline after rotating its connectivity keys.-- `azcmagent show` no longer shows an incomplete resource ID or Azure portal page URL when the agent isn't configured.--## Version 1.29 - April 2023 --Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-949a-4b16-a29a-3d1dcb29cff7/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- The agent now compares the time on the local system and Azure service when checking network connectivity and creating the resource in Azure. If the clocks are offset by more than 120 seconds (2 minutes), a nonblocking error is shown. 
You might encounter TLS connection errors if the time of your computer doesn't match the time in Azure.-- `azcmagent show` now supports an `--os` flag to print extra OS information to the console--### Fixed --- Fixed an issue that could cause the guest configuration service (gc_service) to repeatedly crash and restart on Linux systems-- Resolved a rare condition under which the guest configuration service (gc_service) could consume excessive CPU resources-- Removed "sudo" calls in internal install script that could be blocked if SELinux is enabled-- Reduced how long network checks wait before determining a network endpoint is unreachable-- Stopped writing error messages in "himds.log" referring to a missing certificate key file for the ATS agent, an inactive component reserved for future use.--## Version 1.28 - March 2023 --Download for [Windows](https://download.microsoft.com/download/5/9/7/59789af8-5833-4c91-8dc5-91c46ad4b54f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Improved reliability of delete requests for extensions-- More frequent reporting of VM UUID (system firmware identifier) changes-- Improved reliability when writing changes to agent configuration files-- JSON output for `azcmagent connect` now includes Azure portal URL for the server-- Linux installation script now installs the `gnupg` package if it's missing on Debian operating systems-- Removed weekly restarts for the extension and guest configuration services--## Version 1.27 - February 2023 --Download for [Windows](https://download.microsoft.com/download/8/4/5/845d5e04-bb09-4ed2-9ca8-bb51184cddc9/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- The extension service now correctly restarts when the Azure Connected Machine agent is upgraded by Update Manager-- Resolved issues with the hybrid connectivity component that could result in the "himds" service crashing, the server showing as "disconnected" in Azure, and connectivity issues with Windows Admin Center and SSH-- Improved handling of resource move scenarios that could impact Windows Admin Center and SSH connectivity-- Improved reliability when changing the [agent configuration mode](security-extensions.md#local-agent-security-controls) from "monitor" mode to "full" mode.-- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Sentinel DNS extension to improve log collection reliability-- Tenant IDs are better validated when connecting the server--## Version 1.26 - January 2023 --Download for [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --> [!NOTE] -> Version 1.26 is only available for Linux operating systems. 
--### Fixed --- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Defender for Endpoint extension (MDE.Linux) on Linux to improve installation reliability--## Version 1.25 - January 2023 --Download for [Windows](https://download.microsoft.com/download/2/#installing-a-specific-version-of-the-agent) --### New features --- Red Hat Enterprise Linux (RHEL) 9 is now a [supported operating system](prerequisites.md#supported-operating-systems)--### Fixed --- Reliability improvements in the machine (guest) configuration policy engine-- Improved error messages in the Windows MSI installer-- Additional improvements to the detection logic for machines running on Azure Stack HCI-## Version 1.24 - November 2022 --Download for [Windows](https://download.microsoft.com/download/f/9/d/f9d60cc9-7c2a-4077-b890-f6a54cc55775/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- `azcmagent logs` improvements:- - Only the most recent log file for each component is collected by default. To collect all log files, use the new `--full` flag. - - Journal logs for the agent services are now collected on Linux operating systems - - Logs from extensions are now collected -- Agent telemetry is no longer sent to `dc.services.visualstudio.com`. You might be able to remove this URL from any firewall or proxy server rules if no other applications in your environment require it.-- Failed extension installs can now be retried without removing the old extension as long as the extension settings are different-- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Azure Update Manager extension on Linux to reduce downtime during update operations--### Fixed --- Improved logic for detecting machines running on Azure Stack HCI to reduce false positives-- Auto-registration of required resource providers only happens when they are unregistered-- Agent will now detect drift between the proxy settings of the command line tool and background services-- Fixed a bug with proxy bypass feature that caused the agent to incorrectly use the proxy server for bypassed URLs-- Improved error handling when extensions don't download successfully, fail validation, or have corrupt state files--## Version 1.23 - October 2022 --Download for [Windows](https://download.microsoft.com/download/3/9/8/398f6036-958d-43c4-ad7d-4576f1d860a#installing-a-specific-version-of-the-agent) --### New features --- The minimum PowerShell version required on Windows Server has been reduced to PowerShell 4.0-- The Windows agent installer is now compatible with systems that enforce a Microsoft publisher-based Windows Defender Application Control policy.-- Added support for Rocky Linux 8 and Debian 11.--### Fixed --- Tag values are correctly preserved when connecting a server and specifying multiple tags (fixes known issue from version 1.22).-- An issue preventing some users who tried authenticating with an identity from a different tenant than the tenant where the server is (will be) registered has been fixed.-- The `azcamgent check` command no longer validates CNAME records to reduce warnings that did not impact agent functionality.-- The agent will now try to obtain an access token for up to 5 minutes when authenticating with an Azure Active Directory service principal.-- Cloud presence checks now only run once at the time the `himds` service starts on the server to reduce local network traffic. 
If you live migrate your virtual machine to a different cloud provider, it will not reflect the new cloud provider until the service or computer has rebooted.-- Improved logging during the installation process.-- The install script for Windows now saves the MSI to the TEMP directory instead of the current directory.--## Version 1.22 - September 2022 --Download for [Windows](https://download.microsoft.com/download/1/3/5/135f1f2b-7b14-40f6-bceb-3af4ebadf434/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --- The 'connect' command uses the value of the last tag for all tags. You will need to fix the tags after onboarding to use the correct values.--### New features --- The default login flow for Windows computers now loads the local web browser to authenticate with Azure Active Directory instead of providing a device code. You can use the `--use-device-code` flag to return to the old behavior or [provide service principal credentials](onboard-service-principal.md) for a non-interactive authentication experience.-- If the resource group provided to `azcmagent connect` does not exist, the agent tries to create it and continue connecting the server to Azure.-- Added support for Ubuntu 22.04-- Added `--no-color` flag for all azcmagent commands to suppress the use of colors in terminals that do not support ANSI codes.--### Fixed --- The agent now supports Red Hat Enterprise Linux 8 servers that have FIPS mode enabled.-- Agent telemetry uses the proxy server when configured.-- Improved accuracy of network connectivity checks-- The agent retains extension allow and blocklists when switching the agent from monitoring mode to full mode. Use [azcmagent config clear](azcmagent-config.md) to reset individual configuration settings to the default state.--## Version 1.21 - August 2022 --Download for [Windows](https://download.microsoft.com/download/#installing-a-specific-version-of-the-agent) --### New features --- `azcmagent connect` usability improvements:- - The `--subscription-id (-s)` parameter now accepts friendly names in addition to subscription IDs - - Automatic registration of any missing resource providers for first-time users (extra user permissions required to register resource providers) - - Added a progress bar during onboarding - - The onboarding script now supports both the yum and dnf package managers on RPM-based Linux systems -- You can now restrict the URLs used to download machine configuration (formerly Azure Policy guest configuration) packages by setting the `allowedGuestConfigPkgUrls` tag on the server resource and providing a comma-separated list of URL patterns to allow.--### Fixed --- Improved reliability when reporting extension installation failures to prevent extensions from staying in the "creating" state-- Support for retrieving metadata for Google Cloud Platform virtual machines when the agent uses a proxy server-- Improved network connection retry logic and error handling-- Linux only: resolves local escalation of privilege vulnerability [CVE-2022-38007](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-38007)--## Version 1.20 - July 2022 --Download for [Windows](https://download.microsoft.com/download/f/b/1/fb143ada-1b82-4d19-a125-40f2b352e257/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --- Some systems might incorrectly report their cloud provider as Azure Stack HCI.--### New features --- Added support for 
connecting the agent to the Microsoft Azure operated by 21Vianet cloud-- Added support for Debian 10-- Updates to the [instance metadata](agent-overview.md#instance-metadata) collected on each machine:- - GCP VM OS is no longer collected - - CPU logical core count is now collected -- Improved error messages and colorization--### Fixed --- Agents configured to use private endpoints correctly download extensions over the private endpoint-- Renamed the `--use-private-link` flag on [azcmagent check](azcmagent-check.md) to `--enable-pls-check` to more accurately represent its function--## Version 1.19 - June 2022 --Download for [Windows](https://download.microsoft.com/download/8/9/f/89f80a2b-32c3-43e8-b3b8-fce6cea8e2cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --- Agents configured to use private endpoints incorrectly download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality.-- Some systems might incorrectly report their cloud provider as Azure Stack HCI.--### New features --- When installed on a Google Compute Engine virtual machine, the agent detects and reports Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata.--### Fixed --- Resolved an issue that could cause the extension manager to hang during extension installation, update, and removal operations.-- Improved support for TLS 1.3--## Version 1.18 - May 2022 --Download for [Windows](https://download.microsoft.com/download/2/5/6/25685d0f-2895-4b80-9b1d-5ba53a46097f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- You can configure the agent to operate in [monitoring mode](security-extensions.md#agent-modes), which simplifies configuration of the agent for scenarios where you only want to use Arc for monitoring and security scenarios. This mode disables other agent functionality and prevents use of extensions that could make changes to the system (for example, the Custom Script Extension).-- VMs and hosts running on Azure Stack HCI now report the cloud provider as "HCI" when [Azure benefits are enabled](/azure-stack/hci/manage/azure-benefits#enable-azure-benefits).--### Fixed --- `systemd` is now an official prerequisite on Linux-- Guest configuration policies no longer create unnecessary files in the `/tmp` directory on Linux servers-- Improved reliability when extracting extensions and guest configuration policy packages-- Improved reliability for guest configuration policies that have child processes--## Version 1.17 - April 2022 --Download for [Windows](https://download.microsoft.com/download/#installing-a-specific-version-of-the-agent) --### New features --- The default resource name for AWS EC2 instances is now the instance ID instead of the hostname. To override this behavior, use the `--resource-name PreferredResourceName` parameter to specify your own resource name when connecting a server to Azure Arc.-- The network connectivity check during onboarding now verifies private endpoint configuration if you specify a private link scope. 
You can run the same check anytime by running [azcmagent check](azcmagent-check.md) with the new `--use-private-link` parameter.-- You can now disable the extension manager with the [local agent security controls](security-extensions.md#local-agent-security-controls).--### Fixed --- If you attempt to run `azcmagent connect` on a server already connected to Azure, the resource ID is shown on the console to help you locate the resource in Azure.-- Extended the `azcmagent connect` timeout to 10 minutes.-- `azcmagent show` no longer prints the private link scope ID. You can check if the server is associated with an Azure Arc private link scope by reviewing the machine details in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/servers), [CLI](/cli/azure/connectedmachine?view=azure-cli-latest#az-connectedmachine-show&preserve-view=true), or [PowerShell](/powershell/module/az.connectedmachine/get-azconnectedmachine).-- `azcmagent logs` collects only the two most recent logs for each service to reduce ZIP file size.-- `azcmagent logs` collects Guest Configuration logs again.--## Version 1.16 - March 2022 --Download for [Windows](https://download.microsoft.com/download/e/#installing-a-specific-version-of-the-agent) --### Known issues --- `azcmagent logs` doesn't collect Guest Configuration logs in this release. You can locate the log directories in the [agent installation details](agent-overview.md#agent-resources).--### New features --- You can now granularly control allowed and blocked extensions on your server and disable the Guest Configuration agent. See [local agent controls to enable or disable capabilities](security-extensions.md#local-agent-security-controls) for more information.--### Fixed --- The "Arc" proxy bypass keyword no longer includes Azure Active Directory endpoints on Linux-- The "Arc" proxy bypass keyword now includes Azure Storage endpoints for extension downloads--## Version 1.15 - February 2022 --Download for [Windows](https://download.microsoft.com/download/0/7/4/074a7a9e-1d86-4588-8297-b4e587ea0307/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --- The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected.--### New features --- Network check improvements during onboarding:- - Added TLS 1.2 check - - Onboarding aborts when required networking endpoints are inaccessible - - New `--skip-network-check` flag to override the new network check behavior - - On-demand network check now available using `azcmagent check` -- [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. 
This feature allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints.-- Oracle Linux 8 is now supported--### Fixed --- Improved reliability when disconnecting the agent from Azure-- Improved reliability when installing and uninstalling the agent on Active Directory Domain Controllers-- Extended the device login timeout to 5 minutes-- Removed resource constraints for Azure Monitor Agent to support high throughput scenarios--## Version 1.14 - January 2022 --Download for [Windows](https://download.microsoft.com/download/e/8/1/e816ff18-251b-4160-b421-a4f8ab9c2bfe/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Fixed a state corruption issue in the extension manager that could cause extension operations to get stuck in transient states. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).--## Version 1.13 - November 2021 --Download for [Windows](https://download.microsoft.com/download/8/#installing-a-specific-version-of-the-agent) --### Known issues --- Extensions might get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue.--### Fixed --- Improved reliability when installing or upgrading the agent.--### New features --- Local configuration of agent settings now available using the [azcmagent config command](azcmagent-config.md).-- Support for configuring proxy server settings [using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables.-- Extension operations execute faster using a new notification pipeline. You might need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). 
The extension manager falls back to the existing behavior of checking every 5 minutes when the notification service is inaccessible.-- Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.--## Version 1.12 - October 2021 --Download for [Windows](https://download.microsoft.com/download/9/e/e/9eec9acb-53f1-4416-9e10-afdd8e5281ad/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Improved reliability when validating signatures of extension packages.-- `azcmagent_proxy remove` command on Linux now correctly removes environment variables on Red Hat Enterprise Linux and related distributions.-- `azcmagent logs` now includes the computer name and timestamp to help disambiguate log files.--## Version 1.11 - September 2021 --Download for [Windows](https://download.microsoft.com/download/6/d/b/6dbf7141-0bf0-4b18-93f5-20de4018369d/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- The agent now supports on Windows systems with the [System objects: Require case insensitivity for non-Windows subsystems](/windows/security/threat-protection/security-policy-settings/system-objects-require-case-insensitivity-for-non-windows-subsystems) policy set to Disabled.-- The guest configuration policy agent automatically retries if an error occurs during service start or restart events.-- Fixed an issue that prevented guest configuration audit policies from successfully executing on Linux machines.--## Version 1.10 - August 2021 --Download for [Windows](https://download.microsoft.com/download/1/c/4/1c4a0bde-0b6c-4c52-bdaf-04851c567f43/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. 
Learn more about the Azure Policy [guest configuration remediation options](../../governance/machine-configuration/remediation-options.md).-- The guest configuration policy agent now restarts every 48 hours instead of every 6 hours.--## Version 1.9 - July 2021 --Download for [Windows](https://download.microsoft.com/download/5/1/d/51d4340b-c927-4fc9-a0da-0bb8556338d0/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --Added support for the Indonesian language --### Fixed --Fixed a bug that prevented extension management in the West US 3 region --## Version 1.8 - July 2021 --Download for [Windows](https://download.microsoft.com/download/1/7/5/1758f4ea-3114-4a20-9113-6bc5fff1c3e8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Improved reliability when installing the Azure Monitor Agent extension on Red Hat and CentOS systems-- Added agent-side enforcement of max resource name length (54 characters)-- Guest Configuration policy improvements:- - Added support for PowerShell-based Guest Configuration policies on Linux operating systems - - Added support for multiple assignments of the same Guest Configuration policy on the same server - - Upgraded PowerShell Core to version 7.1 on Windows operating systems --### Fixed --- The agent continues running if it is unable to write service start/stop events to the Windows Application event log--## Version 1.7 - June 2021 --Download for [Windows](https://download.microsoft.com/download/6/1/c/61c69f31-8e22-4298-ac9d-47cd2090c81d/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Improved reliability during onboarding:- - Improved retry logic when HIMDS is unavailable - - Onboarding continues instead of aborting if OS information isn't available -- Improved reliability when installing the Log Analytics agent for Linux extension on Red Hat and CentOS systems--## Version 1.6 - May 2021 --Download for [Windows](https://download.microsoft.com/download/d/3/d/d3df034a-d231-4ca6-9199-dbaa139b1eaf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Added support for SUSE Enterprise Linux 12-- Updated Guest Configuration agent to version 1.26.12.0 to include:- - Policies execute in a separate process. - - Added V2 signature support for extension validation. - - Minor update to data logging. 
--## Version 1.5 - April 2021 --Download for [Windows](https://download.microsoft.com/download/1/d/4/1d44ef2e-dcc9-42e4-b76c-2da6a6e852af/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Added support for Red Hat Enterprise Linux 8 and CentOS Linux 8.-- New `-useStderr` parameter to direct error and verbose output to stderr.-- New `-json` parameter to direct output results in JSON format (when used with -useStderr).-- Collect other instance metadata - Manufacturer, model, and cluster resource ID (for Azure Stack HCI nodes).--## Version 1.4 - March 2021 --Download for [Windows](https://download.microsoft.com/download/e/b/1/eb128465-8830-47b0-b89e-051eefd33f7c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Added support for private endpoints, which is currently in limited preview.-- Expanded list of exit codes for azcmagent.-- You can pass agent configuration parameters from a file with the `--config` parameter.-- Automatically detects the presence of Microsoft SQL Server on the server--### Fixed --Network endpoint checks are now faster. --## Version 1.3 - December 2020 --Download for [Windows](https://download.microsoft.com/download/5/4/c/54c2afd8-e559-41ab-8aa2-cc39bc13156b/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --Added support for Windows Server 2008 R2 SP1. --### Fixed --Resolved issue preventing the Custom Script Extension on Linux from installing successfully. --## Version 1.2 - November 2020 --Download for [Windows](https://download.microsoft.com/download/4/c/2/4c287d81-6657-4cd8-9254-881ae6a2d1f4/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --Resolved issue where proxy configuration resets after upgrade on RPM-based distributions. --## Version 1.1 - October 2020 --### Fixed --- Fixed proxy script to handle alternate GC daemon unit file location.-- GuestConfig agent reliability changes.-- GuestConfig agent support for US Gov Virginia region.-- GuestConfig agent extension report messages to be more verbose if there is a failure.--## Version 1.0 - September 2020 --This version is the first generally available release of the Azure Connected Machine Agent. --### Plan for change --- Support for preview agents (all versions older than 1.0) will be removed in a future service update.-- Removed support for fallback endpoint `.azure-automation.net`. If you have a proxy, you need to allow the endpoint `*.his.arc.azure.com`.-- VM extensions can't be installed or modified from Azure Arc if the agent detects it's running in an Azure VM. This is to avoid conflicting extension operations being performed from the virtual machine's **Microsoft.Compute** and **Microsoft.HybridCompute** resource. 
Use the **Microsoft.Compute** resource for the machine for all extension operations.-- The name of the guest configuration process has changed from *gcd* to *gcad* on Linux, and from *gcservice* to *gcarcservice* on Windows.--### New features --- Added `azcmagent logs` option to collect information for support.-- Added `azcmagent license` option to display the EULA.-- Added `azcmagent show --json` option to output agent state in an easily parseable format.-- Added a flag in `azcmagent show` output to indicate if the server is on a virtual machine hosted in Azure.-- Added `azcmagent disconnect --force-local-only` option to allow reset of the local agent state when the Azure service can't be reached.-- Added `azcmagent connect --cloud` option to support other clouds. In this release, only Azure is supported by the service.-- Agent has been localized into Azure-supported languages.--### Fixed --- Improvements to connectivity check.-- Corrected an issue with proxy server settings being lost when upgrading the agent on Linux.-- Resolved issues when attempting to install the agent on a server running Windows Server 2012 R2.-- Improvements to extension installation reliability.--## Next steps --- Before evaluating or enabling Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.--- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. |
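The Version 1.0 notes above introduce several new `azcmagent` options. A minimal sketch of two of them, based only on the flags named in that list (run on the server itself):

```
# Print the full local agent state as JSON (option added in version 1.0)
azcmagent show --json

# If the Azure service can't be reached, reset only the local agent state;
# the resource in Azure (if any) is left untouched
azcmagent disconnect --force-local-only
```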
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | - Title: What's new with Azure Connected Machine agent -description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. - Previously updated : 09/13/2024----# What's new with Azure Connected Machine agent --The Azure Connected Machine agent receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: --- The latest releases-- Known issues-- Bug fixes--This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md). --> [!WARNING] -> Only Connected Machine agent versions within the last year are officially supported by the product group. Customers should update to an agent version within this window. -> --## Version 1.46 - September 2024 --Download for [Windows](https://aka.ms/AzureConnectedMachineAgent) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Fixed a bug causing the Guest Config agent to hang in the extension creating state when the download of an extension package failed.-- Fixed a bug where onboarding treated conflicting errors as success.--### New features and enhancements --- Improved error messaging when extension installation or enablement is blocked by a sideloaded extension.-- Increased checks for recovery of the sequence number if the previous request failed.-- Removed casing requirements when reading the proxy from the configuration file.-- Added support for Azure Linux 3 (Mariner).-- Added initial Linux ARM64 architecture support.-- Added the Gateway URL to the output of the show command.--## Version 1.45 - August 2024 --Download for [Windows](https://download.microsoft.com/download/0/6/1/061e3c68-5603-4c0e-bb78-2e3fd10fef30/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Fixed an issue where EnableEnd telemetry would sometimes be sent too soon.-- Added sending a failed timed-out EnableEnd telemetry log if an extension takes longer than the allowed time to complete.--### New features --- Azure Arc proxy now supports HTTP traffic.-- New proxy.bypass value 'AMA' added to support AMA VM extension proxy bypass.--## Version 1.44 - July 2024 --Download for [Windows](https://download.microsoft.com/download/d/#installing-a-specific-version-of-the-agent) --### Fixed --- Fixed a bug where the service would sometimes reject reports from an upgraded extension if the previous extension was in a failed state.-- Setting the OPENSSL_CNF environment variable at the process level to override the build openssl.cnf path on Windows.-- Fixed access denied errors in writing configuration files.-- Fixed an SMBIOS GUID-related bug with Windows Server 2012 and Windows Server 2012 R2 [Extended Security Updates](/windows-server/get-started/extended-security-updates-overview) enabled by Azure Arc.--### New features --- Extension service enhancements: Added download/validation error details to the extension report.
Increased unzipped extension package size limit to 1 GB.-- Update of hardwareprofile information to support upcoming Windows Server licensing capabilities.-- Update of the error JSON output to include more detailed recommended actions for troubleshooting scenarios.-- Blocked installation on unsupported operating systems and distribution versions. See [Supported operating systems](prerequisites.md#supported-operating-systems) for details.--> [!NOTE] -> Azure Connected Machine agent version 1.44 is the last version to officially support Debian 10, Ubuntu 16.04, and Azure Linux (CBL-Mariner) 1.0. -> --## Version 1.43 - June 2024 --Download for [Windows](https://download.microsoft.com/download/0/7/8/078f3bb7-6a42-41f7-b9d3-9a0eb4c94df8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Fix for OpenSSL vulnerability on Linux (upgraded OpenSSL from version 3.0.13 to 3.0.14)-- Added Server Name Indication (SNI) to our service calls, fixing proxy and firewall scenarios-- Skipped lockdown policy on the downloads directory under Guest Configuration--## Version 1.42 - May 2024 (Second Release) --Download for [Windows](https://download.microsoft.com/download/9/6/0/9600825a-e532-4e50-a2d5-7f07e400afc1/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Extensions and machine configuration policies can be used with private endpoints again--## Version 1.41 - May 2024 --Download for [Windows](https://download.microsoft.com/download/2/#installing-a-specific-version-of-the-agent) --### Known issues --Customers using private endpoints with Azure Arc may encounter issues with extension management and machine configuration policies with agent version 1.41. Agent version 1.42 resolves this issue. --### New features --- Certificate-based authentication is now supported when using a service principal to connect or disconnect the agent. For more information, see [authentication options for the azcmagent CLI](azcmagent-connect.md#authentication-options).-- [azcmagent check](azcmagent-check.md) now allows you to also check for the endpoints used by the SQL Server enabled by Azure Arc extension using the new `--extensions` flag. This can help you troubleshoot networking issues for both the OS and SQL management components.
You can try this out by running `azcmagent check --extensions sql --location eastus` on a server, either before or after it is connected to Azure Arc.--### Fixed --- Fixed a memory leak in the Hybrid Instance Metadata service-- Better handling when IPv6 local loopback is disabled-- Improved reliability when upgrading extensions-- Improved reliability when enforcing CPU limits on Linux extensions-- PowerShell telemetry is now disabled by default for the extension manager and policy services-- The extension manager and policy services now support OpenSSL 3-- Colors are now disabled in the onboarding progress bar when the `--no-color` flag is used-- Improved detection and reporting for Windows machines that have custom [logon as a service rights](prerequisites.md#local-user-logon-right-for-windows-systems) configured.-- Improved accuracy when obtaining system metadata on Windows:- - VMUUID is now obtained from the Win32 API - - Physical memory is now checked using WMI -- Fixed an issue that could prevent the region selector in the [Windows GUI installer](onboard-windows-server.md) from loading-- Fixed permissions issues that could prevent the "himds" service from accessing necessary directories on Windows--## Version 1.40 - April 2024 --Download for [Windows](https://download.microsoft.com/download/2/1/0/210f77ca-e069-412b-bd94-eac02a63255d/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --The first release of the 1.40 agent may impact SQL Server enabled by Azure Arc when configured with least privileges on Windows servers. The 1.40 agent was re-released to address this problem. To check if your server is affected, run `azcmagent show` and locate the agent version number. Agent version `1.40.02664.1629` has the known issue and agent `1.40.02669.1635` fixes it. Download and install the [latest version of the agent](https://aka.ms/AzureConnectedMachineAgent) to restore functionality for SQL Server enabled by Azure Arc. --### New features --- Oracle Linux 9 is now a [supported operating system](prerequisites.md#supported-operating-systems)-- Customers no longer need to download an intermediate CA certificate for delivery of WS2012/R2 ESUs (Requires April 2024 SSU update)--### Fixed --- Improved error handling when a machine configuration policy has an invalid SAS token-- The installation script for Windows now includes a flag to suppress reboots in case any agent executables are in use during an upgrade-- Fixed an issue that could block agent installation or upgrades on Windows when the installer can't change the access control list on the agent's log directories.-- Extension package maximum download size increased to fix access to the [latest versions of the Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-extension-versions) on Azure Arc-enabled servers.--## Version 1.39 - March 2024 --Download for [Windows](https://download.microsoft.com/download/1/9/f/19f44dde-2c34-4676-80d7-9fa5fc44d2a8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Check which extensions are installed and manually remove them with the new [azcmagent extension](azcmagent-extension.md) command group. 
These commands run locally on the machine and work even if a machine has lost its connection to Azure.-- You can now [customize the CPU limit](agent-overview.md#custom-resource-limits) applied to the extension manager and machine configuration policy evaluation engine. This might be helpful on small or under-powered VMs where the [default resource governance limits](agent-overview.md#agent-resource-governance) can cause extension operations to time out.--### Fixed --- Improved reliability of the run command feature with long-running commands-- Removed an unnecessary endpoint from the network connectivity check when onboarding machines via an Azure Arc resource bridge-- Improved heartbeat reliability-- Removed unnecessary dependencies--## Version 1.38 - February 2024 --Download for [Windows](https://download.microsoft.com/download/4/8/f/48f69eb1-f7ce-499f-b9d3-5087f330ae79/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issues --Windows machines that try and fail to upgrade to version 1.38 manually or via Microsoft Update might not roll back to the previously installed version. As a result, the machine will appear "Disconnected" and won't be manageable from Azure. A new version of 1.38 was released to Microsoft Update and the Microsoft Download Center on March 5, 2024 that resolves this issue. --If your machine was affected by this issue, you can repair the agent by downloading and installing the agent again. The agent will automatically discover the existing configuration and restore connectivity with Azure. You don't need to run `azcmagent connect`. --### New features --- AlmaLinux 9 is now a [supported operating system](prerequisites.md#supported-operating-systems)--### Fixed --- The hybrid instance metadata service (HIMDS) now listens on the IPv6 local loopback address (::1)-- Improved logging in the extension manager and policy engine-- Improved reliability when fetching the latest operating system metadata-- Reduced extension manager CPU usage--## Next steps --- Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.-- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. |
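As a quick illustration of the guidance in the notes above (checking your installed build for the version 1.40 known issue, and the resource-governance settings mentioned for version 1.39), the following sketch uses only commands documented elsewhere in this article set; exact configuration property names vary by agent version, so they're discovered rather than assumed:

```
# Confirm which agent build is installed (for example, to verify you have the re-released 1.40 build)
azcmagent show

# List the configuration properties this agent version supports, including any
# CPU-limit settings for the extension manager and machine configuration engine
azcmagent config info
```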
azure-arc | Api Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/api-extended-security-updates.md | - Title: Programmatically deploy and manage Azure Arc Extended Security Updates licenses -description: Learn how to programmatically deploy and manage Azure Arc Extended Security Updates licenses for Windows Server 2012. Previously updated : 08/28/2024----# Programmatically deploy and manage Azure Arc Extended Security Updates licenses --This article provides instructions to programmatically provision and manage Windows Server 2012 and Windows Server 2012 R2 Extended Security Updates lifecycle operations through the Azure Arc WS2012 ESU ARM APIs. --For each of the API commands explained in this article, be sure to enter accurate parameter information for location, state, edition, type, and processors depending on your particular scenario --> [!NOTE] -> You'll need to create a service principal to use the Azure API to manage ESUs. See [Connect hybrid machines to Azure at scale](onboard-service-principal.md) and [Azure REST API reference](/rest/api/azure/) for more information. -> --## Provision a license --To provision a license, execute the following commands: --``` -PUT -https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/licenses/LICENSE_NAME?api-version=2023-06-20-preview -{  -    "location": "ENTER-REGION",  -    "properties": {  -        "licenseDetails": {  -            "state": "Activated",  -            "target": "Windows Server 2012",  -            "Edition": "Datacenter",  -            "Type": "pCore",  -            "Processors": 12  -        }  -    }  -} -``` --### Transitioning from volume licensing --Programmatically, you can use Azure CLI to generate new licenses, specifying the `Volume License Details` parameter in your Year 1 Volume Licensing entitlements by entering the respective invoice numbers. 
You must explicitly specify the Invoice Id (Number) in your license provisioning for Azure Arc: --```azurecli -az connectedmachine license create --license-name - --resource-group - [--edition {Datacenter, Standard}] - [--license-type {ESU}] - [--location] - [--no-wait {0, 1, f, false, n, no, t, true, y, yes}] - [--processors] - [--state {Activated, Deactivated}] - [--tags] - [--target {Windows Server 2012, Windows Server 2012 R2}] - [--tenant-id] - [--type {pCore, vCore}] - [--volume-license-details] -``` --## Link a license --To link a license, execute the following commands: --``` -PUT -https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/machines/MACHINE_NAME/licenseProfiles/default?api-version=2023-06-20-preview -{ - "location": "SAME_REGION_AS_MACHINE", - "properties": { - "esuProfile": { - "assignedLicense": "RESOURCE_ID_OF_LICENSE" - } - } -} -``` --## Unlink a license --To unlink a license, execute the following commands: --``` -PUT -https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/machines/MACHINE_NAME/licenseProfiles/default?api-version=2023-06-20-preview -{ - "location": "SAME_REGION_AS_MACHINE", - "properties": { - "esuProfile": { - } - } -} -``` --## Modify a license --To modify a license, execute the following commands: --``` -PUT/PATCH -https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/licenses/LICENSE_NAME?api-version=2023-06-20-preview -{  -    "location": "ENTER-REGION",  -    "properties": {  -        "licenseDetails": {  -            "state": "Activated",  -            "target": "Windows Server 2012",  -            "Edition": "Datacenter",  -            "Type": "pCore",  -            "Processors": 12  -        }  -    }  -} -``` --> [!NOTE] -> For PUT, all of the properties must be provided. For PATCH, a subset may be provided. -> --## Delete a license --To delete a license, execute the following commands: --``` -DELETE -https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/licenses/LICENSE_NAME?api-version=2023-06-20-preview -``` |
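As a sketch of how the provisioning call shown earlier could be issued without hand-crafting the HTTP request, the same PUT body can be sent with `az rest` (the subscription, resource group, license name, and region values are placeholders to replace with your own):

```azurecli
az rest --method put \
  --url "https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/licenses/LICENSE_NAME?api-version=2023-06-20-preview" \
  --body '{
    "location": "ENTER-REGION",
    "properties": {
      "licenseDetails": {
        "state": "Activated",
        "target": "Windows Server 2012",
        "Edition": "Datacenter",
        "Type": "pCore",
        "Processors": 12
      }
    }
  }'
```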
azure-arc | Arc Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/arc-gateway.md | - Title: How to simplify network configuration requirements through Azure Arc gateway (Limited preview) -description: Learn how to simplify network configuration requirements through Azure Arc gateway (Limited preview). Previously updated : 06/26/2024----# Simplify network configuration requirements through Azure Arc gateway (Limited preview) --> [!NOTE] -> **This is a Limited Public Preview, so customer subscriptions must be allowed by Microsoft to use the feature. To participate, complete the [Azure Arc gateway Limited Public Preview Sign-up form](https://forms.office.com/r/bfTkU2i0Qw).** -> --If you use enterprise firewalls or proxies to manage outbound traffic, the Azure Arc gateway lets you onboard infrastructure to Azure Arc using only seven (7) endpoints. With Azure Arc gateway, you can: --- Connect to Azure Arc by opening public network access to only seven Fully Qualified Domains (FQDNs).-- View and audit all traffic an Azure Connected Machine agent sends to Azure via the Arc gateway.--This article explains how to set up and use an Arc gateway Resource. --> [!IMPORTANT] -> The Arc gateway feature for [Azure Arc-enabled servers](overview.md) is currently in Limited preview in all regions where Azure Arc-enabled servers is present. See the Supplemental Terms of Use for Microsoft Azure Limited previews for legal terms that apply to Azure features that are in beta, limited preview, or otherwise not yet released into general availability. -> --## Supported scenarios --Azure Arc gateway supports the following scenarios: --- Azure Monitor (Azure Monitor Agent + Dependency Agent) <sup>1</sup>-- Microsoft Defender for Cloud <sup>2</sup>-- Windows Admin Center-- SSH-- Microsoft Sentinel-- Azure Update Management-- Azure Extension for SQL Server--<sup>1</sup> Traffic to Log Analytics workspaces isn't covered by Arc gateway, so the FQDNs for your Log Analytics workspaces must still be allowed in your firewalls or enterprise proxies. --<sup>2</sup> To send Microsoft Defender traffic via Arc gateway, you must configure the extension’s proxy settings. --## How it works --Azure Arc gateway consists of two main components: --**The Arc gateway resource:** An Azure resource that serves as a common front-end for Azure traffic. This gateway resource is served on a specific domain. Once the Arc gateway resource is created, the domain is returned to you in the success response. --**The Arc Proxy:** A new component added to Arc agentry. This component runs as a service called "Azure Arc Proxy" and acts as a forward proxy used by the Azure Arc agents and extensions. No configuration is required on your part for the gateway router. This router is part of Arc core agentry and runs within the context of an Arc-enabled resource. --When the gateway is in place, traffic flows via the following hops: **Arc agentry → Arc Proxy → Enterprise proxy → Arc gateway → Target service** ---## Restrictions and limitations --The Arc gateway object has limits you should consider when planning your setup. These limitations apply only to the Limited public preview. These limitations might not apply when the Arc gateway feature is generally available. 
--- TLS Terminating Proxies aren't supported.-- ExpressRoute/Site-to-Site VPN used with the Arc gateway (Limited preview) isn't supported.-- The Arc gateway (Limited preview) is only supported for Azure Arc-enabled servers.-- There's a limit of five Arc gateway (Limited preview) resources per Azure subscription.--## How to use the Arc gateway (Limited preview) --After completing the [Azure Arc gateway Limited Public Preview Sign-up form](https://forms.office.com/r/bfTkU2i0Qw), your subscription will be allowed to use the feature within 1 business day. You'll receive an email when the Arc gateway (Limited preview) feature has been allowed on the subscription you submitted. --There are six main steps to use the feature: --1. Download the az connected.whl file and use it to install the az connectedmachine extension. -1. Create an Arc gateway resource. -1. Ensure the required URLs are allowed in your environment. -1. Associate new or existing Azure Arc resources with your Arc gateway resource. -1. Verify that the setup succeeded. -1. Ensure other scenarios use the Arc gateway (Linux only). --### Step 1: Download the az connectedmachine.whl file --1. Select the link to [download the az connectedmachine.whl file](https://aka.ms/ArcGatewayWhl). -- This file contains the az connected machine commands required to create and manage your gateway Resource. --1. Install the [Azure CLI](/cli/azure/install-azure-cli) (if you haven't already). --1. Execute the following command to add the connectedmachine extension: -- `az extension add --allow-preview true --source [whl file path]` --### Step 2: Create an Arc gateway resource --On a machine with access to Azure, run the following commands to create your Arc gateway resource: --```azurecli -az login --use-device-code -az account set --subscription [subscription name or id] -az connectedmachine gateway create --name [Your gateway’s Name] --resource-group [Your Resource Group] --location [Location] --gateway-type public --allowed-features * --subscription [subscription name or id] -``` -The gateway creation process takes 9-10 minutes to complete. --### Step 3: Ensure the required URLs are allowed in your environment --When the resource is created, the success response includes the Arc gateway URL. Ensure your Arc gateway URL and all URLs in the following table are allowed in the environment where your Arc resources live: --|URL |Purpose | -||| -|[Your URL Prefix].gw.arc.azure.com |Your gateway URL (This URL can be obtained by running `az connectedmachine gateway list` after you create your gateway Resource) | -|management.azure.com |Azure Resource Manager Endpoint, required for Azure Resource Manager control channel | -|login.microsoftonline.com |Microsoft Entra ID’s endpoint, for acquiring Identity access tokens | -|gbl.his.arc.azure.com |The cloud service endpoint for communicating with Azure Arc agents | -|\<region\>.his.arc.azure.com |Used for Arc’s core control channel | -|packages.microsoft.com |Required to acquire Linux based Arc agentry payload, only needed to connect Linux servers to Arc | -|download.microsoft.com |Used to download the Windows installation package | --### Step 4: Associate new or existing Azure Arc resources with your gateway resource --**To onboard a new server with Arc gateway**, generate an installation script, then edit the script to specify your gateway resource: --1. Generate the installation script. 
- Follow the instructions at [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md) to create a script that automates the downloading and installation of the Azure Connected Machine agent and establishes the connection with Azure Arc. - -1. Edit the installation script. - Your gateway Resource must be specific in the installation script. To accomplish this, a new parameter called `--gateway-id` is added to the connect command. -- **For Linux servers:** - - 1. Obtain your gateway's Resource ID by running the `az connectedmachine gateway list` command. Note the "id" parameter in the output (that is, the full ARM resource ID). - 1. In the installation script, add the "id" found in the previous step as the following parameter: `--gateway-id "[Your-gateway’s-Resource-ID]"` - - Linux server onboarding script example: -- This script template includes parameters for you to specify your enterprise proxy server. - - ``` - export subscriptionId="SubscriptionId"; - export resourceGroup="ResourceGroup"; - export tenantId="TenantID"; - export location="Region"; - export authType="AuthType"; - export cloud="AzureCloud"; - export gatewayID="gatewayResourceID"; - - # Download the installation package - output=$(wget https://aka.ms/azcmagent -e use_proxy=yes -e https_proxy="[Your Proxy URL]" -O /tmp/install_linux_azcmagent.sh 2>&1); - if [ $? != 0 ]; then wget -qO- -e use_proxy=yes -e https_proxy="[Your Proxy URL]" --method=PUT --body-data="{\"subscriptionId\":\"$subscriptionId\",\"resourceGroup\":\"$resourceGroup\",\"tenantId\":\"$tenantId\",\"location\":\"$location\",\"correlationId\":\"$correlationId\",\"authType\":\"$authType\",\"operation\":\"onboarding\",\"messageType\":\"DownloadScriptFailed\",\"message\":\"$output\"}" "https://gbl.his.arc.azure.com/log" &> || true; fi; - echo "$output"; - - # Install the hybrid agent - bash /tmp/install_linux_azcmagent.sh --proxy "[Your Proxy URL]"; - - # Run connect command - sudo azcmagent connect --resource-group "$resourceGroup" --tenant-id "$tenantId" --location "$location" --subscription-id "$subscriptionId" --cloud "$cloud" --correlation-id "$correlationId" --gateway-id "$gatewayID"; - ``` - - **For Windows servers:** - - 1. Obtain your gateway's Resource ID by running the `az connectedmachine gateway list` command. This command outputs information about all the gateway resources in your subscription. Note the ID parameter in the output (that is, the full ARM resource ID). - 1. In the **try section** of the installation script, add the ID found in the previous step as the following parameter: `--gateway-id "[Your-gateway’s-Resource-ID]"` - 1. In the **catch section** of the installation script, add the ID found in the previous step as the following parameter: `gateway-id="[Your-gateway’s-Resource-ID]"` - - Windows server onboarding script example: -- This script template includes parameters for you to specify your enterprise proxy server. -- ``` - $global:scriptPath = $myinvocation.mycommand.definition - - function Restart-AsAdmin { -     $pwshCommand = "powershell" -     if ($PSVersionTable.PSVersion.Major -ge 6) { -         $pwshCommand = "pwsh" -     } - -     try { -         Write-Host "This script requires administrator permissions to install the Azure Connected Machine Agent. Attempting to restart script with elevated permissions..." 
-         $arguments = "-NoExit -Command `"& '$scriptPath'`"" -         Start-Process $pwshCommand -Verb runAs -ArgumentList $arguments -         exit 0 -     } catch { -         throw "Failed to elevate permissions. Please run this script as Administrator." -     } - } - - try { -     if (-not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) { -         if ([System.Environment]::UserInteractive) { -             Restart-AsAdmin -         } else { -             throw "This script requires administrator permissions to install the Azure Connected Machine Agent. Please run this script as Administrator." -         } -     } - -     $env:SUBSCRIPTION_ID = "SubscriptionId"; -     $env:RESOURCE_GROUP = "ResourceGroup"; -     $env:TENANT_ID = "TenantID"; -     $env:LOCATION = "Region"; -     $env:AUTH_TYPE = "AuthType"; -     $env:CLOUD = "AzureCloud"; - $env:GATEWAY_ID = "gatewayResourceID"; - -     [Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor 3072; - -     # Download the installation package -     Invoke-WebRequest -UseBasicParsing -Uri "https://aka.ms/azcmagent-windows" -TimeoutSec 30 -OutFile "$env:TEMP\install_windows_azcmagent.ps1" -proxy "[Your Proxy URL]"; - -     # Install the hybrid agent -     & "$env:TEMP\install_windows_azcmagent.ps1" -proxy "[Your Proxy URL]"; -     if ($LASTEXITCODE -ne 0) { exit 1; } - -     # Run connect command -     & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "$env:RESOURCE_GROUP" --tenant-id "$env:TENANT_ID" --location "$env:LOCATION" --subscription-id "$env:SUBSCRIPTION_ID" --cloud "$env:CLOUD" --gateway-id "$env:GATEWAY_ID"; - } - catch { -     $logBody = @{subscriptionId="$env:SUBSCRIPTION_ID";resourceGroup="$env:RESOURCE_GROUP";tenantId="$env:TENANT_ID";location="$env:LOCATION";authType="$env:AUTH_TYPE";gatewayId="$env:GATEWAY_ID";operation="onboarding";messageType=$_.FullyQualifiedErrorId;message="$_";}; -     Invoke-WebRequest -UseBasicParsing -Uri "https://gbl.his.arc.azure.com/log" -Method "PUT" -Body ($logBody | ConvertTo-Json) -proxy "[Your Proxy URL]" | out-null; -     Write-Host  -ForegroundColor red $_.Exception; - } - ``` - - -1. Run the installation script to onboard your servers to Azure Arc. --To configure an existing machine to use Arc gateway, follow these steps: --> [!NOTE] -> The existing machine must be using the Arc-enabled servers connected machine agent version 1.43 or higher to use the Arc gateway Limited Public preview. --1. Associate your existing machine with your Arc gateway resource: -- ```azurecli - az connectedmachine setting update --resource-group [res-group] --subscription [subscription name] --base-provider Microsoft.HybridCompute --base-resource-type machines --base-resource-name [Arc-server's resource name] --settings-resource-name default --gateway-resource-id [Full Arm resourceid] - ``` - -1. Update the machine to use the Arc gateway resource. - Run the following command on the Arc-enabled server to set it to use Arc gateway: -- ```azurecli - azcmagent config set connection.type gateway - ``` -1. Await reconciliation. -- Once your machines have been updated to use the Arc gateway, some Azure Arc endpoints that were previously allowed in your enterprise proxy or firewalls won't be needed. However, there's a transition period, so allow **1 hour** before removing unneeded endpoints from your firewall/enterprise proxy. 
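Before moving on to verification, here's a condensed sketch of the association flow for an existing machine, combining the commands from step 4 (the resource names are placeholders, and the `--query` expression assumes the `id` field noted above appears in the `az connectedmachine gateway list` output):

```azurecli
# Look up the gateway's full ARM resource ID (the "id" field from step 4)
gatewayId=$(az connectedmachine gateway list --query "[0].id" -o tsv)

# Associate the existing Arc-enabled server with the gateway resource
az connectedmachine setting update \
  --resource-group "myResourceGroup" \
  --subscription "mySubscription" \
  --base-provider Microsoft.HybridCompute \
  --base-resource-type machines \
  --base-resource-name "myArcServer" \
  --settings-resource-name default \
  --gateway-resource-id "$gatewayId"

# Then, on the Arc-enabled server itself, switch the connection type
azcmagent config set connection.type gateway
```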
 - -### Step 5: Verify that the setup succeeded -On the onboarded server, run the following command: `azcmagent show` -The result should indicate the following values: --- **Agent Status** should show as **Connected**.-- **Using HTTPS Proxy** should show as **http://localhost:40343**-- **Upstream Proxy** should show as your enterprise proxy (if you set one)--Additionally, to verify successful setup, you can run the following command: `azcmagent check` -The result should indicate that the `connection.type` is set to gateway, and the **Reachable** column should indicate **true** for all URLs. --### Step 6: Ensure additional scenarios use the Arc gateway (Linux only) --On Linux, to use Azure Monitor or Microsoft Defender for Endpoint, additional commands need to be executed to work with the Azure Arc gateway (Limited preview). --For **Azure Monitor**, explicit proxy settings should be provided when deploying Azure Monitor Agent. From Azure Cloud Shell, execute the following commands: --``` -$settings = @{"proxy" = @{mode = "application"; address = "http://127.0.0.1:40343"; auth = false}} --New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settings -``` --If you’re deploying Azure Monitor through the Azure portal, be sure to select the **Use Proxy** setting and set the **Proxy Address** to `http://127.0.0.1:40343`. --For **Microsoft Defender for Endpoint**, run the following command: --`mdatp config proxy set --value http://127.0.0.1:40343` --## Cleanup instructions --To clean up your gateway, detach the gateway resource from the applicable server(s); the resource can then be deleted safely: --1. Set the connection type of the Azure Arc-enabled server to "direct" instead of "gateway": -- `azcmagent config set connection.type direct` --1. Run the following command to delete the resource: -- `az connectedmachine gateway delete --resource-group [resource group name] --gateway-name [gateway resource name]` -- This operation can take a couple of minutes. --## Troubleshooting --You can audit your Arc gateway’s traffic by viewing the gateway Router’s logs. --To view gateway Router logs on **Windows**: -1. Run `azcmagent logs` in PowerShell. -1. In the resulting .zip file, the logs are located in the `C:\ProgramData\Microsoft\ArcGatewayRouter` folder. --To view gateway Router logs on **Linux**: -1. Run `sudo azcmagent logs`. -1. In the resulting log file, the logs are located in the `/usr/local/arcrtr/logs/` folder. --## Known issues --It's not yet possible to use the Azure CLI to disassociate a gateway Resource from an Arc-enabled server. To make an Arc-enabled server stop using an Arc gateway, use the `azcmagent config set connection.type direct` command. This command configures the Arc-enabled resource to use the direct route instead of the Arc gateway. - |
azure-arc | Azcmagent Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-check.md | - Title: azcmagent check CLI reference -description: Syntax for the azcmagent check command line tool - Previously updated : 05/22/2024---# azcmagent check --Run a series of network connectivity checks to see if the agent can successfully communicate with required network endpoints. The command outputs a table showing connectivity test results for each required endpoint, including whether the agent used a private endpoint and/or proxy server. --## Usage --``` -azcmagent check [flags] -``` --## Examples --Check connectivity with the agent's configured cloud and region. --``` -azcmagent check -``` --Check connectivity with the East US region using public endpoints. --``` -azcmagent check --location "eastus" -``` --Check connectivity for supported extensions (SQL Server enabled by Azure Arc) using public endpoints: --``` -azcmagent check --extensions all -``` ---Check connectivity with the Central India region using private endpoints. --``` -azcmagent check --location "centralindia" --enable-pls-check -``` --## Flags --`--cloud` --Specifies the Azure cloud instance. Must be used with the `--location` flag. If the machine is already connected to Azure Arc, the default value is the cloud to which the agent is already connected. Otherwise, the default value is AzureCloud. --Supported values: --* AzureCloud (public regions) -* AzureUSGovernment (Azure US Government regions) -* AzureChinaCloud (Microsoft Azure operated by 21Vianet regions) --`-e`, `--extensions` --Includes extra checks for extension endpoints to help validate end-to-end scenario readiness. This flag is available in agent version 1.41 and later. --Supported values: --* all (checks all supported extension endpoints) -* sql (SQL Server enabled by Azure Arc) --`-l`, `--location` --The Azure region to check connectivity with. If the machine is already connected to Azure Arc, the current region is selected as the default. --Sample value: westeurope --`-p`, `--enable-pls-check` --Checks if supported Azure Arc endpoints resolve to private IP addresses. This flag should be used when you intend to connect the server to Azure using an Azure Arc private link scope. - |
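The `--cloud` flag described above isn't shown in the article's examples; a minimal sketch of pairing it with `--location` for a sovereign-cloud connectivity check (the region shown is just an illustrative Azure US Government region):

```
azcmagent check --cloud "AzureUSGovernment" --location "usgovvirginia"
```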
azure-arc | Azcmagent Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-config.md | - Title: azcmagent config CLI reference -description: Syntax for the azcmagent config command line tool - Previously updated : 04/20/2023---# azcmagent config --Configure settings for the Azure connected machine agent. Configurations are stored locally and are unique to each machine. Available configuration properties vary by agent version. Use [azcmagent config info](#azcmagent-config-info) to see all available configuration properties and supported values for the currently installed agent. --## Commands --| Command | Purpose | -| - | - | -| [azcmagent config clear](#azcmagent-config-clear) | Clear a configuration property's value | -| [azcmagent config get](#azcmagent-config-get) | Gets a configuration property's value | -| [azcmagent config info](#azcmagent-config-info) | Describes all available configuration properties and supported values | -| [azcmagent config list](#azcmagent-config-list) | Lists all configuration properties and values | -| [azcmagent config set](#azcmagent-config-set) | Set a value for a configuration property | --## azcmagent config clear --Clear a configuration property's value and reset it to its default state. --### Usage --``` -azcmagent config clear [property] [flags] -``` --### Examples --Clear the proxy server URL property. --``` -azcmagent config clear proxy.url -``` --### Flags ---## azcmagent config get --Get a configuration property's value. --### Usage --``` -azcmagent config get [property] [flags] -``` --### Examples --Get the agent mode. --``` -azcmagent config get config.mode -``` --### Flags ---## azcmagent config info --Describes available configuration properties and supported values. When run without specifying a property, the command describes all available properties and their supported values. --### Usage --``` -azcmagent config info [property] [flags] -``` --### Examples --Describe all available configuration properties and supported values. --``` -azcmagent config info -``` --Learn more about the extensions allowlist property and its supported values. --``` -azcmagent config info extensions.allowlist -``` --### Flags ---## azcmagent config list --Lists all configuration properties and their current values. --### Usage --``` -azcmagent config list [flags] -``` --### Examples --List the current agent configuration. --``` -azcmagent config list -``` --### Flags ---## azcmagent config set --Set a value for a configuration property. --### Usage --``` -azcmagent config set [property] [value] [flags] -``` --### Examples --Configure the agent to use a proxy server. --``` -azcmagent config set proxy.url "http://proxy.contoso.corp:8080" -``` --Append an extension to the extension allowlist. --``` -azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent" --add -``` --### Flags --`-a`, `--add` --Append the value to the list of existing values. If not specified, the default behavior is to replace the list of existing values. This flag is only supported for configuration properties that support more than one value. Can't be used with the `--remove` flag. --`-r`, `--remove` --Remove the specified value from the list, retaining all other values. If not specified, the default behavior is to replace the list of existing values. This flag is only supported for configuration properties that support more than one value. Can't be used in conjunction with the `--add` flag. - |
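The `--remove` flag described above has no example in the article itself; a small sketch that undoes the allowlist addition shown earlier:

```
azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent" --remove
```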
azure-arc | Azcmagent Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-connect.md | - Title: azcmagent connect CLI reference -description: Syntax for the azcmagent connect command line tool - Previously updated : 10/05/2023---# azcmagent connect --Connects the server to Azure Arc by creating a metadata representation of the server in Azure and associating the Azure connected machine agent with it. The command requires information about the tenant, subscription, and resource group where you want to represent the server in Azure and valid credentials with permissions to create Azure Arc-enabled server resources in that location. --## Usage --``` -azcmagent connect [authentication] --subscription-id [subscription] --resource-group [resourcegroup] --location [region] [flags] -``` --## Examples --Connect a server using the default login method (interactive browser or device code). --``` -azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "eastus" -``` --``` -azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "eastus" --use-device-code -``` --Connect a server using a service principal. --``` -azcmagent connect --subscription-id "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" --resource-group "HybridServers" --location "australiaeast" --service-principal-id "ID" --service-principal-secret "SECRET" --tenant-id "TENANT" -``` --Connect a server using a private endpoint and device code login method. --``` -azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "koreacentral" --use-device-code --private-link-scope "/subscriptions/.../Microsoft.HybridCompute/privateLinkScopes/ScopeName" -``` --## Authentication options --There are four ways to provide authentication credentials to the Azure connected machine agent. Choose one authentication option and replace the `[authentication]` section in the usage syntax with the recommended flags. --### Interactive browser login (Windows-only) --This option is the default on Windows operating systems with a desktop experience. It login page opens in your default web browser. This option might be required if your organization configured conditional access policies that require you to log in from trusted machines. --No flag is required to use the interactive browser login. --### Device code login --This option generates a code that you can use to log in on a web browser on another device. This is the default option on Windows Server core editions and all Linux distributions. When you execute the connect command, you have 5 minutes to open the specified login URL on an internet-connected device and complete the login flow. --To authenticate with a device code, use the `--use-device-code` flag. If the account you're logging in with and the subscription where you're registering the server aren't in the same tenant, you must also provide the tenant ID for the subscription with `--tenant-id [tenant]`. --### Service principal with secret --Service principals allow you to authenticate non-interactively and are often used for at-scale deployments where the same script is run across multiple servers. Microsoft recommends providing service principal information via a configuration file (see `--config`) to avoid exposing the secret in any console logs. The service principal should also be dedicated for Arc onboarding and have as few permissions as possible, to limit the impact of a stolen credential. 
--To authenticate with a service principal using a secret, provide the service principal's application ID, secret, and tenant ID: `--service-principal-id [appid] --service-principal-secret [secret] --tenant-id [tenantid]` --### Service principal with certificate --Certificate-based authentication is a more secure way to authenticate using service principals. The agent accepts both PCKS #12 (.PFX) files and ASCII-encoded files (such as .PEM) that contain both the private and public keys. The certificate must be available on the local disk and the user running the `azcmagent` command needs read access to the file. Password-protected PFX files are not supported. --To authenticate with a service principal using a certificate, provide the service principal's application ID, tenant ID, and path to the certificate file: `--service-principal-id [appId] --service-principal-cert [pathToPEMorPFXfile] --tenant-id [tenantid]` --For more information, see [create a service principal for RBAC with certificate-based authentication](/cli/azure/azure-cli-sp-tutorial-3). --### Access token --Access tokens can also be used for non-interactive authentication, but are short-lived and typically used by automation solutions onboarding several servers over a short period of time. You can get an access token with [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) or any other Microsoft Entra client. --To authenticate with an access token, use the `--access-token [token]` flag. If the account you're logging in with and the subscription where you're registering the server aren't in the same tenant, you must also provide the tenant ID for the subscription with `--tenant-id [tenant]`. --## Flags --`--access-token` --Specifies the Microsoft Entra access token used to create the Azure Arc-enabled server resource in Azure. For more information, see [authentication options](#authentication-options). --`--automanage-profile` --Resource ID of an Azure Automanage best practices profile that will be applied to the server once it's connected to Azure. --Sample value: /providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction --`--cloud` --Specifies the Azure cloud instance. Must be used with the `--location` flag. If the machine is already connected to Azure Arc, the default value is the cloud to which the agent is already connected. Otherwise, the default value is "AzureCloud". --Supported values: --* AzureCloud (public regions) -* AzureUSGovernment (Azure US Government regions) -* AzureChinaCloud (Microsoft Azure operated by 21Vianet regions) --`--correlation-id` --Identifies the mechanism being used to connect the server to Azure Arc. For example, scripts generated in the Azure portal include a GUID that helps Microsoft track usage of that experience. This flag is optional and only used for telemetry purposes to improve your experience. --`--ignore-network-check` --Instructs the agent to continue onboarding even if the network check for required endpoints fails. You should only use this option if you're sure that the network check results are incorrect. In most cases, a failed network check indicates that the Azure Connected Machine agent won't function correctly on the server. --`-l`, `--location` --The Azure region to check connectivity with. If the machine is already connected to Azure Arc, the current region is selected as the default. --Sample value: westeurope --`--private-link-scope` --Specifies the resource ID of the Azure Arc private link scope to associate with the server. 
This flag is required if you're using private endpoints to connect the server to Azure. --`-g`, `--resource-group` --Name of the Azure resource group where you want to create the Azure Arc-enabled server resource. --Sample value: HybridServers --`-n`, `--resource-name` --Name for the Azure Arc-enabled server resource. By default, the resource name is: --* The AWS instance ID, if the server is on AWS -* The hostname for all other machines --You can override the default name with a name of your own choosing to avoid naming conflicts. Once chosen, the name of the Azure resource can't be changed without disconnecting and re-connecting the agent. --If you want to force AWS servers to use the hostname instead of the instance ID, pass in `$(hostname)` to have the shell evaluate the current hostname and pass that in as the new resource name. --Sample value: FileServer01 --`-i`, `--service-principal-id` --Specifies the application ID of the service principal used to create the Azure Arc-enabled server resource in Azure. Must be used with the `--tenant-id` and either the `--service-principal-secret` or `--service-principal-cert` flags. For more information, see [authentication options](#authentication-options). --`--service-principal-cert` --Specifies the path to a service principal certificate file. Must be used with the `--service-principal-id` and `--tenant-id` flags. The certificate must include a private key and can be in a PKCS #12 (.PFX) or ASCII-encoded text (.PEM, .CRT) format. Password-protected PFX files are not supported. For more information, see [authentication options](#authentication-options). --`-p`, `--service-principal-secret` --Specifies the service principal secret. Must be used with the `--service-principal-id` and `--tenant-id` flags. To avoid exposing the secret in console logs, Microsoft recommended providing the service principal secret in a configuration file. For more information, see [authentication options](#authentication-options). --`-s`, `--subscription-id` --The subscription name or ID where you want to create the Azure Arc-enabled server resource. --Sample values: Production, aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee --`--tags` --Comma-delimited list of tags to apply to the Azure Arc-enabled server resource. Each tag should be specified in the format: TagName=TagValue. If the tag name or value contains a space, use single quotes around the name or value. --Sample value: Datacenter=NY3,Application=SharePoint,Owner='Shared Infrastructure Services' --`-t`, `--tenant-id` --The tenant ID for the subscription where you want to create the Azure Arc-enabled server resource. This flag is required when authenticating with a service principal. For all other authentication methods, the home tenant of the account used to authenticate with Azure is used for the resource as well. If the tenants for the account and subscription are different (guest accounts, Lighthouse), you must specify the tenant ID to clarify the tenant where the subscription is located. --`--use-device-code` --Generate a Microsoft Entra device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options). --`--user-tenant-id` --The tenant ID for the account used to connect the server to Azure. This field is required when the tenant of the onboarding account isn't the same as the desired tenant for the Azure Arc-enabled server resource. - |
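Several of the flags documented above, such as `--resource-name` and `--tags`, don't appear in the article's examples; a sketch that combines them using the sample values given in the flag descriptions:

```
azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "westeurope" --resource-name "FileServer01" --tags "Datacenter=NY3,Application=SharePoint,Owner='Shared Infrastructure Services'"
```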
azure-arc | Azcmagent Disconnect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-disconnect.md | - Title: azcmagent disconnect CLI reference -description: Syntax for the azcmagent disconnect command line tool - Previously updated : 04/20/2023---# azcmagent disconnect --Deletes the Azure Arc-enabled server resource in the cloud and resets the configuration of the local agent. For detailed information on removing extensions and disconnecting and uninstalling the agent, see [uninstall the agent](manage-agent.md#uninstall-the-agent). --## Usage --``` -azcmagent disconnect [authentication] [flags] -``` --## Examples --Disconnect a server using the default login method (interactive browser or device code). --``` -azcmagent disconnect -``` --Disconnect a server using a service principal. --``` -azcmagent disconnect --service-principal-id "ID" --service-principal-secret "SECRET" -``` --Disconnect a server if the corresponding resource in Azure has already been deleted. --``` -azcmagent disconnect --force-local-only -``` --## Authentication options --There are four ways to provide authentication credentials to the Azure connected machine agent. Choose one authentication option and replace the `[authentication]` section in the usage syntax with the recommended flags. --> [!NOTE] -> The account used to disconnect a server must be from the same tenant as the subscription where the server is registered. --### Interactive browser login (Windows-only) --This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option might be required if your organization configured conditional access policies that require you to log in from trusted machines. --No flag is required to use the interactive browser login. --### Device code login --This option generates a code that you can use to log in on a web browser on another device. This is the default option on Windows Server core editions and all Linux distributions. When you execute the connect command, you have 5 minutes to open the specified login URL on an internet-connected device and complete the login flow. --To authenticate with a device code, use the `--use-device-code` flag. --### Service principal with secret --Service principals allow you to authenticate non-interactively and are often used for at-scale operations where the same script is run across multiple servers. It's recommended that you provide service principal information via a configuration file (see `--config`) to avoid exposing the secret in any console logs. The service principal should also be dedicated for Arc onboarding and have as few permissions as possible, to limit the impact of a stolen credential. --To authenticate with a service principal using a secret, provide the service principal's application ID, secret, and tenant ID: `--service-principal-id [appid] --service-principal-secret [secret] --tenant-id [tenantid]` --### Service principal with certificate --Certificate-based authentication is a more secure way to authenticate using service principals. The agent accepts both PCKS #12 (.PFX) files and ASCII-encoded files (such as .PEM) that contain both the private and public keys. The certificate must be available on the local disk and the user running the `azcmagent` command needs read access to the file. Password-protected PFX files are not supported. 
--To authenticate with a service principal using a certificate, provide the service principal's application ID, tenant ID, and path to the certificate file: `--service-principal-id [appId] --service-principal-cert [pathToPEMorPFXfile] --tenant-id [tenantid]` --For more information, see [create a service principal for RBAC with certificate-based authentication](/cli/azure/azure-cli-sp-tutorial-3). --### Access token --Access tokens can also be used for non-interactive authentication, but are short-lived and typically used by automation solutions operating on several servers over a short period of time. You can get an access token with [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) or any other Microsoft Entra client. --To authenticate with an access token, use the `--access-token [token]` flag. --## Flags --`--access-token` --Specifies the Microsoft Entra access token used to create the Azure Arc-enabled server resource in Azure. For more information, see [authentication options](#authentication-options). --`-f`, `--force-local-only` --Disconnects the server without deleting the resource in Azure. Primarily used if the Azure resource was deleted and the local agent configuration needs to be cleaned up. --`-i`, `--service-principal-id` --Specifies the application ID of the service principal used to create the Azure Arc-enabled server resource in Azure. Must be used with the `--tenant-id` and either the `--service-principal-secret` or `--service-principal-cert` flags. For more information, see [authentication options](#authentication-options). --`--service-principal-cert` --Specifies the path to a service principal certificate file. Must be used with the `--service-principal-id` and `--tenant-id` flags. The certificate must include a private key and can be in a PKCS #12 (.PFX) or ASCII-encoded text (.PEM, .CRT) format. Password-protected PFX files are not supported. For more information, see [authentication options](#authentication-options). --`-p`, `--service-principal-secret` --Specifies the service principal secret. Must be used with the `--service-principal-id` and `--tenant-id` flags. To avoid exposing the secret in console logs, Microsoft recommends providing the service principal secret in a configuration file. For more information, see [authentication options](#authentication-options). --`--use-device-code` --Generate a Microsoft Entra device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options). --`--user-tenant-id` --The tenant ID for the account used to connect the server to Azure. This field is required when the tenant of the onboarding account isn't the same as the desired tenant for the Azure Arc-enabled server resource. - |
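The access token option described above has no corresponding example in the article; a minimal sketch (the token itself would come from `Get-AzAccessToken` or another Microsoft Entra client, as noted earlier):

```
azcmagent disconnect --access-token "[token]"
```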
azure-arc | Azcmagent Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-extension.md | - Title: azcmagent extension CLI reference -description: Syntax for the azcmagent extension command line tool - Previously updated : 03/11/2024---# azcmagent extension --Local management of Azure Arc extensions installed on the machine. These commands can be run even when a machine is in a disconnected state. --The extension manager must be stopped before running any of these commands. Stopping the extension manager interrupts any in-progress extension installs, upgrades, and removals. To disable the extension manager, run `Stop-Service ExtensionService` on Windows or `systemctl stop extd` on Linux. When you're done managing extensions locally, start the extension manager again with `Start-Service ExtensionService` on Windows or `systemctl start extd` on Linux. --## Commands --| Command | Purpose | -| - | - | -| [azcmagent extension list](#azcmagent-extension-list) | Lists extensions installed on the machine | -| [azcmagent extension remove](#azcmagent-extension-remove) | Uninstalls extensions on the machine | --## azcmagent extension list --Lists extensions installed on the machine. --### Usage --``` -azcmagent extension list [flags] -``` --### Examples --See which extensions are installed on your machine. --``` -azcmagent extension list -``` --### Flags ---## azcmagent extension remove --Uninstalls extensions on the machine. --### Usage --``` -azcmagent extension remove [flags] -``` --### Examples --Remove the "AzureMonitorWindowsAgent" extension from the local machine. --``` -azcmagent extension remove --name AzureMonitorWindowsAgent -``` --Remove all extensions from the local machine. --``` -azcmagent extension remove --all -``` --### Flags --`--all`, `-a` --Removes all extensions from the machine. --`--name`, `-n` --Removes the specified extension from the machine. Use [azcmagent extension list](#azcmagent-extension-list) to get the name of the extension. - |
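As a rough end-to-end sketch of the stop, manage, and restart workflow described above on a Linux machine (the extension name shown is only an example):

```
# Stop the extension manager before local management; this interrupts any
# in-progress extension installs, upgrades, and removals.
sudo systemctl stop extd

# Review what is installed, then remove an extension by name.
azcmagent extension list
azcmagent extension remove --name AzureMonitorLinuxAgent

# Resume normal extension management when finished.
sudo systemctl start extd
```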
azure-arc | Azcmagent Genkey | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-genkey.md | - Title: azcmagent genkey CLI reference -description: Syntax for the azcmagent genkey command line tool - Previously updated : 04/20/2023---# azcmagent genkey --Generates a private-public key pair that can be used to onboard a machine asynchronously. This command is used when connecting a server to an Azure Arc-enabled virtual machine offering (for example, [Azure Arc-enabled VMware vSphere VMs](../vmware-vsphere/overview.md)). You should normally use [azcmagent connect](azcmagent-connect.md) to configure the agent. --## Usage --``` -azcmagent genkey [flags] -``` --## Examples --Generate a key pair and print the public key to the console. --``` -azcmagent genkey -``` --## Flags - |
azure-arc | Azcmagent Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-help.md | - Title: azcmagent help CLI reference -description: Syntax for the azcmagent help command line tool - Previously updated : 04/20/2023---# azcmagent help --Prints usage information and a list of all available commands for the Azure Connected Machine agent CLI. For help with a particular command, use `azcmagent COMMANDNAME --help`. --## Usage --``` -azcmagent help [flags] -``` --## Examples --Show all available commands for the command line interface. --``` -azcmagent help -``` --## Flags - |
azure-arc | Azcmagent License | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-license.md | - Title: azcmagent license CLI reference -description: Syntax for the azcmagent license command line tool - Previously updated : 04/20/2023---# azcmagent license --Show the license agreement for the Azure Connected Machine agent. --## Usage --``` -azcmagent license [flags] -``` --## Examples --Show the license agreement. --``` -azcmagent license -``` --## Flags - |
azure-arc | Azcmagent Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-logs.md | - Title: azcmagent logs CLI reference -description: Syntax for the azcmagent logs command line tool - Previously updated : 04/20/2023---# azcmagent logs --Collects log files for the Azure connected machine agent and extensions into a ZIP archive. --## Usage --``` -azcmagent logs [flags] -``` --## Examples --Collect the most recent log files and store them in a ZIP archive in the current directory. --``` -azcmagent logs -``` --Collect all log files and store them in a specific location. --``` -azcmagent logs --full --output "/tmp/azcmagent-logs.zip" -``` --## Flags --`-f`, `--full` --Collect all log files on the system instead of just the most recent. Useful when troubleshooting older problems. --`-o`, `--output` --Specifies the path and name for the ZIP file. If this flag isn't specified, the ZIP is saved to the console's current directory with the name "azcmagent-_TIMESTAMP_-_COMPUTERNAME_.zip" --Sample value: custom-logname.zip - |
azure-arc | Azcmagent Show | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-show.md | - Title: azcmagent show CLI reference -description: Syntax for the azcmagent show command line tool - Previously updated : 06/06/2023---# azcmagent show --Displays the current state of the Azure Connected Machine agent, including whether or not it's connected to Azure, the Azure resource information, and the status of dependent services. --> [!NOTE] -> **azcmagent show** does not require administrator privileges --## Usage --``` -azcmagent show [property1] [property2] ... [propertyN] [flags] -``` --## Examples --Check the status of the agent. --``` -azcmagent show -``` --Check the status of the agent and save it in a JSON file in the current directory. --``` -azcmagent show -j > "agent-status.json" -``` --Show only the agent status and last heartbeat time (using display names) --``` -azcmagent show "Agent Status" "Agent Last Heartbeat" -``` --Show only the agent status and last heartbeat time (using JSON keys) --``` -azcmagent show status lastHeartbeat -``` --## Flags --`[property]` --The name of a property to include in the output. If you want to show more than one property, separate them by spaces. You can use either the display name or the JSON key name to specify a property. For display names with spaces, enclose the property in quotes. --`--os` --Outputs additional information about the operating system. - |
azure-arc | Azcmagent Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-version.md | - Title: azcmagent version CLI reference -description: Syntax for the azcmagent version command line tool - Previously updated : 04/20/2023---# azcmagent version --Shows the version of the currently installed agent. --## Usage --``` -azcmagent version [flags] -``` --## Examples --Show the agent version. --``` -azcmagent version -``` --## Flags - |
azure-arc | Azcmagent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent.md | - Title: azcmagent CLI reference -description: Reference documentation for the Azure Connected Machine agent command line tool - Previously updated : 04/20/2023---# azcmagent CLI reference --The Azure Connected Machine agent command line tool, azcmagent, helps you configure, manage, and troubleshoot a server's connection with Azure Arc. The azcmagent CLI is installed with the Azure Connected Machine agent and controls actions specific to the server where it's running. Once the server is connected to Azure Arc, you can use the [Azure CLI](/cli/azure/connectedmachine) or [Azure PowerShell](/powershell/module/az.connectedmachine/) module to enable extensions, manage tags, and perform other operations on the server resource. --Unless otherwise specified, the command syntax and flags represent available options in the most recent release of the Azure Connected Machine agent. For more information, see [What's new with the Azure Connected Machine agent](agent-release-notes.md). --## Commands --| Command | Purpose | -| - | - | -| [azcmagent check](azcmagent-check.md) | Run network connectivity checks for Azure Arc endpoints | -| [azcmagent config](azcmagent-config.md) | Manage agent settings | -| [azcmagent connect](azcmagent-connect.md) | Connect the server to Azure Arc | -| [azcmagent disconnect](azcmagent-disconnect.md) | Disconnect the server from Azure Arc | -| [azcmagent genkey](azcmagent-genkey.md) | Generate a public-private key pair for asynchronous onboarding | -| [azcmagent help](azcmagent-help.md) | Get help for commands | -| [azcmagent license](azcmagent-license.md) | Display the end-user license agreement | -| [azcmagent logs](azcmagent-logs.md) | Collect logs to troubleshoot agent issues | -| [azcmagent show](azcmagent-show.md) | Display the agent status | -| [azcmagent version](azcmagent-version.md) | Display the agent version | --## Frequently asked questions --### How can I install the azcmagent CLI? --The azcmagent CLI is bundled with the Azure Connected Machine agent. Review your [deployment options](deployment-options.md) for Azure Arc to learn how to install and configure the agent. --### Where is the CLI installed? --On Windows operating systems, the CLI is installed at `%PROGRAMFILES%\AzureConnectedMachineAgent\azcmagent.exe`. This path is automatically added to the system PATH variable during the installation process. You may need to close and reopen your console to refresh the PATH variable and be able to run `azcmagent` without specifying the full path. --On Linux operating systems, the CLI is installed at `/opt/azcmagent/bin/azcmagent` --### What's the difference between the azcmagent CLI and the Azure CLI for Azure Arc-enabled servers? --The azcmagent CLI is used to configure the local agent. It's responsible for connecting the agent to Azure, disconnecting it, and configuring local settings like proxy URLs and security features. --The Azure CLI and other management experiences are used to interact with the Azure Arc resource in Azure once the agent is connected. These tools help you manage extensions, move the resource to another subscription or resource group, and change certain settings of the Arc server remotely. |
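As a hedged illustration of the split described in the last FAQ answer: the azcmagent CLI runs on the server itself, while the Arc resource in Azure is typically managed with the Azure CLI `connectedmachine` extension. The resource names below are placeholders.

```
# Local: check the agent's state on the server itself.
azcmagent show

# Cloud: inspect the Arc-enabled server resource and its extensions once connected.
# Requires the "connectedmachine" Azure CLI extension; names are examples.
az connectedmachine show --name myArcServer --resource-group myResourceGroup
az connectedmachine extension list --machine-name myArcServer --resource-group myResourceGroup
```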
azure-arc | Billing Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/billing-extended-security-updates.md | - Title: Billing service for Extended Security Updates for Windows Server 2012 through Azure Arc -description: Learn about billing services for Extended Security Updates for Windows Server 2012 enabled by Azure Arc. Previously updated : 04/10/2023----# Billing service for Extended Security Updates for Windows Server 2012 enabled by Azure Arc --Three factors impact billing for Extended Security Updates (ESUs): --- The number of cores provisioned-- The edition of the license (Standard vs. Datacenter)-- The application of any eligible discounts--Billing is monthly. Decrementing, deactivating, or deleting a license results in charges for up to five more calendar days from the time of decrement, deactivation, or deletion. Reduction in billing isn't immediate. This is an Azure-billed service and can be used to decrement a customer's Microsoft Azure Consumption Commitment (MACC) and be eligible for Azure Consumption Discount (ACD). --> [!NOTE] -> Licenses or additional cores provisioned after End of Support are subject to a one-time back-billing charge during the month in which the license was provisioned. This isn't reflective of the recurring monthly bill. --## Back-billing for ESUs enabled by Azure Arc --Licenses that are provisioned after the End of Support (EOS) date of October 10, 2023 are charged a back bill for the time elapsed since the EOS date. For example, an ESU license provisioned in December 2023 is back-billed for October and November upon provisioning. Enrolling late in WS2012 ESUs makes you eligible for all the critical security patches up to that point. The back-billing charge reflects the value of these critical security patches. --If you deactivate and then later reactivate a license, you're billed for the window during which the license was deactivated. It isn't possible to evade charges by deactivating a license before a critical security patch and reactivating it shortly before. --If the region or the tenant of an ESU license is changed, this is subject to back-billing charges. --> [!NOTE] -> The back-billing cost appears as a separate line item in invoicing. If you acquired a discount for your core WS2012 ESUs enabled by Azure Arc, the same discount may or may not apply to back-billing. You should verify that the same discounting, if applicable, has been applied to back-billing charges as well. -> --Note that estimates in the Azure Cost Management forecast may not accurately project monthly costs. Due to the episodic nature of back-billing charges, the projection of monthly costs may appear as overestimated during initial months. --## Billing associated with modifications to an Azure Arc ESU license --- **License type:** License type (either Standard or Datacenter) is an immutable property. The billing associated with a license is specific to the edition of the provisioned license.-- > [!NOTE] - > If you previously provisioned a Datacenter Virtual Core license, it will be charged with and offer the virtualization benefits associated with the pricing of a Datacenter edition license. - > --- **Core modification:** If cores are added to an existing ESU license, they're subject to back-billing (that is, charges for the time elapsed since EOS) and regularly billed from the calendar month in which they were added. 
If cores are reduced or decremented from an existing ESU license, the billing rate reflects the reduced number of cores within 5 days of the change.--- **Activation:** Licenses are billed for their number and edition of cores from the point at which they're activated. The activated license doesn't need to be linked to any Azure Arc-enabled servers to initiate billing. Activation and reactivation are subject to back-billing. Note that licenses that were activated but not linked to any servers may be back-billed if they weren't billed upon creation. Customers are responsible for deletion of any activated but unlinked ESU licenses.--- **Deactivation or deletion:** Licenses that are deactivated or deleted will continue to be billed for up to five calendar days from the time of the change.--## Services included with WS2012 ESUs enabled by Azure Arc --Purchase of Windows Server 2012/R2 ESUs enabled by Azure Arc provides you with the benefit of access to more Azure management services at no additional cost for enrolled servers. See [Access to Azure services](prepare-extended-security-updates.md#access-to-azure-services) to learn more. --Azure Arc-enabled servers allow you the flexibility to evaluate and operationalize Azure's robust security, monitoring, and governance capabilities for your non-Azure infrastructure, delivering key value beyond the observability, ease of enrollment, and financial flexibility of WS2012 ESUs enabled by Azure Arc. --## Additional notes --- You'll be billed if you connect an activated Azure Arc ESU license to environments like Azure Stack HCI or Azure VMware Solution. These environments are eligible for free Windows Server 2012 ESUs enabled by Azure Arc and shouldn't be activated through Azure Arc.--- You'll be billed for all of the cores provisioned in the license. If you provision licenses for free ESU usage, like Visual Studio Development environments, you shouldn't provision additional cores for the scope of licensing applied to non-paid ESU coverage.--- Migration and modernization of End-of-Life infrastructure to Azure, including Azure VMware Solution and Azure Stack HCI, can reduce the need for paid WS2012 ESUs. You must decrement the cores with their Azure Arc ESU licenses or deactivate and delete ESU licenses to benefit from the cost savings associated with Azure Arc's flexible monthly billing model. This isn't an automatic process.- -- For customers seeking to transition from Volume Licensing based MAK Keys for Year 1 of WS2012/R2 ESUs to WS2012/R2 ESUs enabled by Azure Arc for Year 2, [there's a transition process](license-extended-security-updates.md#scenario-5-you-have-already-purchased-the-traditional-windows-server-2012-esus-through-volume-licensing) that is exempt from back-billing. - |
azure-arc | Concept Log Analytics Extension Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/concept-log-analytics-extension-deployment.md | - Title: Deploy Azure Monitor agent on Arc-enabled servers -description: This article reviews the different methods to deploy the Azure Monitor agent on Windows and Linux-based machines registered with Azure Arc-enabled servers in your local datacenter or other cloud environment. Previously updated : 05/08/2024-----# Deployment options for Azure Monitor agent on Azure Arc-enabled servers --Azure Monitor supports multiple methods to install the Azure Monitor agent and connect your machine or server registered with Azure Arc-enabled servers to the service. Azure Arc-enabled servers support the Azure VM extension framework, which provides post-deployment configuration and automation tasks, enabling you to simplify management of your hybrid machines like you can with Azure VMs. --The Azure Monitor agent is required if you want to: --* Monitor the operating system and any workloads running on the machine or server using [VM insights](/azure/azure-monitor/vm/vminsights-overview) -* Analyze and alert using [Azure Monitor](/azure/azure-monitor/overview) -* Perform security monitoring in Azure by using [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction) or [Microsoft Sentinel](../../sentinel/overview.md) -* Collect inventory and track changes by using [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) --> [!NOTE] -> Azure Monitor agent logs are stored locally and are updated after temporary disconnection of an Arc-enabled machine. -> --This article reviews the deployment methods for the Azure Monitor agent VM extension, across multiple production physical servers or virtual machines in your environment, to help you determine which works best for your organization. If you are interested in the new Azure Monitor agent and want to see a detailed comparison, see [Azure Monitor agents overview](/azure/azure-monitor/agents/agents-overview). --## Installation options --Review the different methods to install the VM extension using one method or a combination and determine which one works best for your scenario. --### Use Azure Arc-enabled servers --This method supports managing the installation, management, and removal of VM extensions (including the Azure Monitor agent) from the [Azure portal](manage-vm-extensions-portal.md), using [PowerShell](manage-vm-extensions-powershell.md), the [Azure CLI](manage-vm-extensions-cli.md), or with an [Azure Resource Manager (ARM) template](manage-vm-extensions-template.md). --#### Advantages --* Can be useful for testing purposes -* Useful if you have a few machines to manage --#### Disadvantages --* Limited automation when using an Azure Resource Manager template -* Can only focus on a single Arc-enabled server, and not multiple instances -* Only supports specifying a single workspace to report to; requires using PowerShell or the Azure CLI to configure the Log Analytics Windows agent VM extension to report to up to four workspaces -* Doesn't support deploying the Dependency agent from the portal; you can only use PowerShell, the Azure CLI, or ARM template --### Use Azure Policy --You can use Azure Policy to deploy the Azure Monitor agent VM extension at-scale to machines in your environment, and maintain configuration compliance. 
This is accomplished by using either the [**Configure Linux Arc-enabled machines to run Azure Monitor Agent**](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F845857af-0333-4c5d-bbbc-6076697da122) or the [**Configure Windows Arc-enabled machines to run Azure Monitor Agent**](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94f686d6-9a24-4e19-91f1-de937dc171a4) policy definition. --Azure Policy includes several prebuilt definitions related to Azure Monitor. For a complete list of the built-in policies in the **Monitoring** category, see [Azure Policy built-in definitions for Azure Monitor](/azure/azure-monitor/policy-reference). --#### Advantages --* Reinstalls the VM extension if removed (after policy evaluation) -* Identifies and installs the VM extension when a new Azure Arc-enabled server is registered with Azure --#### Disadvantages --* The **Configure** *operating system* **Arc-enabled machines to run Azure Monitor Agent** policy only installs the Azure Monitor agent extension and configures the agent to report to a specified Log Analytics workspace. -* Standard compliance evaluation cycle is once every 24 hours. An evaluation scan for a subscription or a resource group can be started with Azure CLI, Azure PowerShell, a call to the REST API, or by using the Azure Policy Compliance Scan GitHub Action. For more information, see [Evaluation triggers](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). --### Use Azure Automation --The process automation operating environment in Azure Automation and its support for PowerShell and Python runbooks can help you automate the deployment of the Azure Monitor agent VM extension at scale to machines in your environment. --#### Advantages --* Can use a scripted method to automate its deployment and configuration using scripting languages you're familiar with -* Runs on a schedule that you define and control -* Authenticate securely to Arc-enabled servers from the Automation account using a managed identity --#### Disadvantages --* Requires an Azure Automation account -* Experience authoring and managing runbooks in Azure Automation -* Must create a runbook based on PowerShell or Python, depending on the target operating system --### Use Azure portal --The Azure Monitor agent VM extension can be installed using the Azure portal. See [Automatic extension upgrade for Azure Arc-enabled servers](manage-automatic-vm-extension-upgrade.md) for more information about installing extensions from the Azure portal. --#### Advantages --* Point and click directly from Azure portal -* Useful for testing with small set of servers -* Immediate deployment of extension --#### Disadvantages --* Not scalable to many servers -* Limited automation --## Next steps --* To start collecting security-related events with Microsoft Sentinel, see [onboard to Microsoft Sentinel](scenario-onboard-azure-sentinel.md), or to collect with Microsoft Defender for Cloud, see [onboard to Microsoft Defender for Cloud](../../security-center/quickstart-onboard-machines.md). --* Read the VM insights [Monitor performance](/azure/azure-monitor/vm/vminsights-performance) and [Map dependencies](/azure/azure-monitor/vm/vminsights-maps) articles to see how well your machine is performing and view discovered application components. |
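If you don't want to wait for the standard 24-hour evaluation cycle mentioned above, a compliance scan can be triggered on demand. A minimal sketch with a placeholder resource group name:

```
# Start an on-demand Azure Policy compliance scan for a single resource group.
az policy state trigger-scan --resource-group myArcServersRg
```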
azure-arc | Deliver Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md | - Title: Deliver Extended Security Updates for Windows Server 2012 -description: Learn how to deliver Extended Security Updates for Windows Server 2012. Previously updated : 02/20/2024----# Deliver Extended Security Updates for Windows Server 2012 --This article provides steps to enable delivery of Extended Security Updates (ESUs) to Windows Server 2012 machines onboarded to Arc-enabled servers. You can enable ESUs to these machines individually or at scale. --## Before you begin --Plan and prepare to onboard your machines to Azure Arc-enabled servers. See [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md) to learn more. --You'll also need the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in [Azure RBAC](../../role-based-access-control/overview.md) to create and assign ESUs to Arc-enabled servers. --## Manage ESU licenses --1. From your browser, sign in to the [Azure portal](https://portal.azure.com). --1. On the **Azure Arc** page, select **Extended Security Updates** in the left pane. -- :::image type="content" source="media/deliver-extended-security-updates/extended-security-updates-main-window.png" alt-text="Screenshot of main ESU window showing licenses tab and eligible resources tab." lightbox="media/deliver-extended-security-updates/extended-security-updates-main-window.png"::: -- From here, you can view and create ESU **Licenses** and view **Eligible resources** for ESUs. --> [!NOTE] -> When viewing all your Arc-enabled servers from the **Servers** page, a banner specifies how many Windows 2012 machines are eligible for ESUs. You can then select **View servers in Extended Security Updates** to view a list of resources that are eligible for ESUs, together with machines already ESU enabled. -> -## Create Azure Arc WS2012 licenses --The first step is to provision Windows Server 2012 and 2012 R2 Extended Security Update licenses from Azure Arc. You link these licenses to one or more Arc-enabled servers that you select in the next section. --When you provision an ESU license, you need to specify the SKU (Standard or Datacenter), type of cores (Physical or vCore), and number of 16-core and 2-core packs. You can also provision an Extended Security Update license in a deactivated state so that it won't initiate billing or be functional on creation. Moreover, the cores associated with the license can be modified after provisioning. --> [!NOTE] -> The provisioning of ESU licenses requires you to attest to their SA or SPLA coverage. -> --The **Licenses** tab displays Azure Arc WS2012 licenses that are available. From here, you can select an existing license to apply or create a new license. ---1. To create a new WS2012 license, select **Create**, and then provide the information required to configure the license on the page. -- For details on how to complete this step, see [License provisioning guidelines for Extended Security Updates for Windows Server 2012](license-extended-security-updates.md). --1. Review the information provided, and then select **Create**. -- The license you created appears in the list and you can link it to one or more Arc-enabled servers by following the steps in the next section. 
-- :::image type="content" source="media/deliver-extended-security-updates/extended-security-updates-new-license.png" alt-text="Screenshot of licenses tab showing the newly created license in the list." lightbox="media/deliver-extended-security-updates/extended-security-updates-new-license.png"::: --## Link ESU licenses to Arc-enabled servers --You can select one or more Arc-enabled servers to link to an Extended Security Update license. Once you've linked a server to an activated ESU license, the server is eligible to receive Windows Server 2012 and 2012 R2 ESUs. --> [!NOTE] -> You have the flexibility to configure your patching solution of choice to receive these updates, whether that's [Update Manager](../../update-center/overview.md), [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus), Microsoft Updates, [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), or a third-party patch management solution. -> -1. Select the **Eligible Resources** tab to view a list of all your Arc-enabled servers running Windows Server 2012 and 2012 R2. -- :::image type="content" source="media/deliver-extended-security-updates/extended-security-updates-eligible-resources.png" alt-text="Screenshot of eligible resources tab showing servers eligible to receive ESUs." lightbox="media/deliver-extended-security-updates/extended-security-updates-eligible-resources.png"::: -- The **ESUs status** column indicates whether or not the machine is ESUs-enabled. --1. To enable ESUs for one or more machines, select them in the list, and then select **Enable ESUs**. - -1. The **Enable Extended Security Updates** page shows the number of machines selected to enable ESU and the WS2012 licenses available to apply. Select a license to link to the selected machine(s) and then select **Enable**. -- :::image type="content" source="media/deliver-extended-security-updates/extended-security-updates-select-license.png" alt-text="Screenshot of window for selecting the license to apply to previously chosen machines." lightbox="media/deliver-extended-security-updates/extended-security-updates-select-license.png"::: -- > [!NOTE] - > You can also create a license from this page by selecting **Create an ESU license**. - > --The status of the selected machines changes to **Enabled**. ---If any problems occur during the enablement process, see [Troubleshoot delivery of Extended Security Updates for Windows Server 2012](troubleshoot-extended-security-updates.md) for assistance. --## At-scale Azure Policy --For at-scale linking of servers to an Azure Arc Extended Security Update license and locking down license modification or creation, consider the usage of the following built-in Azure policies: --- [Enable Extended Security Updates (ESUs) license to keep Windows 2012 machines protected after their support lifecycle has ended (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4864134f-d306-4ff5-94d8-ea4553b18c97)--- [Deny Extended Security Updates (ESUs) license creation or modification (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4c660f31-eafb-408d-a2b3-6ed2260bd26c)--Azure policies can be specified to a targeted subscription or resource group for both auditing and management scenarios. 
--## Additional scenarios --There are some scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Two of these scenarios supported by Azure Arc are (1) [Dev/Test (Visual Studio)](license-extended-security-updates.md#visual-studio-subscription-benefit-for-devtest-scenarios) and (2) [Disaster Recovery (Entitled benefit DR instances from Software Assurance](https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-by-benefits) or subscription only. Both of these scenarios require that the customer is already using Windows Server 2012/R2 ESUs enabled by Azure Arc for billable, production machines. --> [!WARNING] -> Don't create a Windows Server 2012/R2 ESU License for only Dev/Test or Disaster Recovery workloads. You shouldn't provision an ESU License only for non-billable workloads. Moreover, you'll be billed fully for all of the cores provisioned with an ESU license, and any dev/test cores on the license won't be billed as long as they're tagged accordingly based on the following qualifications. --To qualify for these scenarios, you must already have: --- **Billable ESU License.** You must already have provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios). This license should be provisioned only for billable cores, not cores that are eligible for free Extended Security Updates, for example, dev/test cores.- -- **Arc-enabled servers.** Onboarded your Windows Server 2012 and Windows Server 2012 R2 machines to Azure Arc-enabled servers for the purpose of Dev/Test with Visual Studio subscriptions or Disaster Recovery.--To enroll Azure Arc-enabled servers eligible for ESUs at no additional cost, follow these steps to tag and link: --1. Tag both the WS2012 Arc ESU License (created for the production environment with cores for only the production environment servers) and the non-production Azure Arc-enabled servers with one of the following name-value pairs, corresponding to the appropriate exception: -- 1. Name: "ESU Usage"; Value: "WS2012 VISUAL STUDIO DEV TEST" - - 1. Name: "ESU Usage"; Value: "WS2012 DISASTER RECOVERY" -- In the case that you're using the ESU License for multiple exception scenarios, mark the license with the tag: Name: "ESU Usage"; Value: "WS2012 MULTIPURPOSE" --1. Link the tagged license (created for the production environment with cores only for the production environment servers) to your tagged non-production Azure Arc-enabled Windows Server 2012 and Windows Server 2012 R2 machines. **Do not license cores for these servers or create a new ESU license for only these servers.** --This linking won't trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores. The expectation is that the license only includes cores for production and billed servers. Any additional cores will be charged and result in over-billing. --> [!IMPORTANT] -> Adding these tags to your license will NOT make the license free or reduce the number of license cores that are chargeable. These tags allow you to link your Azure machines to existing licenses that are already configured with payable cores without needing to create any new licenses or add additional cores to your free machines. --**Example:** -- You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores. 
Six of these Windows Server 2012 R2 Standard machines are for production, and 2 of these Windows Server 2012 R2 Standard machines are eligible for free ESUs because the operating system was licensed through a Visual Studio Dev Test subscription.- - You should first provision and activate a regular ESU License for Windows Server 2012/R2 that's Standard edition and has 48 physical cores to cover the 6 production machines. You should link this regular, production ESU license to your 6 production servers. - - Next, you should reuse this existing license, don't add any more cores or provision a separate license, and link this license to your 2 non-production Windows Server 2012 R2 Standard machines. You should tag the ESU license and the 2 non-production Windows Server 2012 R2 Standard machines with Name: "ESU Usage" and Value: "WS2012 VISUAL STUDIO DEV TEST". - - This will result in an ESU license for 48 cores, and you'll be billed for those 48 cores. You won't be charged for the additional 16 cores of the dev test servers that you added to this license, as long as the ESU license and the dev test server resources are tagged appropriately. --> [!NOTE] -> You needed a regular production license to start with, and you'll be billed only for the production cores. --## Upgrading from Windows Server 2012/2012 R2 --When upgrading a Windows Server 2012/2012 R2 machine to Windows Server 2016 or above, it's not necessary to remove the Connected Machine agent from the machine. The new operating system will be visible for the machine in Azure within a few minutes of upgrade completion. Upgraded machines no longer require ESUs and are no longer eligible for them. Any ESU license associated with the machine isn't automatically unlinked from the machine. See [Unlink a license](api-extended-security-updates.md#unlink-a-license) for instructions on doing so manually. --## Assess WS2012 ESU patch status --To detect whether your Azure Arc-enabled servers are patched with the most recent Windows Server 2012/R2 Extended Security Updates, you can use the Azure Policy [Extended Security Updates should be installed on Windows Server 2012 Arc machines-Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14b4e776-9fab-44b0-b53f-38d2458ea8be/version~/null/scopes~/%5B%22%2Fsubscriptions%2F4fabcc63-0ec0-4708-8a98-04b990085bf8%22%5D). This Azure Policy, powered by Machine Configuration, identifies if the server has received the most recent ESU patches. This is observable from the Guest Assignment and Azure Policy Compliance views built into the Azure portal. |
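The portal flow above is the documented path for applying the "ESU Usage" tag. As a rough CLI sketch of the same tagging step, using the generic `az resource tag` command rather than anything ESU-specific, with placeholder resource IDs:

```
# Append the dev/test exception tag to an Arc-enabled server and its ESU license.
# Resource IDs are placeholders for your own resources.
az resource tag --is-incremental \
  --ids <arc-server-resource-id> <esu-license-resource-id> \
  --tags "ESU Usage=WS2012 VISUAL STUDIO DEV TEST"
```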
azure-arc | Deploy Ama Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deploy-ama-policy.md | - Title: How to deploy and configure Azure Monitor Agent using Azure Policy -description: Learn how to deploy and configure Azure Monitor Agent using Azure Policy. Previously updated : 05/17/2023----# Deploy and configure Azure Monitor Agent using Azure Policy --This article covers how to deploy and configure the Azure Monitor Agent (AMA) to Arc-enabled servers through Azure Policy using a custom Policy definition. Using Azure Policy ensures that Azure Monitor is running on your selected Arc-enabled servers, and automatically installs the Azure Monitor Agent on newly added Arc resources. --Deploying the Azure Monitor Agent through a custom Policy definition involves two main steps: --- Selecting an existing or creating a new Data Collection Rule (DCR)--- Creating and deploying the Policy definition--In this scenario, the Policy definition is used to verify that the AMA is installed on your Arc-enabled servers. It will also install the AMA on newly added machines or on existing machines that don't have the AMA installed. --In order for Azure Monitor to work on a machine, it needs to be associated with a Data Collection Rule. Therefore, you'll need to include the resource ID of the DCR when you create your Policy definition. --## Select a Data Collection Rule --Data Collection Rules define the data collection process in Azure Monitor. They specify what data should be collected and where that data should be sent. You'll need to select or create a DCR to be associated with your Policy definition. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --1. Navigate to the **Monitor | Overview** page. Under **Settings**, select **Data Collection Rules**. - A list of existing DCRs displays. You can filter this at the top of the window. If you need to create a new DCR, see [Data collection rules in Azure Monitor](/azure/azure-monitor/essentials/data-collection-rule-overview) for more information. --1. Select the DCR to apply to your ARM template to view its overview. --1. Select **Resources** to view a list of resources (such as Arc-enabled VMs) assigned to the DCR. To add more resources, select **Add**. (You'll need to add resources if you created a new DCR.) --1. Select **Overview**, then select **JSON View** to view the JSON code for the DCR: - - :::image type="content" source="media/deploy-ama-policy/dcr-overview.png" alt-text="Screenshot of the Overview window for a data collection rule highlighting the JSON view button."::: --1. Locate the **Resource ID** field at the top of the window and select the button to copy the resource ID for the DCR to the clipboard. Save this resource ID; you'll need to use it when creating your Policy definition. - - :::image type="content" source="media/deploy-ama-policy/dcr-json-view.png" alt-text="Screenshot of the Resource JSON window showing the JSON code for a data collection rule and highlighting the resource ID copy button."::: --## Create and deploy the Policy definition --In order for Azure Policy to check if AMA is installed on your Arc-enabled servers, you'll need to create a custom policy definition that does the following: --- Evaluates if new VMs have the AMA installed and the association with the DCR.--- Enforces a remediation task to install the AMA and create the association with the DCR on VMs that aren't compliant with the policy.--1. 
Select one of the following policy definition templates (that is, for Windows or Linux machines): - - [Configure Windows machines](https://portal.azure.com/#view/Microsoft_Azure_Policy/InitiativeDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2F9575b8b7-78ab-4281-b53b-d3c1ace2260b/scopes/undefined) - - [Configure Linux machines](https://portal.azure.com/#view/Microsoft_Azure_Policy/InitiativeDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2F118f04da-0375-44d1-84e3-0fd9e1849403/scopes/undefined) - - These templates are used to create a policy to configure machines to run Azure Monitor Agent and associate those machines to a DCR. --1. Select **Assign** to begin creating the policy definition. Enter the applicable information for each tab (that is, **Basics**, **Advanced**, etc.). -1. On the **Parameters** tab, paste the **Data Collection Rule Resource ID** that you copied during the previous procedure: -- :::image type="content" source="media/deploy-ama-policy/resource-id-field.png" alt-text="Screenshot of the Parameters tab of the Configure Windows Machines dialog highlighting the Data Collection Rule Resource ID field."::: -1. Complete the creation of the policy to deploy it for the applicable machines. Once Azure Monitor Agent is deployed, your Azure Arc-enabled servers can apply its services and use it for log collection. --## Additional resources --* [Azure Monitor overview](/azure/azure-monitor/overview) --* [Tutorial: Monitor a hybrid machine with VM insights](learn/tutorial-enable-vm-insights.md) |
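If you prefer not to copy the resource ID from the portal JSON view, a hedged alternative is to read it from the CLI. The rule and resource group names below are placeholders, and the command comes from the `monitor-control-service` Azure CLI extension.

```
# Fetch the DCR resource ID to paste into the policy assignment's parameters.
az monitor data-collection rule show \
  --resource-group myMonitoringRg \
  --name myArcDcr \
  --query id --output tsv
```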
azure-arc | Deployment Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deployment-options.md | - Title: Azure Connected Machine agent deployment options -description: Learn about the different options to onboard machines to Azure Arc-enabled servers. Previously updated : 01/03/2024----# Azure Connected Machine agent deployment options --Connecting machines in your hybrid environment directly with Azure can be accomplished using different methods, depending on your requirements and the tools you prefer to use. --## Onboarding methods --The following table highlights each method so that you can determine which works best for your deployment. For detailed information, follow the links to view the steps for each topic. --| Method | Description | -|--|-| -| Interactively | Manually install the agent on a single or small number of machines by [connecting machines using a deployment script](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.| -| Interactively | [Connect machines from Windows Admin Center](onboard-windows-admin-center.md) | -| Interactively or at scale | [Connect machines using PowerShell](onboard-powershell.md) | -| At scale | [Connect machines using a service principal](onboard-service-principal.md) to install the agent at scale non-interactively.| -| At scale | [Connect machines by running PowerShell scripts with Configuration Manager](onboard-configuration-manager-powershell.md) -| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md) -| At scale | [Connect Windows machines using Group Policy](onboard-group-policy-powershell.md) -| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. | -| At scale | [Install the Arc agent on VMware VMs at scale using Arc enabled VMware vSphere](../vmware-vsphere/enable-guest-management-at-scale.md). Arc enabled VMware vSphere allows you to [connect your VMware vCenter server to Azure](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md), automatically discover your VMware VMs, and install the Arc agent on them. Requires VMware tools on VMs.| -| At scale | [Install the Arc agent on SCVMM VMs at scale using Arc-enabled System Center Virtual Machine Manager](../system-center-virtual-machine-manager/enable-guest-management-at-scale.md). Arc-enabled System Center Virtual Machine Manager allows you to [connect your SCVMM management server to Azure](../system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md), automatically discover your SCVMM VMs, and install the Arc agent on them. | -| At scale | [Connect your AWS cloud through the multicloud connector enabled by Azure Arc](../multicloud-connector/connect-to-aws.md) and [enable the **Arc onboarding** solution](../multicloud-connector/onboard-multicloud-vms-arc.md) to auto-discover and onboard EC2 VMs. | --> [!IMPORTANT] -> The Connected Machine agent cannot be installed on an Azure virtual machine. The install script will warn you and roll back if it detects the server is running in Azure. 
--Be sure to review the basic [prerequisites](prerequisites.md) and [network configuration requirements](network-requirements.md) before deploying the agent, as well as any specific requirements listed in the steps for the onboarding method you choose. To learn more about what changes the agent will make to your system, see [Overview of the Azure Connected Machine Agent](agent-overview.md). ---## Next steps --* Learn about the Azure Connected Machine agent [prerequisites](prerequisites.md) and [network requirements](network-requirements.md). -* Review the [Planning and deployment guide for Azure Arc-enabled servers](plan-at-scale-deployment.md) -* Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md). -* Try out Arc-enabled servers by using the [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_servers). |
azure-arc | Quick Enable Hybrid Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md | - Title: Quickstart - Connect hybrid machine with Azure Arc-enabled servers -description: In this quickstart, you connect and register a hybrid machine with Azure Arc-enabled servers. - Previously updated : 11/03/2023----# Quickstart: Connect hybrid machines with Azure Arc-enabled servers --Get started with [Azure Arc-enabled servers](../overview.md) to manage and govern your Windows and Linux machines hosted across on-premises, edge, and multicloud environments. --In this quickstart, you'll deploy and configure the Azure Connected Machine agent on a Windows or Linux machine hosted outside of Azure, so that it can be managed through Azure Arc-enabled servers. --> [!TIP] -> If you prefer to try out things in a sample/practice experience, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_servers). --## Prerequisites --* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* Deploying the Connected Machine agent on a machine requires that you have administrator permissions to install and configure the agent. On Linux this is done by using the root account, and on Windows, with an account that is a member of the Local Administrators group. -* The Microsoft.HybridCompute, Microsoft.GuestConfiguration, Microsoft.HybridConnectivity, and Microsoft.AzureArcData resource providers must be registered on your subscription. Please [register these resource providers ahead of time](../prerequisites.md#azure-resource-providers). -* Before you get started, be sure to review the [agent prerequisites](../prerequisites.md) and verify the following: - * Your target machine is running a supported [operating system](../prerequisites.md#supported-operating-systems). - * Your account has the [required Azure built-in roles](../prerequisites.md#required-permissions). - * Ensure the machine is in a [supported region](../overview.md#supported-regions). - * Confirm that the Linux hostname or Windows computer name doesn't use a [reserved word or trademark](../../../azure-resource-manager/templates/error-reserved-resource-name.md). - * If the machine connects through a firewall or proxy server to communicate over the Internet, make sure the URLs [listed](../network-requirements.md#urls) are not blocked. --## Generate installation script --Use the Azure portal to create a script that automates the agent download and installation and establishes the connection with Azure Arc. --<!--1. Launch the Azure Arc service in the Azure portal by searching for and selecting **Servers - Azure Arc**. -- :::image type="content" source="media/quick-enable-hybrid-vm/search-machines.png" alt-text="Search for Azure Arc-enabled servers in the Azure portal."::: --1. On the **Servers - Azure Arc** page, select **Add** near the upper left.--> --1. [Go to the Azure portal page for adding servers with Azure Arc](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**. -- :::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." 
lightbox="media/quick-enable-hybrid-vm/add-single-server.png"::: - > [!NOTE] - > In the portal, you can also reach this page by searching for and selecting "Servers - Azure Arc" and then selecting **+Add**. --1. On the **Basics** page, provide the following: -- 1. Select the subscription and resource group where you want the machine to be managed within Azure. - 1. For **Region**, choose the Azure region in which the server's metadata will be stored. - 1. For **Operating system**, select the operating system of the server you want to connect. - 1. For **Connectivity method**, choose how the Azure Connected Machine agent should connect to the internet. If you select **Proxy server**, enter the proxy server IP address or the name and port number that the machine will use in the format `http://<proxyURL>:<proxyport>`. - 1. Select **Next**. --1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards. Then select **Next**. --1. In the **Download or copy the following script** section, review the script. If you want to make any changes, use the **Previous** button to go back and update your selections. Otherwise, select **Download** to save the script file. --## Install the agent using the script --Now that you've generated the script, the next step is to run it on the server that you want to onboard to Azure Arc. The script will download the Connected Machine agent from the Microsoft Download Center, install the agent on the server, create the Azure Arc-enabled server resource, and associate it with the agent. --Follow the steps below for the operating system of your server. --### Windows agent --1. Log in to the server. --1. Open an elevated 64-bit PowerShell command prompt. --1. Change to the folder or share that you copied the script to, then execute it on the server by running the `./OnboardingScript.ps1` script. --### Linux agent --1. To install the Linux agent on the target machine that can directly communicate to Azure, run the following command: -- ```bash - bash ~/Install_linux_azcmagent.sh - ``` --1. Alternately, if the target machine communicates through a proxy server, run the following command: -- ```bash - bash ~/Install_linux_azcmagent.sh --proxy "{proxy-url}:{proxy-port}" - ``` --## Verify the connection with Azure Arc --After you install the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the [Azure portal](https://aka.ms/hybridmachineportal). ---> [!TIP] -> You can repeat these steps as needed to onboard additional machines. We also provide a variety of other options for deploying the agent, including several methods designed to onboard machines at scale. For more information, see [Azure Connected Machine agent deployment options](../deployment-options.md). --## Next steps --Now that you've enabled your Linux or Windows hybrid machine and successfully connected to the service, you are ready to enable Azure Policy to understand compliance in Azure. --> [!div class="nextstepaction"] -> [Create a policy assignment to identify non-compliant resources](tutorial-assign-policy-portal.md) |
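As an optional local check in addition to the portal, the agent itself can confirm the connection; this is simply the `azcmagent show` command covered elsewhere in this documentation set.

```
# After the onboarding script completes, the agent should report a status of
# "Connected" along with the subscription, resource group, and region you chose.
azcmagent show
```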
azure-arc | Tutorial Assign Policy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-assign-policy-portal.md | - Title: Tutorial - New policy assignment with Azure portal -description: In this tutorial, you use Azure portal to create an Azure Policy assignment to identify non-compliant resources. - Previously updated : 04/20/2022---# Tutorial: Create a policy assignment to identify non-compliant resources --The first step in understanding compliance in Azure is to identify the status of your resources. Azure Policy supports auditing the state of your Azure Arc-enabled server with guest configuration policies. Azure Policy's guest configuration definitions can audit or apply settings inside the machine. --This tutorial steps you through the process of creating and assigning a policy in order to identify which of your Azure Arc-enabled servers don't have the Log Analytics agent for Windows or Linux installed. These machines are considered _non-compliant_ with the policy assignment. --In this tutorial, you will learn how to: --> [!div class="checklist"] -> * Create policy assignment and assign a definition to it -> * Identify resources that aren't compliant with the new policy -> * Remove the policy from non-compliant resources ---## Prerequisites --If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account -before you begin. --## Create a policy assignment --Follow the steps below to create a policy assignment and assign the policy definition _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_: --1. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching - for and selecting **Policy**. -- :::image type="content" source="./media/tutorial-assign-policy-portal/all-services-page.png" alt-text="Screenshot of All services window showing search for policy service." border="true"::: --1. Select **Assignments** on the left side of the Azure Policy page. An assignment is a policy that - has been assigned to take place within a specific scope. -- :::image type="content" source="./media/tutorial-assign-policy-portal/assignments-tab.png" alt-text="Screenshot of All services Policy window showing policy assignments." border="true"::: --1. Select **Assign Policy** from the top of the **Policy - Assignments** page. --1. On the **Assign Policy** page, select the **Scope** by clicking the ellipsis and selecting either - a management group or subscription. Optionally, select a resource group. A scope determines what - resources or grouping of resources the policy assignment gets enforced on. Then click **Select** - at the bottom of the **Scope** page. -- This example uses the **Parnell Aerospace** subscription. Your subscription will differ. --1. Resources can be excluded based on the **Scope**. **Exclusions** start at one level lower than - the level of the **Scope**. **Exclusions** are optional, so leave it blank for now. --1. Select the **Policy definition** ellipsis to open the list of available definitions. Azure Policy - comes with built-in policy definitions you can use. Many are available, such as: -- - Enforce tag and its value - - Apply tag and its value - - Inherit a tag from the resource group if missing -- For a partial list of available built-in policies, see [Azure Policy samples](../../../governance/policy/samples/index.md). --1. 
Search through the policy definitions list to find the _\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines_ - definition (if you have enabled the Azure Connected Machine agent on a Windows-based machine). For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Add**. --1. The **Assignment name** is automatically populated with the policy name you selected, but you can - change it. For this example, leave the policy name as is, and don't change any of the remaining options on the page. - -1. For this example, we don't need to change any settings on the other tabs. Select **Review + Create** to review your new policy assignment, then select **Create**. --You're now ready to identify non-compliant resources to understand the compliance state of your -environment. --## Identify non-compliant resources --Select **Compliance** in the left side of the page. Then locate the **\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines** or **\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines** policy assignment you created. ---If there are any existing resources that aren't compliant with this new assignment, they appear -under **Non-compliant resources**. --When a condition is evaluated against your existing resources and found true, then those resources -are marked as non-compliant with the policy. The following table shows how different policy effects -work with the condition evaluation for the resulting compliance state. Although you don't see the -evaluation logic in the Azure portal, the compliance state results are shown. The compliance state -result is either compliant or non-compliant. --| **Resource state** | **Effect** | **Policy evaluation** | **Compliance state** | -| | | | | -| Exists | Deny, Audit, Append\*, DeployIfNotExist\*, AuditIfNotExist\* | True | Non-compliant | -| Exists | Deny, Audit, Append\*, DeployIfNotExist\*, AuditIfNotExist\* | False | Compliant | -| New | Audit, AuditIfNotExist\* | True | Non-compliant | -| New | Audit, AuditIfNotExist\* | False | Compliant | --\* The Append, DeployIfNotExist, and AuditIfNotExist effects require the IF statement to be TRUE. -The effects also require the existence condition to be FALSE to be non-compliant. When TRUE, the IF -condition triggers evaluation of the existence condition for the related resources. --## Clean up resources --To remove the assignment created, follow these steps: --1. Select **Compliance** (or **Assignments**) in the left side of the Azure Policy page and locate - the **\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines** or **\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines** policy assignment you created. --1. Right-click the policy assignment and select **Delete assignment**. --## Next steps --In this tutorial, you assigned a policy definition to a scope and evaluated its compliance report. The policy definition validates that all the resources in the scope are compliant and identifies which ones aren't. Now you are ready to monitor your Azure Arc-enabled machines by enabling [VM insights](/azure/azure-monitor/vm/vminsights-overview). 
--To learn how to monitor and view the performance, running processes, and their dependencies from your machine, continue to the tutorial: --> [!div class="nextstepaction"] -> [Enable VM insights](tutorial-enable-vm-insights.md) |
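The portal steps in this tutorial can also be scripted. The following Azure CLI sketch shows one illustrative way to create the same assignment; the assignment name and subscription ID are placeholders, and the lookup assumes the built-in definition's display name matches the one used in this tutorial.

```azurecli
# Look up the built-in policy definition by its display name.
definitionId=$(az policy definition list \
  --query "[?displayName=='[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines'].id | [0]" \
  --output tsv)

# Assign the definition at subscription scope (replace <subscription-id> with your own).
az policy assignment create \
  --name "audit-log-analytics-linux-arc" \
  --display-name "[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines" \
  --policy "$definitionId" \
  --scope "/subscriptions/<subscription-id>"
```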
azure-arc | Tutorial Enable Vm Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md | - Title: Tutorial - Monitor a hybrid machine with Azure Monitor VM insights -description: Learn how to collect and analyze data from a hybrid machine in Azure Monitor. - Previously updated : 04/25/2022---# Tutorial: Monitor a hybrid machine with VM insights --[Azure Monitor](/azure/azure-monitor/overview) can collect data directly from your hybrid machines into a Log Analytics workspace for detailed analysis and correlation. Typically, this would require installing the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) on the machine using a script, manually, or an automated method following your configuration management standards. Now, Azure Arc-enabled servers can install the Log Analytics and Dependency agent [VM extension](../manage-vm-extensions.md) for Windows and Linux, enabling [VM insights](/azure/azure-monitor/vm/vminsights-overview) to collect data from your non-Azure VMs. --In this tutorial, you will learn how to: --> [!div class="checklist"] -> * Enable and configure VM insights for your Linux or Windows non-Azure VMs -> * Collect and view data from these VMs --## Prerequisites --* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --* VM extension functionality is available only in the list of [supported regions](../overview.md#supported-regions). --* See [Supported operating systems](/azure/azure-monitor/vm/vminsights-enable-overview#supported-operating-systems) to ensure that the operating system of the server you're enabling is supported by VM insights. --* Review firewall requirements for the Log Analytics agent provided in the [Log Analytics agent overview](/azure/azure-monitor/agents/log-analytics-agent#network-requirements). The VM insights Map Dependency agent doesn't transmit any data itself, and it doesn't require any changes to firewalls or ports. --## Enable VM insights --1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Machines - Azure Arc**. -- :::image type="content" source="./media/quick-enable-hybrid-vm/search-machines.png" alt-text="Screenshot of Azure portal showing search for Servers, Azure Arc." border="false"::: --1. On the **Azure Arc - Machines** page, select the connected machine you created in the [quickstart](quick-enable-hybrid-vm.md) article. --1. From the left pane under the **Monitoring** section, select **Insights** and then **Enable**. -- :::image type="content" source="./media/tutorial-enable-vm-insights/insights-option.png" alt-text="Screenshot of left-side navigation menu for the machine with Insights selected." border="false"::: --1. On the Azure Monitor **Insights Onboarding** page, you're prompted to create a workspace. For this tutorial, don't select an existing Log Analytics workspace if you already have one. Instead, select the default, which is a workspace with a unique name in the same region as your registered connected machine. This workspace is created and configured for you. 
-- :::image type="content" source="./media/tutorial-enable-vm-insights/enable-vm-insights.png" alt-text="Screenshot of Insights Onboarding screen with button to enable VM insights." border="false"::: -- Status messages display while the configuration is performed and extensions are installed on your connected machine. This process takes a few minutes. -- :::image type="content" source="./media/tutorial-enable-vm-insights/onboard-vminsights-vm-portal-status.png" alt-text="Screenshot of Insights installation page for machine showing progress status message." border="false"::: -- When the process is complete, a message displays that the machine has been onboarded and that Insights has been successfully deployed. --## View data collected --1. After deployment and configuration are complete, select **Insights**, and then select the **Performance** tab. The Performance tab shows a select group of performance counters collected from the guest operating system of your machine. Scroll down to view more counters, and move the mouse over a graph to view averages and percentiles taken starting from the time when the Log Analytics VM extension was installed on the machine. -- :::image type="content" source="./media/tutorial-enable-vm-insights/insights-performance-charts.png" alt-text="Screenshot of Insights Performance tab with charts for selected machine." border="false"::: --1. Select **Map**. The maps feature shows the processes running on the machine and their dependencies. Select **Properties** to open the property pane (if it isn't already open). -- :::image type="content" source="./media/tutorial-enable-vm-insights/insights-map.png" alt-text="Screenshot of Insights Map tab with map for selected machine." border="false"::: --1. Expand the processes for your machine. Select one of the processes to view its details and to highlight its dependencies. --1. Select your machine again and then select **Log Events**. You see a list of tables that are stored in the Log Analytics workspace for the machine. This list will be different depending on whether you're using a Windows or Linux machine. --1. Select the **Event** table. The **Event** table includes all events from the Windows event log. Log Analytics opens with a simple query to retrieve collected event log entries. --## Next steps --To learn more about Azure Monitor, see the following article: --> [!div class="nextstepaction"] -> [Azure Monitor overview](/azure/azure-monitor/overview) |
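If you prefer to verify the onboarding from the command line, the following Azure CLI sketch (the resource group and machine names are placeholders) lists the VM extensions that VM insights installed on the connected machine:

```azurecli
# List the VM extensions on the Arc-enabled machine and check their provisioning state.
az connectedmachine extension list \
  --resource-group "<resourceGroupName>" \
  --machine-name "<machineName>" \
  --output table
```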
azure-arc | License Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md | - Title: License provisioning guidelines for Extended Security Updates for Windows Server 2012 -description: Learn about license provisioning guidelines for Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 02/05/2024----# License provisioning guidelines for Extended Security Updates for Windows Server 2012 --Flexibility is critical when enrolling end of support infrastructure in Extended Security Updates (ESUs) through Azure Arc to receive critical patches. To give ease of options across virtualization and disaster recovery scenarios, you must first provision Windows Server 2012 Arc ESU licenses and then link those licenses to your Azure Arc-enabled servers. The linking and provisioning of licenses can be done through the Azure portal. --When provisioning WS2012 ESU licenses, you need to specify: --* Either virtual core or physical core license -* Standard or Datacenter license --You also need to attest to the number of associated cores (broken down by the number of 2-core and 16-core packs). --To assist with the license provisioning process, this article provides general guidance and sample customer scenarios for planning your deployment of WS2012 ESUs through Azure Arc. --## General guidance: Standard vs. Datacenter, Physical vs. Virtual Cores --### Physical core licensing --If you choose to license based on physical cores, the licensing requires a minimum of 16 physical cores per machine. Most customers choose to license based on physical cores and select Standard or Datacenter edition to match their original Windows Server licensing. While Standard licensing can be applied to up to two virtual machines (VMs), Datacenter licensing has no limit to the number of VMs it can be applied to. Depending on the number of VMs covered, it may make sense to choose the Datacenter license instead of the Standard license. --### Virtual core licensing --If you choose to license based on virtual cores, the licensing requires a minimum of eight virtual cores per Virtual Machine. There are two main scenarios where this model is advisable: --1. If the VM is running on a third-party host or cloud service provider like AWS, GCP, or OCI. --1. The Windows Server operating system was licensed on a virtualization basis. --Another scenario (scenario 1, below) is a candidate for VM/Virtual core licensing when the WS2012 VMs are running on a newer Windows Server host (that is, Windows Server 2016 or later). --> [!IMPORTANT] -> Virtual core licensing can't be used on physical servers. When creating a license with virtual cores, always select the standard edition instead of datacenter, even if the operating system is datacenter edition. --### License limits --Each WS2012 ESU license can cover up to and including 10,000 cores. If you need ESUs for more than 10,000 cores, split the total number of cores across multiple licenses. Additionally, only 800 licenses can be created in a single resource group. Use more resource groups if you need to create more than 800 license resources. --### SA/SPLA conformance --In all cases, you're required to attest to conformance with SA or SPLA. There is no exception for these requirements. Software Assurance or an equivalent Server Subscription is required for you to purchase Extended Security Updates on-premises and in hosted environments. 
You are able to purchase Extended Security Updates from Enterprise Agreement (EA), Enterprise Subscription Agreement (EAS), a Server & Cloud Enrollment (SCE), and Enrollment for Education Solutions (EES). On Azure, you do not need Software Assurance to get free Extended Security Updates, but Software Assurance or Server Subscription is required to take advantage of the Azure Hybrid Benefit. --### Visual Studio subscription benefit for dev/test scenarios --Visual Studio subscriptions [allow developers to get product keys](/visualstudio/subscriptions/product-keys) for Windows Server at no extra cost to help them develop and test their software. If a Windows Server 2012 server's operating system is licensed through a product key obtained from a Visual Studio subscription, you can also get extended security updates for these servers at no extra cost. To configure ESU licenses for these servers using Azure Arc, you must have at least one server with paid ESU usage. You can't create an ESU license where all associated servers are entitled to the Visual Studio subscription benefit. See [additional scenarios](deliver-extended-security-updates.md#additional-scenarios) in the deployment article for more information on how to provision an ESU license correctly for this scenario. --Development, test, and other non-production servers that have a paid operating system license (from your organization's volume licensing key, for example) **must** use a paid ESU license. The only dev/test servers entitled to ESU licenses at no extra cost are those whose operating system licenses came from a Visual Studio subscription. --## Cost savings with migration and modernization of workloads --As you migrate and modernize your Windows Server 2012 and Windows Server 2012 R2 infrastructure through the end of 2023, you can utilize the flexibility of monthly billing with Windows Server 2012 ESUs enabled by Azure Arc for cost savings benefits. --As servers no longer require ESUs because they've been migrated to Azure, Azure VMware Solution (AVS), or Azure Stack HCI **where they're eligible for free ESUs**, or updated to Windows Server 2016 or higher, you can modify the number of cores associated with a license or delete/deactivate licenses. You can also link the license to a new scope of additional servers. See [Programmatically deploy and manage Azure Arc Extended Security Updates licenses](api-extended-security-updates.md) to learn more. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012). --> [!NOTE] -> This process is not automatic; billing is tied to the activated licenses and you are responsible for modifying your provisioned licensing to take advantage of cost savings. --## Scenario-based examples: Compliant and Cost Effective Licensing --### Scenario 1: Eight modern 32-core hosts (not Windows Server 2012). While each of these hosts is running four 8-core VMs, only one VM on each host is running Windows Server 2012 R2 --In this scenario, you can use virtual core-based licensing to avoid covering the entire host by provisioning eight Windows Server 2012 Standard licenses for eight virtual cores each and linking each of those licenses to the VMs running Windows Server 2012 R2. Alternatively, you could consider consolidating your Windows Server 2012 R2 VMs into two of the hosts to take advantage of physical core-based licensing options. 
--### Scenario 2: A branch office with four VMs, each with 8 cores, on a 32-core Windows Server 2012 Standard host --In this case, you should provision two WS2012 Standard licenses for 16 physical cores each and apply them to the four Arc-enabled servers. Alternatively, you could provision four WS2012 Standard licenses for eight virtual cores each and apply them individually to the four Arc-enabled servers. --### Scenario 3: Eight physical servers in retail stores, each server is Standard edition with eight cores and there's no virtualization --In this scenario, you should apply eight WS2012 Standard licenses for 16 physical cores each and link each license to a physical server. Note that the 16 physical core minimum applies to the provisioned licenses. --### Scenario 4: Multicloud environment with 12 AWS VMs, each of which has 12 cores and is running Windows Server 2012 R2 Standard --In this scenario, you should apply 12 Windows Server 2012 Standard licenses with 12 virtual cores each, and link them individually to each AWS VM. --### Scenario 5: You have already purchased the traditional Windows Server 2012 ESUs through Volume Licensing --In this scenario, the Azure Arc-enabled servers that have been enrolled in Extended Security Updates through an activated MAK key are shown as enrolled in ESUs in the Azure portal. You have the flexibility to switch from this key-based traditional ESU model to WS2012 ESUs enabled by Azure Arc between Year one and Year two. --### Scenario 6: Migrating or retiring your Azure Arc-enabled servers enrolled in Windows Server 2012 ESUs --In this scenario, you can deactivate or decommission the ESU licenses associated with these servers. If only part of the server estate covered by a license no longer requires ESUs, you can modify the ESU license details to reduce the number of associated cores. --### Scenario 7: 128-core Windows Server 2012 Datacenter server running between 10 and 15 Windows Server 2012 R2 VMs that get provisioned and deprovisioned regularly --In this scenario, you should provision a Windows Server 2012 Datacenter license associated with 128 physical cores and link this license to the Arc-enabled Windows Server 2012 R2 VMs running on it. The deletion of the underlying VM also deletes the corresponding Arc-enabled server resource, enabling you to link another Arc-enabled server. --### Scenario 8: An insurance customer is running a 16-node VMware cluster with 1024 physical cores on-premises. 44 of the VMs on the cluster are running Windows Server 2012 R2. Those 44 VMs consume 506 virtual cores, which was calculated by summing up the maximum of 8 or the actual number of cores assigned to each VM. --In this scenario, you could either license the entire cluster with 1024 Windows Server 2012 Datacenter ESU physical cores or license each VM individually with a total of 506 standard edition virtual cores. In this case, it's cheaper to purchase an Arc ESU Windows Server 2012 Standard edition license associated with 506 virtual cores. You'll need to onboard each of the 44 VMs to Azure Arc and then link the license to the Arc machines. --> [!IMPORTANT] -> If you migrate the VMs to Azure VMware Solution (AVS), these servers become eligible for free WS2012 ESUs and should not enroll in ESUs enabled through Azure Arc. -> --## License operations --There are several limitations in the management scenarios for provisioned WS2012 Arc ESU license resources: --- License cores are a mutable property, and customers are able to increment or decrement cores. 
This is subject to the mandatory minimums of both: (i) 16 cores for physical core-based licenses and (ii) 8 cores for virtual core-based licenses. --- License edition and type aren't mutable properties. Standard licenses can't be changed to Datacenter licenses, and vice versa. Similarly, Physical core licenses can't be changed to Virtual core licenses, and vice versa. Note that there are three valid licensing combinations: Standard Virtual Core, Standard Physical Core, and Datacenter Physical Core. Datacenter Virtual cores aren't a viable licensing combination. Erroneously provisioned Datacenter Virtual core licenses have been translated to Datacenter Physical core licenses with core counts compliant with licensing guidelines. --- Licenses can be moved between resource groups and subscriptions. Licenses are modeled in Azure Resource Manager and can be queried using Azure Resource Graph. --- Licenses can be linked to servers in another subscription within the same tenant, but licenses can't be linked to servers within subscriptions of other tenants.--- Tagging a license under evaluation scenarios such as Dev Test or Disaster Recovery doesn't impact billing. Billing is strictly tied to the number of cores associated with the license regardless of tags. The cores used for evaluation or free scenarios shouldn't be provisioned for the Azure Arc ESU license. --## Next steps --* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy). --* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management). -* Learn more about [Arc-enabled servers](overview.md) and how they work with Azure through the Azure Connected Machine agent. -* Explore options for [onboarding your machines](plan-at-scale-deployment.md) to Azure Arc-enabled servers. |
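To illustrate the virtual core counting rule used in scenario 8, the following sketch (the per-VM core counts are hypothetical) sums the billable virtual cores for a set of VMs, counting each VM at the greater of its assigned cores or the 8-core minimum:

```bash
# Hypothetical per-VM core counts; each VM is billed at max(assigned cores, 8) under virtual core licensing.
vm_cores=(4 6 8 12 16)
total=0
for cores in "${vm_cores[@]}"; do
  billable=$(( cores > 8 ? cores : 8 ))
  total=$(( total + billable ))
done
echo "Billable virtual cores: $total"  # 8 + 8 + 8 + 12 + 16 = 52
```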
azure-arc | Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md | - Title: Managing the Azure Connected Machine agent -description: This article describes the different management tasks that you'll typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 07/24/2024----# Managing and maintaining the Connected Machine agent --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --After initial deployment of the Azure Connected Machine agent, you may need to reconfigure the agent, upgrade it, or remove it from the computer. These routine maintenance tasks can be done manually or through automation (which reduces both operational error and expenses). This article describes the operational aspects of the agent. See the [azcmagent CLI documentation](azcmagent.md) for command line reference information. --## Installing a specific version of the agent --Microsoft recommends using the most recent version of the Azure Connected Machine agent for the best experience. However, if you need to run an older version of the agent for any reason, you can follow these instructions to install a specific version of the agent. --### [Windows](#tab/windows) --Links to the current and previous releases of the Windows agents are available below the heading of each [release note](agent-release-notes.md). If you're looking for an agent version that's more than six months old, check out the [release notes archive](agent-release-notes-archive.md). --### [Linux - apt](#tab/linux-apt) --1. If you haven't already, configure your package manager with the [Linux Software Repository for Microsoft Products](/windows-server/administration/linux-package-repository-for-microsoft-software). -1. Search for available agent versions with `apt-cache`: -- ```bash - sudo apt-cache madison azcmagent - ``` --1. Find the version you want to install, replace `VERSION` in the following command with the full (4-part) version number, and run the command to install the agent: -- ```bash - sudo apt install azcmagent=VERSION - ``` -- For example, to install version 1.28, the install command is: -- ```bash - sudo apt install azcmagent=1.28.02260.736 - ``` --### [Linux - yum](#tab/linux-yum) --1. If you haven't already, configure your package manager with the [Linux Software Repository for Microsoft Products](/windows-server/administration/linux-package-repository-for-microsoft-software). -1. Search for available agent versions with `yum list`: -- ```bash - sudo yum list azcmagent --showduplicates - ``` --1. Find the version you want to install, replace `VERSION` in the following command with the full (4-part) version number, and run the command to install the agent: -- ```bash - sudo yum install azcmagent-VERSION - ``` -- For example, to install version 1.28, the install command would look like: -- ```bash - sudo yum install azcmagent-1.28.02260-755 - ``` --### [Linux - zypper](#tab/linux-zypper) --1. If you haven't already, configure your package manager with the [Linux Software Repository for Microsoft Products](/windows-server/administration/linux-package-repository-for-microsoft-software). -1. Search for available agent versions with `zypper search`: -- ```bash - sudo zypper search -s azcmagent - ``` --1. 
Find the version you want to install, replace `VERSION` in the following command with the full (4-part) version number, and run the command to install the agent: -- ```bash - sudo zypper install -f azcmagent-VERSION - ``` -- For example, to install version 1.28, the install command would look like: -- ```bash - sudo zypper install -f azcmagent-1.28.02260-755 - ``` -----## Upgrade the agent --The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](/azure/advisor/advisor-overview) identifies resources that aren't using the latest version of the machine agent and recommends that you upgrade to the latest version. It notifies you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal. --The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, or uninstalling the Azure Connected Machine Agent doesn't require you to restart your server. --The following table describes the methods supported to perform the agent upgrade: --| Operating system | Upgrade method | -||-| -| Windows | Manually<br> Microsoft Update | -| Ubuntu | [apt](https://help.ubuntu.com/lts/serverguide/apt.html) | -| SUSE Linux Enterprise Server | [zypper](https://en.opensuse.org/SDB:Zypper_usage_11.3) | --### Windows agent --The latest version of the Azure Connected Machine agent for Windows-based machines can be obtained from: --* Microsoft Update --* [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/Home.aspx) --* [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent) --#### Microsoft Update configuration --The recommended way of keeping the Windows agent up to date is to automatically obtain the latest version through Microsoft Update. This allows you to utilize your existing update infrastructure (such as Microsoft Configuration Manager or Windows Server Update Services) and include Azure Connected Machine agent updates with your regular OS update schedule. --Windows Server doesn't check for updates in Microsoft Update by default. To receive automatic updates for the Azure Connected Machine Agent, you must configure the Windows Update client on the machine to check for other Microsoft products. --For Windows Servers that belong to a workgroup and connect to the Internet to check for updates, you can enable Microsoft Update by running the following commands in PowerShell as an administrator: --```powershell -$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") -$ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" -$ServiceManager.AddService2($ServiceId,7,"") -``` --For Windows Servers that belong to a domain and connect to the Internet to check for updates, you can configure this setting at-scale using Group Policy: --1. Sign into a computer used for server administration with an account that can manage Group Policy Objects (GPO) for your organization. --1. Open the **Group Policy Management Console**. --1. Expand the forest, domain, and organizational unit(s) to select the appropriate scope for your new GPO. If you already have a GPO you wish to modify, skip to step 6. --1. Right-click the container and select **Create a GPO in this domain, and Link it here...**. --1. Provide a name for your policy such as "Enable Microsoft Update". --1. Right-click the policy and select **Edit**. --1. 
Navigate to **Computer Configuration > Administrative Templates > Windows Components > Windows Update**. --1. Select the **Configure Automatic Updates** setting to edit it. --1. Select the **Enabled** radio button to allow the policy to take effect. --1. At the bottom of the **Options** section, check the box for **Install updates for other Microsoft products**. --1. Select **OK**. --The next time computers in your selected scope refresh their policy, they'll start to check for updates in both Windows Update and Microsoft Update. --For organizations that use Microsoft Configuration Manager (MECM) or Windows Server Update Services (WSUS) to deliver updates to their servers, you need to configure WSUS to synchronize the Azure Connected Machine Agent packages and approve them for installation on your servers. Follow the guidance for [Windows Server Update Services](/windows-server/administration/windows-server-update-services/manage/setting-up-update-synchronizations#to-specify-update-products-and-classifications-for-synchronization) or [MECM](/mem/configmgr/sum/get-started/configure-classifications-and-products#to-configure-classifications-and-products-to-synchronize) to add the following products and classifications to your configuration: --* **Product Name**: Azure Connected Machine Agent (select all 3 sub-options) -* **Classifications**: Critical Updates, Updates --Once the updates are being synchronized, you can optionally add the Azure Connected Machine Agent product to your auto-approval rules so your servers automatically stay up to date with the latest agent software. --#### To manually upgrade using the Setup Wizard --1. Sign in to the computer with an account that has administrative rights. --1. Download the latest agent installer from https://aka.ms/AzureConnectedMachineAgent --1. Run **AzureConnectedMachineAgent.msi** to start the Setup Wizard. --If the Setup Wizard discovers a previous version of the agent, it upgrades it automatically. When the upgrade completes, the Setup Wizard closes automatically. --#### To upgrade from the command line --If you're unfamiliar with the command-line options for Windows Installer packages, review [Msiexec standard command-line options](/windows/win32/msi/standard-installer-command-line-options) and [Msiexec command-line options](/windows/win32/msi/command-line-options). --1. Sign in to the computer with an account that has administrative rights. --1. Download the latest agent installer from https://aka.ms/AzureConnectedMachineAgent --1. To upgrade the agent silently and create a setup log file in the `C:\Support\Logs` folder, run the following command: -- ```dos - msiexec.exe /i AzureConnectedMachineAgent.msi /qn /l*v "C:\Support\Logs\azcmagentupgradesetup.log" - ``` --### Linux agent --Updating the agent on a Linux machine involves two commands; one command to update the local package index with the list of latest available packages from the repositories, and another command to upgrade the local package. --You can download the latest agent package from Microsoft's [package repository](https://packages.microsoft.com/). --> [!NOTE] -> To upgrade the agent, you must have *root* access permissions or an account that has elevated rights using sudo. --#### Upgrade the agent on Ubuntu --1. To update the local package index with the latest changes made in the repositories, run the following command: -- ```bash - sudo apt update - ``` --2. 
To upgrade the agent, run the following command: -- ```bash - sudo apt upgrade azcmagent - ``` --Actions of the [apt](https://help.ubuntu.com/lts/serverguide/apt.html) command, such as installation and removal of packages, are logged in the `/var/log/dpkg.log` log file. --#### Upgrade the agent on Red Hat/CentOS/Oracle Linux/Amazon Linux --1. To update the local package index with the latest changes made in the repositories, run the following command: -- ```bash - sudo yum check-update - ``` --2. To upgrade the agent, run the following command: -- ```bash - sudo yum update azcmagent - ``` --Actions of the [yum](https://access.redhat.com/articles/yum-cheat-sheet) command, such as installation and removal of packages, are logged in the `/var/log/yum.log` log file. --#### Upgrade the agent on SUSE Linux Enterprise --1. To update the local package index with the latest changes made in the repositories, run the following command: -- ```bash - sudo zypper refresh - ``` --2. To upgrade the agent, run the following command: -- ```bash - sudo zypper update azcmagent - ``` --Actions of the [zypper](https://en.opensuse.org/Portal:Zypper) command, such as installation and removal of packages, are logged in the `/var/log/zypper.log` log file. --### Automatic agent upgrades --The Azure Connected Machine agent doesn't automatically upgrade itself when a new version is released. You should include the latest version of the agent with your scheduled patch cycles. --## Renaming an Azure Arc-enabled server resource --When you change the name of a Linux or Windows machine connected to Azure Arc-enabled servers, the new name isn't recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you must delete the resource and re-create it in order to use the new name. --For Azure Arc-enabled servers, it's necessary to remove the VM extensions before you rename the machine: --1. Audit the VM extensions installed on the machine and note their configuration using the [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed) or [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed). --2. Remove any VM extensions installed on the machine. You can do this using the [Azure portal](manage-vm-extensions-portal.md#remove-extensions), the [Azure CLI](manage-vm-extensions-cli.md#remove-extensions), or [Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions). --3. Use the **azcmagent** tool with the [Disconnect](azcmagent-disconnect.md) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. You can run this manually while logged on interactively, with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md), or with the service principal you used for onboarding (or with a [new service principal that you create](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale)). -- Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you don't need to remove the agent as part of this process. --4. Re-register the Connected Machine agent with Azure Arc-enabled servers. Run the `azcmagent` tool with the [Connect](azcmagent-connect.md) parameter to complete this step. The agent will default to using the computer's current hostname, but you can choose your own resource name by passing the `--resource-name` parameter to the connect command. --5. 
Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). --## Uninstall the agent --For servers you no longer want to manage with Azure Arc-enabled servers, follow the steps below to remove any VM extensions from the server, disconnect the agent, and uninstall the software from your server. It's important to complete all of these steps to fully remove all related software components from your system. --### Step 1: Remove VM extensions --If you have deployed Azure VM extensions to an Azure Arc-enabled server, you must uninstall the extensions before disconnecting the agent or uninstalling the software. Uninstalling the Azure Connected Machine agent doesn't automatically remove extensions, and these extensions won't be recognized if you reconnect the server to Azure Arc. --For guidance on how to identify and remove any extensions on your Azure Arc-enabled server, see the following resources: --* [Manage VM extensions with the Azure portal](manage-vm-extensions-portal.md#remove-extensions) -* [Manage VM extensions with Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions) -* [Manage VM extensions with Azure CLI](manage-vm-extensions-cli.md#remove-extensions) --### Step 2: Disconnect the server from Azure Arc --Disconnecting the agent deletes the corresponding Azure resource for the server and clears the local state of the agent. To disconnect the agent, run the `azcmagent disconnect` command as an administrator on the server. You'll be prompted to sign in with an Azure account that has permission to delete the resource in your subscription. If the resource has already been deleted in Azure, pass an additional flag to clean up the local state: `azcmagent disconnect --force-local-only`. --### Step 3a: Uninstall the Windows agent --Both of the following methods remove the agent, but they don't remove the *C:\Program Files\AzureConnectedMachineAgent* folder on the machine. --#### Uninstall from Control Panel --Follow these steps to uninstall the Windows agent from the machine: --1. Sign in to the computer with an account that has administrator permissions. --1. In **Control panel**, select **Programs and Features**. --1. In **Programs and Features**, select **Azure Connected Machine Agent**, select **Uninstall**, and then select **Yes**. --You can also delete the Windows agent directly from the agent setup wizard. Run the **AzureConnectedMachineAgent.msi** installer package to do so. --#### Uninstall from the command line --You can uninstall the agent manually from the Command Prompt or by using an automated method (such as a script) by following the example below. First you need to retrieve the product code, which is a GUID that is the principal identifier of the application package, from the operating system. The uninstall is performed by using the Msiexec.exe command line - `msiexec /x {Product Code}`. --1. Open the Registry Editor. --2. Under registry key `HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall`, look for and copy the product code GUID. --3. 
Uninstall the agent using Msiexec, as in the following examples: -- * From the command-line type: -- ```dos - msiexec.exe /x {product code GUID} /qn - ``` -- * You can perform the same steps using PowerShell: -- ```powershell - Get-ChildItem -Path HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall | ` - Get-ItemProperty | ` - Where-Object {$_.DisplayName -eq "Azure Connected Machine Agent"} | ` - ForEach-Object {MsiExec.exe /x "$($_.PsChildName)" /qn} - ``` --### Step 3b: Uninstall the Linux agent --> [!NOTE] -> To uninstall the agent, you must have *root* access permissions or an account that has elevated rights using sudo. --The command used to uninstall the Linux agent depends on the Linux operating system. --* For Ubuntu, run the following command: -- ```bash - sudo apt purge azcmagent - ``` --* For RHEL, CentOS, Oracle Linux, and Amazon Linux, run the following command: -- ```bash - sudo yum remove azcmagent - ``` --* For SLES, run the following command: -- ```bash - sudo zypper remove azcmagent - ``` --## Update or remove proxy settings --To configure the agent to communicate to the service through a proxy server or to remove this configuration after deployment, use one of the methods described below. Note that the agent communicates outbound using the HTTP protocol under this scenario. --As of agent version 1.13, proxy settings can be configured using the `azcmagent config` command or system environment variables. If a proxy server is specified in both the agent configuration and system environment variables, the agent configuration will take precedence and become the effective setting. Use `azcmagent show` to view the effective proxy configuration for the agent. --> [!NOTE] -> Azure Arc-enabled servers doesn't support using proxy servers that require authentication, TLS (HTTPS) connections, or a [Log Analytics gateway](/azure/azure-monitor/agents/gateway) as a proxy for the Connected Machine agent. --### Agent-specific proxy configuration --Agent-specific proxy configuration is available starting with version 1.13 of the Azure Connected Machine agent and is the preferred way of configuring proxy server settings. This approach prevents the proxy settings for the Azure Connected Machine agent from interfering with other applications on your system. --> [!NOTE] -> Extensions deployed by Azure Arc will not inherit the agent-specific proxy configuration. -> Refer to the documentation for the extensions you deploy for guidance on how to configure proxy settings for each extension. --To configure the agent to communicate through a proxy server, run the following command: --```bash -azcmagent config set proxy.url "http://ProxyServerFQDN:port" -``` --You can use an IP address or simple hostname in place of the FQDN if your network requires it. If your proxy server runs on port 80, you may omit ":80" at the end. --To check if a proxy server URL is configured in the agent settings, run the following command: --```bash -azcmagent config get proxy.url -``` --To stop the agent from communicating through a proxy server, run the following command: --```bash -azcmagent config clear proxy.url -``` --You do not need to restart any services when reconfiguring the proxy settings with the `azcmagent config` command. --### Proxy bypass for private endpoints --Starting with agent version 1.15, you can also specify services which should **not** use the specified proxy server. 
This can help with split-network designs and private endpoint scenarios where you want Microsoft Entra ID and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Azure Arc traffic to skip the proxy and communicate with a private IP address on your network. --The proxy bypass feature doesn't require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that shouldn't use the proxy server. The location parameter refers to the Azure region of the Arc Server(s). --Proxy bypass value when set to `ArcData` only bypasses the traffic of the Azure extension for SQL Server and not the Arc agent. --| Proxy bypass value | Affected endpoints | -| | | -| `AAD` | `login.windows.net`</br>`login.microsoftonline.com`</br> `pas.windows.net` | -| `ARM` | `management.azure.com` | -| `Arc` | `his.arc.azure.com`</br>`guestconfiguration.azure.com` | -| `ArcData` <sup>1</sup> | `*.<region>.arcdataservices.com`| --<sup>1</sup> The proxy bypass value `ArcData` is available starting with Azure Connected Machine agent version 1.36 and Azure Extension for SQL Server version 1.1.2504.99. Earlier versions include the SQL Server enabled by Azure Arc endpoints in the "Arc" proxy bypass value. --To send Microsoft Entra ID and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command: --```bash -azcmagent config set proxy.url "http://ProxyServerFQDN:port" -azcmagent config set proxy.bypass "Arc" -``` --To provide a list of services, separate the service names by commas: --```bash -azcmagent config set proxy.bypass "ARM,Arc" -``` --To clear the proxy bypass, run the following command: --```bash -azcmagent config clear proxy.bypass -``` --You can view the effective proxy server and proxy bypass configuration by running `azcmagent show`. --### Windows environment variables --On Windows, the Azure Connected Machine agent will first check the `proxy.url` agent configuration property (starting with agent version 1.13), then the system-wide `HTTPS_PROXY` environment variable to determine which proxy server to use. If both are empty, no proxy server is used, even if the default Windows system-wide proxy setting is configured. --Microsoft recommends using the agent-specific proxy configuration instead of the system environment variable. --To set the proxy server environment variable, run the following commands: --```powershell -# If a proxy server is needed, execute these commands with the proxy URL and port. -[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://ProxyServerFQDN:port", "Machine") -$env:HTTPS_PROXY = [System.Environment]::GetEnvironmentVariable("HTTPS_PROXY", "Machine") -# For the changes to take effect, the agent services need to be restarted after the proxy environment variable is set. -Restart-Service -Name himds, ExtensionService, GCArcService -``` --To configure the agent to stop communicating through a proxy server, run the following commands: --```powershell -[Environment]::SetEnvironmentVariable("HTTPS_PROXY", $null, "Machine") -$env:HTTPS_PROXY = [System.Environment]::GetEnvironmentVariable("HTTPS_PROXY", "Machine") -# For the changes to take effect, the agent services need to be restarted after the proxy environment variable removed. 
-Restart-Service -Name himds, ExtensionService, GCArcService -``` --### Linux environment variables --On Linux, the Azure Connected Machine agent first checks the `proxy.url` agent configuration property (starting with agent version 1.13), and then the `HTTPS_PROXY` environment variable set for the himds, GC_Ext, and GCArcService daemons. There's an included script that will configure systemd's default proxy settings for the Azure Connected Machine agent and all other services on the machine to use a specified proxy server. --To configure the agent to communicate through a proxy server, run the following command: --```bash -sudo /opt/azcmagent/bin/azcmagent_proxy add "http://ProxyServerFQDN:port" -``` --To remove the environment variable, run the following command: --```bash -sudo /opt/azcmagent/bin/azcmagent_proxy remove -``` --### Migrating from environment variables to agent-specific proxy configuration --If you're already using environment variables to configure the proxy server for the Azure Connected Machine agent and want to migrate to the agent-specific proxy configuration based on local agent settings, follow these steps: --1. [Upgrade the Azure Connected Machine agent](#upgrade-the-agent) to the latest version (starting with version 1.13) to use the new proxy configuration settings. --1. Configure the agent with your proxy server information by running `azcmagent config set proxy.url "http://ProxyServerFQDN:port"`. --1. Remove the unused environment variables by following the steps for [Windows](#windows-environment-variables) or [Linux](#linux-environment-variables). --## Alerting for Azure Arc-enabled server disconnection --The Connected Machine agent [sends a regular heartbeat message](overview.md#agent-status) to the service every five minutes. If an Arc-enabled server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it's offline, the network connection has been blocked, or the agent isn't running. Develop a plan for how you'll respond and investigate these incidents, including setting up [Resource Health alerts](/azure/service-health/resource-health-alert-monitor-guide) to get notified when such incidents occur. ---## Next steps --* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md). --* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. --* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
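As an example of the renaming procedure described earlier in this article, the following sketch disconnects the machine and re-registers it under a new resource name using a service principal; every value shown is a placeholder:

```bash
# Run on the machine itself with elevated rights. First, disconnect the existing Azure Arc resource.
azcmagent disconnect \
  --service-principal-id "<appId>" \
  --service-principal-secret "<secret>"

# Then re-register the agent with the resource name you want.
azcmagent connect \
  --service-principal-id "<appId>" \
  --service-principal-secret "<secret>" \
  --tenant-id "<tenantId>" \
  --subscription-id "<subscriptionId>" \
  --resource-group "<resourceGroupName>" \
  --location "<region>" \
  --resource-name "<newResourceName>"
```

Remember to remove the machine's VM extensions before disconnecting and redeploy them afterward, as described in the renaming steps.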
azure-arc | Manage Automatic Vm Extension Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md | - Title: Automatic extension upgrade for Azure Arc-enabled servers -description: Learn how to enable automatic extension upgrades for your Azure Arc-enabled servers. - Previously updated : 09/03/2024---# Automatic extension upgrade for Azure Arc-enabled servers --Automatic extension upgrade is available for Azure Arc-enabled servers that have supported VM extensions installed. Automatic extension upgrades reduce the amount of operational overhead for you by scheduling the installation of new extension versions when they become available. The Azure Connected Machine agent takes care of upgrading the extension (preserving its settings along the way) and automatically rolling back to the previous version if something goes wrong during the upgrade process. --Automatic extension upgrade has the following features: --- You can opt in and out of automatic upgrades at any time. By default, all extensions are opted into automatic extension upgrades.-- Each supported extension is enrolled individually, and you can choose which extensions to upgrade automatically.-- Supported in all Azure Arc regions.--## How does automatic extension upgrade work? --The extension upgrade process replaces the existing Azure VM extension version supported by Azure Arc-enabled servers with a new version of the same extension when published by the extension publisher. This feature is enabled by default for all extensions you deploy to Azure Arc-enabled servers unless you explicitly opt out of automatic upgrades. --### Availability-first updates --The availability-first model for platform orchestrated updates ensures that availability configurations in Azure are respected across multiple availability levels. --For a group of Arc-enabled servers undergoing an update, the Azure platform will orchestrate updates following the model described in [Automatic Extension Upgrade](/azure/virtual-machines/automatic-extension-upgrade#availability-first-updates). However, there are some notable differences between Arc-enabled servers and Azure VMs: --**Across regions:** --- Geo-paired regions aren't applicable.--**Within a region:** --- Availability Zones aren't applicable.-- Machines are batched on a best-effort basis to avoid concurrent updates for all machines registered with Arc-enabled servers in a subscription.--### Automatic rollback and retries --If an extension upgrade fails, Azure will try to repair the extension by performing the following actions: --1. The Azure Connected Machine agent will automatically reinstall the last known good version of the extension to attempt to restore functionality. -1. If the rollback is successful, the extension status will show as **Succeeded** and the extension will be added to the automatic upgrade queue again. The next upgrade attempt can be as soon as the next hour and will continue until the upgrade is successful. -1. If the rollback fails, the extension status will show as **Failed** and the extension will no longer function as intended. You'll need to [remove](manage-vm-extensions-cli.md#remove-extensions) and [reinstall](manage-vm-extensions-cli.md#enable-extension) the extension to restore functionality, as sketched below. 
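If a rollback leaves an extension in the **Failed** state, a minimal Azure CLI sketch of the remove-and-reinstall step looks like the following; every value is a placeholder, and most extensions also need `--settings` or `--protected-settings` values specific to that extension:

```azurecli
# Remove the failed extension from the Arc-enabled machine.
az connectedmachine extension delete \
  --resource-group "<resourceGroupName>" \
  --machine-name "<machineName>" \
  --name "<extensionName>"

# Reinstall it using the publisher and type documented for that extension.
az connectedmachine extension create \
  --resource-group "<resourceGroupName>" \
  --machine-name "<machineName>" \
  --name "<extensionName>" \
  --location "<region>" \
  --publisher "<publisherName>" \
  --type "<extensionType>"
```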
--If you continue to have trouble upgrading an extension, you can [disable automatic extension upgrade](#manage-automatic-extension-upgrade) to prevent the system from trying again while you troubleshoot the issue. You can [enable automatic extension upgrade](#manage-automatic-extension-upgrade) again when you're ready. --### Timing of automatic extension upgrades --When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it might take 5 - 8 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you might see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions). --Extension versions fixing critical security vulnerabilities are rolled out much faster. These automatic upgrades happen using a specialized rollout process which can take 1 - 3 weeks to automatically upgrade every server with that extension. Azure handles identifying which extension versions should be rolled out quickly to ensure all servers are protected. If you need to upgrade the extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions). --## Supported extensions --Automatic extension upgrade supports the following extensions: --- Azure Monitor agent - Linux and Windows-- Dependency agent - Linux and Windows-- Azure Security agent - Linux and Windows-- Key Vault Extension - Linux only-- Azure Update Manager - Linux and Windows-- Azure Automation Hybrid Runbook Worker - Linux and Windows-- Azure extension for SQL Server - Linux and Windows--More extensions will be added over time. Extensions that do not support automatic extension upgrade today are still configured to enable automatic upgrades by default. This setting will have no effect until the extension publisher chooses to support automatic upgrades. --## Manage automatic extension upgrade --Automatic extension upgrade is enabled by default when you install extensions on Azure Arc-enabled servers. To enable automatic upgrades for an existing extension, you can use Azure CLI or Azure PowerShell to set the `enableAutomaticUpgrade` property on the extension to `true`. You'll need to repeat this process for every extension where you'd like to enable or disable automatic upgrades. --### [Azure portal](#tab/azure-portal) --Use the following steps to configure automatic extension upgrades using the Azure portal: --1. Go to the [Azure portal](https://portal.azure.com) and navigate to **Machines - Azure Arc**. -1. Select the applicable server. -1. In the left pane, select the **Extensions** tab to see a list of all extensions installed on the server. - :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-navigation-extensions.png" alt-text="Screenshot of an Azure Arc-enabled server in the Azure portal showing where to navigate to extensions." 
border="true"::: -1. The **Automatic upgrade** column in the table shows whether upgrades are enabled, disabled, or not supported for each extension. Select the checkbox next to the extensions for which you want automatic upgrades enabled, then select **Enable automatic upgrade** to turn on the feature. Select **Disable automatic upgrade** to turn off the feature. --### [Azure CLI](#tab/azure-cli) --To check the status of automatic extension upgrade for all extensions on an Arc-enabled server, run the following command: --```azurecli -az connectedmachine extension list --resource-group resourceGroupName --machine-name machineName --query "[].{Name:name, AutoUpgrade:properties.enableAutoUpgrade}" --output table -``` --Use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command to enable automatic upgrades on an extension: --```azurecli -az connectedmachine extension update \ - --resource-group resourceGroupName \ - --machine-name machineName \ - --name extensionName \ - --enable-auto-upgrade true -``` --To disable automatic upgrades, set the `--enable-auto-upgrade` parameter to `false`, as shown below: --```azurecli -az connectedmachine extension update \ - --resource-group resourceGroupName \ - --machine-name machineName \ - --name extensionName \ - --enable-auto-upgrade false -``` --### [Azure PowerShell](#tab/azure-powershell) --To check the status of automatic extension upgrade for all extensions on an Arc-enabled server, run the following command: --```azurepowershell -Get-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName | Format-Table Name, EnableAutomaticUpgrade -``` --To enable automatic upgrades for an extension using Azure PowerShell, use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet: --```azurepowershell -Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name extensionName -EnableAutomaticUpgrade -``` --To disable automatic upgrades, set `-EnableAutomaticUpgrade:$false` as shown in the example below: --```azurepowershell -Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name extensionName -EnableAutomaticUpgrade:$false -``` --> [!TIP] -> The cmdlets above come from the [Az.ConnectedMachine](/powershell/module/az.connectedmachine) PowerShell module. You can install this PowerShell module with `Install-Module Az.ConnectedMachine` on your computer or in Azure Cloud Shell. ----## Extension upgrades with multiple extensions --A machine managed by Arc-enabled servers can have multiple extensions with automatic extension upgrade enabled. The same machine can also have other extensions without automatic extension upgrade enabled. --If multiple extension upgrades are available for a machine, the upgrades might be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded. --## Check automatic extension upgrade history --You can use the Azure Activity Log to identify extensions that were automatically upgraded. You can find the Activity Log tab on individual Azure Arc-enabled server resources, resource groups, and subscriptions. 
Extension upgrades are identified by the `Upgrade Extensions on Azure Arc machines (Microsoft.HybridCompute/machines/upgradeExtensions/action)` operation. --To view automatic extension upgrade history, search for the **Azure Activity Log** in the Azure portal. Select **Add filter** and choose the Operation filter. For the filter criteria, search for "Upgrade Extensions on Azure Arc machines" and select that option. You can optionally add a second filter for **Event initiated by** and set "Azure Regional Service Manager" as the filter criteria to only see automatic upgrade attempts and exclude upgrades manually initiated by users. ---## Next steps --- You can deploy, manage, and remove VM extensions using the [Azure CLI](manage-vm-extensions-cli.md), [PowerShell](manage-vm-extensions-powershell.md), or [Azure Resource Manager templates](manage-vm-extensions-template.md).--- Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md). |
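To script the Activity Log check described in this article, the following Azure CLI sketch (the resource group name is a placeholder) filters for the automatic extension upgrade operation over the last 30 days:

```azurecli
# List automatic extension upgrade operations recorded in the Activity Log for the last 30 days.
az monitor activity-log list \
  --resource-group "<resourceGroupName>" \
  --offset 30d \
  --query "[?operationName.value=='Microsoft.HybridCompute/machines/upgradeExtensions/action'].{Time:eventTimestamp, Status:status.value, Resource:resourceId}" \
  --output table
```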
azure-arc | Manage Howto Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-howto-migrate.md | - Title: How to migrate Azure Arc-enabled servers across regions -description: Learn how to migrate an Azure Arc-enabled server from one region to another. Previously updated : 3/29/2022----# How to migrate Azure Arc-enabled servers across regions --There are scenarios in which you'll want to move your existing Azure Arc-enabled server from one region to another. For example, you might want to move regions to improve manageability, for governance reasons, or because you realized the machine was originally registered in the wrong region. --To migrate an Azure Arc-enabled server from one Azure region to another, you have to uninstall the VM extensions, delete the resource in Azure, and re-create it in the other region. Before you perform these steps, you should audit the machine to verify which VM extensions are installed. --> [!NOTE] -> While installed extensions continue to run and perform their normal operation after this procedure is complete, you won't be able to manage them. If you attempt to redeploy the extensions on the machine, you may experience unpredictable behavior. --## Move machine to other region --> [!NOTE] -> Performing this operation will result in downtime during the migration. --1. Remove any VM extensions that are installed on the machine. You can do this by using the [Azure portal](manage-vm-extensions-portal.md#remove-extensions), [Azure CLI](manage-vm-extensions-cli.md#remove-extensions), or [Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions). --2. Use the **azcmagent** tool with the [Disconnect](azcmagent-disconnect.md) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. You can run this manually while logged on interactively, with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md), or with the service principal you used for onboarding (or with a [new service principal that you create](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale)). -- Disconnecting the machine from Azure Arc-enabled servers does not remove the Connected Machine agent, and you don't need to remove the agent as part of this process. --3. Run the `azcmagent` tool with the [Connect](azcmagent-connect.md) parameter to re-register the Connected Machine agent with Azure Arc-enabled servers in the other region. --4. Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers. -- If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). --## Next steps --* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md). --* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy) policy, and much more. |
azure-arc | Manage Vm Extensions Ansible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-ansible.md | - Title: Enable VM extension using Red Hat Ansible -description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using Red Hat Ansible Automation. Previously updated : 09/03/2024-----# Enable Azure VM extensions using Red Hat Ansible automation --This article shows you how to deploy VM extensions to Azure Arc-enabled servers at scale using the Red Hat Ansible Automation Platform. The examples in this article rely on content developed and incubated by Red Hat through the [Ansible Content Lab for Cloud Content](https://cloud.lab.ansible.io/). This article also uses the [Azure Infrastructure Configuration Demo](https://github.com/ansible-content-lab/azure.infrastructure_config_demos) collection. This collection contains many roles and playbooks that are pertinent to this article, including the following: --|File or Folder |Description | -||| -|playbook_enable_arc_extension.yml |Playbook that's used as a job template to enable Azure Arc extensions. | -|playbook_disable_arc-extension.yml |Playbook that's used as a job template to disable Azure Arc extensions. | -|roles/arc |Ansible role that contains the reusable automation leveraged by the playbooks. | --> [!NOTE] -> The examples in this article target Linux hosts. -> --## Prerequisites --### Automation controller 2.x --This article is applicable to both self-managed Ansible Automation Platform and Red Hat Ansible Automation Platform on Microsoft Azure. --### Automation execution environment --To use the examples in this article, you'll need an automation execution environment with both the Azure Collection and the Azure CLI installed, since both are required to run the automation. --If you don't have an automation execution environment that meets these requirements, you can [use this example](https://github.com/scottharwell/cloud-ee). --See the [Red Hat Ansible documentation](https://docs.ansible.com/automation-controller/latest/html/userguide/execution_environments.html) for more information about building and configuring automation execution environments. --### Azure Resource Manager credential --A working account credential configured in Ansible Automation Platform for the Azure Resource Manager is required. This credential is used by Ansible Automation Platform to authenticate operations using the Azure Collection and the Azure CLI. --## Configuring the content --To use the [Azure Infrastructure Configuration Demo collection](https://github.com/ansible-content-lab/azure.infrastructure_config_demos) in Automation Controller, follow the steps below to set up a project with the repository: --1. Log in to automation controller. -1. In the left menu, select **Projects**. -1. Select **Add**, and then complete the fields of the form as follows: -- **Name:** Content Lab - Azure Infrastructure Configuration Collection -- **Automation Environment:** (select with the Azure Collection and CLI instead) -- **Source Control Type:** Git -- **Source Control URL:** https://github.com/ansible-content-lab/azure.infrastructure_config_demos.git --1. Select **Save**. - :::image type="content" source="media/migrate-ama/configure-content.png" alt-text="Screenshot of Projects window to edit details." 
lightbox="media/migrate-ama/configure-content.png"::: --Once saved, the project should be synchronized with the automation controller. --## Create job templates --The project you created from the Azure Infrastructure Configuration Demo collection contains example playbooks that implement the reusable content implemented in roles. You can learn more about the individual roles in the collection by viewing the [README file](https://github.com/ansible-content-lab/azure.infrastructure_config_demos/blob/main/README.md) included with the collection. Within the collection, the following mapping has been performed to make it easy to identify which extension you want to enable. --|Extension |Extension Variable Name | -||| -|Microsoft Defender for Cloud integrated vulnerability scanner |microsoft_defender | -|Custom Script extension |custom_script | -|Azure Monitor for VMs (insights) |azure_monitor_for-vms | -|Azure Key Vault Certificate Sync |azure_key_vault | -|Azure Monitor Agent |azure_monitor_agent | -|Azure Automation Hybrid Runbook Worker extension |azure_hybrid_rubook | --You'll need to create templates in order to enable and disable Arc-enabled server VM extensions (explained below). --> [!NOTE] -> There are additional VM extensions not included in this collection, outlined in [Virtual machine extension management with Azure Arc-enabled servers](manage-vm-extensions.md#extensions). -> --### Enable Azure Arc VM extensions --This template is responsible for enabling an Azure Arc-enabled server VM extension on the hosts you identify. --> [!IMPORTANT] -> Arc only supports enabling or disabling a single extension at a time, so this process can take some time. If you attempt to enable or disable another VM extension with this template prior to Azure completing this process, the template reports an error. -> -> Once the job template has run, it may take minutes to hours for each machine to report that the extension is operational. Once the extension is operational, then this job template can be run again with another extension and will not report an error. --Follow the steps below to create the template: --1. On the right menu, select **Templates**. -1. Select **Add**. -1. Select **Add job template**, then complete the fields of the form as follows: -- **Name:** Content Lab - Enable Arc Extension -- **Job Type:** Run -- **Inventory:** localhost -- **Project:** Content Lab - Azure Infrastructure Configuration Collection -- **Playbook:** `playbook_enable_arc-extension.yml` -- **Credentials:** - - Your Azure Resource Manager credential - - **Variables:** -- ```bash - - resource_group: <your_resource_group> - region: <your_region> - arc_hosts: - <first_arc_host> - <second_arc_host> - extension: microsoft_defender - ``` - - > [!NOTE] - > Change the `resource group` and `arc_hosts` to match the names of your Azure resources. If you have a large number of Arc hosts, use Jinja2 formatting to extract the list from your inventory sources. --1. Check the **Prompt on launch** box for Variables so you can change the extension at run time. -1. Select **Save**. --### Disable Azure Arc VM extensions --This template is responsible for disabling an Azure Arc-enabled server VM extension on the hosts you identify. Follow the steps below to create the template: --1. On the right menu, select **Templates**. -1. Select **Add**. -1. 
Select **Add job template**, then complete the fields of the form as follows: -- **Name:** Content Lab - Disable Arc Extension -- **Job Type:** Run -- **Inventory:** localhost -- **Project:** Content Lab - Azure Infrastructure Configuration Collection -- **Playbook:** `playbook_disable_arc-extension.yml` -- **Credentials:** - - Your Azure Resource Manager credential - - **Variables:** - - ```bash - - resource_group: <your_resource_group> - region: <your_region> - arc_hosts: - <first_arc_host> - <second_arc_host> - extension: microsoft_defender - ``` - - > [!NOTE] - > Change the `resource group` and `arc_hosts` to match the names of your Azure resources. If you have a large number of Arc hosts, use Jinja2 formatting to extract the list from your inventory sources. --1. Check the **Prompt on launch** box for Variables so you can change the extension at run time. -1. Select **Save**. --### Run the automation --Now that you have the job templates created, you can enable or disable Arc extensions by simply changing the name of the `extension` variable. Azure Arc extensions are mapped in the "arc" role in [this file](https://github.com/ansible-content-lab/azure.infrastructure_config_demos/blob/main/roles/arc/defaults/main.yml). --When you click the “launch” 🚀 icon, the template will ask you to confirm that the variables are accurate. For example, to enable the Microsoft Defender extension, ensure that the extension variable is set to `microsoft_defender`. Then, click **Next** and then **Launch** to run the template: ----If no errors are reported, the extension will be enabled and active on the applicable servers after a short period of time. You can then proceed to enable (or disable) other extensions by changing the extension variable in the template. --## Next steps --* You can deploy, manage, and remove VM extensions using the [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or the [Azure CLI](manage-vm-extensions-cli.md). --* Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md). |
azure-arc | Manage Vm Extensions Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-cli.md | - Title: Enable VM extension using Azure CLI -description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using the Azure CLI. Previously updated : 03/30/2022-----# Enable Azure VM extensions using the Azure CLI --This article shows you how to deploy, upgrade, update, and uninstall VM extensions, supported by Azure Arc-enabled servers, to a Linux or Windows hybrid machine using the Azure CLI. --> [!NOTE] -> Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](/azure/virtual-machines/extensions/overview) article. ---## Install the Azure CLI extension --The ConnectedMachine commands aren't shipped as part of the Azure CLI. Before using the Azure CLI to connect to Azure and manage VM extensions on your hybrid server managed by Azure Arc-enabled servers, you need to load the ConnectedMachine extension. These management operations can be performed from your workstation, you don't need to run them on the Azure Arc-enabled server. --Run the following command to get it: --```azurecli -az extension add --name connectedmachine -``` --## Enable extension --To enable a VM extension on your Azure Arc-enabled server, use [az connectedmachine extension create](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-create) with the `--machine-name`, `--extension-name`, `--location`, `--type`, `settings`, and `--publisher` parameters. --The following example enables the Log Analytics VM extension on an Azure Arc-enabled server: --```azurecli -az connectedmachine extension create --machine-name "myMachineName" --name "OmsAgentForLinux or MicrosoftMonitoringAgent" --location "regionName" --settings '{\"workspaceId\":\"myWorkspaceId\"}' --protected-settings '{\"workspaceKey\":\"myWorkspaceKey\"}' --resource-group "myResourceGroup" --type-handler-version "1.13" --type "OmsAgentForLinux or MicrosoftMonitoringAgent" --publisher "Microsoft.EnterpriseCloud.Monitoring" -``` --The following example enables the Custom Script Extension on an Azure Arc-enabled server: --```azurecli -az connectedmachine extension create --machine-name "myMachineName" --name "CustomScriptExtension" --location "regionName" --type "CustomScriptExtension" --publisher "Microsoft.Compute" --settings "{\"commandToExecute\":\"powershell.exe -c \\\"Get-Process | Where-Object { $_.CPU -gt 10000 }\\\"\"}" --type-handler-version "1.10" --resource-group "myResourceGroup" -``` --The following example enables the Key Vault VM extension on an Azure Arc-enabled server: --```azurecli -az connectedmachine extension create --resource-group "resourceGroupName" --machine-name "myMachineName" --location "regionName" --publisher "Microsoft.Azure.KeyVault" --type "KeyVaultForLinux or KeyVaultForWindows" --name "KeyVaultForLinux or KeyVaultForWindows" --settings '{"secretsManagementSettings": { "pollingIntervalInS": "60", "observedCertificates": ["observedCert1"] }, "authenticationSettings": { "msiEndpoint": "http://localhost:40342/metadata/identity" }}' -``` --The following example enables the Microsoft Antimalware extension on an Azure Arc-enabled Windows server: --```azurecli -az connectedmachine extension create --resource-group "resourceGroupName" --machine-name "myMachineName" --location "regionName" 
--publisher "Microsoft.Azure.Security" --type "IaaSAntimalware" --name "IaaSAntimalware" --settings '"{\"AntimalwareEnabled\": \"true\"}"' -``` --The following example enables the Datadog extension on an Azure Arc-enabled Windows server: --```azurecli -az connectedmachine extension create --resource-group "resourceGroupName" --machine-name "myMachineName" --location "regionName" --publisher "Datadog.Agent" --type "DatadogWindowsAgent" --settings '{"site": "us3.datadoghq.com"}' --protected-settings '{"api_key": "YourDatadogAPIKey" }' -``` --## List extensions installed --To get a list of the VM extensions on your Azure Arc-enabled server, use [az connectedmachine extension list](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-list) with the `--machine-name` and `--resource-group` parameters. --Example: --```azurecli -az connectedmachine extension list --machine-name "myMachineName" --resource-group "myResourceGroup" -``` --By default, the output of Azure CLI commands is in JSON (JavaScript Object Notation). To change the default output to a list or table, for example, use [az config set core.output=table](/cli/azure/reference-index). You can also add `--output` to any command for a one time change in output format. --The following example shows the partial JSON output from the `az connectedmachine extension -list` command: --```json -[ - { - "autoUpgradingMinorVersion": "false", - "forceUpdateTag": null, - "id": "/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/SVR01/extensions/DependencyAgentWindows", - "location": "regionName", - "name": "DependencyAgentWindows", - "namePropertiesInstanceViewName": "DependencyAgentWindows", -``` --## Update extension configuration --Some VM extensions require configuration settings in order to install them on the Arc-enabled server, like the Custom Script Extension and the Log Analytics agent VM extension. To upgrade the configuration of an extension, use [az connectedmachine extension update](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-update). --The following example shows how to configure the Custom Script Extension: --```azurecli -az connectedmachine extension update --name "CustomScriptExtension" --type "CustomScriptExtension" --publisher "Microsoft.HybridCompute" --settings "{\"commandToExecute\":\"powershell.exe -c \\\"Get-Process | Where-Object { $_.CPU -lt 100 }\\\"\"}" --type-handler-version "1.10" --machine-name "myMachine" --resource-group "myResourceGroup" -``` --## Upgrade extensions --When a new version of a supported VM extension is released, you can upgrade it to that latest release. To upgrade a VM extension, use [az connectedmachine upgrade-extension](/cli/azure/connectedmachine) with the `--machine-name`, `--resource-group`, and `--extension-targets` parameters. --For the `--extension-targets` parameter, you need to specify the extension and the latest version available. To find out what the latest version available is, you can get this information from the **Extensions** page for the selected Arc-enabled server in the Azure portal, or by running [az vm extension image list](/cli/azure/vm/extension/image#az-vm-extension-image-list). You may specify multiple extensions in a single upgrade request by providing a comma-separated list of extensions, defined by their publisher and type (separated by a period) and the target version for each extension, as shown in the example below. 
--To upgrade the Log Analytics agent extension for Windows that has a newer version available, run the following command: --```azurecli -az connectedmachine upgrade-extension --machine-name "myMachineName" --resource-group "myResourceGroup" --extension-targets '{"Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent":{"targetVersion":"1.0.18053.0"}}' -``` --You can review the version of installed VM extensions at any time by running the command [az connectedmachine extension list](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-list). The `typeHandlerVersion` property value represents the version of the extension. --## Remove extensions --To remove an installed VM extension on your Azure Arc-enabled server, use [az connectedmachine extension delete](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-delete) with the `--extension-name`, `--machine-name`, and `--resource-group` parameters. --For example, to remove the Log Analytics VM extension for Linux, run the following command: --```azurecli -az connectedmachine extension delete --machine-name "myMachineName" --name "OmsAgentForLinux" --resource-group "myResourceGroup" -``` --## Next steps --- You can deploy, manage, and remove VM extensions using the [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or [Azure Resource Manager templates](manage-vm-extensions-template.md).--- Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md).--- Review the Azure CLI VM extension [Overview](/cli/azure/connectedmachine/extension) article for more information about the commands. |
azure-arc | Manage Vm Extensions Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-portal.md | - Title: Enable VM extension from the Azure portal -description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments from the Azure portal. Previously updated : 10/15/2021----# Enable Azure VM extensions from the Azure portal --This article shows you how to deploy, update, and uninstall Azure VM extensions supported by Azure Arc enabled servers, on a Linux or Windows hybrid machine using the Azure portal. --> [!NOTE] -> The Key Vault VM extension does not support deployment from the Azure portal, only using the Azure CLI, the Azure PowerShell, or using an Azure Resource Manager template. --> [!NOTE] -> Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](/azure/virtual-machines/extensions/overview) article. --## Enable extensions --VM extensions can be applied to your Azure Arc-enabled server-managed machine via the Azure portal. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. In the portal, browse to **Machines - Azure Arc** and select your machine from the list. --3. Choose **Extensions**, then select **Add**. --4. Choose the extension you want from the list of available extensions and follow the instructions in the wizard. In this example, we will deploy the Log Analytics VM extension. -- ![Install Log Analytics VM extension](./media/manage-vm-extensions/mma-extension-config.png) -- To complete the installation, you are required to provide the workspace ID and primary key. If you are not familiar with how to find this information, see [obtain workspace ID and key](/azure/azure-monitor/agents/agent-windows#workspace-id-and-key). --5. After confirming the required information provided, select **Review + Create**. A summary of the deployment is displayed and you can review the status of the deployment. -->[!NOTE] ->While multiple extensions can be batched together and processed, they are installed serially. Once the first extension installation is complete, installation of the next extension is attempted. --## List extensions installed --You can get a list of the VM extensions on your Azure Arc-enabled server from the Azure portal. Perform the following steps to see them. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. In the portal, browse to **Machines - Azure Arc** and select your machine from the list. --3. Choose **Extensions**, and the list of installed extensions is returned. -- :::image type="content" source="media/manage-vm-extensions/list-vm-extensions.png" alt-text="List VM extension deployed to selected machine." border="true"::: --## Upgrade extensions --When a new version of a supported extension is released, you can upgrade the extension to that latest release. Azure Arc-enabled servers presents a banner in the Azure portal when you navigate to Azure Arc-enabled servers, informing you there are upgrades available for one or more extensions installed on a machine. When you view the list of installed extensions for a selected Azure Arc-enabled server, you'll notice a column labeled **Update available**. If a newer version of an extension is released, the **Update available** value for that extension shows a value of **Yes**. 
-->[!NOTE] ->While the word **Update** is used in the Azure portal for this experience currently, it does not accurately represent the behavior of the operation. Extensions are upgraded by installing a newer version of the extension currently installed on the machine or server. --Upgrading an extension to the newest version does not affect the configuration of that extension. You are not required to respecify configuration information for any extension you upgrade. ---You can upgrade one, or select multiple extensions eligible for an upgrade from the Azure portal by performing the following steps. --> [!NOTE] -> Currently you can only upgrade extensions from the Azure portal. Performing this operation from the Azure CLI or using an Azure Resource Manager template is not supported at this time. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. In the portal, browse to **Machines - Azure Arc** and select your hybrid machine from the list. --3. Choose **Extensions**, and review the status of extensions under the **Update available** column. --You can upgrade one extension by one of three ways: --* By selecting an extension from the list of installed extensions, and under the properties of the extension, select the **Update** option. -- :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-from-extension.png" alt-text="Upgrade extension from selected extension." border="true"::: --* By selecting the extension from the list of installed extensions, and select the **Update** option from the top of the page. --* By selecting one or more extensions that are eligible for an upgrade from the list of installed extensions, and then select the **Update** option. -- :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-selected.png" alt-text="Update selected extension." border="true"::: --## Remove extensions --You can remove one or more extensions from an Azure Arc-enabled server from the Azure portal. Perform the following steps to remove an extension. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. In the portal, browse to **Machines - Azure Arc** and select your hybrid machine from the list. --3. Choose **Extensions**, and then select an extension from the list of installed extensions. --4. Select **Uninstall** and when prompted to verify, select **Yes** to proceed. --## Next steps --- You can deploy, manage, and remove VM extensions using the [Azure CLI](manage-vm-extensions-cli.md), [PowerShell](manage-vm-extensions-powershell.md), or [Azure Resource Manager templates](manage-vm-extensions-template.md).--- Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md). |
azure-arc | Manage Vm Extensions Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-powershell.md | - Title: Enable VM extension using Azure PowerShell -description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using Azure PowerShell. Previously updated : 03/30/2022-----# Enable Azure VM extensions using Azure PowerShell --This article shows you how to deploy, update, and uninstall Azure VM extensions, supported by Azure Arc-enabled servers, to a Linux or Windows hybrid machine using Azure PowerShell. --> [!NOTE] -> Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](/azure/virtual-machines/extensions/overview) article. --## Prerequisites --- A computer with Azure PowerShell. For instructions, see [Install and configure Azure PowerShell](/powershell/azure/).--Before using Azure PowerShell to manage VM extensions on your hybrid server managed by Azure Arc-enabled servers, you need to install the `Az.ConnectedMachine` module. These management operations can be performed from your workstation; you don't need to run them on the Azure Arc-enabled server. --Run the following command to install the module: --`Install-Module -Name Az.ConnectedMachine` --When the installation completes, the following message is returned: --`The installed extension 'Az.ConnectedMachine' is experimental and not covered by customer support. Please use with discretion.` --## Enable extension --To enable a VM extension on your Azure Arc-enabled server, use [New-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/new-azconnectedmachineextension) with the `-Name`, `-ResourceGroupName`, `-MachineName`, `-Location`, `-Publisher`, `-ExtensionType`, and `-Settings` parameters. --The following example enables the Log Analytics VM extension on an Azure Arc-enabled Linux server: --```powershell -$Setting = @{ "workspaceId" = "workspaceId" } -$protectedSetting = @{ "workspaceKey" = "workspaceKey" } -New-AzConnectedMachineExtension -Name OMSLinuxAgent -ResourceGroupName "myResourceGroup" -MachineName "myMachineName" -Location "regionName" -Publisher "Microsoft.EnterpriseCloud.Monitoring" -Settings $Setting -ProtectedSetting $protectedSetting -ExtensionType "OmsAgentForLinux" -``` --To enable the Log Analytics VM extension on an Azure Arc-enabled Windows server, change the value for the `-ExtensionType` parameter to `"MicrosoftMonitoringAgent"` in the previous example.
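For reference, here is the same call with that substitution made (a minimal sketch; the extension name and all other values are the same placeholders used in the Linux example):

```powershell
# Same cmdlet as above, targeting the Windows Log Analytics (MMA) extension.
$Setting = @{ "workspaceId" = "workspaceId" }
$protectedSetting = @{ "workspaceKey" = "workspaceKey" }
New-AzConnectedMachineExtension -Name MicrosoftMonitoringAgent -ResourceGroupName "myResourceGroup" -MachineName "myMachineName" -Location "regionName" -Publisher "Microsoft.EnterpriseCloud.Monitoring" -Settings $Setting -ProtectedSetting $protectedSetting -ExtensionType "MicrosoftMonitoringAgent"
```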
--The following example enables the Custom Script Extension on an Azure Arc-enabled server: --```powershell -$Setting = @{ "commandToExecute" = "powershell.exe -c Get-Process" } -New-AzConnectedMachineExtension -Name "custom" -ResourceGroupName "myResourceGroup" -MachineName "myMachineName" -Location "regionName" -Publisher "Microsoft.Compute" -Settings $Setting -ExtensionType CustomScriptExtension -``` --The following example enables the Microsoft Antimalware extension on an Azure Arc-enabled Windows server: --```powershell -$Setting = @{ "AntimalwareEnabled" = $true } -New-AzConnectedMachineExtension -Name "IaaSAntimalware" -ResourceGroupName "myResourceGroup" -MachineName "myMachineName" -Location "regionName" -Publisher "Microsoft.Azure.Security" -Settings $Setting -ExtensionType "IaaSAntimalware" -``` --### Key Vault VM extension --> [!WARNING] -> PowerShell clients often add `\` to `"` in the settings.json which will cause akvvm_service fails with error: `[CertificateManagementConfiguration] Failed to parse the configuration settings with:not an object.` --The following example enables the Key Vault VM extension on an Azure Arc-enabled server: --```powershell -# Build settings - $settings = @{ - secretsManagementSettings = @{ - observedCertificates = @( - "observedCert1" - ) - certificateStoreLocation = "myMachineName" # For Linux use "/var/lib/waagent/Microsoft.Azure.KeyVault.Store/" - certificateStore = "myCertificateStoreName" - pollingIntervalInS = "pollingInterval" - } - authenticationSettings = @{ - msiEndpoint = "http://localhost:40342/metadata/identity" - } - } -- $resourceGroup = "resourceGroupName" - $machineName = "myMachineName" - $location = "regionName" -- # Start the deployment - New-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -Location $location -MachineName $machineName -Name "KeyVaultForWindows or KeyVaultforLinux" -Publisher "Microsoft.Azure.KeyVault" -ExtensionType "KeyVaultforWindows or KeyVaultforLinux" -Setting $settings -``` --### Datadog VM extension --The following example enables the Datadog VM extension on an Azure Arc-enabled server: --```azurepowershell -$resourceGroup = "resourceGroupName" -$machineName = "machineName" -$location = "machineRegion" -$osType = "Windows" # change to Linux if appropriate -$settings = @{ - # change to your preferred Datadog site - site = "us3.datadoghq.com" -} -$protectedSettings = @{ - # change to your Datadog API key - api_key = "APIKEY" -} --New-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -Location $location -MachineName $machineName -Name "Datadog$($osType)Agent" -Publisher "Datadog.Agent" -ExtensionType "Datadog$($osType)Agent" -Setting $settings -ProtectedSetting $protectedSettings -``` --## List extensions installed --To get a list of the VM extensions on your Azure Arc-enabled server, use [Get-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/get-azconnectedmachineextension) with the `-MachineName` and `-ResourceGroupName` parameters. --Example: --```powershell -Get-AzConnectedMachineExtension -ResourceGroupName myResourceGroup -MachineName myMachineName --Name Location PropertiesType ProvisioningState -- -- -- ---custom westus2 CustomScriptExtension Succeeded -``` --## Update extension configuration --To reconfigure an installed extension, you can use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet with the `-Name`, `-MachineName`, `-ResourceGroupName`, and `-Settings` parameters. 
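For example, a minimal sketch of changing the command that the Custom Script Extension runs (the extension name and command are placeholders matching the earlier enable example):

```powershell
# Reconfigure an installed extension by passing a new settings hashtable.
$Setting = @{ "commandToExecute" = "powershell.exe -c Get-Service" }
Update-AzConnectedMachineExtension -Name "custom" -ResourceGroupName "myResourceGroup" -MachineName "myMachineName" -Settings $Setting
```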
--Refer to the reference article for the cmdlet to understand the different methods to provide the changes you want to the extension. --## Upgrade extension --When a new version of a supported VM extension is released, you can upgrade it to that latest release. To upgrade a VM extension, use [Update-AzConnectedExtension](/powershell/module/az.connectedmachine/update-azconnectedextension) with the `-MachineName`, `-ResourceGroupName`, and `-ExtensionTarget` parameters. --For the `-ExtensionTarget` parameter, you need to specify the extension and the latest version available. To find out what the latest version available is, you can get this information from the **Extensions** page for the selected Arc-enabled server in the Azure portal, or by running [Get-AzVMExtensionImage](/powershell/module/az.compute/get-azvmextensionimage). You may specify multiple extensions in a single upgrade request by providing a comma-separated list of extensions, defined by their publisher and type (separated by a period) and the target version for each extension, as shown in the example below. --To upgrade the Log Analytics agent extension for Windows that has a newer version available, run the following command: --```powershell -Update-AzConnectedExtension -MachineName "myMachineName" -ResourceGroupName "myResourceGroup" -ExtensionTarget '{\"Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\":{\"targetVersion\":\"1.0.18053.0\"}}' -``` --You can review the version of installed VM extensions at any time by running the command [Get-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/get-azconnectedmachineextension). The `TypeHandlerVersion` property value represents the version of the extension. --## Remove extensions --To remove an installed VM extension on your Azure Arc-enabled server, use [Remove-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/remove-azconnectedmachineextension) with the `-Name`, `-MachineName` and `-ResourceGroupName` parameters. --For example, to remove the Log Analytics VM extension for Linux, run the following command: --```powershell -Remove-AzConnectedMachineExtension -MachineName myMachineName -ResourceGroupName myResourceGroup -Name OmsAgentforLinux -``` --## Next steps --- You can deploy, manage, and remove VM extensions using the [Azure CLI](manage-vm-extensions-cli.md), from the [Azure portal](manage-vm-extensions-portal.md), or [Azure Resource Manager templates](manage-vm-extensions-template.md).--- Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md). |
azure-arc | Manage Vm Extensions Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-template.md | - Title: Enable VM extension using Azure Resource Manager template -description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using an Azure Resource Manager template. Previously updated : 06/02/2022-----# Enable Azure VM extensions by using ARM template --This article shows you how to use an Azure Resource Manager template (ARM template) to deploy Azure VM extensions, supported by Azure Arc-enabled servers. --VM extensions can be added to an Azure Resource Manager template and executed with the deployment of the template. With the VM extensions supported by Azure Arc-enabled servers, you can deploy the supported VM extension on Linux or Windows machines using Azure PowerShell. Each sample below includes a template file and a parameters file with sample values to provide to the template. -->[!NOTE] ->While multiple extensions can be batched together and processed, they are installed serially. Once the first extension installation is complete, installation of the next extension is attempted. --> [!NOTE] -> Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](/azure/virtual-machines/extensions/overview) article. --## Deploy the Log Analytics VM extension --To easily deploy the Log Analytics agent, the following sample is provided to install the agent on either Windows or Linux. --### Template file for Linux --```json -{ - "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string" - }, - "location": { - "type": "string" - }, - "workspaceId": { - "type": "string" - }, - "workspaceKey": { - "type": "string" - } - }, - "resources": [ - { - "name": "[concat(parameters('vmName'),'/OMSAgentForLinux')]", - "type": "Microsoft.HybridCompute/machines/extensions", - "location": "[parameters('location')]", - "apiVersion": "2022-03-10", - "properties": { - "publisher": "Microsoft.EnterpriseCloud.Monitoring", - "type": "OmsAgentForLinux", - "enableAutomaticUpgrade": true, - "settings": { - "workspaceId": "[parameters('workspaceId')]" - }, - "protectedSettings": { - "workspaceKey": "[parameters('workspaceKey')]" - } - } - } - ] -} -``` --### Template file for Windows --```json -{ - "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string" - }, - "location": { - "type": "string" - }, - "workspaceId": { - "type": "string" - }, - "workspaceKey": { - "type": "string" - } - }, - "resources": [ - { - "name": "[concat(parameters('vmName'),'/MicrosoftMonitoringAgent')]", - "type": "Microsoft.HybridCompute/machines/extensions", - "location": "[parameters('location')]", - "apiVersion": "2022-03-10", - "properties": { - "publisher": "Microsoft.EnterpriseCloud.Monitoring", - "type": "MicrosoftMonitoringAgent", - "autoUpgradeMinorVersion": true, - "enableAutomaticUpgrade": true, - "settings": { - "workspaceId": "[parameters('workspaceId')]" - }, - "protectedSettings": { - "workspaceKey": "[parameters('workspaceKey')]" - } - } - } - ] -} -``` --### Parameter file --```json -{ - "$schema": 
"https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "value": "<vmName>" - }, - "location": { - "value": "<region>" - }, - "workspaceId": { - "value": "<MyWorkspaceID>" - }, - "workspaceKey": { - "value": "<MyWorkspaceKey>" - } - } -} -``` --Save the template and parameter files to disk, and edit the parameter file with the appropriate values for your deployment. You can then install the extension on all the connected machines within a resource group with the following command. The command uses the *TemplateFile* parameter to specify the template and the *TemplateParameterFile* parameter to specify a file that contains parameters and parameter values. --```powershell -New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "D:\Azure\Templates\LogAnalyticsAgent.json" -TemplateParameterFile "D:\Azure\Templates\LogAnalyticsAgentParms.json" -``` --## Deploy the Custom Script extension --To use the Custom Script extension, the following sample is provided to run on Windows and Linux. If you are unfamiliar with the Custom Script extension, see [Custom Script extension for Windows](/azure/virtual-machines/extensions/custom-script-windows) or [Custom Script extension for Linux](/azure/virtual-machines/extensions/custom-script-linux). There are a couple of differing characteristics that you should understand when using this extension with hybrid machines: --* The list of supported operating systems with the Azure VM Custom Script extension is not applicable to Azure Arc-enabled servers. The list of supported OSs for Azure Arc-enabled servers can be found [here](prerequisites.md#supported-operating-systems). --* Configuration details regarding Azure Virtual Machine Scale Sets or Classic VMs are not applicable. --* If your machines need to download a script externally and can only communicate through a proxy server, you need to [configure the Connected Machine agent](manage-agent.md#update-or-remove-proxy-settings) to set the proxy server environmental variable. --The Custom Script extension configuration specifies things like script location and the command to be run. This configuration is specified in an Azure Resource Manager template, provided below for both Linux and Windows hybrid machines. 
--### Template file for Linux --```json -{ - "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string" - }, - "location": { - "type": "string" - }, - "fileUris": { - "type": "array" - }, - "commandToExecute": { - "type": "securestring" - } - }, - "resources": [ - { - "name": "[concat(parameters('vmName'),'/CustomScript')]", - "type": "Microsoft.HybridCompute/machines/extensions", - "location": "[parameters('location')]", - "apiVersion": "2022-03-10", - "properties": { - "publisher": "Microsoft.Azure.Extensions", - "type": "CustomScript", - "autoUpgradeMinorVersion": true, - "settings": {}, - "protectedSettings": { - "commandToExecute": "[parameters('commandToExecute')]", - "fileUris": "[parameters('fileUris')]" - } - } - } - ] -} -``` --### Template file for Windows --```json -{ - "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string" - }, - "location": { - "type": "string" - }, - "fileUris": { - "type": "string" - }, - "arguments": { - "type": "securestring", - "defaultValue": " " - } - }, - "variables": { - "UriFileNamePieces": "[split(parameters('fileUris'), '/')]", - "firstFileNameString": "[variables('UriFileNamePieces')[sub(length(variables('UriFileNamePieces')), 1)]]", - "firstFileNameBreakString": "[split(variables('firstFileNameString'), '?')]", - "firstFileName": "[variables('firstFileNameBreakString')[0]]" - }, - "resources": [ - { - "name": "[concat(parameters('vmName'),'/CustomScriptExtension')]", - "type": "Microsoft.HybridCompute/machines/extensions", - "location": "[parameters('location')]", - "apiVersion": "2022-03-10", - "properties": { - "publisher": "Microsoft.Compute", - "type": "CustomScriptExtension", - "autoUpgradeMinorVersion": true, - "settings": { - "fileUris": "[split(parameters('fileUris'), ' ')]" - }, - "protectedSettings": { - "commandToExecute": "[concat ('powershell -ExecutionPolicy Unrestricted -File ', variables('firstFileName'), ' ', parameters('arguments'))]" - } - } - } - ] -} -``` --### Parameter file --```json -{ - "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#", - "handler": "Microsoft.Azure.CreateUIDef", - "version": "0.1.2-preview", - "parameters": { - "basics": [ - {} - ], - "steps": [ - { - "name": "customScriptExt", - "label": "Add Custom Script Extension", - "elements": [ - { - "name": "fileUris", - "type": "Microsoft.Common.FileUpload", - "label": "Script files", - "toolTip": "The script files that will be downloaded to the virtual machine.", - "constraints": { - "required": false - }, - "options": { - "multiple": true, - "uploadMode": "url" - }, - "visible": true - }, - { - "name": "commandToExecute", - "type": "Microsoft.Common.TextBox", - "label": "Command", - "defaultValue": "sh script.sh", - "toolTip": "The command to execute, for example: sh script.sh", - "constraints": { - "required": true - }, - "visible": true - } - ] - } - ], - "outputs": { - "vmName": "[vmName()]", - "location": "[location()]", - "fileUris": "[steps('customScriptExt').fileUris]", - "commandToExecute": "[steps('customScriptExt').commandToExecute]" - } - } -} -``` --## Deploy the Dependency agent extension --To use the Azure Monitor Dependency agent extension, the following sample is provided to run on Windows and Linux. 
If you are unfamiliar with the Dependency agent, see [Overview of Azure Monitor agents](/azure/azure-monitor/vm/vminsights-dependency-agent-maintenance). --### Template file for Linux --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string", - "metadata": { - "description": "The name of existing Linux machine." - } - } - }, - "resources": [ - { - "type": "Microsoft.HybridCompute/machines/extensions", - "name": "[concat(parameters('vmName'),'/DAExtension')]", - "apiVersion": "2022-03-10", - "location": "[resourceGroup().location]", - "dependsOn": [ - ], - "properties": { - "publisher": "Microsoft.Azure.Monitoring.DependencyAgent", - "type": "DependencyAgentLinux", - "enableAutomaticUpgrade": true - } - } - ], - "outputs": { - } -} -``` --### Template file for Windows --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string", - "metadata": { - "description": "The name of existing Windows machine." - } - } - }, - "resources": [ - { - "type": "Microsoft.HybridCompute/machines/extensions", - "name": "[concat(parameters('vmName'),'/DAExtension')]", - "apiVersion": "2022-03-10", - "location": "[resourceGroup().location]", - "dependsOn": [ - ], - "properties": { - "publisher": "Microsoft.Azure.Monitoring.DependencyAgent", - "type": "DependencyAgentWindows", - "enableAutomaticUpgrade": true - } - } - ], - "outputs": { - } -} -``` --### Template deployment --Save the template file to disk. You can then deploy the extension to the connected machine with the following command. --```powershell -New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "D:\Azure\Templates\DependencyAgent.json" -``` --## Deploy Azure Key Vault VM extension (preview) --The following JSON shows the schema for the Key Vault VM extension (preview). The extension does not require protected settings - all its settings are considered public information. The extension requires a list of monitored certificates, polling frequency, and the destination certificate store. Specifically: --### Template file for Linux --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string" - }, - "location": { - "type": "string" - }, - "autoUpgradeMinorVersion":{ - "type": "bool" - }, - "pollingIntervalInS":{ - "type": "int" - }, - "certificateStoreName":{ - "type": "string" - }, - "certificateStoreLocation":{ - "type": "string" - }, - "observedCertificates":{ - "type": "string" - }, - "msiEndpoint":{ - "type": "string" - }, - "msiClientId":{ - "type": "string" - } -}, -"resources": [ - { - "type": "Microsoft.HybridCompute/machines/extensions", - "name": "[concat(parameters('vmName'),'/KVVMExtensionForLinux')]", - "apiVersion": "2022-03-10", - "location": "[parameters('location')]", - "properties": { - "publisher": "Microsoft.Azure.KeyVault", - "type": "KeyVaultForLinux", - "enableAutomaticUpgrade": true, - "settings": { - "secretsManagementSettings": { - "pollingIntervalInS": <polling interval in seconds, e.g. 
"3600">, - "certificateStoreName": <ignored on linux>, - "certificateStoreLocation": <disk path where certificate is stored, default: "/var/lib/waagent/Microsoft.Azure.KeyVault">, - "observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: "https://myvault.vault.azure.net/secrets/mycertificate" - }, - "authenticationSettings": { - "msiEndpoint": "http://localhost:40342/metadata/identity" - } - } - } - } - ] -} -``` --### Template file for Windows --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vmName": { - "type": "string" - }, - "location": { - "type": "string" - }, - "autoUpgradeMinorVersion":{ - "type": "bool" - }, - "pollingIntervalInS":{ - "type": "int" - }, - "certificateStoreName":{ - "type": "string" - }, - "linkOnRenewal":{ - "type": "bool" - }, - "certificateStoreLocation":{ - "type": "string" - }, - "requireInitialSync":{ - "type": "bool" - }, - "observedCertificates":{ - "type": "string" - }, - "msiEndpoint":{ - "type": "string" - }, - "msiClientId":{ - "type": "string" - } -}, -"resources": [ - { - "type": "Microsoft.HybridCompute/machines/extensions", - "name": "[concat(parameters('vmName'),'/KVVMExtensionForWindows')]", - "apiVersion": "2022-03-10", - "location": "[parameters('location')]", - "properties": { - "publisher": "Microsoft.Azure.KeyVault", - "type": "KeyVaultForWindows", - "enableAutomaticUpgrade": true, - "settings": { - "secretsManagementSettings": { - "pollingIntervalInS": "3600", - "certificateStoreName": <certificate store name, e.g.: "MY">, - "linkOnRenewal": <Only Windows. This feature ensures s-channel binding when certificate renews, without necessitating a re-deployment. e.g.: false>, - "certificateStoreLocation": <certificate store location, currently it works locally only e.g.: "LocalMachine">, - "requireInitialSync": <initial synchronization of certificates e.g.: true>, - "observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: "https://myvault.vault.azure.net" - }, - "authenticationSettings": { - "msiEndpoint": "http://localhost:40342/metadata/identity" - } - } - } - } - ] -} -``` --> [!NOTE] -> Your observed certificates URLs should be of the form `https://myVaultName.vault.azure.net/secrets/myCertName`. -> -> This is because the `/secrets` path returns the full certificate, including the private key, while the `/certificates` path does not. More information about certificates can be found here: [Key Vault Certificates](/azure/key-vault/general/about-keys-secrets-certificates) --### Template deployment --Save the template file to disk. You can then deploy the extension to the connected machine with the following command. --> [!NOTE] -> The VM extension would require a system-assigned identity to be assigned to authenticate to Key vault. See [How to authenticate to Key Vault using managed identity](managed-identity-authentication.md) for Windows and Linux Azure Arc-enabled servers. --```powershell -New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "D:\Azure\Templates\KeyVaultExtension.json" -``` --## Next steps --* You can deploy, manage, and remove VM extensions using the [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or the [Azure CLI](manage-vm-extensions-cli.md). 
--* Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md). |
azure-arc | Manage Vm Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md | - Title: VM extension management with Azure Arc-enabled servers -description: Azure Arc-enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 09/04/2024----# Virtual machine extension management with Azure Arc-enabled servers --> [!CAUTION] -> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Plan your use and migration accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --Virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or needs to run a script, you can use a VM extension. --Azure Arc-enabled servers enables you to deploy, remove, and update Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machines throughout their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers: --- The [Azure portal](manage-vm-extensions-portal.md)-- The [Azure CLI](manage-vm-extensions-cli.md)-- [Azure PowerShell](manage-vm-extensions-powershell.md)-- Azure [Resource Manager templates](manage-vm-extensions-template.md)--> [!NOTE] -> Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](/azure/virtual-machines/extensions/overview) article. --> [!NOTE] -> Currently you can only update extensions from the Azure portal or the Azure CLI. Performing this operation from Azure PowerShell, or using an Azure Resource Manager template, is not supported at this time. --## Key benefits --Azure Arc-enabled servers VM extension support provides the following key benefits: --- Collect log data for analysis with [Logs in Azure Monitor](/azure/azure-monitor/logs/data-platform-logs) by enabling the Azure Monitor agent VM extension. This is useful for performing complex analysis across log data from different kinds of sources.--- With [VM insights](/azure/azure-monitor/vm/vminsights-overview), analyze the performance of your Windows and Linux VMs and monitor their processes and dependencies on other resources and external processes. This is achieved by enabling both the Azure Monitor agent and Dependency agent VM extensions.--- Download and execute scripts on hybrid connected machines using the Custom Script Extension. This extension is useful for post-deployment configuration, software installation, or any other configuration or management tasks.--- Automatically refresh certificates stored in an [Azure Key Vault](/azure/key-vault/general/overview).--## Availability --VM extension functionality is available only in the list of [supported regions](overview.md#supported-regions). Ensure you onboard your machine in one of these regions. --Additionally, you can configure lists of the extensions you wish to allow and block on servers. See [Extension allowlists and blocklists](/azure/azure-arc/servers/security-overview#extension-allowlists-and-blocklists) for more information.
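As an illustration, a minimal sketch of restricting which extensions can be installed, using the local agent's allowlist property described in that article (run on the server itself; the extension shown is only an example):

```bash
# Allow only the Azure Monitor agent extension; installation of any other extension is blocked.
azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorLinuxAgent"
```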
--## Extensions --In this release, we support the following VM extensions on Windows and Linux machines. --To learn about the Azure Connected Machine agent package and details about the Extension agent component, see [Agent overview](agent-overview.md). --> [!NOTE] -> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to machine configuration](../../governance/machine-configuration/migrate-from-azure-automation.md) or using the Custom Script Extension to manage the post-deployment configuration of your server. --Arc-enabled servers support moving machines with one or more VM extensions installed between resource groups or another Azure subscription without experiencing any impact to their configuration. The source and destination subscriptions must exist within the same [Microsoft Entra tenant](../../active-directory/develop/quickstart-create-new-tenant.md). This support is enabled starting with the Connected Machine agent version **1.8.21197.005**. For more information about moving resources and considerations before proceeding, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). --### Windows extensions --|Extension |Publisher |Type |Additional information | -|-|-|--|--| -|Microsoft Defender for Cloud integrated vulnerability scanner |Qualys |WindowsAgent.AzureSecurityCenter |[Microsoft Defender for CloudΓÇÖs integrated vulnerability assessment solution for Azure and hybrid machines](../../security-center/deploy-vulnerability-assessment-vm.md)| -|Microsoft Antimalware extension |Microsoft.Azure.Security |IaaSAntimalware |[Microsoft Antimalware extension for Windows](/azure/virtual-machines/extensions/iaas-antimalware-windows) | -|Custom Script extension |Microsoft.Compute | CustomScriptExtension |[Windows Custom Script Extension](/azure/virtual-machines/extensions/custom-script-windows)| -|Azure Monitor for VMs (insights) |Microsoft.Azure.Monitoring.DependencyAgent |DependencyAgentWindows | [Dependency agent virtual machine extension for Windows](/azure/virtual-machines/extensions/agent-dependency-windows)| -|Azure Key Vault Certificate Sync | Microsoft.Azure.Key.Vault |KeyVaultForWindows | [Key Vault virtual machine extension for Windows](/azure/virtual-machines/extensions/key-vault-windows) | -|Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorWindowsAgent |[Install the Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-manage) | -|Azure Automation Hybrid Runbook Worker extension |Microsoft.Compute |HybridWorkerForWindows |[Deploy an extension-based User Hybrid Runbook Worker](../../automation/extension-based-hybrid-runbook-worker-install.md) to execute runbooks locally | -|Azure Extension for SQL Server |Microsoft.AzureData |WindowsAgent.SqlServer |[Install Azure extension for SQL Server](/sql/sql-server/azure-arc/connect#initiate-the-connection-from-azure) to initiate SQL Server connection to Azure | -|Windows Admin Center (preview) |Microsoft.AdminCenter |AdminCenter |[Manage Azure Arc-enabled Servers using Windows Admin Center in Azure](/windows-server/manage/windows-admin-center/azure/manage-arc-hybrid-machines) | -|Windows OS Update Extension |WindowsOsUpdateExtension |Microsoft.SoftwareUpdateManagement |[Overview of Azure Update Manager](/azure/update-manager/overview?tabs=azure-vms) | -|Windows Patch Extension |Microsoft.CPlat.Core |WindowsPatchExtension |[Automatic Guest 
Patching for Azure Virtual Machines and Scale Sets](/azure/virtual-machines/automatic-vm-guest-patching) | --### Linux extensions --|Extension |Publisher |Type |Additional information | -|-|-|--|--| -|Microsoft Defender for Cloud integrated vulnerability scanner |Qualys |LinuxAgent.AzureSecurityCenter |[Microsoft Defender for Cloud's integrated vulnerability assessment solution for Azure and hybrid machines](../../security-center/deploy-vulnerability-assessment-vm.md)| -|Custom Script extension |Microsoft.Azure.Extensions |CustomScript |[Linux Custom Script Extension Version 2](/azure/virtual-machines/extensions/custom-script-linux) | -|Azure Monitor for VMs (insights) |Microsoft.Azure.Monitoring.DependencyAgent |DependencyAgentLinux |[Dependency agent virtual machine extension for Linux](/azure/virtual-machines/extensions/agent-dependency-linux) | -|Azure Key Vault Certificate Sync |Microsoft.Azure.Key.Vault |KeyVaultForLinux |[Key Vault virtual machine extension for Linux](/azure/virtual-machines/extensions/key-vault-linux) | -|Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorLinuxAgent |[Install the Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-manage) | -|Azure Automation Hybrid Runbook Worker extension |Microsoft.Compute |HybridWorkerForLinux |[Deploy an extension-based User Hybrid Runbook Worker](../../automation/extension-based-hybrid-runbook-worker-install.md) to execute runbooks locally| -|Linux OS Update Extension |Microsoft.SoftwareUpdateManagement |LinuxOsUpdateExtension |[Overview of Azure Update Manager](/azure/update-manager/overview?tabs=azure-vms)| -|Linux Patch Extension |Microsoft.CPlat.Core |LinuxPatchExtension |[Automatic Guest Patching for Azure Virtual Machines and Scale Sets](/azure/virtual-machines/automatic-vm-guest-patching)| --## Prerequisites --This feature depends on the following Azure resource providers in your subscription: --- **Microsoft.HybridCompute**-- **Microsoft.GuestConfiguration**--If they aren't already registered, follow the steps under [Register Azure resource providers](prerequisites.md#azure-resource-providers). --Be sure to review the documentation for each VM extension referenced in the previous table to understand whether it has any network or system requirements. This can help you avoid connectivity issues with an Azure service or feature that relies on that VM extension. --### Required Permissions --To deploy an extension to Arc-enabled servers, a user requires the following permissions: --- `microsoft.hybridcompute/machines/read`-- `microsoft.hybridcompute/machines/extensions/read`-- `microsoft.hybridcompute/machines/extensions/write`--The role **Azure Connected Machine Resource Administrator** includes the permissions required to deploy extensions; however, it also includes permission to delete Arc-enabled server resources. --### Azure Monitor agent VM extension --Before you install the extension, we suggest you review the [deployment options for the Azure Monitor agent](concept-log-analytics-extension-deployment.md) to understand the different methods available and which one meets your requirements.
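For reference, one of those deployment methods is Azure PowerShell. The following sketch assumes the `Az.ConnectedMachine` module is installed and you're signed in with `Connect-AzAccount`; the machine, resource group, and region names are placeholders:

```powershell
# Sketch only: deploy the Azure Monitor agent extension to an Arc-enabled Windows server.
New-AzConnectedMachineExtension -Name "AzureMonitorWindowsAgent" `
    -ResourceGroupName "myResourceGroup" `
    -MachineName "myArcServer" `
    -Location "eastus" `
    -Publisher "Microsoft.Azure.Monitor" `
    -ExtensionType "AzureMonitorWindowsAgent"
```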
--### Azure Key Vault VM extension --The Key Vault VM extension doesn't support the following Linux operating systems: --- CentOS Linux 7 (x64)-- Red Hat Enterprise Linux (RHEL) 7 (x64)-- Amazon Linux 2 (x64)--Deploying the Key Vault VM extension is only supported using: --- The Azure CLI-- The Azure PowerShell-- Azure Resource Manager template--Before you deploy the extension, you need to complete the following: --1. [Create a vault and certificate](/azure/key-vault/certificates/quick-create-portal) (self-signed or import). --2. Grant the Azure Arc-enabled server access to the certificate secret. If youΓÇÖre using the [RBAC preview](/azure/key-vault/general/rbac-guide), search for the name of the Azure Arc resource and assign it the **Key Vault Secrets User (preview)** role. If youΓÇÖre using [Key Vault access policy](/azure/key-vault/general/assign-access-policy-portal), assign Secret **Get** permissions to the Azure Arc resourceΓÇÖs system assigned identity. --### Connected Machine agent --Verify your machine matches the [supported versions](prerequisites.md#supported-operating-systems) of Windows and Linux operating system for the Azure Connected Machine agent. --The minimum version of the Connected Machine agent that is supported with this feature on Windows and Linux is the 1.0 release. --To upgrade your machine to the version of the agent required, see [Upgrade agent](manage-agent.md#upgrade-the-agent). --## Operating system extension availability --The following extensions are available for Windows and Linux machines: --### Windows extension availability --|Operating system |Azure Monitor agent |Dependency VM Insights |Qualys |Custom Script |Key Vault |Hybrid Runbook |Antimalware Extension |Windows Admin Center | -|--|--|--|-|--|-||-|| -|Windows Server 2022 |X |X |X |X | |X | |X | -|Windows Server 2019 |X |X |X |X |X | | |X | -|Windows Server 2016 |X |X |X |X |X |X |Built-in |X | -|Windows Server 2012 R2 |X |X |X |X | |X |X | | -|Windows Server 2012 |X |X |X |X |X |X |X | | -|Windows Server 2008 R2 SP1 |X |X |X |X | |X |X | | -|Windows Server 2008 R2 | | |X |X | |X |X | | -|Windows Server 2008 SP2 | | |X |X | |X | | | -|Windows 11 client OS |X | |X | | | | | | -|Windows 10 1803 (RS4) and higher |X | |X |X | | | | | -|Windows 10 Enterprise (including multi-session) and Pro (Server scenarios only) |X |X |X |X | |X | | | -|Windows 8 Enterprise and Pro (Server scenarios only) | |X |X | | |X | | | -|Windows 7 SP1 (Server scenarios only) | |X |X | | |X | | | -|Azure Stack HCI (Server scenarios only) | | |X | | |X | | | --### Linux extension availability --|Operating system |Azure Monitor agent |Dependency VM Insights |Qualys |Custom Script |Key Vault |Hybrid Runbook |Antimalware Extension |Connected Machine agent | -|--|--|--|-|--|-||-|| -|Amazon Linux 2 | | |X | | |X |X | -|CentOS Linux 8 |X |X |X |X | |X |X | -|CentOS Linux 7 |X |X |X |X | |X |X | -|CentOS Linux 6 | | |X |X | |X | | -|Debian 10 |X | |X |X | |X | | -|Debian 9 |X |X |X |X | | | | -|Debian 8 | |X |X | | |X | | -|Debian 7 | | |X | | |X | | -|OpenSUSE 13.1+ | | |X |X | | | | -|Oracle Linux 8 |X | |X |X | |X |X | -|Oracle Linux 7 |X | |X |X | |X |X | -|Oracle Linux 6 | | |X |X | |X |X | -|Red Hat Enterprise Linux Server 8 |X | |X |X | |X |X | -|Red Hat Enterprise Linux Server 7 |X |X |X |X | |X |X | -|Red Hat Enterprise Linux Server 6 | |X |X | | |X | | -|SUSE Linux Enterprise Server 15.2 |X | |X |X |X | |X | -|SUSE Linux Enterprise Server 15.1 |X | |X |X |X |X |X | -|SUSE Linux Enterprise Server 15 SP1 |X |X |X |X |X |X |X 
| -|SUSE Linux Enterprise Server 15 |X |X |X |X |X |X |X | -|SUSE Linux Enterprise Server 15 SP5 |X |X |X |X | |X |X | -|SUSE Linux Enterprise Server 12 SP5 |X |X |X |X |X | |X |X | -|Ubuntu 20.04 LTS |X |X |X |X | |X |X | -|Ubuntu 18.04 LTS |X |X |X |X |X |X |X | -|Ubuntu 16.04 LTS |X |X |X | | |X |X | -|Ubuntu 14.04 LTS | | |X | | |X | | --For the regional availabilities of different Azure services and VM extensions available for Azure Arc-enabled servers, [refer to Azure Global's Product Availability Roadmap](https://global.azure.com/product-availability/roadmap). --## Next steps --You can deploy, manage, and remove VM extensions using the [Azure CLI](manage-vm-extensions-cli.md), [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or [Azure Resource Manager templates](manage-vm-extensions-template.md). |
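As a complement to the management methods listed above, the following hedged sketch lists the extensions already installed on a single Arc-enabled server; it assumes the `Az.ConnectedMachine` PowerShell module and uses placeholder resource names:

```powershell
# Sketch: list the VM extensions currently installed on an Arc-enabled server.
Get-AzConnectedMachineExtension -ResourceGroupName "myResourceGroup" -MachineName "myArcServer"
```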
azure-arc | Managed Identity Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/managed-identity-authentication.md | - Title: Authenticate against Azure resources with Azure Arc-enabled servers -description: This article describes Azure Instance Metadata Service support for Azure Arc-enabled servers and how you can authenticate against Azure resources and local resources using a secret. - Previously updated : 11/08/2021---# Authenticate against Azure resources with Azure Arc-enabled servers --Applications or processes running directly on an Azure Arc-enabled server can use managed identities to access other Azure resources that support Microsoft Entra ID-based authentication. An application can obtain an [access token](../../active-directory/develop/developer-glossary.md#access-token) representing its identity, which is system-assigned for Azure Arc-enabled servers, and use it as a 'bearer' token to authenticate itself to another service. --Refer to the [managed identity overview](../../active-directory/managed-identities-azure-resources/overview.md) documentation for a detailed description of managed identities, and to understand the distinction between system-assigned and user-assigned identities. --In this article, we show you how a server can use a system-assigned managed identity to access Azure [Key Vault](/azure/key-vault/general/overview). Serving as a bootstrap, Key Vault makes it possible for your client application to then use a secret to access resources not secured by Microsoft Entra ID. For example, TLS/SSL certificates used by your IIS web servers can be stored in Azure Key Vault and securely deployed to Windows or Linux servers outside of Azure. --## Security overview --While onboarding your server to Azure Arc-enabled servers, several actions are performed to configure the use of a managed identity, similar to what is performed for an Azure VM: --- Azure Resource Manager receives a request to enable the system-assigned managed identity on the Azure Arc-enabled server.--- Azure Resource Manager creates a service principal in Microsoft Entra ID for the identity of the server. The service principal is created in the Microsoft Entra tenant that's trusted by the subscription.--- Azure Resource Manager configures the identity on the server by updating the Azure Instance Metadata Service (IMDS) identity endpoint for [Windows](/azure/virtual-machines/windows/instance-metadata-service) or [Linux](/azure/virtual-machines/linux/instance-metadata-service) with the service principal client ID and certificate. The endpoint is a REST endpoint accessible only from within the server using a well-known, non-routable IP address. This service provides a subset of metadata information about the Azure Arc-enabled server to help manage and configure it.--The environment of a managed-identity-enabled server is configured with the following variables on a Windows Azure Arc-enabled server: --- **IMDS_ENDPOINT**: The IMDS endpoint IP address `http://localhost:40342` for Azure Arc-enabled servers.--- **IDENTITY_ENDPOINT**: The localhost endpoint corresponding to the service's managed identity `http://localhost:40342/metadata/identity/oauth2/token`.--Your code that's running on the server can request a token from the Azure Instance Metadata Service endpoint, accessible only from within the server. --Applications use the system environment variable **IDENTITY_ENDPOINT** to discover the identity endpoint.
Applications should try to retrieve **IDENTITY_ENDPOINT** and **IMDS_ENDPOINT** values and use them. Applications with any access level are allowed to make requests to the endpoints. Metadata responses are handled as normal and given to any process on the machine. However, when a request is made that would expose a token, we require the client to provide a secret to attest that they are able to access data only available to higher-privileged users. --## Prerequisites --- An understanding of Managed identities.-- On Windows, you must be a member of the local **Administrators** group or the **Hybrid Agent Extension Applications** group.-- On Linux, you must be a member of the **himds** group.-- A server connected and registered with Azure Arc-enabled servers.-- You are a member of the [Owner group](../../role-based-access-control/built-in-roles.md#owner) in the subscription or resource group, in order to perform required resource creation and role management steps.-- An Azure Key Vault to store and retrieve your credential, and assign the Azure Arc identity access to the KeyVault.-- - If you don't have a Key Vault created, see [Create Key Vault](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md#create-a-key-vault-). - - To configure access by the managed identity used by the server, see [Grant access for Linux](../../active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md#grant-access) or [Grant access for Windows](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md#grant-access). For step number 5, you are going to enter the name of the Azure Arc-enabled server. To complete this using PowerShell, see [Assign an access policy using PowerShell](/azure/key-vault/general/assign-access-policy-powershell). --## Acquiring an access token using REST API --The method to obtain and use a system-assigned managed identity to authenticate with Azure resources is similar to how it is performed with an Azure VM. --For an Azure Arc-enabled Windows server, using PowerShell, you invoke the web request to get the token from the local host in the specific port. Specify the request using the IP address or the environmental variable **IDENTITY_ENDPOINT**. --```powershell -$apiVersion = "2020-06-01" -$resource = "https://management.azure.com/" -$endpoint = "{0}?resource={1}&api-version={2}" -f $env:IDENTITY_ENDPOINT,$resource,$apiVersion -$secretFile = "" -try -{ - Invoke-WebRequest -Method GET -Uri $endpoint -Headers @{Metadata='True'} -UseBasicParsing -} -catch -{ - $wwwAuthHeader = $_.Exception.Response.Headers["WWW-Authenticate"] - if ($wwwAuthHeader -match "Basic realm=.+") - { - $secretFile = ($wwwAuthHeader -split "Basic realm=")[1] - } -} -Write-Host "Secret file path: " $secretFile`n -$secret = cat -Raw $secretFile -$response = Invoke-WebRequest -Method GET -Uri $endpoint -Headers @{Metadata='True'; Authorization="Basic $secret"} -UseBasicParsing -if ($response) -{ - $token = (ConvertFrom-Json -InputObject $response.Content).access_token - Write-Host "Access token: " $token -} -``` --The following response is an example that is returned: ---For an Azure Arc-enabled Linux server, using Bash, you invoke the web request to get the token from the local host in the specific port. Specify the following request using the IP address or the environmental variable **IDENTITY_ENDPOINT**. To complete this step, you need an SSH client. 
--```bash -CHALLENGE_TOKEN_PATH=$(curl -s -D - -H Metadata:true "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fmanagement.azure.com" | grep Www-Authenticate | cut -d "=" -f 2 | tr -d "[:cntrl:]") -CHALLENGE_TOKEN=$(cat $CHALLENGE_TOKEN_PATH) -if [ $? -ne 0 ]; then - echo "Could not retrieve challenge token, double check that this command is run with root privileges." -else - curl -s -H Metadata:true -H "Authorization: Basic $CHALLENGE_TOKEN" "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fmanagement.azure.com" -fi -``` --The following response is an example that is returned: ---The response includes the access token you need to access any resource in Azure. To complete the configuration to authenticate to Azure Key Vault, see [Access Key Vault with Windows](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md#access-data) or [Access Key Vault with Linux](../../active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md#access-data). --## Next steps --- To learn more about Azure Key Vault, see [Key Vault overview](/azure/key-vault/general/overview).--- Learn how to assign a managed identity access to a resource [using PowerShell](/entra/identity/managed-identities-azure-resources/how-to-assign-access-azure-resource?pivots=identity-mi-access-powershell) or using [the Azure CLI](/entra/identity/managed-identities-azure-resources/how-to-assign-access-azure-resource?pivots=identity-mi-access-cli). |
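As a quick illustration of what you can do with the token returned by either script above, the following PowerShell sketch calls the Azure Resource Manager REST API; the subscription ID is a placeholder you need to replace:

```powershell
# Illustration only: use the access token acquired above ($token) to list resource groups.
$headers = @{ Authorization = "Bearer $token" }
$uri = "https://management.azure.com/subscriptions/<subscriptionID>/resourcegroups?api-version=2021-04-01"
Invoke-RestMethod -Method GET -Uri $uri -Headers $headers
```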
azure-arc | Migrate Azure Monitor Agent Ansible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/migrate-azure-monitor-agent-ansible.md | - Title: How to migrate to Azure Monitor Agent using Red Hat Ansible Automation Platform -description: Learn how to migrate to Azure Monitor Agent using Red Hat Ansible Automation Platform. Previously updated : 10/17/2022-----# Migrate to Azure Monitor Agent on Azure Arc using Red Hat Ansible Automation Platform --This article covers how to use Red Hat Ansible Automation Platform to migrate non-Azure machines from the Azure Log Analytics agent to Azure Monitor agent. This includes onboarding the machines to Azure Arc-enabled servers. Once you have completed the configuration steps in this article, you'll be able to run a workflow against an automation controller inventory that performs the following tasks: --- Ensure that the Azure Connected Machine agent is installed on each machine. -- Install and enable the Azure Monitor agent.-- Disable and uninstall the Log Analytics agent.--Content from the [Ansible Content Lab for Cloud Automation](https://cloud.lab.ansible.io/) has already been developed to automate this scenario. This article will walk through how you can import that content as a project in an automation controller to build a workflow to perform the tasks above. --Ansible Automation Platform can automate the deployment of Azure services across your IT landscape to make onboarding to Azure Arc fast and reliable. --> [!NOTE] -> The Ansible content examples in this article target Linux hosts, but the playbooks can be altered to accommodate Windows hosts as well. ---## Prerequisites --### Azure Log Analytics workspace --This article assumes you are using the Azure Log Analytics agent and that the servers are pre-configured to report data to a Log Analytics workspace. You will need the name and resource group of the workspace from which you are migrating. --### Automation controller 2.x --This article is applicable to both self-managed Ansible Automation Platform and Red Hat Ansible Automation Platform on Microsoft Azure. --### Automation execution environment --To use the examples in this article, you'll need an automation execution environment with both the Azure Collection and the Azure CLI installed, since both are required to run the automation. --If you don't have an automation execution environment that meets these requirements, you can [use this example](https://github.com/scottharwell/cloud-ee). --See the [Red Hat Ansible documentation](https://docs.ansible.com/automation-controller/latest/html/userguide/execution_environments.html) for more information about building and configuring automation execution environments. --### Host inventory --You will need an inventory of Linux hosts configured in automation controller that contains a list of VMs that will use Azure Arc and the Azure Monitor Agent. --### Azure Resource Manager credential --A working account credential configured in Ansible Automation Platform for the Azure Resource Manager is required. This credential is used by Ansible Automation Platform to authenticate operations using the Azure Collection and the Azure CLI. --### Server machine credential --A “Machine Credential” configured in Automation Controller for SSH access to the servers in your host inventory is required. 
--## Configuring the content --The examples in this article rely on content developed and incubated by Red Hat through the [Ansible Content Lab for Cloud Content](https://cloud.lab.ansible.io/). --This article also uses the [Azure Infrastructure Configuration Demo](https://github.com/ansible-content-lab/azure.infrastructure_config_demos) collection. This collection contains a number of roles and playbooks that manage Azure use cases including those with Azure Arc-enabled servers. To use this collection in Automation Controller, follow the steps below to set up a project with the repository: --1. Log in to automation controller. -1. In the left menu, select **Projects**. -1. Select **Add**, and then complete the fields of the form as follows: -- **Name:** Content Lab - Azure Infrastructure Configuration Collection -- **Automation Environment:** (select with the Azure Collection and CLI instead) -- **Source Control Type:** Git -- **Source Control URL:** https://github.com/ansible-content-lab/azure.infrastructure_config_demos.git --1. Select **Save**. - :::image type="content" source="media/migrate-ama/configure-content.png" alt-text="Screenshot of Projects window to edit details." lightbox="media/migrate-ama/configure-content.png"::: --Once saved, the project should be synchronized with the automation controller. --## Migrating Azure agents --In this example, we will assume that our Linux servers are already running the Azure Log Analytics agent, but do not yet have the Azure Connected Machine agent installed. If your organization relies on other Azure services that use the Log Analytics agent, you may need to plan for extra data collection rules prior to migrating to the Azure Monitor agent. --We will create a workflow that leverages the following playbooks to install the Azure Connected Machine agent, deploy the Azure Monitor Agent, disable the Log Analytics agent, and then uninstall the Log Analytics agent: --- install_arc_agent.yml-- replace_log_analytics_with_arc_linux.yml-- uninstall_log_analytics_agent.yml--This workflow performs the following tasks: --- Installs the Azure Connected Machine agent on all of the VMs identified in inventory.-- Enables the Azure Monitor agent extension via Azure Arc.-- Disables the Azure Log Analytics agent extension via Azure Arc.-- Uninstalls the Azure Log Analytics agent if flagged.--### Create template to install Azure Connected Machine agent --This template is responsible for installing the Azure Arc [Connected Machine agent](./agent-overview.md) on hosts within the provided inventory. A successful run will have installed the agent on all machines. --Follow the steps below to create the template: --1. On the right menu, select **Templates**. -1. Select **Add**. -1. 
Select **Add job template**, then complete the fields of the form as follows: -- **Name:** Content Lab - Install Arc Connected Machine Agent -- **Job Type:** Run -- **Inventory:** (Your linux host inventory) -- **Project:** Content Lab - Azure Infrastructure Configuration Collection -- **Playbook:** `playbooks/replace_log_analytics_with_arc_linux.yml` -- **Credentials:** - - Your Azure Resource Manager credential - - Your Host Inventory Machine credential - - **Variables:** -- ```bash - - region: eastus - resource_group_name: sh-rg - subscription_id: "{{ lookup('env', 'AZURE_SUBSCRIPTION_ID') }}" - service_principal_id: "{{ lookup('env', 'AZURE_CLIENT_ID') }}" - service_principal_secret: "{{ lookup('env', 'AZURE_SECRET') }}" - tenant_id: "{{ lookup('env', 'AZURE_TENANT') }}" - ``` - - > [!NOTE] - > The operations in this playbook happen through the Azure CLI. Most of these variables are set to pass along the proper variable from the Azure Resource Manager credential to the CL. -- **Options:** - Privilege Escalation: true -1. Select **Save**. --### Create template to replace log analytics --This template is responsible for migrating from the Log Analytics agent to the Azure Monitor agent by enabling the Azure Monitor Agent extension and disabling the Azure Log Analytics extension (if used via the Azure Connected Machine agent). --Follow the steps below to create the template: --1. On the right menu, select **Templates**. -1. Select **Add**. -1. Select **Add job template**, then complete the fields of the form as follows: -- **Name:** Content Lab - Replace Log Analytics agent with Arc Connected Machine agent -- **Job Type:** Run -- **Inventory:** (Your linux host inventory) -- **Project:** Content Lab - Azure Infrastructure Configuration Collection -- **Playbook:** `playbooks/replace_log_analytics_with_arc_linux.yml` -- **Credentials:** - - Your Azure Resource Manager credential - - Your Host Inventory Machine credential - - **Variables:** - - ```bash - — - Region: <Azure Region> - resource_group_name: <Resource Group Name> - linux_hosts: "{{ hostvars.values() | selectattr('group_names','contains', 'linux') | map(attribute='inventory_hostname') | list }}" - ``` -- > [!NOTE] - > The `linux_hosts` variable is used to create a list of hostnames to send to the Azure Collection and is not directly related to a host inventory. You may set this list in any way that Ansible supports. In this case, the variable attempts to pull host names from groups with “linux” in the group name. -1. Select **Save**. --### Create template to uninstall Log Analytics --This template will attempt to run the Log Analytics agent uninstall script if the Log Analytics agent was installed outside of the Azure Connected Machine agent. --Follow the steps below to create the template: --1. On the right menu, select **Templates**. -1. Select **Add**. -1. Select **Add job template**, then complete the fields of the form as follows: -- **Name:** Content Lab - Uninstall Log Analytics agent -- **Job Type:** Run -- **Inventory:** (Your linux host inventory) -- **Project:** Content Lab - Azure Infrastructure Configuration Collection -- **Playbook:** `playbooks/uninstall_log_analytics_with_arc_linux.yml` -- **Credentials:** - - Your Host Inventory Machine credential - - **Options:** - - - Privilege Escalation: true -1. Select **Save**. --### Create the workflow --An automation controller workflow allows you to construct complex automation by connecting automation templates and other actions together. 
This workflow example is a simple linear flow that enables the end-to-end scenario in this example, but other nodes could be added for context such as error handling, human approvals, etc. --1. On the right menu, select **Templates**. -1. Select **Add**. -1. Select **Add workflow template**, then complete the following fields as follows: -- **Name:** Content Lab - Migrate Log Agent to Azure Monitor -- **Job Type:** Run -- **Inventory:** (Your linux host inventory) -- **Project:** Content Lab - Azure Infrastructure Configuration Collection --1. Select **Save**. -1. Select **Start** to begin the workflow designer. -1. Set **Node Type** to "Job Template" and select **Content Lab - Replace Log Analytics with Arc Connected Machine Agent**. -1. Select **Next**. -1. Select **Save**. -1. Hover over the **Content Lab - Replace Log Analytics with Arc Connected Machine Agent** node and select the **+** button. -1. Select **On Success**. -1. Select **Next**. -1. Set **Node Type** to "Job Template" and select **Content Lab - Uninstall Log Analytics Agent**. -1. Select **Save**. -1. Select **Save** at the top right corner of the workflow designer. --You will now have a workflow that looks like the following: --### Add a survey to the workflow --We want to add survey questions to the workflow so that we can collect input when the workflow is run. --1. Select **Survey** from the workflow details screen. - :::image type="content" source="media/migrate-ama/survey.png" alt-text="Screenshot of template details window with survey tab highlighted on right side."::: -1. Select **Add**, then complete the form using the following values: -- **Question:** Which Azure region will your Arc servers reside? -- **Answer variable name:** region -- **Required:** true -- **Answer type:** Text --1. Select **Save**. -1. Select **Add**, then complete the form using the following values: -- **Question:** What is the name of the resource group? -- **Answer variable name:** resource_group_name -- **Required:** true -- **Answer type:** Text --1. Select **Save**. -1. Select **Add**, then complete the form using the following values: -- **Question:** What is the name of your Log Analytics workspace? -- **Answer variable name:** analytics_workspace_name -- **Required:** true -- **Answer type:** Text --1. Select **Save**. -1. From the Survey list screen, ensure that the survey is enabled. - :::image type="content" source="media/migrate-ama/survey-enabled.png" alt-text="Screenshot of Survey window with Survey Enabled switched enabled."::: --Your workflow has now been created. --### Running the workflow --Now that you have the workflow created, you can run the workflow at any time. When you click the “launch” 🚀 icon, the survey that you configured will be presented so that you can update the variables across automation runs. This will allow you to move Log Analytics connected servers that are assigned to different regions or resource groups as needed. ---## Conclusion --After following the steps in this article, you have created an automation workflow that migrates your Linux machines from the Azure Log Analytics agent to the Azure Monitor agent. This workflow will onboard the Linux machine to Azure Arc-enabled servers. This example uses the Ansible Content Lab for Cloud Automation to make implementation fast and easy. --## Next steps --Learn more about [connecting machines using Ansible playbooks](onboard-ansible-playbooks.md). |
azure-arc | Migrate Legacy Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/migrate-legacy-agents.md | - Title: How to migrate from legacy Log Analytics agents in non-Azure environments with Azure Arc -description: Learn how to migrate from legacy Log Analytics agents in non-Azure environments with Azure Arc. Previously updated : 07/01/2024----# Migrate from legacy Log Analytics agents in non-Azure environments with Azure Arc --Azure Monitor Agent (AMA) replaces the Log Analytics agent (also known as Microsoft Monitor Agent (MMA) and OMS) for Windows and Linux machines. Azure Arc is required to migrate off the legacy Log Analytics agents for non-Azure environments, including on-premises or multicloud infrastructure. --Azure Arc is a bridge, extending not only Azure Monitor but the breadth of Azure management capabilities across Microsoft Defender, Azure Policy, and Azure Update Manager to non-Azure environments. Through the lightweight Connected Machine agent, Azure Arc projects non-Azure servers into the Azure control plane, providing a consistent management experience across Azure VMs and non-Azure servers. --This article focuses on considerations when migrating from legacy Log Analytics agents in non-Azure environments. For core migration guidance, see [Migrate to Azure Monitor Agent from Log Analytics agent](/azure/azure-monitor/agents/azure-monitor-agent-migration). --## Advantages of Azure Arc --Deploying Azure Monitor Agent as an extension with Azure Arc-enabled servers provides several benefits over the legacy Log Analytics agents (MMA and OMS), which directly connect non-Azure servers to Log Analytics workspaces: --- Azure Arc centralizes the identity, connectivity, and governance of non-Azure resources. This streamlines operational overhead and improves the security posture and performance. --- Azure Arc offers extension management capabilities including auto-extension upgrade, reducing typical maintenance overhead. --- Azure Arc enables access to the breadth of server management capabilities beyond monitoring, such as Cloud Security Posture Management with [Microsoft Defender](/azure/defender-for-cloud/defender-for-cloud-introduction) or scripting with [Run Command](run-command.md). As you centralize operations in Azure, Azure Arc provides a robust foundation for these other capabilities. --Azure Arc is the foundation for a cloud-based inventory bringing together Azure and on-premises, multicloud, and edge infrastructure that can be queried and organized through Azure Resource Manager (ARM). --## Limitations on Azure Arc --Azure Arc relies on the [Connected Machine agent](/azure/azure-arc/servers/agent-overview) and is an agent-based solution requiring connectivity and designed for server infrastructure: --- Azure Arc requires the Connected Machine agent in addition to the Azure Monitor Agent as a VM extension. The Connected Machine agent must be configured specifying details of the Azure resource. --- Azure Arc only supports client-like Operating Systems when computers are in a server-like environment and doesn't support short-lived servers or virtual desktop infrastructure. --- Azure Arc has two regional availability gaps with Azure Monitor Agent:- - Qatar Central (Availability expected in August 2024) - - Australia Central (Other Australia regions are available) - -- Azure Arc requires servers to have regular connectivity and the allowance of key endpoints. 
While proxy and private link connectivity are supported, Azure Arc doesn't support completely disconnected scenarios. Azure Arc doesn't support the Log Analytics (OMS) Gateway. --- Azure Arc defines a system-assigned managed identity for connected servers, but doesn't support user-assigned identities. --Learn more about the full Connected Machine agent [prerequisites](/azure/azure-arc/servers/prerequisites#supported-operating-systems) for environmental constraints. --## Relevant services --Azure Arc-enabled servers is required for deploying all solutions that previously required the legacy Log Analytics agents (MMA/OMS) to non-Azure infrastructure. The new Azure Monitor Agent is only required for a subset of these services. --|Azure Monitor Agent and Azure Arc required |Only Azure Arc required | -||| -|Microsoft Sentinel |Microsoft Defender for Cloud | -|Virtual Machine Insights (previously Dependency Agent) |Azure Update Management | -|Change Tracking and Inventory |Automation Hybrid Runbook Worker | --As you design the holistic migration from the legacy Log Analytics agents (MMA/OMS), it's critical to consider and prepare for the migration of these solutions. --## Deploying Azure Arc --Azure Arc can be deployed interactively on a single-server basis or programmatically at scale: --- PowerShell and Bash deployment scripts can be generated from the Azure portal or written manually following the documentation. --- Windows Server machines can be connected through Windows Admin Center and the Windows Server Graphical Installer. --- At-scale deployment options include Configuration Manager, Ansible, and Group Policy using the Azure service principal, a limited identity for Arc server onboarding. --- Azure Automation Update Management customers can onboard from the Azure portal by Arc-enabling all detected non-Azure servers connected to the Log Analytics workspace with the Azure Automation Update Management solution. --See [Azure Connected Machine agent deployment options](/azure/azure-arc/servers/deployment-options) to learn more. --## Agent control and footprint --You can lock down the Connected Machine agent by specifying the extensions and capabilities that are enabled. If you're migrating from the legacy Log Analytics agent, Monitor mode is especially relevant. Monitor mode applies a Microsoft-managed extension allowlist, disables remote connectivity, and disables the machine configuration agent. If you're using Azure Arc solely for monitoring purposes, setting the agent to Monitor mode makes it easy to restrict the agent to just the functionality required to use Azure Monitor and solutions that use Azure Monitor. You can configure the agent mode with the following command (run locally on each machine): --`azcmagent config set config.mode monitor` --See [Extensions security](/azure/azure-arc/servers/security-extensions) to learn more. --## Networking options --Azure Arc-enabled servers supports three networking options: --- Connectivity over public endpoint-- Proxy -- Private Link (Azure ExpressRoute). --All connections are TCP and outbound over port 443 unless specified otherwise. All HTTP connections use HTTPS and SSL/TLS with officially signed and verifiable certificates. --Azure Arc doesn't officially support using the Log Analytics gateway as a proxy for the Connected Machine agent. --The connectivity method specified can be changed after onboarding. --See [Connected Machine agent network requirements](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud) to learn more.
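If you choose the proxy option, the proxy address is set per machine through the agent. A minimal sketch, with a placeholder proxy URL:

```powershell
# Example only: route Connected Machine agent traffic through a proxy server.
# Run locally on each machine; replace the proxy URL with your own.
azcmagent config set proxy.url "http://proxy.contoso.local:8080"

# Clear the setting again if you later switch to direct or Private Link connectivity.
azcmagent config clear proxy.url
```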
--## Deploying Azure Monitor Agent with Azure Arc --There are multiple methods to deploy the Azure Monitor Agent extension on Azure Arc-enabled servers programmatically, graphically, and automatically. Some popular methods to deploy Azure Monitor Agent on Azure Arc-enabled servers include: --- Azure portal -- PowerShell, Azure CLI, or Azure Resource Manager (ARM) templates -- Azure Policy --Azure Arc doesn't eliminate the need to configure and define Data Collection Rules. You should configure Data Collection Rules similar to your Azure VMs for Azure Arc-enabled servers. --See [Deployment options for Azure Monitor Agent on Azure Arc-enabled servers](/azure/azure-arc/servers/concept-log-analytics-extension-deployment) to learn more. --## Standalone Azure Monitor Agent installation --For Windows client machines running in non-Azure environments, use a standalone Azure Monitor Agent installation that doesn't require deployment of the Azure Connected Machine agent through Azure Arc. See [Install Azure Monitor Agent on Windows client devices using the client installer](/azure/azure-monitor/agents/azure-monitor-agent-windows-client) to learn more. |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md | - Title: Connected Machine agent network requirements -description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 06/25/2024----# Connected Machine agent network requirements --This topic describes the networking requirements for using the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. --## Details ----## Subset of endpoints for ESU only ---## Next steps --* Review additional [prerequisites for deploying the Connected Machine agent](prerequisites.md). -* Before you deploy the Azure Connected Machine agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md). -* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md). -* For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md). |
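A practical way to validate these requirements from a machine that already has the Connected Machine agent installed is the agent's built-in connectivity check, sketched below:

```powershell
# Sketch: verify that this machine can reach the network endpoints required by Azure Arc.
azcmagent check

# Show the agent's current connection status and configuration for troubleshooting.
azcmagent show
```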
azure-arc | Onboard Ansible Playbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-ansible-playbooks.md | - Title: Connect machines at scale using Ansible Playbooks -description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using Ansible playbooks. Previously updated : 05/09/2022-----# Connect machines at scale using Ansible playbooks --You can onboard Ansible-managed nodes to Azure Arc-enabled servers at scale using Ansible playbooks. To do so, you'll need to download, modify, and then run the appropriate playbook. --Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Generate a service principal and collect Azure details --Before you can run the script to connect your machines, you'll need to do the following: --1. Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). -- * Assign the Azure Connected Machine Onboarding role to your service principal and limit the scope of the role to the target Azure subscription or resource group. - * Make a note of the Service Principal Secret and Service Principal Client ID; you'll need these values later. --1. Collect details on the Tenant ID, Subscription ID, Resource Group, and Region where the Azure Arc-enabled resource will be onboarded. --## Download the Ansible playbook --If you are onboarding machines to Azure Arc-enabled servers, copy the following Ansible playbook template and save the playbook as `arc-server-onboard-playbook.yml`. 
--```yaml --- name: Onboard Linux and Windows Servers to Azure Arc-enabled servers with public endpoint connectivity- hosts: all - # vars: - # azure: - # service_principal_id: 'INSERT-SERVICE-PRINCIPAL-CLIENT-ID' - # service_principal_secret: 'INSERT-SERVICE-PRINCIPAL-SECRET' - # resource_group: 'INSERT-RESOURCE-GROUP' - # tenant_id: 'INSERT-TENANT-ID' - # subscription_id: 'INSERT-SUBSCRIPTION-ID' - # location: 'INSERT-LOCATION' - tasks: - - name: Check if the Connected Machine Agent has already been downloaded on Linux servers - stat: - path: /usr/bin/azcmagent - get_attributes: False - get_checksum: False - register: azcmagent_lnx_downloaded - when: ansible_system == 'Linux' -- - name: Download the Connected Machine Agent on Linux servers - become: yes - get_url: - url: https://aka.ms/azcmagent - dest: ~/install_linux_azcmagent.sh - mode: '700' - when: (ansible_system == 'Linux') and (azcmagent_lnx_downloaded.stat.exists == false) -- - name: Install the Connected Machine Agent on Linux servers - become: yes - shell: bash ~/install_linux_azcmagent.sh - when: (ansible_system == 'Linux') and (not azcmagent_lnx_downloaded.stat.exists) -- - name: Check if the Connected Machine Agent has already been downloaded on Windows servers - win_stat: - path: C:\Program Files\AzureConnectedMachineAgent - register: azcmagent_win_downloaded - when: ansible_os_family == 'Windows' -- - name: Download the Connected Machine Agent on Windows servers - win_get_url: - url: https://aka.ms/AzureConnectedMachineAgent - dest: C:\AzureConnectedMachineAgent.msi - when: (ansible_os_family == 'Windows') and (not azcmagent_win_downloaded.stat.exists) -- - name: Install the Connected Machine Agent on Windows servers - win_package: - path: C:\AzureConnectedMachineAgent.msi - when: (ansible_os_family == 'Windows') and (not azcmagent_win_downloaded.stat.exists) -- - name: Check if the Connected Machine Agent has already been connected - become: true - command: - cmd: azcmagent check - register: azcmagent_lnx_connected - ignore_errors: yes - when: ansible_system == 'Linux' - failed_when: (azcmagent_lnx_connected.rc not in [ 0, 16 ]) - changed_when: False -- - name: Check if the Connected Machine Agent has already been connected on windows - win_command: azcmagent check - register: azcmagent_win_connected - when: ansible_os_family == 'Windows' - ignore_errors: yes - failed_when: (azcmagent_win_connected.rc not in [ 0, 16 ]) - changed_when: False -- - name: Connect the Connected Machine Agent on Linux servers to Azure Arc - become: yes - shell: azcmagent connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}" - when: (ansible_system == 'Linux') and (azcmagent_lnx_connected.rc is defined and azcmagent_lnx_connected.rc != 0) -- - name: Connect the Connected Machine Agent on Windows servers to Azure - win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}"' - when: (ansible_os_family == 'Windows') and (azcmagent_win_connected.rc is defined and azcmagent_win_connected.rc != 0) -``` --## 
Modify the Ansible playbook --After downloading the Ansible playbook, complete the following steps: --1. Within the Ansible playbook, modify the variables under the **vars section** with the service principal and Azure details collected earlier: -- * Service Principal ID - * Service Principal Secret - * Resource Group - * Tenant ID - * Subscription ID - * Region --1. Enter the correct hosts field capturing the target servers for onboarding to Azure Arc. You can employ [Ansible patterns](https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html#common-patterns) to selectively target which hybrid machines to onboard. --1. This template passes the service principal secret as a variable in the Ansible playbook. Please note that an [Ansible vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) could be used to encrypt this secret and the variables could be passed through a configuration file. --## Run the Ansible playbook --From the Ansible control node, run the Ansible playbook by invoking the `ansible-playbook` command: --``` -ansible-playbook arc-server-onboard-playbook.yml -``` --After the playbook has run, the **PLAY RECAP** will indicate if all tasks were completed successfully and surface any nodes where tasks failed. --## Verify the connection with Azure Arc --After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your target hosts have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal). --## Next steps --- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.-- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
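If you prefer to verify the onboarding from the command line instead of the portal, the following hedged sketch uses the `Az.ConnectedMachine` PowerShell module; the resource group name is a placeholder:

```powershell
# Sketch: list the Arc-enabled servers in a resource group; each entry reports its connection status.
Get-AzConnectedMachine -ResourceGroupName "myResourceGroup"
```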
azure-arc | Onboard Configuration Manager Custom Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-custom-task.md | - Title: Connect machines at scale with a Configuration Manager custom task sequence -description: You can use a custom task sequence that can deploy the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers. Previously updated : 05/25/2023----# Connect machines at scale with a Configuration Manager custom task sequence --Microsoft Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager offers the custom task sequence as a flexible paradigm for application deployment. --You can use a custom task sequence, that can deploy the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers. --Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Generate a service principal --Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). Assign the **Azure Connected Machine Onboarding** role to your service principal, and limit the scope of the role to the target Azure landing zone. Make a note of the Service Principal ID and Service Principal Secret, as you'll need these values later. --## Download the agent and create the application --First, download the Azure Connected Machine agent package (AzureConnectedMachineAgent.msi) for Windows from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). The Azure Connected Machine agent for Windows can be [upgraded to the latest release manually or automatically](manage-agent.md), depending on your requirements. The .msi must be saved in a server share for the custom task sequence. --Next, [create an application in Configuration Manager](/mem/configmgr/apps/get-started/create-and-deploy-an-application) using the installed Azure Connected Machine agent package: --1. In the **Configuration Manager** console, select **Software Library > Application Management > Applications**. -1. On the **Home** tab, in the **Create** group, select **Create Application**. -1. On the **General** page of the Create Application Wizard, select **Automatically detect information about this application from installation files**. This action pre-populates some of the information in the wizard with information that is extracted from the installation .msi file. Then, specify the following information: - 1. **Type**: Select **Windows Installer (*.msi file)** - 1. **Location**: Select **Browse** to choose the location where you saved the installation file **AzureConnectedMachineAgent.msi**. 
- :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-create-application.png" alt-text="Screenshot of the Create Application Wizard in Configuration Manager."::: -1. Select **Next**, and on the **Import Information** page, select **Next** again. -1. On the **General Information** page, you can supply further information about the application to help you sort and locate it in the Configuration Manager console. Once complete, select Next. -1. On the **Installation program** page, select **Next**. -1. On the **Summary** page, confirm your application settings and then complete the wizard. --You have finished creating the application. To find it, in the **Software Library** workspace, expand **Application Management**, and then choose **Applications**. --## Create a task sequence --The next step is to define a custom task sequence that installs the Azure Connected Machine Agent on a machine, then connects it to Azure Arc. --1. In the Configuration Manager console, go to the **Software Library** workspace, expand **Operating Systems**, and then select the **Task Sequences** node. -1. On the **Home** tab of the ribbon, in the **Create** group, select **Create Task Sequence**. This will launch the Create Task Sequence Wizard. -1. On the **Create a New Task Sequence** page, select **Create a new custom task sequence**. -1. On the **Task Sequence Information** page, specify a name for the task sequence and optionally a description of the task sequence. -- :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-create-task-sequence.png" alt-text="Screenshot of the Create Task Sequence Wizard in Configuration Manager."::: --After you complete the Create Task Sequence Wizard, Configuration Manager adds the custom task sequence to the **Task Sequences** node. You can now edit this task sequence to add steps to it. --1. In the Configuration Manager console, go to the **Software Library** workspace, expand **Operating Systems**, and then select the **Task Sequences** node. -1. In the **Task Sequence** list, select the task sequence that you want to edit. -1. Define **Install Application** as the first task in the task sequence. - 1. On the **Home** tab of the ribbon, in the**Task Sequence** group, select **Edit**. Then, select **Add**, select **Software**, and select **Install Application**. - 1. Set the name to `Install Connected Machine Agent`. - 1. Select the Azure Connected Machine Agent. - :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-edit-task-sequence.png" alt-text="Screenshot showing a task sequence being edited in Configuration Manager."::: -1. Define **Run PowerShell Script** as the second task in the task sequence. - 1. Select **Add**, select **General**, and select **Run PowerShell Script**. - 1. Set the name to `Connect to Azure Arc`. - 1. Select **Enter a PowerShell script**. - 1. Select **Add Script**, and then edit the script to connect to Arc as shown below. Note that this template script has placeholder values for the service principal, tenant, subscription, resource group, and location, which you should update to the appropriate values. 
-- ```azurepowershell - & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation> - ``` -- :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-connect-to-azure-arc.png" alt-text="Screenshot showing a task sequence being edited to run a PowerShell script."::: -1. Set **PowerShell execution policy** to **Bypass** (if not already set by default). -1. Select **OK** to save the changes to your custom task sequence. --## Deploy the custom task sequence and verify connection to Azure Arc --Follow the steps outlined in Deploy a task sequence to deploy the task sequence to the target collection of devices. Choose the following parameter settings. --- Under **Deployment Settings**, set **Purpose** as **Required** so that Configuration Manager automatically runs the task sequence according to the configured schedule. If **Purpose** is set to **Available** instead, the task sequence will need to be installed on demand from Software Center.-- Under **Scheduling**, set **Rerun Behavior** to **Rerun if failed previous attempt**.--## Verify successful connection to Azure Arc --To verify that the machines have been successfully connected to Azure Arc, verify that they are visible in the [Azure portal](https://aka.ms/hybridmachineportal). ---## Next steps --- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.-- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
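You can also confirm the result on an individual device by querying the agent locally, using the same installation path as the task sequence script above; a minimal sketch:

```powershell
# Sketch: check the local agent's connection status on a single machine.
& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" show
```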
azure-arc | Onboard Configuration Manager Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-powershell.md | - Title: Connect machines at scale by running PowerShell scripts with Configuration Manager -description: You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers. Previously updated : 01/20/2022----# Connect machines at scale by running PowerShell scripts with Configuration Manager --Microsoft Configuration Manager facilitates comprehensive management of servers, supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager has an integrated ability to run PowerShell scripts. --You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers. --Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Prerequisites for Configuration Manager to run PowerShell scripts --The following prerequisites must be met to use PowerShell scripts in Configuration Manager: --- The Configuration Manager version must be 1706 or higher.-- To import and author scripts, your Configuration Manager account must have **Create** permissions for **SMS Scripts**.-- To approve or deny scripts, your Configuration Manager account must have **Approve** permissions for **SMS Scripts**.-- To run scripts, your Configuration Manager account must have **Run Script** permissions for **Collections**.--## Generate a service principal and prepare the installation script --Before you can run the script to connect your machines, you'll need to do the following: --1. Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). Assign the **Azure Connected Machine Onboarding** role to your service principal, and limit the scope of the role to the target Azure landing zone. Make a note of the Service Principal Secret, as you'll need this value later. --2. Follow the steps to [generate the installation script from the Azure portal](onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal). While you will use this installation script later, do not run the script in PowerShell. --## Create the script in Configuration Manager --Before you begin, check in **Configuration Manager Default Settings** that the PowerShell execution policy under **Computer Agent** is set to **Bypass**. --1. In the Configuration Manager console, select **Software Library**. -1. In the **Software Library** workspace, select **Scripts**. -1. On the **Home** tab, in the **Create** group, select **Create Script**. -1. On the **Script** page of the **Create Script** wizard, configure the following settings: - 1. **Script Name** - Onboard Azure Arc - 1. **Script language** - PowerShell - 1. 
**Import** - Import the installation script that you generated in the Azure portal. - :::image type="content" source="media/onboard-configuration-manager-powershell/configuration-manager-create-script.png" alt-text="Screenshot of the Create Script screen in Configuration Manager."::: -1. In the Script Wizard, paste the script generated from the Azure portal. Edit this pasted script with the Service Principal Secret for the service principal you generated. -1. Complete the wizard. The new script is displayed in the **Script** list with a status of **Waiting for approval**. --## Approve the script in Configuration Manager --With an account that has **Approve** permissions for **SMS Scripts**, do the following: --1. In the Configuration Manager console, select **Software Library**. -1. In the **Software Library** workspace, select **Scripts**. -1. In the **Script** list, choose the script you want to approve or deny. Then, on the Home tab, in the Script group, select **Approve/Deny**. -1. In the **Approve or deny script** dialog box, select **Approve** for the script. - :::image type="content" source="media/onboard-configuration-manager-powershell/configuration-manager-approve-script.png" alt-text="Screenshot of the Approve or deny script screen in Configuration Manager."::: -1. Complete the wizard, then confirm that the new script is shown as **Approved** in the **Script** list. --## Run the script in Configuration Manager --Select a collection of targets for your script by doing the following: --1. In the Configuration Manager console, select **Assets and Compliance**. -1. In the **Assets and Compliance** workspace, select **Device Collections**. -1. In the **Device Collections** list, select the collection of devices on which you want to run the script. -1. Select a collection of your choice, and then select **Run Script**. -1. On the **Script** page of the **Run Script** wizard, choose the script you authored and approved. -1. Select **Next**, and then complete the wizard. --## Verify successful connection to Azure Arc --The script status monitoring will indicate whether the script has successfully installed the Connected Machine Agent to the collection of devices. Successfully onboarded Azure Arc-enabled servers will also be visible in the [Azure portal](https://aka.ms/hybridmachineportal). ---## Next steps --- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.-- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
azure-arc | Onboard Group Policy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy-powershell.md | - Title: Connect machines at scale using Group Policy with a PowerShell script -description: In this article, you learn how to create a Group Policy Object to onboard Active Directory-joined Windows machines to Azure Arc-enabled servers. Previously updated : 05/04/2023-----# Connect machines at scale using Group Policy --You can onboard Active Directory–joined Windows machines to Azure Arc-enabled servers at scale using Group Policy. --You'll first need to set up a local remote share with the Connected Machine agent and modify a script specifying the Arc-enabled server's landing zone within Azure. You'll then run a script that generates a Group Policy Object (GPO) to onboard a group of machines to Azure Arc-enabled servers. This Group Policy Object can be applied to the site, domain, or organizational level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers. Scope your GPO to only include machines that you want to onboard to Azure Arc. --Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Prepare a remote share and create a service principal --The Group Policy Object, which is used to onboard Azure Arc-enabled servers, requires a remote share with the Connected Machine agent. You will need to: --1. Prepare a remote share to host the Azure Connected Machine agent package for Windows and the configuration file. You need to be able to add files to the distributed location. The network share should provide Domain Controllers, and Domain Computers with Change permissions, and Domain Admins with Full Control permissions. --1. Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). -- * Assign the Azure Connected Machine Onboarding role to your service principal and limit the scope of the role to the target Azure landing zone. - * Make a note of the Service Principal Secret; you'll need this value later. --1. Download and unzip the folder **ArcEnabledServersGroupPolicy_vX.X.X** from [https://github.com/Azure/ArcEnabledServersGroupPolicy/releases/latest/](https://github.com/Azure/ArcEnabledServersGroupPolicy/releases/latest/). This folder contains the ArcGPO project structure with the scripts `EnableAzureArc.ps1`, `DeployGPO.ps1`, and `AzureArcDeployment.psm1`. These assets will be used for onboarding the machine to Azure Arc-enabled servers. --1. Download the latest version of the [Azure Connected Machine agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share. --1. 
Execute the deployment script `DeployGPO.ps1`, modifying the run parameters for the DomainFQDN, ReportServerFQDN, ArcRemoteShare, Service Principal secret, Service Principal Client ID, Subscription ID, Resource Group, Region, Tenant, and AgentProxy (if applicable): -- ``` - .\DeployGPO.ps1 -DomainFQDN contoso.com -ReportServerFQDN Server.contoso.com -ArcRemoteShare AzureArcOnBoard -ServicePrincipalSecret $ServicePrincipalSecret -ServicePrincipalClientId $ServicePrincipalClientId -SubscriptionId $SubscriptionId -ResourceGroup $ResourceGroup -Location $Location -TenantId $TenantId [-AgentProxy $AgentProxy] - ``` --## Apply the Group Policy Object --On the Group Policy Management Console (GPMC), right-click on the desired Organizational Unit and link the GPO named **[MSFT] Azure Arc Servers (datetime)**. This is the Group Policy Object which has the Scheduled Task to onboard the machines. After 10 or 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Microsoft Entra Domain Services](../../active-directory-domain-services/manage-group-policy.md). --After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal). --> [!IMPORTANT] -> Once you've confirmed that your servers have successfully onboarded to Arc, disable the Group Policy Object. This will prevent the same Powershell commands in the scheduled tasks from executing when the system reboots or when the group policy is updated. -> --## Next steps --* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. -* Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md). -* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. -* Learn more about [Group Policy](/troubleshoot/windows-server/group-policy/group-policy-overview). |
azure-arc | Onboard Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md | - Title: Connect hybrid machines to Azure using a deployment script -description: In this article, you learn how to install the agent and connect machines to Azure by using Azure Arc-enabled servers using the deployment script you create in the Azure portal. Previously updated : 10/23/2023-----# Connect hybrid machines to Azure using a deployment script --You can enable Azure Arc-enabled servers for one or a small number of Windows or Linux machines in your environment by performing a set of steps manually. Or you can use an automated method by running a template script that we provide. This script automates the download and installation of both agents. --This method requires that you have administrator permissions on the machine to install and configure the agent. On Linux, by using the root account, and on Windows, you are member of the Local Administrators group. --Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --> [!NOTE] -> Follow best security practices and avoid using an Azure account with Owner access to onboard servers. Instead, use an account that only has the Azure Connected Machine onboarding or Azure Connected Machine resource administrator role assignment. See [Azure Identity Management and access control security best practices](/azure/security/fundamentals/identity-management-best-practices#use-role-based-access-control) for more information. ---## Generate the installation script from the Azure portal --The script to automate the download and installation, and to establish the connection with Azure Arc, is available from the Azure portal. To complete the process, perform the following steps: --1. From your browser, sign in to the [Azure portal](https://portal.azure.com). --1. On the **Azure Arc - Machines** page, select **Add/Create** at the upper left, and then select **Add a machine** from the drop-down menu. --1. On the **Add servers with Azure Arc** page, under the **Add a single server** tile, select **Generate script**. --1. On the **Basics** page, provide the following: -- 1. In the **Project Details** section, select the **Subscription** and **Resource group** the machine will be managed from. - 1. In the **Region** drop-down list, select the Azure region to store the servers metadata. - 1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on. - 1. In the **Connectivity method** section, If the machine is communicating through a proxy server to connect to the internet, select **Proxy server** option and specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Enter the value in the format `http://<proxyURL>:<proxyport>`. Else if the machine is communicating through a private endpoint then select **Private endpoint** option and appropriate private link scope in the drop-down list. Else if the machine is communicating through a public endpoint then select **Public endpoint** option. - 1. 
In the **Automanage machine best practices** section, you may enable automanage if you want to onboard and configure best practice services like Machine configuration and Insights, based on your server needs. - 1. Select **Next** to go to the Tags page. --1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards. --1. Select **Next** to Download and run script page. --1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**. --## Install and validate the agent on Windows --### Install manually --You can install the Connected Machine agent manually by running the Windows Installer package *AzureConnectedMachineAgent.msi*. You can download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center. -->[!NOTE] ->* To install or uninstall the agent, you must have *Administrator* permissions. ->* You must first download and copy the Installer package to a folder on the target server, or from a shared network folder. If you run the Installer package without any options, it starts a setup wizard that you can follow to install the agent interactively. --If the machine needs to communicate through a proxy server to the service, after you install the agent you need to run a command that's described in the steps below. This command sets the proxy server system environment variable `https_proxy`. Using this configuration, the agent communicates through the proxy server using the HTTP protocol. --If you are unfamiliar with the command-line options for Windows Installer packages, review [Msiexec standard command-line options](/windows/win32/msi/standard-installer-command-line-options) and [Msiexec command-line options](/windows/win32/msi/command-line-options). --For example, run the installation program with the `/?` parameter to review the help and quick reference option. --```dos -msiexec.exe /i AzureConnectedMachineAgent.msi /? -``` --1. To install the agent silently and create a setup log file in the `C:\Support\Logs` folder that exist, run the following command. -- ```dos - msiexec.exe /i AzureConnectedMachineAgent.msi /qn /l*v "C:\Support\Logs\Azcmagentsetup.log" - ``` -- If the agent fails to start after setup is finished, check the logs for detailed error information. The log directory is *%ProgramData%\AzureConnectedMachineAgent\log*. --2. If the machine needs to communicate through a proxy server, to set the proxy server environment variable, run the following command: -- ```powershell - [Environment]::SetEnvironmentVariable("https_proxy", "http://{proxy-url}:{proxy-port}", "Machine") - $env:https_proxy = [System.Environment]::GetEnvironmentVariable("https_proxy","Machine") - # For the changes to take effect, the agent service needs to be restarted after the proxy environment variable is set. - Restart-Service -Name himds - ``` -- >[!NOTE] - >The agent does not support setting proxy authentication. - > --3. After installing the agent, you need to configure it to communicate with the Azure Arc service by running the following command: -- ```dos - "%ProgramFiles%\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" - ``` --### Install with the scripted method --1. Log in to the server. 
--1. Open an elevated PowerShell command prompt. -- >[!NOTE] - >The script only supports running from a 64-bit version of Windows PowerShell. - > --1. Change to the folder or share that you copied the script to, and execute it on the server by running the `./OnboardingScript.ps1` script. --If the agent fails to start after setup is finished, check the logs for detailed error information. The log directory is *%ProgramData%\AzureConnectedMachineAgent\log*. --## Install and validate the agent on Linux --The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The [shell script bundle `Install_linux_azcmagent.sh`](https://aka.ms/azcmagent) performs the following actions: --* Configures the host machine to download the agent package from packages.microsoft.com. --* Installs the Hybrid Resource Provider package. --Optionally, you can configure the agent with your proxy information by including the `--proxy "{proxy-url}:{proxy-port}"` parameter. Using this configuration, the agent communicates through the proxy server using the HTTP protocol. --The script also contains logic to identify the supported and unsupported distributions, and it verifies the permissions that are required to perform the installation. --The following example downloads the agent and installs it: --```bash -# Download the installation package. -wget https://aka.ms/azcmagent -O ~/Install_linux_azcmagent.sh --# Install the Azure Connected Machine agent. -bash ~/Install_linux_azcmagent.sh -``` --1. To download and install the agent, run the following commands. If your machine needs to communicate through a proxy server to connect to the internet, include the `--proxy` parameter. -- ```bash - # Download the installation package. - wget https://aka.ms/azcmagent -O ~/Install_linux_azcmagent.sh -- # Install the AZure Connected Machine agent. - bash ~/Install_linux_azcmagent.sh --proxy "{proxy-url}:{proxy-port}" - ``` --2. After installing the agent, you need to configure it to communicate with the Azure Arc service by running the following command: -- ```bash - azcmagent connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" --cloud "cloudName" - if [ $? = 0 ]; then echo "\033[33mTo view your onboarded server(s), navigate to https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2Fmachines\033[m"; fi - ``` --### Install with the scripted method --1. Log in to the server with an account that has root access. --1. Change to the folder or share that you copied the script to, and execute it on the server by running the `./OnboardingScript.sh` script. --If the agent fails to start after setup is finished, check the logs for detailed error information. The log directory is `/var/opt/azcmagent/log`. --## Verify the connection with Azure Arc --After you install the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal). 
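You can also confirm the connection from the machine itself before opening the portal by using the agent's own command-line tool, which is available on both Windows and Linux after installation. A minimal sketch; the exact output fields can vary by agent version:

```azurepowershell
# Display the local agent's status and the Azure resource it is connected to.
# An agent status of "Connected" indicates onboarding succeeded.
azcmagent show
```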
--![A successful server connection](./media/onboard-portal/arc-for-servers-successful-onboard.png) --## Next steps --- Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).--- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.--- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verify the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
azure-arc | Onboard Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-powershell.md | - Title: Connect hybrid machines to Azure by using PowerShell -description: In this article, you learn how to install the agent and connect a machine to Azure by using Azure Arc-enabled servers. You can do this with PowerShell. Previously updated : 07/16/2021-----# Connect hybrid machines to Azure by using PowerShell --You can enable Azure Arc-enabled servers for one or more Windows or Linux machines in your environment by performing a set of steps manually. Alternatively, you can use the PowerShell cmdlet [Connect-AzConnectedMachine](/powershell/module/az.connectedmachine/connect-azconnectedmachine) to download the Connected Machine agent, install the agent, and register the machine with Azure Arc. The cmdlet downloads the Windows agent package (Windows Installer) from the Microsoft Download Center, and the Linux agent package from the Microsoft package repository. --This method requires that you have administrator permissions on the machine to install and configure the agent: on Linux, by using the root account, and on Windows, as a member of the Local Administrators group. You can complete this process interactively or remotely on a Windows server by using [PowerShell remoting](/powershell/scripting/learn/ps101/08-powershell-remoting). --Before you get started, review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Prerequisites --- A machine with Azure PowerShell. For instructions, see [Install and configure Azure PowerShell](/powershell/azure/).--You use PowerShell to manage VM extensions on your hybrid servers managed by Azure Arc-enabled servers. Before using PowerShell, install the `Az.ConnectedMachine` module on the server you want to Arc-enable. Run the following command on your server enabled with Azure Arc: --```powershell -Install-Module -Name Az.ConnectedMachine -``` --When the installation finishes, you see the following message: --`The installed extension ``Az.ConnectedMachine`` is experimental and not covered by customer support. Please use with discretion.` --## Install the agent and connect to Azure --1. Open a PowerShell console with elevated privileges. --2. Sign in to Azure by running the command `Connect-AzAccount`. --3. To install the Connected Machine agent, use `Connect-AzConnectedMachine` with the `-Name`, `-ResourceGroupName`, and `-Location` parameters. Use the `-SubscriptionId` parameter to override the default subscription as a result of the Azure context created after sign-in. 
Run one of the following commands: -- * To install the Connected Machine agent on the target machine that can directly communicate to Azure, run: -- ```azurepowershell - Connect-AzConnectedMachine -ResourceGroupName myResourceGroup -Name myMachineName -Location <region> - ``` -- * To install the Connected Machine agent on the target machine that communicates through a proxy server, run: -- ```azurepowershell - Connect-AzConnectedMachine -ResourceGroupName myResourceGroup -Name myMachineName -Location <region> -Proxy http://<proxyURL>:<proxyport> - ``` -- Using this configuration, the agent communicates through the proxy server using the HTTP protocol. --If the agent fails to start after setup is finished, check the logs for detailed error information. On Windows, check this file: *%ProgramData%\AzureConnectedMachineAgent\Log\himds.log*. On Linux, check this file: */var/opt/azcmagent/log/himds.log*. --## Install and connect by using PowerShell remoting --Here's how to configure one or more Windows servers with servers enabled with Azure Arc. You must enable PowerShell remoting on the remote machine. Use the `Enable-PSRemoting` cmdlet to do this. --1. Open a PowerShell console as an Administrator. --2. Sign in to Azure by running the command `Connect-AzAccount`. --3. To install the Connected Machine agent, use `Connect-AzConnectedMachine` with the `-ResourceGroupName`, and `-Location` parameters. The Azure resource names will automatically use the hostname of each server. Use the `-SubscriptionId` parameter to override the default subscription as a result of the Azure context created after sign-in. -- * To install the Connected Machine agent on the target machine that can directly communicate to Azure, run the following command: -- ```azurepowershell - $sessions = New-PSSession -ComputerName myMachineName - Connect-AzConnectedMachine -ResourceGroupName myResourceGroup -Location <region> -PSSession $sessions - ``` -- * To install the Connected Machine agent on multiple remote machines at the same time, add a list of remote machine names, each separated by a comma. -- ```azurepowershell - $sessions = New-PSSession -ComputerName myMachineName1, myMachineName2, myMachineName3 - Connect-AzConnectedMachine -ResourceGroupName myResourceGroup -Location <region> -PSSession $sessions - ``` -- The following example shows the results of the command targeting a single machine: -- ```azurepowershell - time="2020-08-07T13:13:25-07:00" level=info msg="Onboarding Machine. It usually takes a few minutes to complete. Sometimes it may take longer depending on network and server load status." - time="2020-08-07T13:13:25-07:00" level=info msg="Check network connectivity to all endpoints..." - time="2020-08-07T13:13:29-07:00" level=info msg="All endpoints are available... continue onboarding" - time="2020-08-07T13:13:50-07:00" level=info msg="Successfully Onboarded Resource to Azure" VM Id=f65bffc7-4734-483e-b3ca-3164bfa42941 -- Name Location OSName Status ProvisioningState - - -- -- - myMachineName eastus windows Connected Succeeded - ``` --## Verify the connection with Azure Arc --After you install and configure the agent to register with Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the [Azure portal](https://portal.azure.com). 
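You can also confirm the registration from the same PowerShell session by querying the resource with the `Az.ConnectedMachine` module installed earlier. A minimal sketch, assuming the resource group name used in the examples above:

```azurepowershell
# List the Arc-enabled machines in the resource group and check their connection state.
Get-AzConnectedMachine -ResourceGroupName myResourceGroup |
    Select-Object Name, Location, Status, ProvisioningState
```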
--![Screenshot of Servers dashboard, showing a successful server connection.](./media/onboard-portal/arc-for-servers-successful-onboard.png) --## Next steps --* If necessary, see the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md). --* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. --* Learn how to manage your machine by using [Azure Policy](../../governance/policy/overview.md). You can use VM [guest configuration](../../governance/machine-configuration/overview.md), verify that the machine is reporting to the expected Log Analytics workspace, and enable monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy). |
azure-arc | Onboard Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md | - Title: Connect hybrid machines to Azure at scale -description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Previously updated : 11/03/2023-----# Connect hybrid machines to Azure at scale --You can enable Azure Arc-enabled servers for multiple Windows or Linux machines in your environment with several flexible options depending on your requirements. Using the template script we provide, you can automate every step of the installation, including establishing the connection to Azure Arc. However, you are required to execute this script manually with an account that has elevated permissions on the target machine and in Azure. --One method to connect the machines to Azure Arc-enabled servers is to use a Microsoft Entra [service principal](../../active-directory/develop/app-objects-and-service-principals.md). This service principal method can be used instead of your privileged identity to [interactively connect the machine](onboard-portal.md). This service principal is a special limited management identity that has only the minimum permission necessary to connect machines to Azure using the `azcmagent` command. This method is safer than using a higher privileged account like a Tenant Administrator and follows our access control security best practices. **The service principal is used only during onboarding; it is not used for any other purpose.** --Before you start connecting your machines, review the following requirements: --1. Make sure you have administrator permission on the machines you want to onboard. -- Administrator permissions are required to install the Connected Machine agent on the machines; on Linux by using the root account, and on Windows as a member of the Local Administrators group. -1. Review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. You will need to have the **Azure Connected Machine Onboarding** role or the **Contributor** role for the resource group of the machine. Make sure to register the below Azure resource providers beforehand in your target subscription. -- * Microsoft.HybridCompute - * Microsoft.GuestConfiguration - * Microsoft.HybridConnectivity - * Microsoft.AzureArcData (if you plan to Arc-enable SQL Server instances) -- See detailed how to here: [Azure resource providers prerequisites](prerequisites.md#azure-resource-providers) -- For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations. --<!--The installation methods to install and configure the Connected Machine agent requires that the automated method you use has administrator permissions on the machines: on Linux by using the root account, and on Windows as a member of the Local Administrators group. --Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). 
Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.--> --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## Create a service principal for onboarding at scale --You can create a service principal in the Azure portal or by using Azure PowerShell. --> [!NOTE] -> To create a service principal, your Microsoft Entra tenant needs to allow users to register applications. If it does not, your account must be a member of the **Application Administrator** or **Cloud Application Administrator** administrative role. See [Delegate app registration permissions in Microsoft Entra ID](../../active-directory/roles/delegate-app-roles.md) for more information about tenant-level requirements. To assign Arc-enabled server roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding. --### Azure portal --The Azure Arc service in the Azure portal provides a streamlined way to create a service principal that can be used to connect your hybrid machines to Azure. --1. In the Azure portal, navigate to Azure Arc, then select **Service principals** in the left menu. -1. Select **Add**. -1. Enter a name for your service principal. -1. Choose whether the service principal will have access to an entire subscription, or only to a specific resource group. -1. Select the subscription (and resource group, if applicable) to which the service principal will have access. -1. In the **Client secret** section, select the duration for which your generated client secret will be in use. You can optionally enter a friendly name of your choice in the **Description** field. -1. In the **Role assignment** section, select **Azure Connected Machine Onboarding**. -1. Select **Create**. ---### Azure PowerShell --You can use [Azure PowerShell](/powershell/azure/install-azure-powershell) to create a service principal with the [New-AzADServicePrincipal](/powershell/module/Az.Resources/New-AzADServicePrincipal) cmdlet. --1. Check the context of your Azure PowerShell session to ensure you're working in the correct subscription. Use [Set-AzContext](/powershell/module/az.accounts/set-azcontext) if you need to change the subscription. - - ```azurepowershell-interactive - Get-AzContext - ``` - -1. Run the following command to create a service principal and assign it the Azure Connected Machine Onboarding role for the selected subscription. After the service principal is created, it will print the application ID and secret. The secret is valid for 1 year, after which you'll need to generate a new secret and update any scripts with the new secret. - - ```azurepowershell-interactive - $sp = New-AzADServicePrincipal -DisplayName "Arc server onboarding account" -Role "Azure Connected Machine Onboarding" - $sp | Format-Table AppId, @{ Name = "Secret"; Expression = { $_.PasswordCredentials.SecretText }} - ``` - ```output - AppId Secret - -- - aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee PASSWORD_SHOWN_HERE - ``` -- The values from the following properties are used with parameters passed to the `azcmagent`: - - - The value from the **AppId** property is used for the `--service-principal-id` parameter value - - The value from the **Secret** property is used for the `--service-principal-secret` parameter used to connect the agent. 
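For example, you might capture both values into variables immediately after creating the service principal so they can be supplied to the onboarding script or to `azcmagent connect` later. A short sketch, assuming the `$sp` object from the previous step; the variable names are illustrative:

```azurepowershell
# Map the service principal properties to the azcmagent parameters described above.
$servicePrincipalClientId = $sp.AppId                           # used for --service-principal-id
$servicePrincipalSecret   = $sp.PasswordCredentials.SecretText  # used for --service-principal-secret
```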
--## Generate the installation script from the Azure portal --The script to automate the download and installation, and to establish the connection with Azure Arc, is available from the Azure portal. To complete the process, do the following steps: --1. From your browser, go to the [Azure portal](https://portal.azure.com). --1. On the **Machines - Azure Arc** page, select **Add/Create** at the upper left, then select **Add a machine** from the drop-down menu. --1. On the **Add servers with Azure Arc** page, select the **Add multiple servers** tile, and then select **Generate script**. --1. On the **Basics** page, provide the following: -- 1. Select the **Subscription** and **Resource group** for the machines. - 1. In the **Region** drop-down list, select the Azure region to store the servers' metadata. - 1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on. - 1. If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Using this configuration, the agent communicates through the proxy server using the HTTP protocol. Enter the value in the format `http://<proxyURL>:<proxyport>`. - 1. Select **Next**. - 1. In the **Authentication** section, under the **Service principal** drop-down list, select **Arc-for-servers**. Then select, **Next**. --1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards. --1. Select **Next**. --1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**. --For Windows, you are prompted to save `OnboardingScript.ps1`, and for Linux `OnboardingScript.sh` to your computer. --## Install the agent and connect to Azure --Taking the script template created earlier, you can install and configure the Connected Machine agent on multiple hybrid Linux and Windows machines using your organizations preferred automation tool. The script performs similar steps described in the [Connect hybrid machines to Azure from the Azure portal](onboard-portal.md) article. The difference is in the final step, where you establish the connection to Azure Arc using the `azcmagent` command using the service principal. --The following are the settings that you configure the `azcmagent` command to use for the service principal. --- `service-principal-id` : The unique identifier (GUID) that represents the application ID of the service principal.-- `service-principal-secret` | The service principal password.-- `tenant-id` : The unique identifier (GUID) that represents your dedicated instance of Microsoft Entra ID.-- `subscription-id` : The subscription ID (GUID) of your Azure subscription that you want the machines in.-- `resource-group` : The resource group name where you want your connected machines to belong to.-- `location` : See [supported Azure regions](overview.md#supported-regions). This location can be the same or different, as the resource group's location.-- `resource-name` : (*Optional*) Used for the Azure resource representation of your on-premises machine. If you do not specify this value, the machine hostname is used.--You can learn more about the `azcmagent` command-line tool by reviewing the [Azcmagent Reference](./manage-agent.md). 
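Put together, the final step of your automation typically runs a command similar to the following sketch. All placeholder values are illustrative, and the agent path shown is the default Windows installation path:

```azurepowershell
# Connect the machine to Azure Arc noninteractively using the service principal.
& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --service-principal-id "<serviceprincipalAppID>" `
    --service-principal-secret "<serviceprincipalPassword>" `
    --tenant-id "<tenantID>" `
    --subscription-id "<subscriptionID>" `
    --resource-group "<ResourceGroupName>" `
    --location "<resourceLocation>"
```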
-->[!NOTE] ->The Windows PowerShell script only supports running from a 64-bit version of Windows PowerShell. --After you install the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal). --![Screenshot showing a successful server connection in the Azure portal.](./media/onboard-portal/arc-for-servers-successful-onboard.png) --## Next steps --- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.-- Learn how to [troubleshoot agent connection issues](troubleshoot-agent-onboard.md).-- Learn how to manage your machines using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that machines are reporting to the expected Log Analytics workspace, monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and more. |
azure-arc | Onboard Update Management Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-update-management-machines.md | - Title: Connect machines from Azure Automation Update Management -description: In this article, you learn how to connect hybrid machines to Azure Arc managed by Automation Update Management. Previously updated : 11/06/2023----# Connect hybrid machines to Azure from Automation Update Management --You can enable Azure Arc-enabled servers for one or more of your Windows or Linux virtual machines or physical servers hosted on-premises or other cloud environment that are managed with Azure Automation Update Management. This onboarding process automates the download and installation of the [Connected Machine agent](agent-overview.md). To connect the machines to Azure Arc-enabled servers, a Microsoft Entra [service principal](../../active-directory/develop/app-objects-and-service-principals.md) is used instead of your privileged identity to [interactively connect](onboard-portal.md) the machine. This service principal is created automatically as part of the onboarding process for these machines. --Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---## How it works --When the onboarding process is launched, an Active Directory [service principal](../../active-directory/fundamentals/service-accounts-principal.md) is created in the tenant. --To install and configure the Connected Machine agent on the target machine, a master runbook named **Add-UMMachinesToArc** runs in the Azure sandbox. Based on the operating system detected on the machine, the master runbook calls a child runbook named **Add-UMMachinesToArcWindowsChild** or **Add-UMMachinesToArcLinuxChild** that runs under the system [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md) role directly on the machine. Runbook job output is written to the job history, and you can view their [status summary](../../automation/automation-runbook-execution.md#job-statuses) or drill into details of a specific runbook job in the [Azure portal](../../automation/manage-runbooks.md#view-statuses-in-the-azure-portal) or using [Azure PowerShell](../../automation/manage-runbooks.md#retrieve-job-statuses-using-powershell). Execution of runbooks in Azure Automation writes details in an activity log for the Automation account. For details of using the log, see [Retrieve details from Activity log](../../automation/manage-runbooks.md#retrieve-details-from-activity-log). --The final step establishes the connection to Azure Arc using the `azcmagent` command using the service principal to register the machine as a resource in Azure. --## Prerequisites --This method requires that you are a member of the [Automation Job Operator](../../automation/automation-role-based-access-control.md#automation-job-operator) role or higher so you can create runbook jobs in the Automation account. 
--If you have enabled Azure Policy to [manage runbook execution](../../automation/enforce-job-execution-hybrid-worker.md) and enforce targeting of runbook execution against a Hybrid Runbook Worker group, this policy must be disabled. Otherwise, the runbook jobs that onboard the machine(s) to Arc-enabled servers will fail. --## Add machines from the Azure portal --Perform the following steps to configure the hybrid machine with Arc-enabled servers. The server or machine must be powered on and online in order for the process to complete successfully. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --1. Navigate to the **Machines - Azure Arc** page, select **Add/Create**, and then select **Add a machine** from the drop-down menu. --1. On the **Add servers with Azure Arc** page, select **Add servers** from the **Add managed servers from Update Management** tile. --1. On the **Resource details** page, configure the following: -- 1. Select the **Subscription** and **Resource group** where you want the server to be managed within Azure. - 1. In the **Region** drop-down list, select the Azure region to store the servers' metadata. - 1. If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Enter the value in the format `http://<proxyURL>:<proxyport>`. - 1. Select **Next**. --1. On the **Servers** page, select **Add Servers**, then select the **Subscription** and **Automation account** from the drop-down list that has the Update Management feature enabled and includes the machines you want to onboard to Azure Arc-enabled servers. -- After specifying the Automation account, the list below returns non-Azure machines managed by Update Management for that Automation account. Both Windows and Linux machines are listed; for each machine you want to onboard, select **add**. -- You can review your selection by selecting **Review selection**, and if you want to remove a machine, select **remove** from under the **Action** column. -- Once you confirm your selection, select **Next**. --1. On the **Tags** page, specify one or more **Name**/**Value** pairs to support your standards. Select **Next: Review + add**. --1. On the **Review + add** page, review the summary information, and then select **Add machines**. If you still need to make changes, select **Previous**. --## Verify the connection with Azure Arc --After the agent is installed and configured to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal). --![A successful server connection](./media/onboard-portal/arc-for-servers-successful-onboard.png) --## Next steps --- Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).--- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.--- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verify the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
azure-arc | Onboard Windows Admin Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-admin-center.md | - Title: Connect hybrid machines to Azure from Windows Admin Center -description: In this article, you learn how to install the agent and connect machines to Azure by using Azure Arc-enabled servers from Windows Admin Center. Previously updated : 08/17/2021----# Connect hybrid machines to Azure from Windows Admin Center --You can enable Azure Arc-enabled servers for one or more Windows machines in your environment by performing a set of steps manually. Or you can use [Windows Admin Center](/windows-server/manage/windows-admin-center/understand/what-is) to deploy the Connected Machine agent and register your on-premises servers without having to perform any steps outside of this tool. ---## Prerequisites --* Azure Arc-enabled servers - Review the [prerequisites](prerequisites.md) and verify that your subscription, your Azure account, and resources meet the requirements. --* Windows Admin Center - Review the requirements to [prepare your environment](/windows-server/manage/windows-admin-center/deploy/prepare-environment) to deploy and [configure Azure integration](/windows-server/manage/windows-admin-center/azure/azure-integration). --* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --* The target Windows servers that you want to manage must have Internet connectivity to access Azure. --### Security --This deployment method requires that you have administrator rights on the target Windows machine or server to install and configure the agent. You also need to be a member of the [**Gateway users**](/windows-server/manage/windows-admin-center/plan/user-access-options#gateway-access-roles) role. --## Deploy --Perform the following steps to configure the Windows server with Azure Arc-enabled servers. --1. Sign in to Windows Admin Center. --1. From the connection list on the **Overview** page, in the list of connected Windows servers, select a server from the list to connect to it. --1. From the left-hand pane, select **Azure hybrid services**. --1. On the **Azure hybrid services** page, select **Discover Azure services**. --1. On the **Discover Azure services** page, under **Leverage Azure policies and solutions to manage your servers with Azure Arc**, select **Set up**. --1. On the **Settings\Azure Arc for servers** page, if prompted authenticate to Azure and then select **Get started**. --1. On the **Connect server to Azure** page, provide the following: -- 1. In the **Azure subscription** drop-down list, select the Azure subscription. - 1. For **Resource group**, either select **New** to create a new resource group, or under the **Resource group** drop-down list, select an existing resource group to register and manage the machine from. - 1. In the **Region** drop-down list, select the Azure region to store the servers metadata. - 1. If the machine or server is communicating through a proxy server to connect to the internet, select the option **Use proxy server**. Using this configuration, the agent communicates through the proxy server using the HTTP protocol. Specify the proxy server IP address or the name, and port number that the machine will use to communicate with the proxy server. --1. Select **Set up** to proceed with configuring the Windows server with Azure Arc-enabled servers. 
--The Windows server will connect to Azure, download the Connected Machine agent, install it and register with Azure Arc-enabled servers. To track the progress, select **Notifications** in the menu. --To confirm installation of the Connected Machine Agent, in Windows Admin Center select [**Events**](/windows-server/manage/windows-admin-center/use/manage-servers#events) from the left-hand pane to review *MsiInstaller* events in the Application Event Log. --## Verify the connection with Azure Arc --After you install the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the [Azure portal](https://portal.azure.com). ---## Next steps --* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md). --* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. --* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
azure-arc | Onboard Windows Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-server.md | - Title: Connect Windows Server machines to Azure through Azure Arc Setup -description: In this article, you learn how to connect Windows Server machines to Azure Arc using the built-in Windows Server Azure Arc Setup wizard. Previously updated : 04/05/2024----# Connect Windows Server machines to Azure through Azure Arc Setup --Windows Server machines can be onboarded directly to [Azure Arc](https://azure.microsoft.com/products/azure-arc/) through a graphical wizard included in Windows Server. The wizard automates the onboarding process by checking the necessary prerequisites for successful Azure Arc onboarding and fetching and installing the latest version of the Azure Connected Machine (AzCM) agent. Once the wizard process completes, you're directed to your Windows Server machine in the Azure portal, where it can be viewed and managed like any other Azure Arc-enabled resource. --Onboarding to Azure Arc is not needed if the Windows Server machine is already running in Azure. --For Windows Server 2022, Azure Arc Setup is an optional component that can be removed using the **Remove Roles and Features Wizard**. For Windows Server 2025 and later, Azure Arc Setup is a [Features on Demand](/windows-hardware/manufacture/desktop/features-on-demand-v2--capabilities?view=windows-11) capability. Essentially, this means that the procedures for removal and enablement differ between OS versions. See the uninstall instructions later in this article for more information. --> [!NOTE] -> The Azure Arc Setup feature only applies to Windows Server 2022 and later. It was released in the [Cumulative Update of 10/10/2023](https://support.microsoft.com/en-us/topic/october-10-2023-kb5031364-os-build-20348-2031-7f1d69e7-c468-4566-887a-1902af791bbc). ---## Prerequisites --* Azure Arc-enabled servers - Review the [prerequisites](prerequisites.md) and verify that your subscription, your Azure account, and resources meet the requirements. --* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --* Modern browser (Microsoft Edge) for authentication to Microsoft Azure. Configuration of the Azure Connected Machine agent requires authentication to your Azure account, either through interactive authentication on a modern browser or device code login on a separate device (if the machine doesn't have a modern browser). --## Launch Azure Arc Setup and connect to Azure Arc --The Azure Arc Setup wizard is launched from a system tray icon at the bottom of the Windows Server machine when the Azure Arc Setup feature is enabled. This feature is enabled by default. Alternatively, you can launch the wizard from a pop-up window in the Server Manager or from the Windows Server Start menu. --1. Select the Azure Arc system tray icon, then select **Launch Azure Arc Setup**. -- :::image type="content" source="media/onboard-windows-server/system-tray-icon.png" alt-text="Screenshot showing Azure Arc system tray icon and window to launch Azure Arc setup process."::: - -1. The introduction window of the Azure Arc Setup wizard explains the benefits of onboarding your machine to Azure Arc. When you're ready to proceed, select **Next**. -- :::image type="content" source="media/onboard-windows-server/get-started-with-arc.png" alt-text="Screenshot of the Getting Started page of the wizard."::: --1. 
The wizard automatically checks for the prerequisites necessary to install the Azure Connected Machine agent on your Windows Server machine. Once this process completes and the agent is installed, select **Configure**. --1. The configuration window details the steps required to configure the Azure Connected Machine agent. When you're ready to begin configuration, select **Next**. --1. Sign-in to Azure by selecting the applicable Azure cloud, and then selecting **Sign in to Azure**. You'll be asked to provide your sign-in credentials. --1. Provide the resource details of how your machine will work within Azure Arc, such as the **Subscription** and **Resource group**, and then select **Next**. -- :::image type="content" source="media/onboard-windows-server/resource-details.png" alt-text="Screenshot of resource details window with fields."::: --1. Once the configuration completes and your machine is onboarded to Azure Arc, select **Finish**. --1. Go to the Server Manager and select **Local Server** to view the status of the machine in the **Azure Arc Management** field. A successfully onboarded machine has a status of **Enabled**. -- :::image type="content" source="media/onboard-windows-server/server-manager-enabled.png" alt-text="Screenshot of Server Manager local server pane showing machine status is enabled."::: ---## Server Manager functions --You can select the **Enabled/Disabled** link in the **Azure Arc Management** field of the Server Manager to launch different functions based on the status of the machine: --- If Azure Arc Setup isn't installed, selecting **Enabled/Disabled** launches the **Add Roles and Features Wizard**.-- If Azure Arc Setup is installed and the Azure Connected Machine agent hasn't been installed, selecting **Disabled** launches `AzureArcSetup.exe`, the executable file for the Azure Arc Setup wizard.-- If Azure Arc Setup is installed and the Azure Connected Machine agent is already installed, selecting **Enabled/Disabled** launches `AzureArcConfiguration.exe`, the executable file for configuring the Azure Connected Machine agent to work with your machine.- -## Viewing the connected machine --The Azure Arc system tray icon at the bottom of your Windows Server machine indicates if the machine is connected to Azure Arc; a red symbol means the machine does not have the Azure Connected Machine agent installed. To view a connected machine in Azure Arc, select the icon and then select **View Machine in Azure**. You can then view the machine in the [Azure portal](https://portal.azure.com/), just as you would other Azure Arc-enabled resources. ---## Uninstalling Azure Arc Setup --> [!NOTE] -> Uninstalling Azure Arc Setup does not uninstall the Azure Connected Machine agent from the machine. For instructions on uninstalling the agent, see [Managing and maintaining the Connected Machine agent](manage-agent.md). -> -To uninstall Azure Arc Setup from a Windows Server 2022 machine: --1. In the Server Manager, navigate to the **Remove Roles and Features Wizard**. (See [Remove roles, role services, and features by using the Remove Roles and Features Wizard](/windows-server/administration/server-manager/install-or-uninstall-roles-role-services-or-features#remove-roles-role-services-and-features-by-using-the-remove-roles-and-features-wizard) for more information.) --1. On the Features page, uncheck the box for **Azure Arc Setup**. --1. On the confirmation page, select **Restart the destination server automatically if required**, then select **Remove**. 
--To uninstall Azure Arc Setup through PowerShell, run the following command: --```powershell -Disable-WindowsOptionalFeature -Online -FeatureName AzureArcSetup -``` --To uninstall Azure Arc Setup from a Windows Server 2025 machine: --1. Open the Settings app on the machine and select **System**, then select **Optional features**. --1. Select **AzureArcSetup**, and then select **Remove**. ---To uninstall Azure Arc Setup from a Windows Server 2025 machine from the command line, run the following line of code: --`DISM /online /Remove-Capability /CapabilityName:AzureArcSetup~~~~` --## Next steps --* Troubleshooting information can be found in the [Troubleshoot Azure Connected Machine agent guide](troubleshoot-agent-onboard.md). --* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. --* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more. |
azure-arc | Organize Inventory Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/organize-inventory-servers.md | - Title: How to organize and inventory servers using hierarchies, tagging, and reporting -description: Learn how to organize and inventory servers using hierarchies, tagging, and reporting. Previously updated : 03/03/2023----# Organize and inventory servers with hierarchies, tagging, and reporting --Azure Arc-enabled servers allows customers to develop an inventory across hybrid, multicloud, and edge workloads with the organizational and reporting capabilities native to Azure management. Azure Arc-enabled servers supports a breadth of platforms and distributions across Windows and Linux. Arc-enabled servers is also domain agnostic and integrates with Azure Lighthouse for multi-tenant customers. --By projecting resources into the Azure management plane, Azure Arc empowers customers to leverage the organizational, tagging, and querying capabilities native to Azure. --## Organize resources with built-in Azure hierarchies --Azure provides four levels of management scope: --- Management groups-- Subscriptions-- Resource groups-- Resources--These levels of management help to manage access, policies, and compliance more efficiently. For example, if you apply a policy at one level, it propagates down to lower levels, helping improve governance posture. Moreover, these levels can be used to scope policies and security controls. For Arc-enabled servers, the different business units, applications, or workloads can be used to derive the hierarchical structure in Azure. Once resources have been onboarded to Azure Arc, you can seamlessly move an Arc-enabled server between different resource groups and scopes. ---## Tagging resources to capture additional, customizable metadata --Tags are metadata elements you apply to your Azure resources. They are key-value pairs that help identify resources, based on settings relevant to your organization. For example, you can tag the environment for a resource as *Production* or *Testing*. Alternatively, you can use tagging to capture the ownership for a resource, separating the *Creator* or *Administrator*. Tags can also capture details on the resource itself, such as the physical datacenter, business unit, or workload. You can apply tags to your Azure resources, resource groups, and subscriptions. This extends to infrastructure outside of Azure as well, through Azure Arc. ---You can define tags in Azure portal through a simple point and click method. Tags can be defined when onboarding servers to Azure Arc-enabled servers or on a per-server basis. Alternatively, you can use Azure CLI, Azure PowerShell, ARM templates, or Azure policy for scalable tag deployments. Tags can be used to filter operations as well, such as the deployment of extensions or service attachments. This provides not only a more comprehensive inventory of your servers, but also operational flexibility and ease of management. ---## Reporting and querying with Azure Resource Graph (ARG) --Numerous types of data are collected with Azure Arc-enabled servers as part of the instance metadata. This includes the platform, operating system, presence of SQL server, or AWS and GCP details. These attributes can be queried at scale using Azure Resource Graph. 
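As a sketch of what such a query looks like, the following Azure PowerShell snippet (assuming the `Az.ResourceGraph` module is installed) lists Arc-enabled servers along with a few of these attributes:

```powershell
# Requires the Az.ResourceGraph module (Install-Module Az.ResourceGraph)
# List Azure Arc-enabled servers with their location, OS, and agent version
Search-AzGraph -Query @"
resources
| where type =~ 'microsoft.hybridcompute/machines'
| project name, location, osName = tostring(properties.osName), agentVersion = tostring(properties.agentVersion)
"@
```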
--Azure Resource Graph is an Azure service designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment. These queries provide the ability to query resources with complex filtering, grouping, and sorting by resource properties. --Results can be easily visualized and exported to other reporting solutions. Moreover there are dozens of built-in Azure Resource Graph queries capturing salient information across Azure VMs and Arc-enabled servers, such as their VM extensions, regional breakdown, and operating systems. --## Additional resources --* [What is Azure Resource Graph?](../../governance/resource-graph/overview.md) --* [Azure Resource Graph sample queries for Azure Arc-enabled servers](resource-graph-samples.md) --* [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md?tabs=json) |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md | - Title: Azure Arc-enabled servers Overview -description: Learn how to use Azure Arc-enabled servers to manage servers hosted outside of Azure like an Azure resource. Previously updated : 06/03/2024----# What is Azure Arc-enabled servers? --Azure Arc-enabled servers lets you manage Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. For the purposes of Azure Arc, these machines hosted outside of Azure are considered hybrid machines. The management of hybrid machines in Azure Arc is designed to be consistent with how you manage native Azure virtual machines, using standard Azure constructs such as Azure Policy and applying tags. (For additional information about hybrid environments, see [What is a hybrid cloud?](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-hybrid-cloud-computing)) --When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group. --To connect hybrid machines to Azure, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent doesn't replace the Azure [Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-overview). The Azure Monitor Agent for Windows and Linux is required in order to: --* Proactively monitor the OS and workloads running on the machine -* Manage it using Automation runbooks or solutions like Update Management -* Use other Azure services like [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) --You can install the Connected Machine agent manually, or on multiple machines at scale, using the [deployment method](deployment-options.md) that works best for your scenario. ---> [!NOTE] -> For additional guidance regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md). -> --## Supported cloud operations --When you connect your machine to Azure Arc-enabled servers, you can perform many operational functions, just as you would with native Azure virtual machines. Below are some of the key supported actions for connected machines. --* **Govern**: - * Assign [Azure machine configurations](../../governance/machine-configuration/overview.md) to audit settings inside the machine. To understand the cost of using Azure Machine Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/). -* **Protect**: - * Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint), included through [Microsoft Defender for Cloud](../../security-center/defender-for-servers-introduction.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions from the threats detected. - * Use [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) to collect security-related events and correlate them with other data sources. 
-* **Configure**: - * Use [Azure Automation](../../automation/extension-based-hybrid-runbook-worker-install.md?tabs=windows) for frequent and time-consuming management tasks using PowerShell and Python [runbooks](../../automation/automation-runbook-execution.md). Assess configuration changes for installed software, Microsoft services, Windows registry and files, and Linux daemons using [Change Tracking and Inventory](../../automation/change-tracking/overview.md) - * Use [Update Management](../../automation/update-management/overview.md) to manage operating system updates for your Windows and Linux servers. Automate onboarding and configuration of a set of Azure services when you use [Azure Automanage (preview)](../../automanage/automanage-arc.md). - * Perform post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. -* **Monitor**: - * Monitor operating system performance and discover application components to monitor processes and dependencies with other resources using [VM insights](/azure/azure-monitor/vm/vminsights-overview). - * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-overview). This data is stored in a [Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview). --> [!NOTE] -> At this time, enabling Azure Automation Update Management directly from an Azure Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and [how to enable Update Management for non-Azure VMs](../../automation/update-management/enable-from-automation-account.md#enable-non-azure-vms). --Log data collected and stored in a Log Analytics workspace from the hybrid machine contains properties specific to the machine, such as a Resource ID, to support [resource-context](/azure/azure-monitor/logs/manage-access#access-mode) log access. --Watch this video to learn more about Azure monitoring, security, and update services across hybrid and multicloud environments. --> [!VIDEO https://www.youtube.com/embed/mJnmXBrU1ao] --## Supported regions --For a list of supported regions with Azure Arc-enabled servers, see the [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc) page. --In most cases, the location you select when you create the installation script should be the Azure region geographically closest to your machine's location. Data at rest is stored within the Azure geography containing the region you specify, which may also affect your choice of region if you have data residency requirements. If the Azure region your machine connects to has an outage, the connected machine isn't affected, but management operations using Azure may be unable to complete. If there's a regional outage, and if you have multiple locations that support a geographically redundant service, it's best to connect the machines in each location to a different Azure region. 
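If you later need to confirm which region a connected machine resource was created in, a quick check with the `Az.ConnectedMachine` PowerShell module (a sketch; the resource group name is a placeholder) looks like this:

```powershell
# Requires the Az.ConnectedMachine module (Install-Module Az.ConnectedMachine)
# Show the Azure region each Arc-enabled server in a resource group is registered to
Get-AzConnectedMachine -ResourceGroupName "rg-arc-servers" |
    Select-Object Name, Location, Status
```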
--[Instance metadata information about the connected machine](agent-overview.md#instance-metadata) is collected and stored in the region where the Azure Arc machine resource is configured, including the following: --* Operating system name and version -* Computer name -* Computers fully qualified domain name (FQDN) -* Connected Machine agent version --For example, if the machine is registered with Azure Arc in the East US region, the metadata is stored in the US region. --## Supported environments --Azure Arc-enabled servers support the management of physical servers and virtual machines hosted *outside* of Azure. For specific details about supported hybrid cloud environments hosting VMs, see [Connected Machine agent prerequisites](prerequisites.md#supported-environments). --> [!NOTE] -> Azure Arc-enabled servers is not designed or supported to enable management of virtual machines running in Azure. --## Agent status --The status for a connected machine can be viewed in the Azure portal under **Azure Arc > Servers**. --The Connected Machine agent sends a regular heartbeat message to the service every five minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline, and its status will automatically be changed to **Disconnected** within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status will automatically be changed back to **Connected**. --If a machine remains disconnected for 45 days, its status may change to **Expired**. An expired machine can no longer connect to Azure and requires a server administrator to disconnect and then reconnect it to Azure to continue managing it with Azure Arc. The exact date upon which a machine expires is determined by the expiration date of the managed identity's credential, which is valid up to 90 days and renewed every 45 days. --## Service limits --There's no limit to how many Arc-enabled servers and VM extensions you can deploy in a resource group or subscription. The standard 800 resource limit per resource group applies to the Azure Arc Private Link Scope resource type. --To learn more about resource type limits, see the [Resource instance limit](../../azure-resource-manager/management/resources-without-resource-group-limit.md#microsofthybridcompute) article. --## Data residency --Azure Arc-enabled servers stores customer data. By default, customer data stays within the region the customer deploys the service instance in. For region with data residency requirements, customer data is always kept within the same region. --## Next steps --* Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review the [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods. -* Try out Arc-enabled servers by using the [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_servers). -* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. -* Explore the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management). |
azure-arc | Plan At Scale Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md | - Title: Plan and deploy Azure Arc-enabled servers -description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 02/26/2024----# Plan and deploy Azure Arc-enabled servers --Deployment of an IT infrastructure service or business application is a challenge for any company. In order to execute it well and avoid any unwelcome surprises and unplanned costs, you need to thoroughly plan for it to ensure that you're as ready as possible. To plan for deploying Azure Arc-enabled servers at any scale, your plan should cover the design and deployment criteria that need to be met in order to successfully complete the tasks. --For the deployment to proceed smoothly, your plan should establish a clear understanding of: --* Roles and responsibilities. -* Inventory of physical servers or virtual machines to verify they meet network and system requirements. -* The skill set and training required to enable successful deployment and ongoing management. -* Acceptance criteria and how you track success. -* Tools or methods to be used to automate the deployments. -* Identified risks and mitigation plans to avoid delays, disruptions, etc. -* How to avoid disruption during deployment. -* What's the escalation path when a significant issue occurs? --The purpose of this article is to ensure you are prepared for a successful deployment of Azure Arc-enabled servers across multiple production physical servers or virtual machines in your environment. --To learn more about our at-scale deployment recommendations, you can also refer to this video. --> [!VIDEO https://www.youtube.com/embed/Cf1jUPOB_vs] --## Prerequisites --Consider the following basic requirements when planning your deployment: --* Your machines must run a [supported operating system](prerequisites.md#supported-operating-systems) for the Connected Machine agent. -* Your machines must have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server. -* To install and configure the Azure Connected Machine agent, you must have an account with elevated privileges (that is, an administrator or as root) on the machines. -* To onboard machines, you must have the **Azure Connected Machine Onboarding** Azure built-in role. -* To read, modify, and delete a machine, you must have the **Azure Connected Machine Resource Administrator** Azure built-in role. --For more details, see the [prerequisites](prerequisites.md) and [network requirements](network-requirements.md) for installing the Connected Machine agent. --## Pilot --Before deploying to all production machines, start by evaluating the deployment process on a smaller scale. For a pilot, identify a representative sampling of machines that aren't critical to your company's ability to conduct business. You'll want to be sure to allow enough time to run the pilot and assess its impact: we recommend a minimum of 30 days. --Establish a formal plan describing the scope and details of the pilot. The following is a sample of what a plan should include to help get you started. --* Objectives - Describes the business and technical drivers that led to the decision that a pilot is necessary. 
-* Selection criteria - Specifies the criteria used to select which aspects of the solution will be demonstrated via a pilot. -* Scope - Describes the scope of the pilot, which includes but not limited to solution components, anticipated schedule, duration of the pilot, and number of machines to target. -* Success criteria and metrics - Define the pilot's success criteria and specific measures used to determine level of success. -* Training plan - Describes the plan for training system engineers, administrators, etc. who are new to Azure and it services during the pilot. -* Transition plan - Describes the strategy and criteria used to guide transition from pilot to production. -* Rollback - Describes the procedures for rolling back a pilot to pre-deployment state. -* Risks - List all identified risks for conducting the pilot and associated with production deployment. --## Phase 1: Build a foundation --In this phase, system engineers or administrators enable the core features in their organization's Azure subscription to start the foundation before enabling machines for management by Azure Arc-enabled servers and other Azure services. --|Task |Detail |Estimated duration | -|--|-|| -| [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) | A dedicated resource group to include only Azure Arc-enabled servers and centralize management and monitoring of these resources. | One hour | -| Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/azure/cloud-adoption-framework/decision-guides/resource-tagging/) that can help reduce the complexity of managing your Azure Arc-enabled servers and simplify making management decisions. | One day | -| Design and deploy [Azure Monitor Logs](/azure/azure-monitor/logs/data-platform-logs) | Evaluate [design and deployment considerations](/azure/azure-monitor/logs/workspace-design) to determine if your organization should use an existing or implement another Log Analytics workspace to store collected log data from hybrid servers and machines.<sup>1</sup> | One day | -| [Develop an Azure Policy](../../governance/policy/overview.md) governance plan | Determine how you will implement governance of hybrid servers and machines at the subscription or resource group scope with Azure Policy. | One day | -| Configure [Role based access control (RBAC)](../../role-based-access-control/overview.md) | Develop an access plan to control who has access to manage Azure Arc-enabled servers and ability to view their data from other Azure services and solutions. | One day | -| Identify machines with Log Analytics agent already installed | Run the following log query in [Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) to support conversion of existing Log Analytics agent deployments to extension-managed agent:<br> Heartbeat <br> | summarize arg_max(TimeGenerated, OSType, ResourceId, ComputerEnvironment) by Computer <br> | where ComputerEnvironment == "Non-Azure" and isempty(ResourceId) <br> | project Computer, OSType | One hour | --<sup>1</sup> When evaluating your Log Analytics workspace design, consider integration with Azure Automation in support of its Update Management and Change Tracking and Inventory feature, as well as Microsoft Defender for Cloud and Microsoft Sentinel. 
If your organization already has an Automation account and enabled its management features linked with a Log Analytics workspace, evaluate whether you can centralize and streamline management operations, as well as minimize cost, by using those existing resources versus creating a duplicate account, workspace, etc. --## Phase 2: Deploy Azure Arc-enabled servers --Next, we add to the foundation laid in Phase 1 by preparing for and [deploying the Azure Connected Machine agent](deployment-options.md). --|Task |Detail |Estimated duration | -|--|-|| -| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>At-scale onboarding VMware vSphere Windows Server VMs</ul></li> <ul><li>At-scale onboarding VMware vSphere Linux VMs</ul></li> <ul><li>At-scale onboarding AWS EC2 instances using Ansible</ul></li> | One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. | -| [Create service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) |Create a service principal to connect machines non-interactively using Azure PowerShell or from the portal.| One hour | -| Deploy the Connected Machine agent to your target servers and machines |Use your automation tool to deploy the scripts to your servers and connect them to Azure.| One or more days depending on your release plan and if following a phased rollout. | --## Phase 3: Manage and operate --Phase 3 is when administrators or system engineers can enable automation of manual tasks to manage and operate the Connected Machine agent and the machines during their lifecycle. --|Task |Detail |Estimated duration | -|--|-|| -|Create a Resource Health alert |If a server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it is offline, the network connection has been blocked, or the agent is not running. Develop a plan for how you'll respond and investigate these incidents and use [Resource Health alerts](/azure/service-health/resource-health-alert-monitor-guide) to get notified when they start.<br><br> Specify the following when configuring the alert:<br> **Resource type** = **Azure Arc-enabled servers**<br> **Current resource status** = **Unavailable**<br> **Previous resource status** = **Available** | One hour | -|Create an Azure Advisor alert | For the best experience and most recent security and bug fixes, we recommend keeping the Azure Connected Machine agent up to date. Out-of-date agents will be identified with an [Azure Advisor alert](/azure/advisor/advisor-alerts-portal).<br><br> Specify the following when configuring the alert:<br> **Recommendation type** = **Upgrade to the latest version of the Azure Connected Machine agent** | One hour | -|[Assign Azure policies](../../governance/policy/assign-policy-portal.md) to your subscription or resource group scope |Assign the **Enable Azure Monitor for VMs** [policy](/azure/azure-monitor/vm/vminsights-enable-policy) (and others that meet your needs) to the subscription or resource group scope. 
Azure Policy allows you to assign policy definitions that install the required agents for VM insights across your environment.| Varies | -|Enable [Azure Update Manager](/azure/update-manager/) for your Azure Arc-enabled servers. |Configure Azure Update Manager on your Arc-enabled servers to manage system updates for your Windows and Linux virtual machines. You can choose to [deploy updates on-demand](/azure/update-manager/deploy-updates?tabs=install-single-overview%2Cinstall-scale-overview) or [apply updates using custom schedule](/azure/update-manager/scheduled-patching?tabs=schedule-updates-single-machine%2Cschedule-updates-scale-overview%2Cwindows-maintenance). | 5 minutes | --## Next steps --* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management). -* Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md). -* Review troubleshooting information in the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md). -* Learn how to simplify deployment with other Azure services like Azure Automation [State Configuration](../../automation/automation-dsc-overview.md) and other supported [Azure VM extensions](manage-vm-extensions.md). |
azure-arc | Plan Evaluate On Azure Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md | - Title: How to evaluate Azure Arc-enabled servers with an Azure virtual machine -description: Learn how to evaluate Azure Arc-enabled servers using an Azure virtual machine. Previously updated : 10/01/2021----# Evaluate Azure Arc-enabled servers on an Azure virtual machine --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --Azure Arc-enabled servers is designed to help you connect servers running on-premises or in other clouds to Azure. Normally, you wouldn't connect an Azure virtual machine to Azure Arc because all the same capabilities are natively available for these VMs. Azure VMs already have a representation in Azure Resource Manager, VM extensions, managed identities, and Azure Policy. If you attempt to install Azure Arc-enabled servers on an Azure VM, you'll receive an error message stating that it is unsupported. --While you cannot install Azure Arc-enabled servers on an Azure VM for production scenarios, it's possible to configure Azure Arc-enabled servers to run on an Azure VM for *evaluation and testing purposes only*. This article walks you through how to prepare an Azure VM to look like an on-premises server for testing purposes. --> [!NOTE] -> The steps in this article are intended for virtual machines hosted in the Azure cloud. Azure Arc-enabled servers is not supported on virtual machines running on Azure Stack Hub or Azure Stack Edge. --## Prerequisites --* Your account is assigned to the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role. -* The Azure virtual machine is running an [operating system supported by Azure Arc-enabled servers](prerequisites.md#supported-operating-systems). If you don't have an Azure VM, you can deploy a [simple Windows VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json) or a [simple Ubuntu Linux 18.04 LTS VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json). -* Your Azure VM can communicate outbound to download the Azure Connected Machine agent package for Windows from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent), and Linux from the Microsoft [package repository](https://packages.microsoft.com/). If outbound connectivity to the Internet is restricted following your IT security policy, you can download the agent package manually and copy it to a folder on the Azure VM. -* An account with elevated (that is, an administrator or as root) privileges on the VM, and RDP or SSH access to the VM. 
-* To register and manage the Azure VM with Azure Arc-enabled servers, you must be a member of the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group. --## Plan --To start managing your Azure VM as an Azure Arc-enabled server, you need to make the following changes to the Azure VM before you can install and configure Azure Arc-enabled servers. --1. Remove any VM extensions deployed to the Azure VM, such as the Azure Monitor agent. While Azure Arc-enabled servers support many of the same extensions as Azure VMs, the Azure Connected Machine agent can't manage VM extensions already deployed to the VM. --2. Disable the Azure Windows or Linux Guest Agent. The Azure VM guest agent serves a similar purpose to the Azure Connected Machine agent. To avoid conflicts between the two, the Azure VM Agent needs to be disabled. Once it is disabled, you cannot use VM extensions or some Azure services. --3. Create a security rule to deny access to the Azure Instance Metadata Service (IMDS). IMDS is a REST API that applications can call to get information about the VM's representation in Azure, including its resource ID and location. IMDS also provides access to any managed identities assigned to the machine. Azure Arc-enabled servers provides its own IMDS implementation and returns information about the Azure Arc representation of the VM. To avoid situations where both IMDS endpoints are available and apps have to choose between the two, you block access to the Azure VM IMDS so that the Azure Arc-enabled server IMDS implementation is the only one available. --After you make these changes, your Azure VM behaves like any machine or server outside of Azure and is at the necessary starting point to install and evaluate Azure Arc-enabled servers. --When Azure Arc-enabled servers is configured on the VM, you see two representations of it in Azure. One is the Azure VM resource, with a `Microsoft.Compute/virtualMachines` resource type, and the other is an Azure Arc resource, with a `Microsoft.HybridCompute/machines` resource type. Because you've prevented management of the guest operating system from the shared physical host server, the best way to think about the two resources is this: the Azure VM resource is the virtual hardware for your VM, and it lets you control the power state and view information about its SKU, network, and storage configurations. The Azure Arc resource manages the guest operating system in that VM, and can be used to install extensions, view compliance data for Azure Policy, and complete any other task supported by Azure Arc-enabled servers. --## Reconfigure Azure VM --> [!NOTE] -> For Windows, set the following environment variable to override the block on installing Azure Arc on an Azure VM. -> -> ```powershell -> [System.Environment]::SetEnvironmentVariable("MSFT_ARC_TEST",'true', [System.EnvironmentVariableTarget]::Machine) -> ``` --1. Remove any VM extensions on the Azure VM. -- In the Azure portal, navigate to your Azure VM resource and from the left-hand pane, select **Extensions**. If there are any extensions installed on the VM, select each extension individually and then select **Uninstall**. Wait for all extensions to finish uninstalling before proceeding to step 2. --2. Disable the Azure VM Guest Agent. 
-- To disable the Azure VM Guest Agent, connect to your VM using Remote Desktop Connection (Windows) or SSH (Linux) and run the following commands to disable the guest agent. -- For Windows, run the following PowerShell commands: -- ```powershell - Set-Service WindowsAzureGuestAgent -StartupType Disabled -Verbose - Stop-Service WindowsAzureGuestAgent -Force -Verbose - ``` -- For Linux, run the following commands: -- ```bash - sudo systemctl stop walinuxagent - sudo systemctl disable walinuxagent - ``` --3. Block access to the Azure IMDS endpoint. - > [!NOTE] - > The configurations below need to be applied for 169.254.169.254 and 169.254.169.253. These are endpoints used for IMDS in Azure and Azure Stack HCI respectively. -- While still connected to the server, run the following commands to block access to the Azure IMDS endpoint. For Windows, run the following PowerShell command: -- ```powershell - New-NetFirewallRule -Name BlockAzureIMDS -DisplayName "Block access to Azure IMDS" -Enabled True -Profile Any -Direction Outbound -Action Block -RemoteAddress 169.254.169.254 - ``` -- For Linux, consult your distribution's documentation for the best way to block outbound access to `169.254.169.254/32` over TCP port 80. Normally you'd block outbound access with the built-in firewall, but you can also temporarily block it with **iptables** or **nftables**. -- If your Azure VM is running Ubuntu, perform the following steps to configure its uncomplicated firewall (UFW): -- ```bash - sudo ufw --force enable - sudo ufw deny out from any to 169.254.169.254 - sudo ufw default allow incoming - ``` -- If your Azure VM is running CentOS, Red Hat, or SUSE Linux Enterprise Server (SLES), perform the following steps to configure firewalld: -- ```bash - sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -p tcp -d 169.254.169.254 -j REJECT - sudo firewall-cmd --reload - ``` -- For other distributions, consult your firewall docs or configure a generic iptables rule with the following command: -- ```bash - sudo iptables -I OUTPUT 1 -d 169.254.169.254 -j REJECT - ``` -- > [!NOTE] - > The iptables configuration needs to be set after every reboot unless a persistent iptables solution is used. ---4. Install and configure the Azure Connected Machine agent. -- The VM is now ready for you to begin evaluating Azure Arc-enabled servers. To install and configure the Azure Connected Machine agent, see [Connect hybrid machines using the Azure portal](onboard-portal.md) and follow the steps to generate an installation script and install using the scripted method. -- > [!NOTE] - > If outbound connectivity to the internet is restricted from your Azure VM, you can download the agent package manually. Copy the agent package to the Azure VM, and modify the Azure Arc-enabled servers installation script to reference the source folder. --If you missed one of the steps, the installation script detects it is running on an Azure VM and terminates with an error. Verify you've completed steps 1-3, and then rerun the script. --## Verify the connection with Azure Arc --After you install and configure the agent to register with Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the [Azure portal](https://portal.azure.com). 
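You can also confirm the connection from the machine itself. The agent's `azcmagent show` command, installed with the Connected Machine agent, summarizes the agent status and the Azure resource it's connected to (shown here in PowerShell; the same command works from a Linux shell):

```powershell
# Run on the connected VM; the agent status should report "Connected"
azcmagent show
```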
--![A successful server connection](./media/onboard-portal/arc-for-servers-successful-onboard.png) --## Next steps --* Learn [how to plan and enable a large number of machines to Azure Arc-enabled servers](plan-at-scale-deployment.md) to simplify configuration of essential security, management, and monitoring capabilities in Azure. --* Learn about the [supported Azure VM extensions](manage-vm-extensions.md) available to simplify deployment with other Azure services like Automation, Key Vault, and others for your Windows or Linux machine. --* When you have finished testing, [uninstall the Azure Connected Machine agent](manage-agent.md#uninstall-the-agent). |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | - Title: Built-in policy definitions for Azure Arc-enabled servers -description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/06/2024----# Azure Policy built-in definitions for Azure Arc-enabled servers --This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy -definitions for Azure Arc-enabled servers. For additional Azure Policy built-ins for other services, -see [Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md). --The name of each built-in policy definition links to the policy definition in the Azure portal. Use -the link in the **Version** column to view the source on the -[Azure Policy GitHub repo](https://github.com/Azure/azure-policy). --## Azure Arc-enabled servers ---## Next steps --- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).-- Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../../governance/policy/concepts/effects.md). |
azure-arc | Prepare Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md | - Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc -description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 07/03/2024----# Prepare to deliver Extended Security Updates for Windows Server 2012 --With Windows Server 2012 and Windows Server 2012 R2 having reached end of support on October 10, 2023, Azure Arc-enabled servers lets you enroll your existing Windows Server 2012/2012 R2 machines in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview). Affording both cost flexibility and an enhanced delivery experience, Azure Arc better positions you to migrate to Azure. --The purpose of this article is to help you understand the benefits and how to prepare to use Arc-enabled servers to enable delivery of ESUs. --> [!NOTE] -> Azure VMware Solutions (AVS) machines are eligible for free ESUs and should not enroll in ESUs enabled through Azure Arc. -> -## Key benefits --Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the following key benefits: --- **Pay-as-you-go:** Flexibility to sign up for a monthly subscription service with the ability to migrate mid-year.--- **Azure billed:** You can draw down from your existing [Microsoft Azure Consumption Commitment (MACC)](/marketplace/azure-consumption-commitment-benefit) and analyze your costs using [Microsoft Cost Management and Billing](../../cost-management-billing/cost-management-billing-overview.md).--- **Built-in inventory:** The coverage and enrollment status of Windows Server 2012/2012 R2 ESUs on eligible Arc-enabled servers are identified in the Azure portal, highlighting gaps and status changes.--- **Keyless delivery:** The enrollment of ESUs on Azure Arc-enabled Windows Server 2012/2012 R2 machines won't require the acquisition or activation of keys.--## Access to Azure services --For Azure Arc-enabled servers enrolled in WS2012 ESUs enabled by Azure Arc, free access is provided to these Azure services from October 10, 2023: --* [Azure Update Manager](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines. - Enrollment in ESUs does not impact Azure Update Manager. After enrollment in ESUs through Azure Arc, the server becomes eligible for ESU patches. These patches can be delivered through Azure Update Manager or any other patching solution. You'll still need to configure updates from Microsoft Updates or Windows Server Update Services. -* [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2) - Track changes in virtual machines hosted in Azure, on-premises, and other cloud environments. -* [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) - Audit the configuration settings in a virtual machine. Guest configuration supports Azure VMs natively and non-Azure physical and virtual servers through Azure Arc-enabled servers. 
--Other Azure services through Azure Arc-enabled servers are available as well, with offerings such as: --* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction) - As part of the cloud security posture management (CSPM) pillar, it provides server protections through [Microsoft Defender for Servers](/azure/defender-for-cloud/plan-defender-for-servers) to help protect you from various cyber threats and vulnerabilities. -* [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) - Collect security-related events and correlate them with other data sources. - -## Prepare delivery of ESUs --Plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) to establish a connection to Azure. Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported. --We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and enroll through the Azure portal or using Azure Policy. Billing for this service starts from October 2023 (i.e., after Windows Server 2012 end of support). --> [!NOTE] -> In order to purchase ESUs, you must have Software Assurance through Volume Licensing Programs such as an Enterprise Agreement (EA), Enterprise Agreement Subscription (EAS), Enrollment for Education Solutions (EES), Server and Cloud Enrollment (SCE), or through Microsoft Open Value Programs. Alternatively, if your Windows Server 2012/2012 R2 machines are licensed through SPLA or with a Server Subscription, Software Assurance is not required to purchase ESUs. --You must also download both the licensing package and servicing stack update (SSU) for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). --### Deployment options --There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md). There are also at-scale ESU delivery options for [VMware vCenter managed VMs](../vmware-vsphere/deliver-extended-security-updates-for-vmware-vms-through-arc.md) and [SCVMM managed VMs](../system-center-virtual-machine-manager/deliver-esus-for-system-center-virtual-machine-manager-vms.md) through Azure Arc. --> [!NOTE] -> Delivery of ESUs through Azure Arc to virtual machines running on Virtual Desktop Infrastructure (VDI) is not recommended. VDI systems should use Multiple Activation Keys (MAK) to apply ESUs. See [Access your Multiple Activation Key from the Microsoft 365 Admin Center](/windows-server/get-started/extended-security-updates-deploy) to learn more. -> --### Networking --Connectivity options include public endpoint, proxy server, and private link or Azure Express Route. 
Review the [networking prerequisites](network-requirements.md) to prepare non-Azure environments for deployment to Azure Arc. ---> [!TIP] -> To take advantage of the full range of offerings for Arc-enabled servers, such as extensions and remote connectivity, ensure that you allow the additional URLs that apply to your scenario. For more information, see [Connected machine agent networking requirements](network-requirements.md). --## Required Certificate Authorities --The following [Certificate Authorities](/azure/security/fundamentals/azure-ca-details?tabs=root-and-subordinate-cas-list) are required for Extended Security Updates for Windows Server 2012: --- [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt)-- [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt)-- [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt)-- [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt)--If necessary, these Certificate Authorities can be [manually downloaded and installed](troubleshoot-extended-security-updates.md#option-2-manually-download-and-install-the-intermediate-ca-certificates). --## Next steps --* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy). --* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management). -* Learn more about [Arc-enabled servers](overview.md) and how they work with Azure through the Azure Connected Machine agent. -* Explore options for [onboarding your machines](plan-at-scale-deployment.md) to Azure Arc-enabled servers. |
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | - Title: Connected Machine agent prerequisites -description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 07/29/2024-----# Connected Machine agent prerequisites --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --This article describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have more requirements. --## Supported environments --Azure Arc-enabled servers support the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like: --* VMware (including Azure VMware Solution) -* Azure Stack HCI -* Other cloud environments --You shouldn't install Azure Arc on virtual machines hosted in Azure, Azure Stack Hub, or Azure Stack Edge, as they already have similar capabilities. You can, however, [use an Azure VM to simulate an on-premises environment](plan-evaluate-on-azure-virtual-machine.md) for testing purposes, only. --Take extra care when using Azure Arc on systems that are: --* Cloned -* Restored from backup as a second instance of the server -* Used to create a "golden image" from which other virtual machines are created --If two agents use the same configuration, you'll encounter inconsistent behaviors when both agents try to act as one Azure resource. The best practice for these situations is to use an automation tool or script to onboard the server to Azure Arc after its cloned, restored from backup, or created from a golden image. --> [!NOTE] -> For additional information on using Azure Arc-enabled servers in VMware environments, see the [VMware FAQ](vmware-faq.md). --## Supported operating systems --Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent doesn't run on x86 (32-bit) or ARM-based architectures. --* AlmaLinux 9 -* Amazon Linux 2 and 2023 -* Azure Linux (CBL-Mariner) 2.0 and 3.0 -* Azure Stack HCI -* Debian 11, and 12 -* Oracle Linux 7, 8, and 9 -* Red Hat Enterprise Linux (RHEL) 7, 8 and 9 -* Rocky Linux 8 and 9 -* SUSE Linux Enterprise Server (SLES) 12 SP3-SP5 and 15 -* Ubuntu 18.04, 20.04, and 22.04 LTS -* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) -* Windows IoT Enterprise -* Windows Server 2012, 2012 R2, 2016, 2019, and 2022 - * Both Desktop and Server Core experiences are supported - * Azure Editions are supported on Azure Stack HCI --The Azure Connected Machine agent isn't tested on operating systems hardened by the Center for Information Security (CIS) Benchmark. --> [!NOTE] -> [Azure Connected Machine agent version 1.44](agent-release-notes.md#version-144july-2024) is the last version to officially support Debian 10, Ubuntu 16.04, and Azure Linux (CBL-Mariner) 1.0. -> --## Limited support operating systems --The following operating system versions have **limited support**. 
In each case, newer agent versions won't support these operating systems. The last agent version that supports the operating system is listed, and newer agent releases won't be made available for that system. -The listed version is supported until the **End of Arc Support Date**. If critical security issues are identified that affect these agent versions, the fixes can be backported to the last supported version, but new functionality or other bug fixes won't be. --| Operating system | Last supported agent version | End of Arc Support Date | Notes | -| -- | -- | -- | -- | -| Windows Server 2008 R2 SP1 | 1.39 [Download](https://aka.ms/AzureConnectedMachineAgent-1.39) | 03/31/2025 | Windows Server 2008 and 2008 R2 reached End of Support in January 2020. See [End of support for Windows Server 2008 and Windows Server 2008 R2](/troubleshoot/windows-server/windows-server-eos-faq/end-of-support-windows-server-2008-2008r2). | -| CentOS 7 and 8 | 1.42 | 05/31/2025 | See the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). | -| Debian 10 | 1.44 | 07/15/2025 | | -| Ubuntu 16.04 | 1.44 | 07/15/2025 | | -| Azure Linux (CBL-Mariner) 1.0 | 1.44 | 07/15/2025 | | --### Connect new limited support servers --To connect a new server running a Limited Support operating system to Azure Arc, you will need to make some adjustments to the onboarding script. --For Windows, modify the installation script to specify the version required, using the -AltDownload parameter. --Instead of --```pwsh - # Install the hybrid agent - & "$env:TEMP\install_windows_azcmagent.ps1"; -``` --Use --```pwsh - # Install the hybrid agent - & "$env:TEMP\install_windows_azcmagent.ps1" -AltDownload https://aka.ms/AzureConnectedMachineAgent-1.39; -``` --For Linux, the relevant package repository will only contain releases that are applicable, so no special considerations are required. ---### Client operating system guidance --The Azure Arc service and Azure Connected Machine Agent are supported on Windows 10 and 11 client operating systems only when using those computers in a server-like environment. That is, the computer should always be: --* Connected to the internet -* Connected to a power source -* Powered on --For example, a computer running Windows 11 that's responsible for digital signage, point-of-sale solutions, and general back office management tasks is a good candidate for Azure Arc. End-user productivity machines, such as a laptop, which may go offline for long periods of time, shouldn't use Azure Arc and instead should consider [Microsoft Intune](/mem/intune) or [Microsoft Configuration Manager](/mem/configmgr). --### Short-lived servers and virtual desktop infrastructure --Microsoft doesn't recommend running Azure Arc on short-lived (ephemeral) servers or virtual desktop infrastructure (VDI) VMs. Azure Arc is designed for long-term management of servers and isn't optimized for scenarios where you are regularly creating and deleting servers. For example, Azure Arc doesn't know if the agent is offline due to planned system maintenance or if the VM was deleted, so it won't automatically clean up server resources that stopped sending heartbeats. As a result, you could encounter a conflict if you re-create the VM with the same name and there's an existing Azure Arc resource with the same name. --[Azure Virtual Desktop on Azure Stack HCI](../../virtual-desktop/azure-stack-hci-overview.md) doesn't use short-lived VMs and supports running Azure Arc in the desktop VMs. 
--## Software requirements --Windows operating systems: --* Windows Server 2008 R2 SP1 requires PowerShell 4.0 or later. Microsoft recommends running the latest version, [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616). --Linux operating systems: --* systemd -* wget (to download the installation script) -* openssl -* gnupg (Debian-based systems, only) --## Local user logon right for Windows systems --The Azure Hybrid Instance Metadata Service runs under a low-privileged virtual account, `NT SERVICE\himds`. This account needs the "log on as a service" right in Windows to run. In most cases, there's nothing you need to do because this right is granted to virtual accounts by default. However, if your organization uses Group Policy to customize this setting, you'll need to add `NT SERVICE\himds` to the list of accounts allowed to log on as a service. --You can check the current policy on your machine by opening the Local Group Policy Editor (`gpedit.msc`) from the Start menu and navigating to the following policy item: --Computer Configuration > Windows Settings > Security Settings > Local Policies > User Rights Assignment > Log on as a service --Check if any of `NT SERVICE\ALL SERVICES`, `NT SERVICE\himds`, or `S-1-5-80-4215458991-2034252225-2287069555-1155419622-2701885083` (the static security identifier for NT SERVICE\\himds) are in the list. If none are in the list, you'll need to work with your Group Policy administrator to add `NT SERVICE\himds` to any policies that configure user rights assignments on your servers. The Group Policy administrator needs to make the change on a computer with the Azure Connected Machine agent installed so the object picker resolves the identity correctly. The agent doesn't need to be configured or connected to Azure to make this change. ---## Required permissions --You'll need the following Azure built-in roles for different aspects of managing connected machines: --* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group where you're managing the servers. -* To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group. -* To select a resource group from the drop-down list when using the **Generate script** method, you'll also need the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role that includes **Reader** access). -* When associating a Private Link Scope with an Arc Server, you must have Microsoft.HybridCompute/privateLinkScopes/read permission on the Private Link Scope Resource. --## Azure subscription and service limits --There are no limits to the number of Azure Arc-enabled servers you can register in any single resource group, subscription, or tenant. --Each Azure Arc-enabled server is associated with a Microsoft Entra object and counts against your directory quota. See [Microsoft Entra service limits and restrictions](../../active-directory/enterprise-users/directory-service-limits-restrictions.md) for information about the maximum number of objects you can have in a Microsoft Entra directory. 
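To put the required permissions described above into practice ahead of an onboarding effort, a role assignment sketch like the following can be used (assumes the Az.Resources PowerShell module; the sign-in name and resource group name are placeholders):

```powershell
# Grant the built-in onboarding role at resource group scope (names are placeholders)
New-AzRoleAssignment -SignInName "operator@contoso.com" `
    -RoleDefinitionName "Azure Connected Machine Onboarding" `
    -ResourceGroupName "rg-arc-servers"
```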
--## Azure resource providers --To use Azure Arc-enabled servers, the following [Azure resource providers](../../azure-resource-manager/management/resource-providers-and-types.md) must be registered in your subscription: --* **Microsoft.HybridCompute** -* **Microsoft.GuestConfiguration** -* **Microsoft.HybridConnectivity** -* **Microsoft.AzureArcData** (if you plan to Arc-enable SQL Servers) -* **Microsoft.Compute** (for Azure Update Manager and automatic extension upgrades) --You can register the resource providers using the following commands: --Azure PowerShell: --```azurepowershell-interactive -Connect-AzAccount -Set-AzContext -SubscriptionId [subscription you want to onboard] -Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute -Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration -Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity -Register-AzResourceProvider -ProviderNamespace Microsoft.AzureArcData -``` --Azure CLI: --```azurecli-interactive -az account set --subscription "{Your Subscription Name}" -az provider register --namespace 'Microsoft.HybridCompute' -az provider register --namespace 'Microsoft.GuestConfiguration' -az provider register --namespace 'Microsoft.HybridConnectivity' -az provider register --namespace 'Microsoft.AzureArcData' -``` --You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). --## Next steps --* Review the [networking requirements for deploying Azure Arc-enabled servers](network-requirements.md). -* Before you deploy the Azure Connected Machine agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md). -* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md). |
azure-arc | Private Link Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md | - Title: Use Azure Private Link to connect servers to Azure Arc using a private endpoint -description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. -- Previously updated : 06/20/2023---# Use Azure Private Link to securely connect servers to Azure Arc --[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. This means you can connect your on-premises or multicloud servers with Azure Arc and send all traffic over an Azure [ExpressRoute](../../expressroute/expressroute-introduction.md) or site-to-site [VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) instead of using public networks. --Starting with Azure Arc-enabled servers, you can use a Private Link Scope model to allow multiple servers or machines to communicate with their Azure Arc resources using a single private endpoint. --This article covers when to use and how to set up an Azure Arc Private Link Scope. --## Advantages --With Private Link you can: --- Connect privately to Azure Arc without opening up any public network access.-- Ensure data from the Azure Arc-enabled machine or server is only accessed through authorized private networks. This also includes data from [VM extensions](manage-vm-extensions.md) installed on the machine or server that provide post-deployment management and monitoring support.-- Prevent data exfiltration from your private networks by defining specific Azure Arc-enabled servers and other Azure services resources, such as Azure Monitor, that connect through your private endpoint.-- Securely connect your private on-premises network to Azure Arc using ExpressRoute and Private Link.-- Keep all traffic inside the Microsoft Azure backbone network.--For more information, see [Key Benefits of Private Link](../../private-link/private-link-overview.md#key-benefits). --## How it works --An Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled servers. When you enable any of the VM extensions supported by Azure Arc-enabled servers, such as Azure Monitor, those resources connect to other Azure resources, such as: --- A Log Analytics workspace, required for Azure Automation Change Tracking and Inventory, Azure Monitor VM insights, and Azure Monitor log collection with Azure Monitor agent.-- An Azure Automation account, required for Update Management and Change Tracking and Inventory.-- Azure Key Vault-- Azure Blob storage, required for Custom Script Extension.---Connectivity to any other Azure resource from an Azure Arc-enabled server requires configuring Private Link for each service, which is optional but recommended. Azure Private Link requires separate configuration per service. --For more information about configuring Private Link for the Azure services listed earlier, see the [Azure Automation](../../automation/how-to/private-link-security.md), [Azure Monitor](/azure/azure-monitor/logs/private-link-security), [Azure Key Vault](/azure/key-vault/general/private-link-service), or [Azure Blob storage](../../private-link/tutorial-private-endpoint-storage-portal.md) articles. --> [!IMPORTANT] -> Azure Private Link is now generally available.
Both Private Endpoint and Private Link service (service behind standard load balancer) are generally available. Different Azure PaaS services onboard to Azure Private Link on different schedules. See [Private Link availability](../../private-link/availability.md) for an updated status of Azure PaaS on Private Link. For known limitations, see [Private Endpoint](../../private-link/private-endpoint-overview.md#limitations) and [Private Link Service](../../private-link/private-link-service-overview.md#limitations). --* The Private Endpoint on your VNet allows it to reach Azure Arc-enabled servers endpoints through private IPs from your network's pool, instead of the public IPs of these endpoints. That allows you to keep using your Azure Arc-enabled servers resource without opening your VNet to unrequested outbound traffic. --* Traffic from the Private Endpoint to your resources goes over the Microsoft Azure backbone and isn't routed to public networks. --* You can configure each of your components to allow or deny ingestion and queries from public networks. That provides resource-level protection, so that you can control traffic to specific resources. --## Restrictions and limitations --The Azure Arc-enabled servers Private Link Scope object has a number of limits you should consider when planning your Private Link setup. --- You can associate at most one Azure Arc Private Link Scope with a virtual network.-- An Azure Arc-enabled machine or server resource can only connect to one Azure Arc-enabled servers Private Link Scope.-- All on-premises machines need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. For more information, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).-- The Azure Arc-enabled server and Azure Arc Private Link Scope must be in the same Azure region. The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled server.-- Network traffic to Microsoft Entra ID and Azure Resource Manager does not traverse the Azure Arc Private Link Scope and will continue to use your default network route to the internet. You can optionally [configure a resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) to send Azure Resource Manager traffic to a private endpoint.-- Other Azure services that you will use, for example Azure Monitor, require their own private endpoints in your virtual network.-- Remote access to the server using Windows Admin Center or SSH is not supported over private link at this time.--## Planning your Private Link setup --To connect your server to Azure Arc over a private link, you need to configure your network to accomplish the following: --1. Establish a connection between your on-premises network and an Azure virtual network using a [site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md) or [ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-arm.md). --1. Deploy an Azure Arc Private Link Scope, which controls which machines or servers can communicate with Azure Arc over private endpoints, and associate it with your Azure virtual network using a private endpoint. --1. Update the DNS configuration on your local network to resolve the private endpoint addresses. --1. 
Configure your local firewall to allow access to Microsoft Entra ID and Azure Resource Manager. --1. Associate the machines or servers registered with Azure Arc-enabled servers with the private link scope. --1. Optionally, deploy private endpoints for other Azure services your machine or server is managed by, such as: -- - Azure Monitor - - Azure Automation - - Azure Blob storage - - Azure Key Vault --This article assumes you have already set up your ExpressRoute circuit or site-to-site VPN connection. --## Network configuration --Azure Arc-enabled servers integrate with several Azure services to bring cloud management and governance to your hybrid machines or servers. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Microsoft Entra ID and Azure Resource Manager over the internet until these services offer private endpoints. --There are two ways you can achieve this: --- If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID and Azure using [service tags](../../virtual-network/service-tags-overview.md). The NSG rules should look like the following:-- |Setting |Microsoft Entra ID rule | Azure rule | - |--|--|--| - |Source |Virtual network |Virtual network | - |Source port ranges |* |* | - |Destination |Service Tag |Service Tag | - |Destination service tag |AzureActiveDirectory |AzureResourceManager | - |Destination port ranges |443 |443 | - |Protocol |Tcp |Tcp | - |Action |Allow |Allow | - |Priority |150 (must be lower than any rules that block internet access) |151 (must be lower than any rules that block internet access) | - |Name |AllowAADOutboundAccess |AllowAzOutboundAccess | --- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID and Azure using the downloadable service tag files. The [JSON file](https://www.microsoft.com/en-us/download/details.aspx?id=56519) contains all the public IP address ranges used by Microsoft Entra ID and Azure and is updated monthly to reflect any changes. Azure AD's service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.--See the visual diagram under the section [How it works](#how-it-works) for the network traffic flows. --## Create a Private Link Scope --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Go to **Create a resource** in the Azure portal and search for **Azure Arc Private Link Scope**. Or you can use the following link to open the [Azure Arc Private Link Scope](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2FprivateLinkScopes) page in the portal. -- :::image type="content" source="./media/private-link-security/private-scope-home.png" lightbox="./media/private-link-security/private-scope-home.png" alt-text="Screenshot of private scope home page with Create button." border="true"::: --1. Select **Create**. --1. In the **Basics** tab, select a Subscription and Resource Group. --1. Enter a name for the Azure Arc Private Link Scope. It's best to use a meaningful and clear name. 
-- Optionally, you can require every Azure Arc-enabled machine or server associated with this Azure Arc Private Link Scope to send data to the service through the private endpoint. To do so, leave the **Allow public network access** box unchecked. If you check the box, machines or servers associated with this Azure Arc Private Link Scope can communicate with the service over both private and public networks. You can change this setting after creating the scope if you change your mind. --1. Select the **Private endpoint** tab, then select **Create**. -1. In the **Create private endpoint** window: - 1. Enter a **Name** for the endpoint. -- 1. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. -- > [!NOTE] - > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Arc-enabled servers. -- 1. Select **OK**. --1. Select **Review + Create**. -- :::image type="content" source="./media/private-link-security/create-private-link-scope.png" alt-text="Screenshot showing the Create Private Link Scope window" border="true"::: --1. Let the validation pass, and then select **Create**. --<!--## Create a private endpoint --Once your Azure Arc Private Link Scope is created, you need to connect it with one or more virtual networks using a private endpoint. The private endpoint exposes access to the Azure Arc services on a private IP in your virtual network address space. --1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Add** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and selecting **Approve**. -- :::image type="content" source="./media/private-link-security/create-private-endpoint.png" alt-text="Create Private Endpoint" border="true"::: --1. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the VNet you connect it to. --1. Select **Next: Resource**. --1. On the **Resource** page, -- a. Pick the **Subscription** that contains your Azure Arc Private Link Scope resource. -- b. For **Resource type**, choose **Microsoft.HybridCompute/privateLinkScopes**. -- c. From the **Resource** drop-down, choose the Private Link Scope you created earlier. -- d. Select **Next: Configuration >**. -- :::image type="content" source="./media/private-link-security/create-private-endpoint-configuration.png" alt-text="Complete creation of Private Endpoint" border="true"::: --1. On the **Configuration** page, -- a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server. -- b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones might be different from what is shown in the screenshot below. -- > [!NOTE] - > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. 
Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Arc-enabled servers. -- c. Select **Review + create**. -- d. Let validation pass. -- e. Select **Create**.--> --## Configure on-premises DNS forwarding --Your on-premises machines or servers need to be able to resolve the private link DNS records to the private endpoint IP addresses. How you configure this depends on whether you're using Azure private DNS zones to maintain DNS records, or if you're using your own DNS server on-premises and how many servers you're configuring. --### DNS configuration using Azure-integrated private DNS zones --If you set up private DNS zones for Azure Arc-enabled servers and Guest Configuration when creating the private endpoint, your on-premises machines or servers need to be able to forward DNS queries to the built-in Azure DNS servers to resolve the private endpoint addresses correctly. You need a DNS forwarder in Azure (either a purpose-built VM or an Azure Firewall instance with DNS proxy enabled), after which you can configure your on-premises DNS server to forward queries to Azure to resolve private endpoint IP addresses. --The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder). --### Manual DNS server configuration --If you opted out of using Azure private DNS zones during private endpoint creation, you will need to create the required DNS records in your on-premises DNS server. --1. Go to the Azure portal. --1. Navigate to the private endpoint resource associated with your virtual network and private link scope. --1. From the left-hand pane, select **DNS configuration** to see a list of the DNS records and corresponding IP addresses you'll need to set up on your DNS server. The FQDNs and IP addresses will change based on the region you selected for your private endpoint and the available IP addresses in your subnet. -- :::image type="content" source="./media/private-link-security/dns-configuration.png" lightbox="./media/private-link-security/dns-configuration.png" alt-text="DNS configuration details" border="true"::: --1. Follow the guidance from your DNS server vendor to add the necessary DNS zones and A records to match the table in the portal. Ensure that you select a DNS server that is appropriately scoped for your network. Every machine or server that uses this DNS server now resolves the private endpoint IP addresses and must be associated with the Azure Arc Private Link Scope, or the connection will be refused. --### Single server scenarios --If you're only planning to use Private Links to support a few machines or servers, you might not want to update your entire network's DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating systems **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address. --#### Windows --1. Using an account with administrator privileges, open **C:\Windows\System32\drivers\etc\hosts**. --1. 
Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file requires the IP address first followed by a space and then the hostname. --1. Save the file with your changes. You might need to save to another directory first, then copy the file to the original path. --#### Linux --1. Open the `/etc/hosts` hosts file in a text editor. --1. Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file requires the IP address first followed by a space and then the hostname. --1. Save the file with your changes. --## Connect to an Azure Arc-enabled server --> [!NOTE] -> The minimum supported version of the Azure Arc-connected machine agent with private endpoint is version 1.4. The Azure Arc-enabled servers deployment script generated in the portal downloads the latest version. --### Configure a new Azure Arc-enabled server to use Private Link --When connecting a machine or server with Azure Arc-enabled servers for the first time, you can optionally connect it to a Private Link Scope. The following steps describe the process. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --1. Navigate to **Machines - Azure Arc**. --1. On the **Machines - Azure Arc** page, select **Add/Create** at the upper left, and then select **Add a machine** from the drop-down menu. --1. On the **Add servers with Azure Arc** page, select either the **Add a single server** or **Add multiple servers** depending on your deployment scenario, and then select **Generate script**. --1. On the **Generate script** page, select the subscription and resource group where you want the machine to be managed within Azure. Select an Azure location where the machine metadata will be stored. This location can be the same as, or different from, the resource group's location. --1. On the **Basics** page, provide the following: -- 1. Select the **Subscription** and **Resource group** for the machine. - 1. In the **Region** drop-down list, select the Azure region to store the machine or server metadata. - 1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on. - 1. Under **Connectivity method**, select **Private endpoint** and select the Azure Arc Private Link Scope created in Part 1 from the drop-down list. -- :::image type="content" source="./media/private-link-security/arc-enabled-servers-create-script.png" alt-text="Selecting Private Endpoint connectivity option" border="true"::: -- 1. Select **Next: Tags**. --1. If you selected **Add multiple servers**, on the **Authentication** page, select the service principal created for Azure Arc-enabled servers from the drop-down list. If you have not created a service principal for Azure Arc-enabled servers, first review [how to create a service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to familiarize yourself with the permissions required and the steps to create one. Select **Next: Tags** to continue. --1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards. --1. Select **Next: Download and run script**. --1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**. 
--After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you might need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent. --The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and installed with your local package manager. --The script will return status messages letting you know if onboarding was successful after it completes. --> [!TIP] -> [Network traffic from the Azure Connected Machine agent](network-requirements.md#urls) to Microsoft Entra ID (login.windows.net, login.microsoftonline.com, pas.windows.net) and Azure Resource Manager (management.azure.com) will continue to use public endpoints. If your server needs to communicate through a proxy server to reach these endpoints, [configure the agent with the proxy server URL](manage-agent.md#update-or-remove-proxy-settings) before connecting it to Azure. You might also need to [configure a proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) for the Azure Arc services if your private endpoint is not accessible from your proxy server. --### Configure an existing Azure Arc-enabled server --For Azure Arc-enabled servers that were set up prior to your private link scope, you can allow them to start using the Azure Arc-enabled servers Private Link Scope by completing the following steps. --1. In the Azure portal, navigate to your Azure Arc Private Link Scope resource. --1. From the left-hand pane, select **Azure Arc resources** and then **+ Add**. --1. Select the servers in the list that you want to associate with the Private Link Scope, and then select **Select** to save your changes. -- :::image type="content" source="./media/private-link-security/select-servers-private-link-scope.png" lightbox="./media/private-link-security/select-servers-private-link-scope.png" alt-text="Selecting Azure Arc resources" border="true"::: --It might take up to 15 minutes for the Private Link Scope to accept connections from the recently associated server(s). --## Troubleshooting --1. Check your on-premises DNS server(s) to verify it is either forwarding to Azure DNS or is configured with appropriate A records in your private link zone. These lookup commands should return private IP addresses in your Azure virtual network. If they resolve public IP addresses, double check your machine or server and network's DNS configuration. -- ``` - nslookup gbl.his.arc.azure.com - nslookup agentserviceapi.guestconfiguration.azure.com - ``` --1. If you are having trouble onboarding a machine or server, confirm that you've added the Microsoft Entra ID and Azure Resource Manager service tags to your local network firewall. The agent needs to communicate with these services over the internet until private endpoints are available for these services. --## Next steps --* To learn more about Private Endpoint, see [What is Azure Private Endpoint?](../../private-link/private-endpoint-overview.md). 
--* If you are experiencing issues with your Azure Private Endpoint connectivity setup, see [Troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md). --* See the following to configure Private Link for [Azure Automation](../../automation/how-to/private-link-security.md), [Azure Monitor](/azure/azure-monitor/logs/private-link-security), [Azure Key Vault](/azure/key-vault/general/private-link-service), or [Azure Blob storage](../../private-link/tutorial-private-endpoint-storage-portal.md). |
azure-arc | Resource Graph Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/resource-graph-samples.md | - Title: Azure Resource Graph sample queries for Azure Arc-enabled servers -description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 07/07/2022-----# Azure Resource Graph sample queries for Azure Arc-enabled servers --This page is a collection of [Azure Resource Graph](../../governance/resource-graph/overview.md) sample queries for Azure Arc-enabled servers. --## Sample queries ---## Next steps --- Learn more about the [query language](../../governance/resource-graph/concepts/query-language.md).-- Learn more about how to [explore resources](../../governance/resource-graph/concepts/explore-resources.md). |
azure-arc | Run Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/run-command.md | - Title: How to remotely and securely configure servers using Run command (Preview) -description: Learn how to remotely and securely configure servers using Run Command. Previously updated : 02/07/2024-----# Remotely and securely configure servers using Run command (Preview) --Run Command on Azure Arc-enabled servers (Public Preview) uses the Connected Machine agent to let you remotely and securely run a script inside your servers. This can be helpful for myriad scenarios across troubleshooting, recovery, diagnostics, and maintenance. --## Supported environment and configuration --- **Experiences:** Run Command is currently supported through Azure CLI and PowerShell. --- **Operating Systems:** Run Command supports both Windows and Linux operating systems. --- **Environments:** Run Command supports non-Azure environments including on-premises, VMware, SCVMM, AWS, GCP, and OCI. --- **Cost:** Run Command is free of charge, however storage of scripts in Azure may incur billing.--- **Configuration:** Run Command doesn't require more configuration or the deployment of any extensions. The-Connected Machine agent version must be 1.33 or higher. ---## Limiting access to Run Command using RBAC --Listing the run commands or showing details of a command requires the `Microsoft.HybridCompute/machines/runCommands/read` permission. The built-in [Reader](/azure/role-based-access-control/built-in-roles) role and higher levels have this permission. --Running a command requires the `Microsoft.HybridCompute/machines/runCommands/write` permission. The [Azure Connected Machine Resource Administrator](/azure/role-based-access-control/built-in-roles) role and higher levels have this permission. --You can use one of the [built-in roles](/azure/role-based-access-control/built-in-roles) or create a [custom role](/azure/role-based-access-control/custom-roles) to use Run Command. --## Blocking run commands locally --The Connected Machine agent supports local configurations that allow you to set an allowlist or a blocklist. See [Extension allowlists and blocklists](security-extensions.md#allowlists-and-blocklists) to learn more. --For Windows: --`azcmagent config set extensions.blocklist "microsoft.cplat.core/runcommandhandlerwindows"` --For Linux: --`azcmagent config set extensions.blocklist "microsoft.cplat.core/runcommandhandlerlinux"` ---## Azure CLI --The following examples use [az connectedmachine run-command](/cli/azure/connectedmachine/run-command) to run a shell script on an Azure Windows machine. --### Execute a script with the machine --This command delivers the script to the machine, executes it, and returns the captured output. --```azurecli -az connectedmachine run-command create --name "myRunCommand" --machine-name "myMachine" --resource-group "myRG" --script "Write-Host Hello World!" -``` --### List all deployed RunCommand resources on a machine --This command returns a full list of previously deployed run commands along with their properties. --```azurecli -az connectedmachine run-command list --machine-name "myMachine" --resource-group "myRG" -``` --### Get execution status and results --This command retrieves current execution progress, including latest output, start/end time, exit code, and terminal state of the execution. 
--```azurecli -az connectedmachine run-command show --name "myRunCommand" --machine-name "myMachine" --resource-group "myRG" -``` --> [!NOTE] -> Output and error fields in `instanceView` are limited to the last 4KB. To access the full output and error, you can forward the output and error data to storage append blobs using `-outputBlobUri` and `-errorBlobUri` parameters while executing Run Command. -> --### Delete RunCommand resource from the machine --Remove the RunCommand resource previously deployed on the machine. If the script execution is still in progress, execution will be terminated. --```azurecli -az connectedmachine run-command delete --name "myRunCommand" --machine-name "myMachine" --resource-group "myRG" -``` --## PowerShell --### Execute a script on the machine --```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName "myRG" -MachineName "myMachine" -Location "EastUS" -RunCommandName "RunCommandName" -SourceScript "echo Hello World!" -``` --### Execute a script on the machine using SourceScriptUri parameter --`OutputBlobUri` and `ErrorBlobUri` are optional parameters. --```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName -MachineName -RunCommandName -SourceScriptUri "<SAS URI of a storage blob with read access or public URI>" -OutputBlobUri "<SAS URI of a storage append blob with read, add, create, write access>" -ErrorBlobUri "<SAS URI of a storage append blob with read, add, create, write access>" -``` --### List all deployed RunCommand resources on a machine --This command returns a full list of previously deployed Run Commands along with their properties. --```powershell -Get-AzConnectedMachineRunCommand -ResourceGroupName "myRG" -MachineName "myMachine" -``` --### Get execution status and results --This command retrieves current execution progress, including latest output, start/end time, exit code, and terminal state of the execution. --```powershell -Get-AzConnectedMachineRunCommand -ResourceGroupName "myRG" -MachineName "myMachine" -RunCommandName "RunCommandName" -``` --### Create or update Run Command on a machine using SourceScriptUri (storage blob SAS URL) --Create or update Run Command on a Windows machine using a SAS URL of a storage blob that contains a PowerShell script. `SourceScriptUri` can be a storage blob's full SAS URL or public URL. --```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName MyRG0 -MachineName MyMachine -RunCommandName MyRunCommand -Location EastUS2EUAP -SourceScriptUri <SourceScriptUri> -``` --> [!NOTE] -> SAS URL must provide read access to the blob. An expiration time of 24 hours is suggested for SAS URL. SAS URLs can be generated on the Azure portal using blob options, or with a SAS token using `New-AzStorageBlobSASToken`. If generating SAS token using `New-AzStorageBlobSASToken`, your SAS URL = "base blob URL" + "?" + "SAS token from `New-AzStorageBlobSASToken`" -> --### Get a Run Command Instance View for a machine after creating or updating Run Command --Get a Run Command for a machine with Instance View. Instance View contains the execution state of the run command (Succeeded, Failed, etc.), exit code, standard output, and standard error generated by executing the script using Run Command. A non-zero ExitCode indicates an unsuccessful execution. --```powershell -Get-AzConnectedMachineRunCommand -ResourceGroupName MyRG -MachineName MyMachine -RunCommandName MyRunCommand -``` --`InstanceViewExecutionState`: Status of user's Run Command script. 
Refer to this state to know whether your script was successful or not. --`ProvisioningState`: Status of general extension provisioning end to end (whether extension platform was able to trigger Run Command script or not). --### Create or update Run Command on a machine using SourceScript (script text) --Create or update Run Command on a machine by passing the script content directly to the `-SourceScript` parameter. Use `;` to separate multiple commands. --```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName MyRG0 -MachineName MyMachine -RunCommandName MyRunCommand2 -Location EastUS2EUAP -SourceScript "id; echo HelloWorld" -``` --### Create or update Run Command on a machine using OutputBlobUri, ErrorBlobUri to stream standard output and standard error messages to output and error Append blobs --Create or update Run Command on a machine and stream standard output and standard error messages to output and error Append blobs. --```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName MyRG0 -MachineName MyMachine -RunCommandName MyRunCommand3 -Location EastUS2EUAP -SourceScript "id; echo HelloWorld" -OutputBlobUri <OutPutBlobUrI> -ErrorBlobUri <ErrorBlobUri> -``` --> [!NOTE] -> Output and error blobs must be the AppendBlob type and their SAS URLs must provide read, append, create, write access to the blob. An expiration time of 24 hours is suggested for SAS URL. If output or error blob does not exist, a blob of type AppendBlob will be created. SAS URLs can be generated on the Azure portal using the blob's options, or with a SAS token using `New-AzStorageBlobSASToken`. -> --### Create or update Run Command on a machine as a different user using RunAsUser and RunAsPassword parameters --Create or update Run Command on a machine as a different user using `RunAsUser` and `RunAsPassword` parameters. For RunAs to work properly, contact the administrator of the machine and make sure the user is added on the machine, the user has access to the resources accessed by the Run Command (directories, files, network, etc.), and, in the case of a Windows machine, the 'Secondary Logon' service is running on the machine. --```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName MyRG0 -MachineName MyMachine -RunCommandName MyRunCommand -Location EastUS2EUAP -SourceScript "id; echo HelloWorld" -RunAsUser myusername -RunAsPassword mypassword -``` --### Create or update Run Command on a machine resource using SourceScriptUri (storage blob SAS URL) --Create or update Run Command on a Windows machine resource using a SAS URL of a storage blob that contains a PowerShell script. ---```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName MyRG0 -MachineName MyMachine -RunCommandName MyRunCommand -Location EastUS2EUAP -SourceScriptUri <SourceScriptUri> -``` --> [!NOTE] -> SAS URL must provide read access to the blob. An expiry time of 24 hours is suggested for SAS URL. SAS URLs can be generated on the Azure portal using blob options or with a SAS token using `New-AzStorageBlobSASToken`. If generating SAS token using `New-AzStorageBlobSASToken`, the SAS URL format is: base blob URL + "?" + the SAS token from `New-AzStorageBlobSASToken`. ---### Create or update Run Command on a machine using ScriptLocalPath (local script file) -Create or update Run Command on a machine using a local script file that is on the client machine where the cmdlet is executed. 
--```powershell -New-AzConnectedMachineRunCommand -ResourceGroupName MyRG0 -MachineName MyMachine -RunCommandName MyRunCommand -Location EastUS2EUAP -ScriptLocalPath "C:\MyScriptsDir\MyScript.ps1" -``` --### Create or update Run Command on a machine instance using Parameter and ProtectedParameter parameters (Public and Protected Parameters to script) --Use ProtectedParameter to pass any sensitive inputs to the script, such as passwords or keys. --- Windows: Parameters and ProtectedParameters are passed to the script as arguments, and the script runs like this: `myscript.ps1 -publicParam1 publicParam1value -publicParam2 publicParam2value -secret1 secret1value -secret2 secret2value`--- Linux: Named parameters and their values are set in the environment configuration, which should be accessible within the .sh script. For nameless arguments, pass an empty string for the name input. Nameless arguments are passed to the script, and the script runs like this: `myscript.sh publicParam1value publicParam2value secret1value secret2value`--### Delete RunCommand resource from the machine --Remove the RunCommand resource previously deployed on the machine. If the script execution is still in progress, execution will be terminated. --```powershell -Remove-AzConnectedMachineRunCommand -ResourceGroupName "myRG" -MachineName "myMachine" -RunCommandName "RunCommandName" -``` --## Run Command operations --Run Command on Azure Arc-enabled servers supports the following operations: --|Operation |Description | -||| -|Create |The operation to create a run command. This runs the run command. | -|Delete |The operation to delete a run command. If it's running, delete will also stop the run command. | -|Get |The operation to get a run command. | -|List |The operation to get all the run commands of an Azure Arc-enabled server. | -|Update |The operation to update the run command. This stops the previous run command. | - -> [!NOTE] -> Output and error blobs are overwritten each time the run command script executes. -> --## Example scenarios --Suppose you have an Azure Arc-enabled server called "2012DatacenterServer1" in resource group "ContosoRG" with Subscription ID "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa". Consider a scenario where you need to provide remote access to an endpoint for Windows Server 2012 / R2 servers. Access to Extended Security Updates enabled by Azure Arc requires access to the endpoint `www.microsoft.com/pkiops/certs`. You need to remotely configure a firewall rule that allows access to this endpoint. Use Run Command in order to allow connectivity to this endpoint. --### Example 1: Endpoint access with Run Command --Start off by creating a Run Command script to provide endpoint access to the `www.microsoft.com/pkiops/certs` endpoint on your target Arc-enabled server using the PUT operation. 
--To directly provide the script in line, use the following operation: --```rest -PUT https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/ContosoRG/providers/Microsoft.HybridCompute/machines/2012DatacenterServer1/runCommands/EndpointAccessCommand?api-version=2023-10-03-preview -``` --```json -{ - "location": "eastus2", - "properties": { - "source": { - "script": "New-NetFirewallRule -DisplayName $ruleName -Direction Outbound -Action Allow -RemoteAddress $endpoint -RemotePort $port -Protocol $protocol" - }, - "parameters": [ - { - "name": "ruleName", - "value": "Allow access to www.microsoft.com/pkiops/certs" - }, - { - "name": "endpoint", - "value": "www.microsoft.com/pkiops/certs" - }, - { - "name": "port", - "value": 443 - }, - { - "name": "protocol", - "value": "TCP" - } -- ], - "asyncExecution": false, - "runAsUser": "contoso-user1", - "runAsPassword": "Contoso123!", - "timeoutInSeconds": 3600, - "outputBlobUri": "https://mystorageaccount.blob.core.windows.net/myscriptoutputcontainer/MyScriptoutput.txt", - "errorBlobUri": "https://mystorageaccount.blob.core.windows.net/mycontainer/MyScriptError.txt" - } -} -``` --To instead link to the script file, you can use the Run Command operation's ScriptURI option. For this it's assumed you have prepared a `newnetfirewallrule.ps1` file containing the in-line script and uploaded this script to blob storage. --```rest -PUT https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/ContosoRG/providers/Microsoft.HybridCompute/machines/2012DatacenterServer1/runCommands/EndpointAccessCommand?api-version=2023-10-03-preview -``` --```json -{ - "location": "eastus2", - "properties": { - "source": { - "scriptUri": "https://mystorageaccount.blob.core.windows.net/myscriptoutputcontainer/newnetfirewallrule.ps1" - }, - "parameters": [ - { - "name": "ruleName", - "value": "Allow access to www.microsoft.com/pkiops/certs" - }, - { - "name": "endpoint", - "value": "www.microsoft.com/pkiops/certs" - }, - { - "name": "port", - "value": 443 - }, - { - "name": "protocol", - "value": "TCP" - } -- ], - "asyncExecution": false, - "runAsUser": "contoso-user1", - "runAsPassword": "Contoso123!", - "timeoutInSeconds": 3600, - "outputBlobUri": "https://mystorageaccount.blob.core.windows.net/myscriptoutputcontainer/MyScriptoutput.txt", - "errorBlobUri": "https://mystorageaccount.blob.core.windows.net/mycontainer/MyScriptError.txt" - } -} -``` --SAS URL must provide read access to the blob. An expiry time of 24 hours is suggested for SAS URL. SAS URLs can be generated on the Azure portal using blob options or with a SAS token using `New-AzStorageBlobSASToken`. If generating SAS token using `New-AzStorageBlobSASToken`, the SAS URL format is: `base blob URL + "?"` + the SAS token from `New-AzStorageBlobSASToken`. --Output and error blobs must be the AppendBlob type and their SAS URLs must provide read, append, create, write access to the blob. An expiration time of 24 hours is suggested for SAS URL. SAS URLs can be generated on the Azure portal using the blob's options, or with a SAS token using `New-AzStorageBlobSASToken`. 
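If you prefer to script the SAS URL generation described above rather than use the portal, the following is a minimal sketch using the Az.Storage module; the storage account name, key, container, and blob names are placeholders, and `-FullUri` returns the complete blob URL with the SAS token appended.

```powershell
# Sketch: build a read-only SAS URL with a 24-hour expiry for the script blob used as scriptUri.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"

$scriptUri = New-AzStorageBlobSASToken -Container "myscriptoutputcontainer" -Blob "newnetfirewallrule.ps1" `
    -Permission r -ExpiryTime (Get-Date).AddHours(24) -Context $ctx -FullUri

# Use $scriptUri as the value of properties.source.scriptUri in the PUT request body.
$scriptUri
```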
--### Example 2: Get Run Command details --To verify that you've correctly provisioned the Run Command, use the GET command to retrieve details on the provisioned Run Command: --```rest -GET https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/ContosoRG/providers/Microsoft.HybridCompute/machines/2012DatacenterServer1/runCommands/EndpointAccessCommand?api-version=2023-10-03-preview -``` --### Example 3: Update the Run Command --Let's suppose you want to open up access to an additional endpoint `*.waconazure.com` for connectivity to Windows Admin Center. You can update the existing Run Command with new parameters: ---```rest -PATCH https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/ContosoRG/providers/Microsoft.HybridCompute/machines/2012DatacenterServer1/runCommands/EndpointAccessCommand?api-version=2023-10-03-preview -``` --```json -{ - "location": "eastus2", - "properties": { - "source": { - "script": "New-NetFirewallRule -DisplayName $ruleName -Direction Outbound -Action Allow -RemoteAddress $endpoint -RemotePort $port -Protocol $protocol" - }, - "parameters": [ - { - "name": "ruleName", - "value": "Allow access to WAC endpoint" - }, - { - "name": "endpoint", - "value": "*.waconazure.com" - }, - { - "name": "port", - "value": 443 - }, - { - "name": "protocol", - "value": "TCP" - } - ], - "asyncExecution": false, - "runAsUser": "contoso-user1", - "runAsPassword": "Contoso123!", - "timeoutInSeconds": 3600, - "outputBlobUri": "https://mystorageaccount.blob.core.windows.net/myscriptoutputcontainer/MyScriptoutput.txt", - "errorBlobUri": "https://mystorageaccount.blob.core.windows.net/mycontainer/MyScriptError.txt" - } -} -``` ---### Example 4: List Run Commands --Ahead of deleting the Run Command for Endpoint Access, make sure there are no other Run Commands for the Arc-enabled server. You can use the list operation to get all of the Run Commands: --```rest -GET https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/ContosoRG/providers/Microsoft.HybridCompute/machines/2012DatacenterServer1/runCommands/ -``` --### Example 5: Delete a Run Command --If you no longer need the Run Command extension, you can delete it using the following command: --```rest -DELETE https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/ContosoRG/providers/Microsoft.HybridCompute/machines/2012DatacenterServer1/runCommands/EndpointAccessCommand?api-version=2023-10-03-preview -``` --## Disabling Run Command --To disable the Run Command on Azure Arc-enabled servers, open an administrative command prompt and run the following commands. These commands use the Connected Machine agent's local configuration capabilities to add the Run Command handler to the extension blocklist. --**Windows** --`azcmagent config set extensions.blocklist "microsoft.cplat.core/runcommandhandlerwindows"` --**Linux** --`sudo azcmagent config set extensions.blocklist "microsoft.cplat.core/runcommandhandlerlinux"` |
azure-arc | Scenario Migrate To Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/scenario-migrate-to-azure.md | - Title: Migrate Azure Arc-enabled server to Azure -description: Learn how to migrate your Azure Arc-enabled servers running on-premises or other cloud environment to Azure. Previously updated : 07/16/2021----# Migrate your on-premises or other cloud Azure Arc-enabled server to Azure --This article is intended to help you plan and successfully migrate your on-premises server or virtual machine managed by Azure Arc-enabled servers to Azure. By following these steps, you can transition management from Azure Arc-enabled servers to Azure, based on the supported VM extensions installed and the Azure services that rely on its Arc server resource identity. --Before performing these steps, review the Azure Migrate [Prepare on-premises machines for migration to Azure](../../migrate/prepare-for-migration.md) article to understand the requirements and how to prepare for using Azure Migrate. --In this article, you: --* Inventory the supported VM extensions installed on the Azure Arc-enabled server. -* Uninstall all VM extensions from the Azure Arc-enabled server. -* Identify Azure services configured to authenticate with your Azure Arc-enabled server-managed identity and prepare to update those services to use the Azure VM identity after migration. -* Review Azure role-based access control (Azure RBAC) access rights granted to the Azure Arc-enabled server resource to maintain who has access to the resource after it has been migrated to an Azure VM. -* Delete the Azure Arc-enabled server resource identity from Azure and remove the Azure Connected Machine agent. -* Install the Azure guest agent. -* Migrate the server or VM to Azure. --## Step 1: Inventory and remove VM extensions --To inventory the VM extensions installed on your Azure Arc-enabled server, you can list them using the Azure CLI or with Azure PowerShell. --With Azure PowerShell, use the [Get-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/get-azconnectedmachineextension) command with the `-MachineName` and `-ResourceGroupName` parameters. --With the Azure CLI, use the [az connectedmachine extension list](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-list) command with the `--machine-name` and `--resource-group` parameters. By default, the output of Azure CLI commands is in JSON (JavaScript Object Notation). To change the default output to a list or table, for example, use [az configure --output](/cli/azure/reference-index). You can also add `--output` to any command for a one-time change in output format. --After identifying which VM extensions are deployed, you can remove them using the [Azure portal](manage-vm-extensions-portal.md), using [Azure PowerShell](manage-vm-extensions-powershell.md), or using the [Azure CLI](manage-vm-extensions-cli.md). If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](/azure/azure-monitor/vm/vminsights-enable-policy), it is necessary to [create an exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) to prevent re-evaluation and deployment of the extensions on the Azure Arc-enabled server before the migration is complete. 
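As a scripted alternative to the portal for Step 1, the following is a minimal sketch of the inventory-and-remove flow using the Az.ConnectedMachine module; the machine and resource group names are placeholders, and you should review the inventory output before running the removal loop.

```powershell
# Sketch: inventory the VM extensions on an Arc-enabled server, then remove them before migration.
$machineName   = "myMachine"
$resourceGroup = "myResourceGroup"

$extensions = Get-AzConnectedMachineExtension -MachineName $machineName -ResourceGroupName $resourceGroup
$extensions | Format-Table Name, ProvisioningState

foreach ($ext in $extensions) {
    Remove-AzConnectedMachineExtension -Name $ext.Name -MachineName $machineName -ResourceGroupName $resourceGroup
}
```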
--## Step 2: Review access rights --List role assignments for the Azure Arc-enabled servers resource, using [Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.yml#list-role-assignments-for-a-resource) and with other PowerShell code, you can export the results to CSV or another format. --If you're using a managed identity for an application or process running on an Azure Arc-enabled server, you need to make sure the Azure VM has a managed identity assigned. To view the role assignment for a managed identity, you can use the Azure PowerShell `Get-AzADServicePrincipal` cmdlet. For more information, see [List role assignments for a managed identity](../../role-based-access-control/role-assignments-list-powershell.yml#list-role-assignments-for-a-managed-identity). --A system-managed identity is also used when Azure Policy is used to audit or configure settings inside a machine or server. With Azure Arc-enabled servers, the guest configuration agent service is included, and performs validation of audit settings. After you migrate, see [Deploy requirements for Azure virtual machines](../../governance/machine-configuration/overview.md#deploy-requirements-for-azure-virtual-machines) for information on how to configure your Azure VM manually or with policy with the guest configuration extension. --Update role assignment with any resources accessed by the managed identity to allow the new Azure VM identity to authenticate to those services. See the following to learn [how managed identities for Azure resources work for an Azure Virtual Machine (VM)](../../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md). --## Step 3: Uninstall the Azure Connected Machine agent --Follow the guidance to [uninstall the agent](manage-agent.md#uninstall-the-agent) from the server. Double check that all extensions are removed before disconnecting the agent. --## Step 4: Install the Azure Guest Agent --The VM that is migrated to Azure from on-premises doesn't have the Linux or Windows Azure Guest Agent installed. In these scenarios, you have to manually install the VM agent. For more information about how to install the VM Agent, see [Azure Virtual Machine Windows Agent Overview](/azure/virtual-machines/extensions/agent-windows) or [Azure Virtual Machine Linux Agent Overview](/azure/virtual-machines/extensions/agent-linux). --## Step 5: Migrate server or machine to Azure --Before proceeding with the migration with Azure Migration, review the [Prepare on-premises machines for migration to Azure](../../migrate/prepare-for-migration.md) article to learn about requirements necessary to use Azure Migrate. To complete the migration to Azure, review the Azure Migrate [migration options](../../migrate/prepare-for-migration.md#next-steps) based on your environment. --## Step 6: Deploy Azure VM extensions --After migration and completion of all post-migration configuration steps, you can now deploy the Azure VM extensions based on the VM extensions originally installed on your Azure Arc-enabled server. Review [Azure virtual machine extensions and features](/azure/virtual-machines/extensions/overview) to help plan your extension deployment. --To resume using audit settings inside a machine with guest configuration policy definitions, see [Enable guest configuration](../../governance/machine-configuration/overview.md). 
--If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](/azure/azure-monitor/vm/vminsights-enable-policy), remove the [exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) you created earlier. To use Azure Policy to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](/azure/azure-monitor/best-practices). --## Next steps --Troubleshooting information can be found in the [Troubleshoot Connected Machine agent](troubleshoot-agent-onboard.md) guide. |
azure-arc | Scenario Onboard Azure Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/scenario-onboard-azure-sentinel.md | - Title: Onboard Azure Arc-enabled server to Microsoft Sentinel -description: Learn how to add your Azure Arc-enabled servers to Microsoft Sentinel and proactively monitor their security status. Previously updated : 07/16/2021----# Onboard Azure Arc-enabled servers to Microsoft Sentinel --This article is intended to help you onboard your Azure Arc-enabled server to [Microsoft Sentinel](../../sentinel/overview.md) and start collecting security-related events. Microsoft Sentinel provides a single solution for alert detection, threat visibility, proactive hunting, and threat response across the enterprise. --## Prerequisites --Before you start, make sure that you've met the following requirements: --- A [Log Analytics workspace](/azure/azure-monitor/logs/data-platform-logs). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](/azure/azure-monitor/logs/workspace-design).--- Microsoft Sentinel [enabled in your subscription](../../sentinel/quickstart-onboard.md).--- Your machine or server is connected to Azure Arc-enabled servers.--## Onboard Azure Arc-enabled servers to Microsoft Sentinel --Microsoft Sentinel comes with a number of connectors for Microsoft solutions, available out of the box and providing real-time integration. For physical and virtual machines, you can install the Log Analytics agent that collects the logs and forwards them to Microsoft Sentinel. Azure Arc-enabled servers supports deploying the Log Analytics agent using the following methods: --- Using the VM extensions framework.-- This feature in Azure Arc-enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows and/or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Azure Arc-enabled servers: -- - The [Azure portal](manage-vm-extensions-portal.md) - - The [Azure CLI](manage-vm-extensions-cli.md) - - [Azure PowerShell](manage-vm-extensions-powershell.md) - - Azure [Resource Manager templates](manage-vm-extensions-template.md) --- Using Azure Policy.-- Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy to audit if the Azure Arc-enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent. --We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy. --After your Arc-enabled servers are connected, your data starts streaming into Microsoft Sentinel and is ready for you to start working with. You can view the logs in the [built-in workbooks](../../sentinel/get-visibility.md) and start building queries in Log Analytics to [investigate the data](../../sentinel/investigate-cases.md). --## Next steps --Get started [detecting threats with Microsoft Sentinel](../../sentinel/detect-threats-built-in.md). |
azure-arc | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md | - Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) -description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/06/2024----# Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers --[Regulatory Compliance in Azure Policy](../../governance/policy/concepts/regulatory-compliance.md) -provides Microsoft created and managed initiative definitions, known as _built-ins_, for the -**compliance domains** and **security controls** related to different compliance standards. This -page lists the **compliance domains** and **security controls** for Azure Arc-enabled servers. You can -assign the built-ins for a **security control** individually to help make your Azure resources -compliant with the specific standard. ----## Next steps --- Learn more about [Azure Policy Regulatory Compliance](../../governance/policy/concepts/regulatory-compliance.md).-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). |
azure-arc | Security Data Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-data-privacy.md | - Title: Data and privacy -description: Data and privacy for Arc-enabled servers. - Previously updated : 06/06/2024---# Data and privacy for Arc-enabled servers --This article explains the data collection process by the Azure Connected Machine agent for Azure Arc-enabled servers, detailing how system metadata is gathered and sent to Azure. This article also describes the logging mechanisms available for Azure Arc-enabled servers, including the Azure Activity log for tracking server actions. --## Information collected by Azure Arc --As part of its normal operation, the Azure Connected Machine agent collects system metadata and sends it to Azure as part of its regular heartbeat. This metadata is populated in the Azure Arc-enabled server resource so you can identify and query your servers as part of your Azure inventory. Azure Arc collects no end user-identifiable data. --See [instance metadata](/azure/azure-arc/servers/agent-overview#instance-metadata) for a complete list of metadata collected by Azure Arc. This list is regularly updated to reflect the data collected by the most recent release of the Azure Connected Machine agent. It's not possible to opt out of this data collection because it's used across Azure experiences to help filter and identify your servers. --To collect cloud metadata, the Azure Connected Machine agent queries the instance metadata endpoints for AWS, GCP, Oracle Cloud, Azure Stack HCI and Azure. The agent checks if itΓÇÖs in a cloud once, each time the "himds" service is started. Your security software may notice the agent reaching out to the following endpoints as part of that process: 169.254.169.254, 169.254.169.253, and metadata.google.internal. --All data is handled according to [MicrosoftΓÇÖs privacy standards](https://www.microsoft.com/en-us/trust-center/privacy). --## Data replication and disaster recovery --Azure Arc-enabled servers is a software-as-a-service offering and handles data replication and disaster recovery preparation on your behalf. When you select the region to store your data, that data is automatically replicated to another region in that same geography to protect against a regional outage. In the event a region becomes unavailable, DNS records are automatically changed to point to the failover region. No action is required from you and your agents will automatically reconnect when the failover is complete. --In some geographies, only one region supports Azure Arc-enabled servers. In these situations, data is still replicated for backup purposes to another region in that geography but won't be able to fail over to another region during an outage. You continue to see metadata in Azure from the last time your servers sent a heartbeat but can't make changes or connect new servers until region functionality is restored. The Azure Arc team regularly considers region expansion opportunities to minimize the number of geographies in this configuration. --## Compliance with regulatory standards --Azure Arc is regularly audited for compliance with many global, regional, and industry-specific regulatory standards. A summary table of the compliance offerings is available at [https://aka.ms/AzureCompliance](https://aka.ms/AzureCompliance). 
--For more information on a particular standard and to download audit documents, see [Azure and other Microsoft cloud services compliance offerings](/azure/compliance/offerings/). --## Azure Activity log --You can use the Azure Activity log to track actions taken on an Azure Arc-enabled server. Actions like installing extensions on an Arc server have unique operation identifiers (all starting with ΓÇ£Microsoft.HybridComputeΓÇ¥) that you can use to filter the log. Learn more about the [Azure Activity Log](/azure/azure-monitor/essentials/activity-log-insights) and how to retain activity logs for more than 30 days by [sending activity log data](/azure/azure-monitor/essentials/activity-log?tabs=powershell) to Log Analytics. --## Local logs --The Azure Connected Machine agent keeps a set of local logs on each server that may be useful for troubleshooting or auditing when the Arc agent made a change to the system. The fastest way to get a copy of all logs from a server is to run [azcmagent logs](/azure/azure-arc/servers/azcmagent-logs), which generates a compressed folder of all the latest logs for you. --## HIMDS log --The HIMDS log file contains all log data from the HIMDS service. This data includes heartbeat information, connection and disconnection attempts, and a history of REST API requests for IMDS metadata and managed identity tokens from other apps on the system. --|OS |Log location | -||| -|Windows |%PROGRAMDATA%\AzureConnectedMachineAgent\Log\himds.log | -|Linux |/var/opt/azcmagent/log/himds.log | --## azcmagent CLI log --The azcmagent log file contains a history of commands run using the local ΓÇ£azcmagentΓÇ¥ command line interface. This log provides the parameters used when connecting, disconnecting, or modifying the configuration of the agent. --|OS |Log location | -||| -|Windows |%PROGRAMDATA%\AzureConnectedMachineAgent\Log\azcmagent.log | -|Linux |/var/opt/azcmagent/log/azcmagent.log | --## Extension Manager log --The extension manager log contains information about attempts to install, upgrade, reconfigure, and uninstall extensions on the machine. --|OS |Log location | -||| -|Windows |%PROGRAMDATA%\GuestConfig\ext_mgr_logs\gc_ext.log | -|Linux |/var/lib/GuestConfig/ext_mgr_logs/gc_ext.log | --Other logs may be generated by individual extensions. Logs for individual extensions aren't guaranteed to follow any standard log format. --|OS |Log location | -||| -|Windows |%PROGRAMDATA%\GuestConfig\extension_logs\* | -|Linux |/var/lib/GuestConfig/extension_logs/* | --## Machine Configuration log --The machine configuration policy engine generates logs for the audit and enforcement of settings on the system. --|OS |Log location | -||| -|Windows |%PROGRAMDATA%\GuestConfig\arc_policy_logs\gc_agent.log | -|Linux |/var/lib/GuestConfig/arc_policy_logs/gc_agent.log | - |
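When you need these files during an investigation, the `azcmagent logs` command mentioned above collects them for you. A minimal sketch on Linux (the output path is a placeholder; the `--output` and `--full` flags are assumed to be available in recent agent versions):

```bash
# Bundle the latest agent, extension, and machine configuration logs into one zip archive
sudo azcmagent logs --output /tmp/arc-agent-logs.zip

# Collect the complete log history instead of only the most recent files
sudo azcmagent logs --full --output /tmp/arc-agent-logs-full.zip
```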
azure-arc | Security Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-extensions.md | - Title: Extensions security -description: Extensions security for Azure Arc-enabled servers. - Previously updated : 06/06/2024---# Extensions security --This article describes the fundamentals of [VM extensions](manage-vm-extensions.md) for Azure Arc-enabled servers and details how extension settings can be customized. --## Extension basics --VM extensions for Azure Arc-enabled servers are optional add-ons that enable other functionality, such as monitoring, patch management, and script execution. Extensions are published by Microsoft and select third parties from the Azure Marketplace and stored in Microsoft-managed storage accounts. All extensions are scanned for malware as part of the publishing process. The extensions for Azure Arc-enabled servers are identical to those available for Azure VMs, ensuring consistency across your operating environments. --Extensions are downloaded directly from Azure Storage (`*.blob.core.windows.net`) at the time they are installed or upgraded, unless you have configured private endpoints. The storage accounts regularly change and can't be predicted in advance. When private endpoints are used, extensions are proxied via the regional URL for the Azure Arc service instead. --A digitally signed catalog file is downloaded separately from the extension package and used to verify the integrity of each extension before the extension manager opens or executes the extension package. If the downloaded ZIP file for the extension doesn't match the contents in the catalog file, the extension operation will be aborted. --Extensions may take settings to customize or configure the installation, such as proxy URLs or API keys to connect a monitoring agent to its cloud service. Extension settings come in two flavors: regular settings and protected settings. Protected settings aren't persisted in Azure and are encrypted at rest on your local machine. --All extension operations originate from Azure through an API call, CLI, PowerShell, or Portal action. This design ensures that any action to install, update, or upgrade an extension on a server gets logged in the Azure Activity Log. The Azure Connected Machine agent does allow extensions to be removed locally for troubleshooting and cleanup purposes. However, if the extension is removed locally and the service still expects the machine to have the extension installed, it will be reinstalled the next time the extension manager syncs with Azure. --## Script execution --The extension manager can be used to run scripts on machines using the Custom Script Extension or Run Command. By default, these scripts will run in the extension manager's user context (Local System on Windows or root on Linux), meaning these scripts will have unrestricted access to the machine. If you do not intend to use these features, you can block them using an allowlist or blocklist. An example is provided in the next section. --## Local agent security controls --Starting with agent version 1.16, you can optionally limit the extensions that can be installed on your server and disable Guest Configuration. These controls can be useful when connecting servers to Azure for a single purpose, such as collecting event logs, without allowing other management capabilities to be used on the server. 
--These security controls can only be configured by running a command on the server itself and cannot be modified from Azure. This approach preserves the server admin's intent when enabling remote management scenarios with Azure Arc, but also means that changing the settings is more difficult if you later decide to do so. This feature is intended for sensitive servers (for example, Active Directory Domain Controllers, servers that handle payment data, and servers subject to strict change control measures). In most other cases, it's not necessary to modify these settings. --## Allowlists and blocklists --The Azure Connected Machine agent supports an allowlist and blocklist to restrict which extensions can be installed on your machine. Allowlists are exclusive, meaning that only the specific extensions you include in the list can be installed. Blocklists work the opposite way: any extension can be installed except those you include in the list. Allowlists are preferable to blocklists because they inherently block any new extensions that become available in the future. -Allowlists and blocklists are configured locally on a per-server basis. This ensures that nobody, not even a user with Owner or Global Administrator permissions in Azure, can override your security rules by trying to install an unauthorized extension. If someone tries to install an unauthorized extension, the extension manager refuses to install it and reports the extension installation as a failure to Azure. -Allowlists and blocklists can be configured any time after the agent is installed, including before the agent is connected to Azure. --If no allowlist or blocklist is configured on the agent, all extensions are allowed. --The most secure option is to explicitly allow the extensions you expect to be installed. Any extension not in the allowlist is automatically blocked. To configure the Azure Connected Machine agent to allow only the Azure Monitor Agent for Linux, run the following command on each server: --```bash -azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorLinuxAgent" -``` --Here is an example blocklist that blocks all extensions capable of running arbitrary scripts: --```bash -azcmagent config set extensions.blocklist "Microsoft.Cplat.Core/RunCommandHandlerWindows,Microsoft.Cplat.Core/RunCommandHandlerLinux,Microsoft.Compute/CustomScriptExtension,Microsoft.Azure.Extensions/CustomScript,Microsoft.Azure.Automation.HybridWorker/HybridWorkerForWindows,Microsoft.Azure.Automation.HybridWorker/HybridWorkerForLinux,Microsoft.EnterpriseCloud.Monitoring/MicrosoftMonitoringAgent,Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux" -``` --Specify extensions with their publisher and type, separated by a forward slash `/`. See the list of the [most common extensions](manage-vm-extensions.md) in the docs or list the VM extensions already installed on your server in the [portal](manage-vm-extensions-portal.md#list-extensions-installed), [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed), or [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed). --The following table describes the behavior when performing an extension operation against an agent that has the allowlist or blocklist configured. 
--| Operation | In the allowlist | In the blocklist | In both the allowlist and blocklist | Not in any list, but an allowlist is configured | -|--|--|--|--|--| -| Install extension | Allowed | Blocked | Blocked | Blocked | -| Update (reconfigure) extension | Allowed | Blocked | Blocked | Blocked | -| Upgrade extension | Allowed | Blocked | Blocked | Blocked | -| Delete extension | Allowed | Allowed | Allowed | Allowed | --> [!IMPORTANT] -> If an extension is already installed on your server before you configure an allowlist or blocklist, it won't automatically be removed. It's your responsibility to delete the extension from Azure to fully remove it from the machine. Delete requests are always accepted to accommodate this scenario. Once deleted, the allowlist and blocklist determine whether or not to allow future install attempts. --Starting with agent version 1.35, there is a special allowlist value `Allow/None`, which instructs the extension manager to run, but not allow any extensions to be installed. This is the recommended configuration when using Azure Arc to deliver Windows Server 2012 Extended Security Updates (ESU) without intending to use any other extensions. --```bash -azcmagent config set extensions.allowlist "Allow/None" -``` -Azure Policy can also be used to restrict which extensions can be installed. Azure Policy has the advantage of being configurable in the cloud and not requiring a change on each individual server if you need to change the list of approved extensions. However, anyone with permission to modify policy assignments could override or remove this protection. If you choose to use Azure Policy to restrict extensions, make sure you review which accounts in your organization have permission to edit policy assignments and that appropriate change control measures are in place. --## Locked down machine best practices --When configuring the Azure Connected Machine agent with a reduced set of capabilities, it's important to consider the mechanisms that someone could use to remove those restrictions and implement appropriate controls. Anybody capable of running commands as an administrator or root user on the server can change the Azure Connected Machine agent configuration. Extensions and guest configuration policies execute in privileged contexts on your server, and as such might be able to change the agent configuration. If you apply local agent security controls to lock down the agent, Microsoft recommends the following best practices to ensure only local server admins can update the agent configuration: --* Use allowlists for extensions instead of blocklists whenever possible. -* Don't include the Custom Script Extension in the extension allowlist to prevent execution of arbitrary scripts that could change the agent configuration. -* Disable Guest Configuration to prevent the use of custom Guest Configuration policies that could change the agent configuration. --### Example configuration for monitoring and security scenarios --It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. This section contains examples for how to lock down the agent to only support monitoring and security scenarios. 
--#### Azure Monitor Agent only --On your Windows servers, run the following commands in an elevated command console: --```powershell -azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent" -azcmagent config set guestconfiguration.enabled false -``` --On your Linux servers, run the following commands: --```bash -sudo azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorLinuxAgent" -sudo azcmagent config set guestconfiguration.enabled false -``` --#### Log Analytics and dependency (Azure Monitor VM Insights) only --This configuration is for the legacy Log Analytics agents and the dependency agent. --On your Windows servers, run the following commands in an elevated console: --```powershell -azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/MicrosoftMonitoringAgent,Microsoft.Azure.Monitoring.DependencyAgent/DependencyAgentWindows" -azcmagent config set guestconfiguration.enabled false -``` --On your Linux servers, run the following commands: --```bash -sudo azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux,Microsoft.Azure.Monitoring.DependencyAgent/DependencyAgentLinux" -sudo azcmagent config set guestconfiguration.enabled false -``` --#### Monitoring and security --Microsoft Defender for Cloud deploys extensions on your server to identify vulnerable software on your server and enable Microsoft Defender for Endpoint (if configured). Microsoft Defender for Cloud also uses Guest Configuration for its regulatory compliance feature. Since a custom Guest Configuration assignment could be used to undo the agent limitations, you should carefully evaluate whether or not you need the regulatory compliance feature and, as a result, Guest Configuration to be enabled on the machine. --On your Windows servers, run the following commands in an elevated command console: --```powershell -azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/MicrosoftMonitoringAgent,Qualys/WindowsAgent.AzureSecurityCenter,Microsoft.Azure.AzureDefenderForServers/MDE.Windows,Microsoft.Azure.AzureDefenderForSQL/AdvancedThreatProtection.Windows" -azcmagent config set guestconfiguration.enabled true -``` --On your Linux servers, run the following commands: --```bash -sudo azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux,Qualys/LinuxAgent.AzureSecurityCenter,Microsoft.Azure.AzureDefenderForServers/MDE.Linux" -sudo azcmagent config set guestconfiguration.enabled true -``` --## Agent modes --A simpler way to configure local security controls for monitoring and security scenarios is to use the *monitor mode*, available with agent version 1.18 and newer. Modes are pre-defined configurations of the extension allowlist and guest configuration agent maintained by Microsoft. As new extensions become available that enable monitoring scenarios, Microsoft will update the allowlist and agent configuration to include or exclude the new functionality, as appropriate. --There are two modes to choose from: --1. **full** - the default mode. This allows all agent functionality. -1. **monitor** - a restricted mode that disables the guest configuration policy agent and only allows the use of extensions related to monitoring and security. 
--To enable monitor mode, run the following command: --```bash -azcmagent config set config.mode monitor -``` --You can check the current mode of the agent and allowed extensions with the following command: --```bash -azcmagent config list -``` --While in monitor mode, you cannot modify the extension allowlist or blocklist. If you need to change either list, change the agent back to full mode and specify your own allowlist and blocklist. --To change the agent back to full mode, run the following command: --```bash -azcmagent config set config.mode full -``` --## Disabling the extension manager --If you don't need to use extensions with Azure Arc, you can also disable the extension manager entirely. You can disable the extension manager with the following command (run locally on each machine): --`azcmagent config set extensions.enabled false` --Disabling the extension manager won't remove any extensions already installed on your server. Extensions that are hosted in their own Windows or Linux services, such as the Log Analytics Agent, might continue to run even if the extension manager is disabled. Other extensions that are hosted by the extension manager itself, like the Azure Monitor Agent, don't run if the extension manager is disabled. You should [remove any extensions](manage-vm-extensions-portal.md#remove-extensions) before disabling the extension manager to ensure no extensions continue to run on the server. ------- |
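If you later need to review or roll back these local restrictions, the agent's configuration commands can read a single setting or clear it back to its default. A short sketch, run locally on the server:

```bash
# Show the current value of one setting
azcmagent config get extensions.allowlist

# Clear a setting so it returns to its default behavior
azcmagent config clear extensions.blocklist

# Re-enable the extension manager if it was previously disabled
azcmagent config set extensions.enabled true
```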
azure-arc | Security Identity Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-identity-authorization.md | - Title: Identity and authorization -description: Identity and authorization for Azure Arc-enabled servers. - Previously updated : 06/06/2024---# Identity and authorization --This article describes the Microsoft Entra ID managed identity for Azure Arc-enabled servers, which is used for authentication when communicating with Azure and details two built-in RBAC roles. --## Microsoft Entra ID managed identity --Every Azure Arc-enabled server has a system-assigned Microsoft Entra ID managed identity associated with it. This identity is used by the agent to authenticate itself with Azure. It can also be used by extensions or other authorized apps on your system to access resources that understand OAuth tokens. The managed identity appears in the Microsoft Entra ID portal with the same name as the Azure Arc-enabled server resource. For example, if your Azure Arc-enabled server is named *prodsvr01*, an enterprise app in Microsoft Entra ID with the same name appears. --Each Microsoft Entra ID directory has a finite limit for the number of objects it can store. A managed identity counts as one object in the directory. If you're planning a large deployment of Azure Arc-enabled servers, check the available quota in your Microsoft Entra ID directory first and submit a support request for more quota if necessary. You can see the available and used quota in the [List Organizations API](/graph/api/intune-onboarding-organization-list) response under the ΓÇ£directorySizeLimitΓÇ¥ section. --The managed identity is fully managed by the agent. As long as the agent stays connected to Azure, it handles rotating the credential automatically. The certificate backing the managed identity is valid for 90 days. The agent attempts to renew the certificate when it has 45 or fewer days of validity remaining. If the agent is offline long enough to expire, the agent becomes ΓÇ£expiredΓÇ¥ as well and won't connect to Azure. In this situation, automatic reconnection isn't possible and requires you to disconnect and reconnect the agent to Azure using an onboarding credential. --The managed identity certificate is stored on the local disk of the system. ItΓÇÖs important that you protect this file, because anyone in possession of this certificate can request a token from Microsoft Entra ID. The agent stores the certificate in C:\ProgramData\AzureConnectedMachineAgent\Certs\ on Windows and /var/opt/azcmagent/certs on Linux. The agent automatically applies an access control list to this directory, restricting access to local administrators and the "himds" account. Don't modify access to the certificate files or modify the certificates on your own. If you think the credential for a system-assigned managed identity has been compromised, [disconnect](/azure/azure-arc/servers/azcmagent-disconnect) the agent from Azure and [connect](/azure/azure-arc/servers/azcmagent-connect) it again to generate a new identity and credential. Disconnecting the agent removes the resource in Azure, including its managed identity. --When an application on your system wants to get a token for the managed identity, it issues a request to the REST identity endpoint at *http://localhost:40342/identity*. There are slight differences in how Azure Arc handles this request compared to Azure VM. The first response from the API includes a path to a challenge token located on disk. 
The challenge token is stored in *C:\ProgramData\AzureConnectedMachineAgent\tokens* on Windows or */var/opt/azcmagent/tokens* on Linux. The caller must prove they have access to this folder by reading the contents of the file and reissuing the request with this information in the authorization header. The tokens directory is configured to allow administrators and any identity belonging to the "Hybrid agent extension applications" (Windows) or the "himds" (Linux) group to read the challenge tokens. If you're authorizing a custom application to use the system-assigned managed identity, you should add its user account to the appropriate group to grant it access. --To learn more about using a managed identity with Arc-enabled servers to authenticate and access Azure resources, see the following video. --> [!VIDEO https://www.youtube.com/embed/4hfwxwhWcP4] --## RBAC roles --There are two built-in roles in Azure that you can use to control access to an Azure Arc-enabled server: --- **Azure Connected Machine Onboarding**, intended for accounts used to connect new machines to Azure Arc. This role allows accounts to see and create new Arc servers but disallows extension management.--- **Azure Connected Machine Resource Administrator**, intended for accounts that will manage servers once theyΓÇÖre connected. This role allows accounts to read, create, and delete Arc servers, VM extensions, licenses, and private link scopes.--Generic RBAC roles in Azure also apply to Azure Arc-enabled servers, including Reader, Contributor, and Owner. --## Identity and access control --[Azure role-based access control](../../role-based-access-control/overview.md) is used to control which accounts can see and manage your Azure Arc-enabled server. From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.yml) page in the Azure portal, you can verify who has access to your Azure Arc-enabled server. ---Users and applications granted [contributor](../../role-based-access-control/built-in-roles.md#contributor) or administrator role access to the resource can make changes to the resource, including deploying or deleting [extensions](manage-vm-extensions.md) on the machine. Extensions can include arbitrary scripts that run in a privileged context, so consider any contributor on the Azure resource to be an indirect administrator of the server. --The **Azure Connected Machine Onboarding** role is available for at-scale onboarding, and is only able to read or create new Azure Arc-enabled servers in Azure. It cannot be used to delete servers already registered or manage extensions. As a best practice, we recommend only assigning this role to the Microsoft Entra service principal used to onboard machines at scale. --Users as a member of the **Azure Connected Machine Resource Administrator** role can read, modify, reonboard, and delete a machine. This role is designed to support management of Azure Arc-enabled servers, but not other resources in the resource group or subscription. - |
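As an illustration of the challenge-token exchange described earlier in this article, the following sketch requests a managed identity token from a Linux Arc-enabled server with `curl`. It assumes the standard Azure Instance Metadata Service token path exposed by the agent, and it must run as root or as a member of the himds group so the challenge token file can be read.

```bash
# The first, unauthenticated request returns HTTP 401 with a Www-Authenticate header
# that points at the challenge token file on the local disk.
endpoint="http://localhost:40342/metadata/identity/oauth2/token?api-version=2020-06-01&resource=https%3A%2F%2Fmanagement.azure.com"
challenge_file=$(curl -s -D - -o /dev/null -H "Metadata: true" "$endpoint" \
  | grep -i "www-authenticate" | sed 's/.*Basic realm=//' | tr -d '\r')

# Reading the file and sending its contents back proves local access to the tokens folder.
curl -s -H "Metadata: true" -H "Authorization: Basic $(cat "$challenge_file")" "$endpoint"
```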
azure-arc | Security Machine Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-machine-configuration.md | - Title: Configuration and remote access -description: Configuration and remote access for Azure Arc-enabled servers. - Previously updated : 06/06/2024---# Configuration and remote access --This article describes the basics of Azure Machine Configuration, a compliance reporting and configuration tool that can check and optionally remediate security and other settings on machines at scale. This article also describes the Azure Arc connectivity platform, used for communication between the Azure Connected Machine agent and Azure. --## Machine configuration basics --Azure Machine Configuration is a PowerShell Desired State Configuration-based compliance reporting and configuration tool. It can help you check security and other settings on your machines at-scale and optionally remediate them if they drift from the approved state. Microsoft provides its own built-in Machine Configuration policies for your use, or you can author your own policies to check any condition on your machine. --Machine Configuration policies run in the Local System context on Windows or root on Linux, and therefore can access any system settings or resources. You should review which accounts in your organization have permission to assign Azure Policies or Azure Guest Assignments (the Azure resource representing a machine configuration) and ensure all those accounts are trusted. --### Disabling the machine configuration agent --If you donΓÇÖt intend to use machine configuration policies, you can disable the machine configuration agent with the following command (run locally on each machine): --`azcmagent config set guestconfiguration.enabled false` --## Agent modes --The Azure Connected Machine agent has two possible modes: --- **Full mode**, the default mode which allows all use of agent functionality.--- **Monitor mode**, which applies a Microsoft-managed extension allowlist, disables remote connectivity, and disables the machine configuration agent.--If youΓÇÖre using Arc solely for monitoring purposes, setting the agent to Monitor mode makes it easy to restrict the agent to just the functionality required to use Azure Monitor. You can configure the agent mode with the following command (run locally on each machine): --`azcmagent config set config.mode monitor` --## Azure Arc connectivity platform --The Azure Arc connectivity platform is a web sockets-based experience to allow real-time communication between the Azure Connected Machine agent and Azure. This enables interactive remote access scenarios to your server without requiring direct line of sight from the management client to the server. --The connectivity platform supports two scenarios: --- SSH access to Azure Arc-enabled servers-- Windows Admin Center for Azure Arc-enabled servers--For both scenarios, the management client (SSH client or web browser) talks to the Azure Arc connectivity service that then relays the information to and from the Azure Connected Machine agent. --Connectivity access is disabled by default and is enabled using a three step process: --1. Create a connectivity endpoint in Azure for the Azure Arc-enabled server. The connectivity endpoint isnΓÇÖt a real endpoint with an IP address. ItΓÇÖs just a way of saying that access to this server via Azure is allowed and provides an API to retrieve the connection details for management clients. --1. 
Configure the connectivity endpoint to allow your specific intended scenarios. Having an endpoint created doesnΓÇÖt allow any traffic through. Instead, you need to configure it to say, ΓÇ£we allow traffic to this local port on the target server.ΓÇ¥ For SSH, thatΓÇÖs commonly TCP port 22. For WAC, TCP port 6516. --1. Assign the appropriate RBAC roles to the accounts that will use this feature. Remote access to servers requires other role assignments. Common roles like Azure Connected Machine Resource Administrator, Contributor, and Owner don't grant access to use SSH or WAC via the Azure Arc Connectivity Platform. Roles that allow remote access include: -- - Virtual Machine Local User Login (SSH with local credentials) - - Virtual Machine User Login (SSH with Microsoft Entra ID, standard user access) - - Virtual Machine Administrator Login (SSH with Microsoft Entra ID, full admin access) - - Windows Admin Center Administrator Login (WAC with Microsoft Entra ID authentication) --> [!TIP] -> Consider using Microsoft Entra Privileged Identity Management to provide your IT operators with just-in-time access to these roles. This enables a least privilege approach to remote access. -> --There's a local agent configuration control as well to block remote access, regardless of the configuration in Azure. --## Disabling remote access --To disable all remote access to your machine, run the following command on each machine: --`azcmagent config set incomingconnections.enabled false` --## SSH access to Azure Arc-enabled servers --SSH access via the Azure Arc connectivity platform can help you avoid opening SSH ports directly through a firewall or requiring your IT operators to use a VPN. It also allows you to grant access to Linux servers using Entra IDs and Azure RBAC, reducing the management overhead of distributing and protecting SSH keys. --When a user connects using SSH and Microsoft Entra ID authentication, a temporary account is created on the server to manage it on their behalf. The account is named after the userΓÇÖs UPN in Azure to help you audit actions taken on the machine. If the user has the "Virtual Machine Administrator Login" role, the temporary account is created as a member of the sudoers group so that it can elevate to perform administrative tasks on the server. Otherwise, the account is just a standard user on the machine. If you change the role assignment from user to administrator or vice versa, it can take up to 10 minutes for the change to take effect. Users must disconnect any active SSH sessions and reconnect to see the changes reflected on the local user account. --When a user connects using local credentials (SSH key or password), they get the permissions and group memberships of the account information they provided. --## Windows Admin Center --WAC in the Azure portal allows Windows users to see and manage their Windows Server without connecting over Remote Desktop Connection. The ΓÇ£Windows Admin Center Administrator LoginΓÇ¥ role is required to use the WAC experience in the Azure portal. When the user opens the WAC experience, a virtual account is created on the Windows Server using the UPN of the Azure user to identify them. This virtual account is a member of the administrators group and can make changes to the system. Actions the user takes in WAC are then executed locally on the server using this virtual account. 
--Interactive access to the machine with the PowerShell or Remote Desktop experiences in WAC doesn't currently support Microsoft Entra ID authentication and will prompt the user to provide local user credentials. These credentials aren't stored in Azure and are only used to establish the PowerShell or Remote Desktop session. |
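For example, granting an operator Microsoft Entra ID-based SSH access as a standard user could look like the following sketch, where the account and scope are placeholders; scoping the assignment to a resource group keeps the number of role assignments manageable.

```bash
# Allow a user to sign in over SSH with Microsoft Entra ID as a standard (non-admin) user
az role assignment create \
  --assignee "operator@contoso.com" \
  --role "Virtual Machine User Login" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```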
azure-arc | Security Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-networking.md | - Title: Network security -description: Network security for Azure Arc-enabled servers. - Previously updated : 06/06/2024---# Network security --This article describes the networking requirements and options for Azure Arc-enabled servers. --## General networking --Azure Arc-enabled servers is a software-as-a-service offering with a combination of global and regional endpoints shared by all customers. All network communication from the Azure Connected Machine agent is outbound to Azure. Azure will never reach "into" your network to manage your machines. These connections are always encrypted using TLS certificates. The list of endpoints and IP addresses accessed by the agent are documented in the [network requirements](network-requirements.md). --Extensions you install may require extra endpoints not included in the Azure Arc network requirements. Consult the extension documentation for further information on network requirements for that solution. --If your organization uses TLS inspection, the Azure Connected Machine agent doesn't use certificate pinning and will continue to work, so long as your machine trusts the certificate presented by the TLS inspection service. Some Azure Arc extensions use certificate pinning and need to be excluded from TLS inspection. Consult the documentation for any extensions you deploy to determine if they support TLS inspection. --### Private endpoints --[Private endpoints](private-link-security.md) are an optional Azure networking technology that allows network traffic to be sent over Express Route or a site-to-site VPN and more granularly control which machines can use Azure Arc. With private endpoints, you can use private IP addresses in your organizationΓÇÖs network address space to access the Azure Arc cloud services. Additionally, only servers you authorize are able to send data through these endpoints, which protects against unauthorized use of the Azure Connected Machine agent in your network. --ItΓÇÖs important to note that not all endpoints and not all scenarios are supported with private endpoints. You'll still need to make firewall exceptions for some endpoints like Microsoft Entra ID, which doesn't offer a private endpoint solution. Any extensions you install may require other private endpoints (if supported) or access to the public endpoints for their services. Additionally, you canΓÇÖt use SSH or Windows Admin Center to access your server over a private endpoint. --Regardless of whether you use private or public endpoints, data transferred between the Azure Connected Machine agent and Azure is always encrypted. You can always start with public endpoints and later switch to private endpoints (or vice versa) as your business needs change. - |
azure-arc | Security Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-onboarding.md | - Title: Security onboarding and updates -description: Azure Arc-enabled servers planning and deployment guidance. - Previously updated : 06/06/2024---# Planning and deployment guidance --The [Azure Arc Landing Zone Accelerator for Hybrid and Multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/enterprise-scale-landing-zone) has a complete set of guidance for you to consider as you plan an Azure Arc-enabled servers deployment. This section contains a selection of that content with security relevance. --## Resource hierarchy and inherited access --The subscription and resource group where you choose to connect your machine will influence which users and accounts in your organization can see and manage the machine from Azure. Generally, you should organize your servers based on the groups of accounts that need to access them. If you have two teams managing two separate sets of servers who shouldn't be able to manage each otherΓÇÖs machines, you should use two resource groups and control access to the servers at the resource group. --## Onboarding credential --When you connect a server to Azure Arc, you need to use an onboarding credential to authorize the machine to create a resource in your Azure subscription. There are three ways to provide credentials: --1. **Interactive logons**, either using the local web browser (Windows-only) or a device login code that can be entered on any computer with internet access. --1. **Service principals**, which are dedicated accounts that can be used for scripted installations of the agent. Service principals consist of a unique application ID and either a plain text secret or a certificate. If you choose to use a service principal, you should use certificates instead of secrets because they can be controlled with Microsoft Entra conditional access policies. Remember to protect access to and regularly rotate the service principal secrets/certificates to minimize the risk of a compromised credential. --1. **Access tokens**, which are short-lived and obtained from another credential. --No matter the type of credential you choose to use, the most important part is to ensure that it has only the required permissions to onboard machines to Azure Arc and nothing extra. The Azure Connected Machine Onboarding role is designed specifically for onboarding credentials and only includes the necessary permissions to create and read Azure Arc-enabled server resources in Azure. You should also limit the scope of the role assignment to only the resource groups or subscriptions necessary to onboard your servers. --The onboarding credential is only needed at the time the azcmagent connect step is run on a server. It's not needed once a server is connected. If the onboarding credential expires or is deleted, the server continues to be connected to Azure. --If a malicious actor gains access to your onboarding credential, they could use the credential to onboard servers outside of your organization to Azure Arc within your subscription/resource group. You can use [private endpoints](security-networking.md#private-endpoints) to protect against such attacks by restricting access to Azure Arc within your network. --## Protecting secrets in onboarding script --The onboarding script contains all the information needed to connect your server to Azure. 
This includes steps to download, install, and configure the Azure Connected Machine agent on your server. It also includes the onboarding credential used to non-interactively connect that server to Azure. It's important to protect the onboarding credential so it isn't accidentally captured in logs and doesn't end up in the wrong hands. --For production deployments, it's common to orchestrate the onboarding script using an automation tool such as Microsoft Configuration Manager, Red Hat Ansible, or Group Policy. Check with your automation tool to see if it has a way to protect secrets used in the installation script. If it doesn't, consider moving the onboarding script parameters to a dedicated configuration file. This prevents secrets from being parsed and potentially logged directly on the command line. The [Group Policy onboarding guidance](onboard-group-policy-powershell.md) includes extra steps to encrypt the configuration file so that only computer accounts can decrypt it, not users or others outside your organization. --If your automation tool copies the configuration file to the server, make sure it also cleans up the file after it's done so the secrets don't persist longer than necessary. --Additionally, as with all Azure resources, tags for Azure Arc-enabled servers are stored as plain text. Don't put sensitive information in tags. --## Agent updates --A new version of the Azure Connected Machine agent is typically released every month. There isn't an exact schedule of when the updates are available, but you should check for and apply updates on a monthly basis. Refer to the [list of all the new releases](/azure/azure-arc/servers/agent-release-notes) to see what specific changes are included in each. Most updates include security, performance, and quality fixes. Some also include new features and functionality. When a hotfix is required to address an issue with a release, it's released as a new agent version and available via the same means as a regular agent release. --The Azure Connected Machine agent doesn't update itself. You must update it using your preferred update management tool. For Windows machines, updates are delivered through Microsoft Update. Standalone servers should opt in to Microsoft Update (using the *receive updates for other Microsoft products* option). If your organization uses Windows Server Update Services to cache and approve updates locally, your WSUS admin must synchronize and approve updates for the Azure Connected Machine agent product. --Linux updates are published to `packages.microsoft.com`. Your package management software (apt, yum, dnf, zypper, etc.) should show "azcmagent" updates alongside your other system packages. Learn more about [upgrading Linux agents](/azure/azure-arc/servers/manage-agent?tabs=linux-apt). --Microsoft recommends staying up to date with the latest agent version whenever possible. If your maintenance windows are less frequent, Microsoft supports all agent versions released within the last 12 months. However, since the agent updates include security fixes, you should update as frequently as possible. --If you're looking for a patch management tool to orchestrate updates of the Azure Connected Machine agent on both Windows and Linux, consider Azure Update Manager. --## Extension updates --### Automatic extension updates --By default, every extension you deploy to an Azure Arc-enabled server has automatic extension upgrades enabled. 
If the extension publisher supports this feature, new versions of the extension are automatically installed within 60 days of the new version becoming available. Automatic extension upgrades follow a safe deployment practice, meaning that only a small number of extensions are updated at a time. Rollouts continue slowly across regions and subscriptions until every extension is updated. --There are no granular controls over automatic extension upgrades. You'll always be upgraded to the most recent version of the extension and canΓÇÖt choose when the upgrade happens. The extension manager has [built-in resource governance](/azure/azure-arc/servers/agent-overview) to ensure an extension upgrade doesn't consume too much of the systemΓÇÖs CPU and interfere with your workloads during the upgrade. --If you don't want to use automatic upgrades for extensions, you can disable them on a per-extension, per-server basis using the [Azure portal, CLI, or PowerShell](/azure/azure-arc/servers/manage-automatic-vm-extension-upgrade?tabs=azure-portal). --### Manual extension updates --For extensions that donΓÇÖt support automatic upgrades or have automatic upgrades disabled, you can use the Azure portal, CLI, or PowerShell to upgrade extensions to the newest version. The CLI and PowerShell commands also support downgrading an extension, in case you need to revert to an earlier version. --## Using disk encryption --The Azure Connected Machine agent uses public key authentication to communicate with the Azure service. After you onboard a server to Azure Arc, a private key is saved to the disk and used whenever the agent communicates with Azure. If stolen, the private key can be used on another server to communicate with the service and act as if it were the original server. This includes getting access to the system assigned identity and any resources that identity has access to. The private key file is protected to only allow the **himds** account access to read it. To prevent offline attacks, we strongly recommend the use of full disk encryption (for example, BitLocker, dm-crypt, etc.) on the operating system volume of your server. - |
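If you decide to opt a specific extension out of automatic upgrades, a per-extension, per-server change with the Azure CLI might look like the sketch below. The names are placeholders, and the `--enable-auto-upgrade` flag is assumed from the `connectedmachine` CLI extension.

```bash
# Disable automatic upgrade for a single extension on a single Arc-enabled server
az connectedmachine extension update \
  --machine-name "myArcServer" \
  --resource-group "myResourceGroup" \
  --name "AzureMonitorLinuxAgent" \
  --enable-auto-upgrade false
```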
azure-arc | Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md | - Title: Security overview -description: Basic security information about Azure Arc-enabled servers. - Previously updated : 06/06/2024---# Security overview for Azure Arc-enabled servers --This article describes the security considerations and controls available when using Azure Arc-enabled servers. Whether you're a security practitioner or an IT operator, the information in this article lets you confidently configure Azure Arc in a way that meets your organization's security requirements. --## Responsibilities --The security of your Azure Arc-enabled servers deployment is a shared responsibility between you and Microsoft. Microsoft is responsible for: --- Securing the cloud service that stores system metadata and orchestrating operations for the agents you connect to the service.-- Securing and protecting the privacy of your system metadata stored in Azure.-- Documenting optional security features so you understand the benefits and drawbacks of deployment options.-- Publishing regular agent updates with security, quality, performance, and feature improvements.--You're responsible for: --- Managing and monitoring RBAC access to your Azure Arc-enabled resources in your Azure subscription.-- Protecting and regularly rotating the credentials of any accounts used to manage Azure Arc-enabled servers. This includes any service principal secrets or credentials used to onboard new servers.-- Determining if and how any security features described in this document (for example, extension allowlists) should be applied to the Azure Connected Machine agents you deploy.-- Keeping the Azure Connected Machine agent and extensions up to date.-- Determining Azure Arc's compliance with your organization's legal, regulatory, and internal policy obligations.-- Securing the server itself, including the compute, storage, and networking infrastructure used to run the server.--## Architecture overview --Azure Arc-enabled servers is an agent-based service. Your interaction with Azure Arc is primarily through Azure's APIs, portal, and management experiences. The data you see and actions you take in Azure are relayed via the Azure Connected Machine agent installed on each managed server. Azure is the source of truth for the agent. The only way to tell the agent to do something (for example, install an extension) is to take an action on the Azure representation of the server. This helps ensure that your organization's RBAC and policy assignments can evaluate the request before any changes are made. --The Azure Connected Machine agent is primarily an enablement platform for other Azure and third-party services. Its core functionalities include: --- Establishing a relationship between your machine and your Azure subscription-- Providing a managed identity for the agent and other apps to use when authenticating with Azure-- Enabling other capabilities (agents, scripts) with extensions-- Evaluating and enforcing settings on your server--Once the Azure Connected Machine agent is installed, you can enable other Azure services on your server to meet your monitoring, patch management, remote access, or other needs. Azure Arc's role is to help enable those services to work outside of Azure's own datacenters. --You can use Azure Policy to limit what your organization's users can do with Azure Arc. 
Cloud-based restrictions like Azure Policy are a great way to apply security controls at-scale while retaining flexibility to adjust the restrictions at any time. However, sometimes you need even stronger controls to protect against a legitimately privileged account being used to circumvent security measures (for example, disabling policies). To account for this, the Azure Connected Machine agent also has security controls of its own that take precedence over any restrictions set in the cloud. ---## Agent services --The Azure Connected Machine agent is a combination of four services/daemons that run on your server and help connect it with Azure. They're installed together as a single application and are managed centrally using the azcmagent command line interface. --### Hybrid Instance Metadata Service --The Hybrid Instance Metadata Service (HIMDS) is the ΓÇ£coreΓÇ¥ service in the agent and is responsible for registering the server with Azure, ongoing metadata synchronization (heartbeats), managed identity operations, and hosting the local REST API which other apps can query to learn about the deviceΓÇÖs connection with Azure. This service is unprivileged and runs as a virtual account (NT SERVICE\himds with SID S-1-5-80-4215458991-2034252225-2287069555-1155419622-2701885083) on Windows or a standard user account (himds) on Linux operating systems. --### Extension manager --The extension manager is responsible for installing, configuring, upgrading, and removing additional software on your machine. Out of the box, Azure Arc doesnΓÇÖt know how to do things like monitor or patch your machine. Instead, when you choose to use those features, the extension manager downloads and enables those capabilities. The extension manager runs as Local System on Windows and root on Linux because the software it installs may require full system access. You can limit which extensions the extension manager is allowed to install or disable it entirely if you donΓÇÖt intend to use extensions. --### Guest configuration --The guest configuration service evaluates and enforces Azure machine (guest) configuration policies on your server. These are special Azure policies written in PowerShell Desired State Configuration to check software settings on a server. The guest configuration service regularly evaluates and reports on compliance with these policies and, if the policy is configured in enforce mode, will change settings on your system to bring the machine back into compliance if necessary. The guest configuration service runs as Local System on Windows and root on Linux to ensure it has access to all settings on your system. You can disable the guest configuration feature if you don't intend to use guest configuration policies. --### Azure Arc proxy --The Azure Arc proxy service is responsible for aggregating network traffic from the Azure Connected Machine agent services and any extensions youΓÇÖve installed and deciding where to route that data. If youΓÇÖre using the Azure Arc Gateway to simplify your network endpoints, the Azure Arc Proxy service is the local component that forwards network requests via the Azure Arc Gateway instead of the default route. The Azure Arc proxy runs as Network Service on Windows and a standard user account (arcproxy) on Linux. It's disabled by default until you configure the agent to use the Azure Arc Gateway. 
--## Security considerations for Tier 0 assets --Tier 0 assets such as an Active Directory Domain Controller, Certificate Authority server, or highly sensitive business application server can be connected to Azure Arc, but take extra care to ensure that only the desired management functions and authorized users can manage the servers. These recommendations aren't required, but are strongly encouraged to maintain the security posture of your Tier 0 assets. --### Dedicated Azure subscription --Access to an Azure Arc-enabled server is often determined by the organizational hierarchy to which it belongs in Azure. You should treat any subscription or management group admin as equivalent to a local administrator on Tier 0 assets because they could use their permissions to add new role assignments to the Azure Arc resource. Additionally, policies applied at the subscription or management group level may also have permission to make changes to the server. --To minimize the number of accounts and policies with access to your Tier 0 assets, consider using a dedicated Azure subscription that can be closely monitored and configured with as few persistent administrators as possible. Review Azure policies in any parent management groups to ensure they are aligned with your intent for these servers. --### Disable unnecessary management features --For a Tier 0 asset, you should use the local agent security controls to disable any unused functionality in the agent to prevent any intentional or accidental use of those features to make changes to the server. This includes: --- Disabling remote access capabilities-- Setting an extension allowlist for the extensions you intend to use, or disabling the extension manager if you are not using extensions-- Disabling the machine configuration agent if you don't intend to use machine configuration policies--The following example shows how to lock down the Azure Connected Machine agent for a domain controller that needs to use the Azure Monitor Agent to collect security logs for Microsoft Sentinel and Microsoft Defender for Servers to protect against malware threats: --```powershell -azcmagent config set incomingconnections.enabled false --azcmagent config set guestconfiguration.enabled false --azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent,Microsoft.Azure.AzureDefenderForServers/MDE.Windows" -``` |
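After applying a lockdown like the one above, it's worth confirming the effective settings locally before returning the server to service; a quick check might look like this:

```bash
# List every local agent setting, including the allowlist and any disabled features
azcmagent config list

# Spot-check a single value
azcmagent config get incomingconnections.enabled
```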
azure-arc | Ssh Arc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md | - Title: SSH access to Azure Arc-enabled servers -description: Use SSH remoting to access and manage Azure Arc-enabled servers. Previously updated : 07/01/2023-----# SSH access to Azure Arc-enabled servers -SSH for Arc-enabled servers enables SSH based connections to Arc-enabled servers without requiring a public IP address or additional open ports. -This functionality can be used interactively, automated, or with existing SSH based tooling, -allowing existing management tools to have a greater impact on Azure Arc-enabled servers. --## Key benefits -SSH access to Arc-enabled servers provides the following key benefits: --## Prerequisites -To enable this functionality, ensure the following: --Authenticating with Microsoft Entra credentials has additional requirements: - - **Virtual Machine Administrator Login**: Users who have this role assigned can log in to an Azure virtual machine with administrator privileges. - - **Virtual Machine User Login**: Users who have this role assigned can log in to an Azure virtual machine with regular user privileges. - - An Azure user who has the Owner or Contributor role assigned for a VM doesn't automatically have privileges to Microsoft Entra login to the VM over SSH. There's an intentional (and audited) separation between the set of people who control virtual machines and the set of people who can access virtual machines. -- > [!NOTE] - > The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](../../role-based-access-control/troubleshoot-limits.md) per subscription. --### Availability -SSH access to Arc-enabled servers is currently supported in all regions supported by Arc-Enabled Servers. --## Getting started --### Register the HybridConnectivity resource provider -> [!NOTE] -> This is a one-time operation that needs to be performed on each subscription. --Check if the HybridConnectivity resource provider (RP) has been registered: --```az provider show -n Microsoft.HybridConnectivity -o tsv --query registrationState``` --If the RP hasn't been registered, run the following: --```az provider register -n Microsoft.HybridConnectivity``` --This operation can take 2-5 minutes to complete. Before moving on, check that the RP has been registered. --### Create default connectivity endpoint -> [!NOTE] -> The following step will not need to be run for most users as it should complete automatically at first connection. -> This step must be completed for each Arc-enabled server. --#### [Create the default endpoint with Azure CLI:](#tab/azure-cli) -```bash -az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 --body '{"properties": {"type": "default"}}' -``` -> [!NOTE] -> If using Azure CLI from PowerShell, the following should be used. 
-```powershell -az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 --body '{\"properties\":{\"type\":\"default\"}}' -``` --Validate endpoint creation: - ```bash -az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 - ``` - -#### [Create the default endpoint with Azure PowerShell:](#tab/azure-powershell) - ```powershell -Invoke-AzRestMethod -Method put -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 -Payload '{"properties": {"type": "default"}}' -``` --Validate endpoint creation: - ```powershell - Invoke-AzRestMethod -Method get -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 - ``` - - - ### Install local command line tool -This functionality is currently packaged in an Azure CLI extension and an Azure PowerShell module. -#### [Install Azure CLI extension](#tab/azure-cli) --```az extension add --name ssh``` --> [!NOTE] -> The Azure CLI extension version must be greater than 2.0.0. --#### [Install Azure PowerShell module](#tab/azure-powershell) --```powershell -Install-Module -Name Az.Ssh -Scope CurrentUser -Repository PSGallery -Install-Module -Name Az.Ssh.ArcProxy -Scope CurrentUser -Repository PSGallery -``` ----### Enable functionality on your Arc-enabled server -In order to use the SSH connect feature, you must update the Service Configuration in the Connectivity Endpoint on the Arc-Enabled Server to allow SSH connection to a specific port. You may only allow connection to a single port. The CLI tools attempt to update the allowed port at runtime, but the port can be manually configured with the following: --> [!NOTE] -> There may be a delay after updating the Service Configuration until you are able to connect. --#### [Azure CLI](#tab/azure-cli) --```az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 --body "{\"properties\": {\"serviceName\": \"SSH\", \"port\": 22}}"``` --#### [Azure PowerShell](#tab/azure-powershell) --```Invoke-AzRestMethod -Method put -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 -Payload '{"properties": {"serviceName": "SSH", "port": 22}}'``` ----If you're using a nondefault port for your SSH connection, replace port 22 with your desired port in the previous command. --### Optional: Install Azure AD login extension -The `Azure AD based SSH Login – Azure Arc` VM extension can be added from the extensions menu of the Arc server. 
The Azure AD login extension can also be installed locally with a package manager (`apt-get install aadsshlogin`) or with the following command. --```az connectedmachine extension create --machine-name <arc enabled server name> --resource-group <resourcegroup> --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLogin --type AADSSHLoginForLinux --location <location>``` ---## Examples -To view examples, see the Az CLI documentation page for [az ssh](/cli/azure/ssh) or the Azure PowerShell documentation page for [Az.Ssh](/powershell/module/az.ssh). --## Next steps --- Learn about [OpenSSH for Windows](/windows-server/administration/openssh/openssh_overview)-- Learn about troubleshooting [SSH access to Azure Arc-enabled servers](ssh-arc-troubleshoot.md).-- Learn about troubleshooting [agent connection issues](troubleshoot-agent-onboard.md). |
azure-arc | Ssh Arc Powershell Remoting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-powershell-remoting.md | - Title: SSH access to Azure Arc-enabled servers with PowerShell remoting -description: Use PowerShell remoting over SSH to access and manage Azure Arc-enabled servers. Previously updated : 04/08/2024-----# PowerShell remoting to Azure Arc-enabled servers -SSH for Arc-enabled servers enables SSH based connections to Arc-enabled servers without requiring a public IP address or additional open ports. -[PowerShell remoting over SSH](/powershell/scripting/security/remoting/ssh-remoting-in-powershell) is available for Windows and Linux machines. --## Prerequisites -To leverage PowerShell remoting over SSH access to Azure Arc-enabled servers, ensure the following: --## How to connect via PowerShell remoting -Follow the steps below to connect to an Arc-enabled server via PowerShell remoting. --#### [Generate a SSH config file with Azure CLI:](#tab/azure-cli) -```bash -az ssh config --resource-group <myRG> --name <myMachine> --local-user <localUser> --resource-type Microsoft.HybridCompute --file <SSH config file> -``` - -#### [Generate a SSH config file with Azure PowerShell:](#tab/azure-powershell) - ```powershell -Export-AzSshConfig -ResourceGroupName <myRG> -Name <myMachine> -LocalUser <localUser> -ResourceType Microsoft.HybridCompute/machines -ConfigFilePath <SSH config file> -``` - --#### Find newly created entry in the SSH config file -Open the created or modified SSH config file. The entry should have a similar format to the following. -```powershell -Host <myRG>-<myMachine>-<localUser> - HostName <myMachine> - User <localUser> - ProxyCommand "<path to proxy>\.clientsshproxy\sshProxy_windows_amd64_1_3_022941.exe" -r "<path to relay info>\az_ssh_config\<myRG>-<myMachine>\<myRG>-<myMachine>-relay_info" -``` -#### Leveraging the -Options parameter -Leveraging the [options](/powershell/module/microsoft.powershell.core/new-pssession#-options) parameter allows you to specify a hashtable of SSH options used when connecting to a remote SSH-based session. -Create the hashtable by following the format below. Be mindful of the locations of the quotation marks. -```powershell -$options = @{ProxyCommand = '"<path to proxy>\.clientsshproxy\sshProxy_windows_amd64_1_3_022941.exe -r <path to relay info>\az_ssh_config\<myRG>-<myMachine>\<myRG>-<myMachine>-relay_info"'} -``` -Next, use the options hashtable in a PowerShell remoting command. -```powershell -New-PSSession -HostName <myMachine> -UserName <localUser> -Options $options -``` --## Next steps --- Learn about [OpenSSH for Windows](/windows-server/administration/openssh/openssh_overview)-- Learn about troubleshooting [SSH access to Azure Arc-enabled servers](ssh-arc-troubleshoot.md).-- Learn about troubleshooting [agent connection issues](troubleshoot-agent-onboard.md). |
azure-arc | Ssh Arc Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md | - Title: Troubleshoot SSH access to Azure Arc-enabled servers -description: Learn how to troubleshoot and resolve issues with SSH access to Arc-enabled servers. Previously updated : 07/01/2023----# Troubleshoot SSH access to Azure Arc-enabled servers --This article provides information on troubleshooting and resolving issues that may occur while attempting to connect to Azure Arc-enabled servers via SSH. -For general information, see [SSH access to Arc-enabled servers overview](./ssh-arc-overview.md). --## Client-side issues --These issues are due to errors that occur on the machine that the user is connecting from. --### Unable to locate client binaries --This issue occurs when the client side SSH binaries required to connect aren't found. Possible errors: --- `Failed to create ssh key file with error: \<ERROR\>.`-- `Failed to run ssh command with error: \<ERROR\>.`-- `Failed to get certificate info with error: \<ERROR\>.`-- `Failed to create ssh key file with error: [WinError 2] The system cannot find the file specified.`-- `Failed to create ssh key file with error: [Errno 2] No such file or directory: 'ssh-keygen'.`--Resolution: --- Provide the path to the folder that contains the SSH client executables by using the ```--ssh-client-folder``` parameter.-- Ensure that the folder is in the PATH environment variable for Azure PowerShell--### Azure PowerShell module version mismatch -This issue occurs when the installed Azure PowerShell submodule, Az.Ssh.ArcProxy, isn't supported by the installed version of Az.Ssh. Error: --- `This version of Az.Ssh only supports version 1.x.x of the Az.Ssh.ArcProxy PowerShell Module. The Az.Ssh.ArcProxy module {ModulePath} version is {ModuleVersion}, and it is not supported by this version of the Az.Ssh module. Check that this version of Az.Ssh is the latest available.`--Resolution: --- Update the Az.Ssh and Az.Ssh.ArcProxy modules--### Az.Ssh.ArcProxy not installed -This issue occurs when the proxy module isn't found on the client machine. Error: --- `Failed to find the PowerShell module Az.Ssh.ArcProxy installed in this machine. You must have the Az.Ssh.Proxy PowerShell module installed in the client machine in order to connect to Azure Arc resources. You can find the module in the PowerShell Gallery (see: https://aka.ms/PowerShellGallery-Az.Ssh.ArcProxy).`--Resolution: --- Install the module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.Ssh.ArcProxy): `Install-Module -Name Az.Ssh.ArcProxy`--### User doesn't have permissions to execute proxy -This issue happens when the user doesn't have permissions to execute the SSH proxy that is used to connect. Errors: --- `/bin/bash: line 1: exec: /usr/local/share/powershell/Modules/Az.Ssh.ArcProxy/1.0.0/sshProxy_linux_amd64_1.3.022941: cannot execute: Permission denied`-- `CreateProcessW failed error:5 posix_spawnp: Input/output error`--Resolution: --- Ensure that the user has permissions to execute the proxy file.--## Server-side issues --### Unable to connect after the public preview -If the user had participated in the public preview and has updated their Arc agent and the Azure CLI/PowerShell to the general availability releases, then the connectivity may fail. 
--Resolution: --- Re-enable the functionality on the [Azure Arc-enabled servers](ssh-arc-overview.md).--### SSH traffic not allowed on the server -This issue occurs when SSHD isn't running on the server, or SSH traffic isn't allowed on the server. Error: --- `{"level":"fatal","msg":"sshproxy: error copying information from the connection: read tcp 192.168.1.180:60887-\u003e40.122.115.96:443: wsarecv: An existing connection was forcibly closed by the remote host.","time":"2022-02-24T13:50:40-05:00"}`-- `{"level":"fatal","msg":"sshproxy: error connecting to the address: 503 connection to localhost:22 failed: dial tcp [::1]:22: connectex: No connection could be made because the target machine actively refused it.. websocket: bad handshake","proxyVersion":"1.3.022941"}`-- `SSH connection is not enabled in the target port {Port}. `--Resolution: --#### [Azure CLI](#tab/azure-cli) --```az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 --body '{\"properties\": {\"serviceName\": \"SSH\", \"port\": \"22\"}}'``` --#### [Azure PowerShell](#tab/azure-powershell) --```Invoke-AzRestMethod -Method put -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 -Payload '{"properties": {"serviceName": "SSH", "port": "22"}}'``` --- -## Azure permissions issues -### Incorrect role assignments to enable SSH connectivity -This issue occurs when the current user doesn't have the proper role assignment to make contributions to the target resource. Error: --- `Client is not authorized to create a Default connectivity endpoint for {Name} in the Resource Group {ResourceGroupName}. This is a one-time operation that must be performed by an account with Owner or Contributor role to allow connections to target resource`--Resolution: -- Ensure that you have the Owner or Contributor role on the resource or contact the owner/contributor of the resource to set up SSH connectivity.--### Incorrect role assignments to connect -This issue occurs when the current user doesn't have the proper role assignment on the target resource, specifically a lack of `read` permissions. Possible errors: --- `Unable to determine the target machine type as Azure VM or Arc Server`-- `Unable to determine that the target machine is an Arc Server`-- `Unable to determine that the target machine is an Azure VM`-- `Permission denied (publickey).`-- `Request for Azure Relay Information Failed: (AuthorizationFailed) The client '\<user name\>' with object id '\<ID\>' does not have authorization to perform action 'Microsoft.HybridConnectivity/endpoints/listCredentials/action' over scope '/subscriptions/\<Subscription ID\>/resourceGroups/\<Resource Group\>/providers/Microsoft.HybridCompute/machines/\<Machine Name\>/providers/Microsoft.HybridConnectivity/endpoints/default' or the scope is invalid. If access was recently granted, please refresh your credentials.`--Resolution: -- Ensure that you have the Virtual Machine Local user Login role on the resource you're connecting to. 
If using Microsoft Entra login, ensure you have the Virtual Machine User Login or the Virtual Machine Administrator Login roles and that the Microsoft Entra SSH Login extension is installed on the Arc-Enabled Server.--### HybridConnectivity RP not registered --This issue occurs when the HybridConnectivity resource provider isn't registered for the subscription. Error: --- Request for Azure Relay Information Failed: (NoRegisteredProviderFound) Code: NoRegisteredProviderFound--Resolution: --- Run ```az provider register -n Microsoft.HybridConnectivity```-- Confirm success by running ```az provider show -n Microsoft.HybridConnectivity```, verify that `registrationState` is set to `Registered`-- Restart the hybrid agent on the Arc-enabled server--### Cannot connect after updating CLI tool and Arc agent --This issue occurs when the updated command creates a new service configuration before the Arc agent is updated. This will only impact Azure Arc versions older than 1.31 when updating to a version 1.31 or newer. Error: --- Connection closed by UNKNOWN port 65535-- Resolution: -- - Delete the existing service configuration and allow it to be re-created by the CLI command at the next connection. Run ```az rest --method delete --uri https://management.azure.com/subscriptions/<SUB_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.HybridCompute/machines/<VM_NAME>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15``` -- ## Disable SSH to Arc-enabled servers - - This functionality can be disabled by completing the following actions: -- #### [Azure CLI](#tab/azure-cli) - - - Remove the SSH port and functionality from the Arc-enabled server: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 --body '{\"properties\": {\"serviceName\": \"SSH\", \"port\": \"22\"}}'``` -- - Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15``` --#### [Azure PowerShell](#tab/azure-powershell) -- - Remove the SSH port and functionality from the Arc-enabled server: ```Invoke-AzRestMethod -Method delete -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 -Payload '{"properties": {"serviceName": "SSH", "port": "22"}}'``` -- - Delete the default connectivity endpoint: ```Invoke-AzRestMethod -Method delete -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15``` ----## Next steps --- Learn about SSH access to [Azure Arc-enabled servers](ssh-arc-overview.md).-- Learn about troubleshooting [agent connection issues](troubleshoot-agent-onboard.md). |
azure-arc | Troubleshoot Agent Onboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md | - Title: Troubleshoot Azure Connected Machine agent connection issues -description: This article tells how to troubleshoot and resolve issues with the Connected Machine agent that arise with Azure Arc-enabled servers when trying to connect to the service. Previously updated : 10/13/2022----# Troubleshoot Azure Connected Machine agent connection issues --This article provides information for troubleshooting issues that might occur while configuring the Azure Connected Machine agent for Windows or Linux. Both the interactive and at-scale installation methods when configuring connection to the service are included. For general information, see [Azure Arc-enabled servers overview](./overview.md). --## Agent error codes --Use the following table to identify and resolve issues when configuring the Azure Connected Machine agent using the `AZCM0000` ("0000" can be any four digit number) error code printed to the console or script output. --| Error code | Probable cause | Suggested remediation | -||-|--| -| AZCM0000 | The action was successful | N/A | -| AZCM0001 | An unknown error occurred | Contact Microsoft Support for assistance. | -| AZCM0011 | The user canceled the action (CTRL+C) | Retry the previous command. | -| AZCM0012 | The access token is invalid | If authenticating via access token, obtain a new token and try again. If authenticating via service principal or device logins, contact Microsoft Support for assistance. | -| AZCM0016 | Missing mandatory parameter | Review the error message in the output to identify which parameters are missing. For the complete syntax of the command, run `azcmagent <command> --help`. | -| AZCM0018 | The command was executed without administrative privileges | Retry the command in an elevated user context (administrator/root). | -| AZCM0019 | The path to the configuration file is incorrect | Ensure the path to the configuration file is correct and try again. | -| AZCM0023 | The value provided for a parameter (argument) is invalid | Review the error message for more specific information. Refer to the syntax of the command (`azcmagent <command> --help`) for valid values or expected format for the arguments. | -| AZCM0026 | There's an error in network configuration or some critical services are temporarily unavailable | Check if the required endpoints are reachable (for example, hostnames are resolvable, endpoints aren't blocked). If the network is configured for Private Link Scope, a Private Link Scope resource ID must be provided for onboarding using the `--private-link-scope` parameter. | -| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created. For service principal logins, check the client ID and secret for correctness, the expiration date of the secret, and that the service principal is from the same tenant where the server resource will be created. | -| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Review the error message in the output to identify the cause of the failure to create resource and the suggested remediation. For more information, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions) for more information. 
| -| AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has permissions to delete Azure Arc-enabled server/resources in the specified group. For more information, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. | -| AZCM0044 | A resource with the same name already exists | Specify a different name for the `--resource-name` parameter or delete the existing Azure Arc-enabled server in Azure and try again. | -| AZCM0062 | An error occurred while connecting the server | Review the error message in the output for more specific information. If the error occurred after the Azure resource was created, delete this resource before retrying. | -| AZCM0063 | An error occurred while disconnecting the server | Review the error message in the output for more specific information. If this error persists, delete the resource in Azure, and then run `azcmagent disconnect --force-local-only` on the server. | -| AZCM0067 | The machine is already connected to Azure | Run `azcmagent disconnect` to remove the current connection, then try again. | -| AZCM0068 | Subscription name was provided, and an error occurred while looking up the corresponding subscription GUID. | Retry the command with the subscription GUID instead of subscription name. | -| AZCM0061<br>AZCM0064<br>AZCM0065<br>AZCM0066<br>AZCM0070<br> | The agent service isn't responding or unavailable | Verify the command is run in an elevated user context (administrator/root). Ensure that the HIMDS service is running (start or restart HIMDS as needed) then try the command again. | -| AZCM0081 | An error occurred while downloading the Microsoft Entra managed identity certificate | If this message is encountered while attempting to connect the server to Azure, the agent won't be able to communicate with the Azure Arc service. Delete the resource in Azure and try connecting again. | -| AZCM0101 | The command wasn't parsed successfully | Run `azcmagent <command> --help` to review the command syntax. | -| AZCM0102 | An error occurred while retrieving the computer hostname | Retry the command and specify a resource name (with parameter --resource-name or -n). Use only alphanumeric characters, hyphens and/or underscores; note that resource name can't end with a hyphen or underscore. | -| AZCM0103 | An error occurred while generating RSA keys | Contact Microsoft Support for assistance. | -| AZCM0105 | An error occurred while downloading the Microsoft Entra ID managed identity certificate | Delete the resource created in Azure and try again. | -| AZCM0147-<br>AZCM0152 | An error occurred while installing Azcmagent on Windows | Review the error message in the output for more specific information. | -| AZCM0127-<br>AZCM0146 | An error occurred while installing Azcmagent on Linux | Review the error message in the output for more specific information. | -| AZCM0150 | Generic failure during installation | Submit a support ticket to get assistance. | -| AZCM0153 | The system platform isn't supported | Review the [prerequisites](prerequisites.md) for supported platforms. | -| AZCM0154 | The version of PowerShell installed on the system is too old | Upgrade to PowerShell 4 or later and try again. | -| AZCM0155 | The user running the installation script doesn't have administrator permissions | Re-run the script as an administrator. 
| -| AZCM0156 | Installation of the agent failed | Confirm that the machine isn't running on Azure. Detailed errors might be found in the installation log at `%TEMP%\installationlog.txt`. | -| AZCM0157 | Unable to download repo metadata for the Microsoft Linux software repository | Check if a firewall is blocking access to `packages.microsoft.com` and try again. | --## Agent verbose log --Before following the troubleshooting steps described later in this article, the minimum information you need is the verbose log. It contains the output of the **azcmagent** tool commands, when the verbose (-v) argument is used. The log files are written to `%ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log` for Windows, and Linux to `/var/opt/azcmagent/log/azcmagent.log`. --### Windows --Following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an interactive installation. --```console -& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" --verbose -``` --Following is an example of the command to enable verbose logging with the Connected Machine agent for Windows when performing an at-scale installation using a service principal. --```console -& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect ` - --service-principal-id "{serviceprincipalAppID}" ` - --service-principal-secret "{serviceprincipalPassword}" ` - --resource-group "{ResourceGroupName}" ` - --tenant-id "{tenantID}" ` - --location "{resourceLocation}" ` - --subscription-id "{subscriptionID}" - --verbose -``` --### Linux --Following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an interactive installation. -->[!NOTE] ->You must have *root* access permissions on Linux machines to run **azcmagent**. --```bash -azcmagent connect --resource-group "resourceGroupName" --tenant-id "tenantID" --location "regionName" --subscription-id "subscriptionID" --verbose -``` --Following is an example of the command to enable verbose logging with the Connected Machine agent for Linux when performing an at-scale installation using a service principal. --```bash -azcmagent connect \ - --service-principal-id "{serviceprincipalAppID}" \ - --service-principal-secret "{serviceprincipalPassword}" \ - --resource-group "{ResourceGroupName}" \ - --tenant-id "{tenantID}" \ - --location "{resourceLocation}" \ - --subscription-id "{subscriptionID}" - --verbose -``` --## Agent connection issues to service --The following table lists some of the known errors and suggestions on how to troubleshoot and resolve them. --|Message |Error |Probable cause |Solution | -|--|||| -|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is unreachable.` |Can't reach `login.windows.net` endpoint | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID. | -|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is Forbidden`. 
|Proxy or firewall is blocking access to `login.windows.net` endpoint. | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID.| -|Failed to acquire authorization token from SPN |`Failed to execute the refresh request. Error = 'Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/token?api-version=1.0: Forbidden'` |Proxy or firewall is blocking access to `login.windows.net` endpoint. |Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID. | -|Failed to acquire authorization token from SPN |`Invalid client secret is provided` |Wrong or invalid service principal secret. |Verify the service principal secret. | -| Failed to acquire authorization token from SPN |`Application with identifier 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' wasn't found in the directory 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant` |Incorrect service principal and/or Tenant ID. |Verify the service principal and/or the tenant ID.| -|Get ARM Resource Response |`The client 'username@domain.com' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.HybridCompute/machines/read' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01' or the scope is invalid. If access was recently granted, please refresh your credentials."}}" Status Code=403` |Wrong credentials and/or permissions |Verify you or the service principal is a member of the **Azure Connected Machine Onboarding** role. | -|Failed to AzcmagentConnect ARM resource |`The subscription isn't registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers aren't registered. |Register the [resource providers](prerequisites.md#azure-resource-providers). | -|Failed to AzcmagentConnect ARM resource |`Get https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01?api-version=2019-03-18-preview: Forbidden` |Proxy server or firewall is blocking access to `management.azure.com` endpoint. | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Azure Resource Manager. | --## Next steps --If you don't see your problem here or you can't resolve your issue, try one of the following channels for more support: --* Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html). --* Connect with [@AzureSupport](https://x.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. --* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**. |
azure-arc | Troubleshoot Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md | - Title: How to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc -description: Learn how to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 07/03/2024----# Troubleshoot delivery of Extended Security Updates for Windows Server 2012 --This article provides information on troubleshooting and resolving issues that may occur while [enabling Extended Security Updates for Windows Server 2012 and Windows Server 2012 R2 through Arc-enabled servers](deliver-extended-security-updates.md). --## License provisioning issues --If you're unable to provision a Windows Server 2012 Extended Security Update license for Azure Arc-enabled servers, check the following: --- **Permissions:** Verify you have sufficient permissions (Contributor role or higher) within the scope of ESU provisioning and linking. --- **Core minimums:** Verify you have specified sufficient cores for the ESU License. Physical core-based licenses require a minimum of 16 cores per machine, and virtual core-based licenses require a minimum of 8 cores per virtual machine (VM). --- **Conventions:** Verify you have selected an appropriate subscription and resource group and provided a unique name for the ESU license. --## ESU enrollment issues --If you're unable to successfully link your Azure Arc-enabled server to an activated Extended Security Updates license, verify the following conditions are met: --- **Connectivity:** Azure Arc-enabled server is **Connected**. For information about viewing the status of Azure Arc-enabled machines, see [Agent status](overview.md#agent-status).--- **Agent version:** Connected Machine agent is version 1.34 or higher. If the agent version is less than 1.34, you need to update it to this version or higher.--- **Operating system:** Only Azure Arc-enabled servers running the Windows Server 2012 and 2012 R2 operating system are eligible to enroll in Extended Security Updates.--- **Environment:** The connected machine should not be running on Azure Stack HCI, Azure VMware solution (AVS), or as an Azure virtual machine. **In these scenarios, WS2012 ESUs are available for free**. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012).--- **License properties:** Verify the license is activated and has been allocated sufficient physical or virtual cores to support the intended scope of servers.--## Resource providers --If you're unable to enable this service offering, review the resource providers registered on the subscription as noted below. If you receive an error while attempting to register the resource providers, validate the role assignment/s on the subscription. Also review any potential Azure policies that may be set with a Deny effect, preventing the enablement of these resource providers. 
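To check the registration state of each of the resource providers listed below (and register any that are missing), the following is a minimal sketch using Azure CLI; it assumes you have sufficient permissions on the subscription to read and register providers.

```bash
# Check (and optionally register) the resource providers listed below
for rp in Microsoft.HybridCompute Microsoft.GuestConfiguration Microsoft.Compute \
          Microsoft.Security Microsoft.OperationalInsights Microsoft.Sql Microsoft.Storage; do
  state=$(az provider show -n "$rp" -o tsv --query registrationState 2>/dev/null)
  echo "$rp: ${state:-NotFound}"
  # Uncomment the next line to register any provider that isn't registered yet:
  # [ "$state" != "Registered" ] && az provider register -n "$rp"
done
```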
--- **Microsoft.HybridCompute:** This resource provider is essential for Azure Arc-enabled servers, allowing you to onboard and manage on-premises servers in the Azure portal.--- **Microsoft.GuestConfiguration:** Enables Guest Configuration policies, which are used to assess and enforce configurations on your Arc-enabled servers for compliance and security.--- **Microsoft.Compute:** This resource provider is required for Azure Update Management, which is used to manage updates and patches on your on-premises servers, including ESU updates.--- **Microsoft.Security:** Enabling this resource provider is crucial for implementing security-related features and configurations for both Azure Arc and on-premises servers.--- **Microsoft.OperationalInsights:** This resource provider is associated with Azure Monitor and Log Analytics, which are used for monitoring and collecting telemetry data from your hybrid infrastructure, including on-premises servers.--- **Microsoft.Sql:** If you're managing on-premises SQL Server instances and require ESU for SQL Server, enabling this resource provider is necessary.--- **Microsoft.Storage:** Enabling this resource provider is important for managing storage resources, which may be relevant for hybrid and on-premises scenarios.--## ESU patch issues --### ESU patch status --To detect whether your Azure Arc-enabled servers are patched with the most recent Windows Server 2012/R2 Extended Security Updates, use Azure Update Manager or the Azure Policy [Extended Security Updates should be installed on Windows Server 2012 Arc machines-Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14b4e776-9fab-44b0-b53f-38d2458ea8be/version~/null/scopes~/%5B%22%2Fsubscriptions%2F4fabcc63-0ec0-4708-8a98-04b990085bf8%22%5D), which checks whether the most recent WS2012 ESU patches have been received. Both of these options are available at no additional cost for Azure Arc-enabled servers enrolled in WS2012 ESUs enabled by Azure Arc. --### ESU prerequisites --Ensure that both the licensing package and servicing stack update (SSU) are downloaded for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). Ensure you are following all of the networking prerequisites as recorded at [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md?tabs=azure-cloud#networking). --### Error: Trying to check IMDS again (HRESULT 12002 or 12029) --If installing the Extended Security Update enabled by Azure Arc fails with errors such as "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12029)" or "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12002)", you may need to update the intermediate certificate authorities trusted by your computer using one of the following methods. --> [!IMPORTANT] -> If you're running the [latest version of the Azure Connected machine agent](agent-release-notes.md), it's not necessary to install the intermediate CA certificates or allow access to the PKI URL. However, if a license was already assigned before the agent was upgraded, it can take up to 15 days for the older license to be replaced. 
During this time, the intermediate cert will still be required. After upgrading the agent, you can delete the license file `%ProgramData%\AzureConnectedMachineAgent\certs\license.json` to force it to be refreshed. --#### Option 1: Allow access to the PKI URL --Configure your network firewall and/or proxy server to allow access from the Windows Server 2012 (R2) machines to `http://www.microsoft.com/pkiops/certs` and `https://www.microsoft.com/pkiops/certs` (both TCP 80 and 443). This will enable the machines to automatically retrieve any missing intermediate CA certificates from Microsoft. --Once the network changes are made to allow access to the PKI URL, try installing the Windows updates again. You may need to reboot your computer for the automatic installation of certificates and validation of the license to take effect. --#### Option 2: Manually download and install the intermediate CA certificates --If you're unable to allow access to the PKI URL from your servers, you can manually download and install the certificates on each machine. --1. On any computer with internet access, download these intermediate CA certificates: - 1. [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) - 1. [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) - 1. [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) - 1. [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) -1. Copy the certificate files to your Windows Server 2012 (R2) machines. -1. Run any one set of the following commands in an elevated command prompt or PowerShell session to add the certificates to the "Intermediate Certificate Authorities" store for the local computer. The command should be run from the same directory as the certificate files. The commands are idempotent and won't make any changes if you've already imported the certificate: -- ``` - certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 03 - xsign.crt" - certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 04 - xsign.crt" - certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 07 - xsign.crt" - certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 08 - xsign.crt" - ``` --1. Try installing the Windows updates again. You may need to reboot your computer for the validation logic to recognize the newly imported intermediate CA certificates. --### Error: Not eligible (HRESULT 1633) --If you encounter the error "ESU: not eligible HRESULT_FROM_WIN32(1633)", follow these steps: --```powershell -Remove-Item "$env:ProgramData\AzureConnectedMachineAgent\Certs\license.json" -Force -Restart-Service himds -``` --If you have other issues receiving ESUs after successfully enrolling the server through Arc-enabled servers, or you need additional information related to issues affecting ESU deployment, see [Troubleshoot issues in ESU](/troubleshoot/windows-client/windows-7-eos-faq/troubleshoot-extended-security-updates-issues). |
azure-arc | Troubleshoot Vm Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-vm-extensions.md | - Title: Troubleshoot Azure Arc-enabled servers VM extension issues -description: This article tells how to troubleshoot and resolve issues with Azure VM extensions that arise with Azure Arc-enabled servers. Previously updated : 07/16/2021----# Troubleshoot Azure Arc-enabled servers VM extension issues --This article provides information on troubleshooting and resolving issues that may occur while attempting to deploy or remove Azure VM extensions on Azure Arc-enabled servers. For general information, see [Manage and use Azure VM extensions](./manage-vm-extensions.md). --## General troubleshooting --Data about the state of extension deployments can be retrieved from the Azure portal. --The following troubleshooting steps apply to all VM extensions. --1. To check the Guest agent log, look at the activity when your extension was being provisioned in `%SystemDrive%\ProgramData\GuestConfig\ext_mgr_logs` for Windows, and for Linux under `/var/lib/GuestConfig/ext_mgr_logs`. --2. Check the extension logs for the specific extension for more details in `%SystemDrive%\ProgramData\GuestConfig\extension_logs\<Extension>` for Windows. Extension output is logged to a file for each extension installed on Linux under `/var/lib/GuestConfig/extension_logs`. --3. Check extension-specific documentation troubleshooting sections for error codes, known issues etc. Additional troubleshooting information for each extension can be found in the **Troubleshoot and support** section in the overview for the extension. This includes the description of error codes written to the log. The extension articles are linked in the [extensions table](manage-vm-extensions.md#extensions). --4. Look at the system logs. Check for other operations that may have interfered with the extension, such as a long running installation of another application that required exclusive package manager access. --## Troubleshooting specific extension scenarios --### VM Insights --- When enabling VM Insights for an Azure Arc-enabled server, it installs the Dependency and Log Analytics agent. On a slow machine or one with a slow network connection, it is possible to see timeouts during the installation process. Microsoft is taking steps to address this in the Connected Machine agent to help improve this condition. In the interim, a retry of the installation may succeed.--### Log Analytics agent for Linux --- The Log Analytics agent version 1.13.9 (corresponding extension version is 1.13.15) is not correctly marking uploaded data with the resource ID of the Azure Arc-enabled server. Although logs are being sent to the service, when you try to view the data from the selected enabled server after selecting **Logs** or **Insights**, no data is returned. You can view its data by running queries from Azure Monitor Logs or from Azure Monitor for VMs, which are scoped to the workspace.--- Some distributions are not currently supported by the Log Analytics agent for Linux. The agent requires additional dependencies to be installed, including Python 2. Review the support matrix and prerequisites [here](/azure/azure-monitor/agents/agents-overview#supported-operating-systems).--- Error code 52 in the status message indicates a missing dependency. 
Check the output and logs for more information about which dependency is missing.--- If an installation fails, review the **Troubleshoot and support** section in the overview for the extension. In most cases, there is an error code included in the status message. For the Log Analytics agent for Linux, status messages are explained [here](/azure/virtual-machines/extensions/oms-linux#troubleshoot-and-support), along with general troubleshooting information for this VM extension.--## Next steps --If you don't see your problem here or you can't resolve your issue, try one of the following channels for additional support: --- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).--- Connect with [@AzureSupport](https://x.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.--- File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**. |
azure-arc | Vmware Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/vmware-faq.md | - Title: Azure Arc-enabled servers VMware Frequently Asked Questions -description: Learn how to use Azure Arc-enabled servers on virtual machines running in VMware vSphere environments. Previously updated : 11/20/2023----# Azure Arc-enabled servers VMware Frequently Asked Questions --This article addresses frequently asked questions about Arc-enabled servers on virtual machines running in VMware vSphere environments. --## What is Azure Arc? --Azure Arc is the overarching brand for a suite of Azure hybrid products that extend specific Azure public cloud services and/or management capabilities beyond Azure to on-premises environments and third-party clouds. Azure Arc-enabled servers, for example, allows you to use the same Azure management tools with a VM running on-premises in a VMware vSphere cluster that you would use with a VM running in Azure. --## What's the difference between Azure Arc-enabled servers and Azure Arc-enabled VMware vSphere? --> [!NOTE] -> [Arc-enabled VMware vSphere](../vmware-vsphere/overview.md) supports vSphere environments anywhere, whether on-premises or in [Azure VMware Solution (AVS)](./../../azure-vmware/deploy-arc-for-azure-vmware-solution.md), VMware Cloud on AWS, and Google Cloud VMware Engine. --The easiest way to think of this is as follows: --- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that it's running on. Since Arc-enabled servers also support bare-metal machines, there may, in fact, not even be a host hypervisor in some cases.--- Azure Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. See [What is Azure Arc-enabled VMware vSphere](../vmware-vsphere/overview.md) to learn more.--## Can I use Azure Arc-enabled server on VMs running in VMware environments? --Yes. Azure Arc-enabled servers work with VMs running in an on-premises VMware vSphere environment as well as Azure VMware Solution (AVS) and support the full breadth of guest management capabilities across security, monitoring, and governance. --## Which operating systems does Azure Arc-enabled servers work with? --Azure Arc-enabled servers and/or Azure Arc-enabled VMware vSphere work with [all supported versions](./prerequisites.md) of Windows Server and major distributions of Linux. As mentioned, even though Arc-enabled servers work with VMware vSphere virtual machines, the [Connected Machine agent](agent-overview.md) has no awareness of the underlying infrastructure fabric and virtualization layer. --## Should I use Arc-enabled servers or Arc-enabled VMware vSphere for my VMware VMs? --Each option has its own unique benefits and can be combined as needed. Arc-enabled servers allows you to manage the guest OS of your VMs with the Azure Connected Machine agent. Arc-enabled VMware vSphere enables you to onboard your VMware environment at-scale to Azure Arc with automatic discovery, in addition to performing full VM lifecycle and virtual hardware operations. 
You have the flexibility to start with either option and incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience. |
azure-arc | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/faq.md | - Title: Frequently asked questions -description: "Frequently asked questions to understand and troubleshoot Azure Arc sites and site manager" ----- Previously updated : 02/16/2024---#customer intent: As a customer, I want answers to questions so that I can answer my own questions. ----# Frequently asked questions: Azure Arc site manager (preview) --The following are frequently asked questions and answers for Azure Arc site manager. --**Question:** I have resources in the resource group, which aren't yet supported by site manager. Do I need to move them? --**Answer:** Site manager provides status aggregation for only the supported resource types. Resources of other types won't be managed via site manager. They continue to function normally as they would otherwise. --**Question:** Does site manager have a subscription or fee for usage? --**Answer:** Site manager is free. However, the Azure services that integrate with sites and site manager might have a fee. Additionally, alerts used with site manager via Azure Monitor might have fees as well. --**Question:** What regions are currently supported via site manager? Which of these supported regions aren't fully supported? --**Answer:** Site manager supports resources that exist in [supported regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all), with a few exceptions. For the following regions, connectivity and update status aren't supported for Arc-enabled machines or Arc-enabled Kubernetes clusters: --* Brazil South -* UAE North -* South Africa North |
azure-arc | How To Configure Monitor Site | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/how-to-configure-monitor-site.md | - Title: "How to configure Azure Monitor alerts for a site" -description: "Describes how to create and configure alerts using Azure Monitor to manage resources in an Azure Arc site." ----- Previously updated : 04/18/2024--#customer intent: As a site admin, I want to know where to create an alert in Azure for my site so that I can deploy monitoring for resources in my site. ----# Monitor sites in Azure Arc --Azure Arc sites provide a centralized view to monitor groups of resources, but don't provide monitoring capabilities for the site overall. Instead, customers can set up alerts and monitoring for supported resources within a site. Once alerts are set up and triggered based on the alert criteria, Azure Arc site manager (preview) makes the resource alert status visible within the site pages. --If you aren't familiar with Azure Monitor, learn more about how to [monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). --## Prerequisites --* An Azure subscription. If you don't have a service subscription, create a [free trial account in Azure](https://azure.microsoft.com/free/). -* Azure portal access -* Internet connectivity -* A resource group or subscription in Azure with at least one resource for a site. For more information, see [Supported resource types](./overview.md#supported-resource-types). --## Configure alerts for sites in Azure Arc --This section provides basic steps for configuring alerts for sites in Azure Arc. For more detailed information about Azure Monitor, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-metric-alert-rule). --To configure alerts for sites in Azure Arc, follow these steps. --1. Navigate to Azure Monitor by searching for **monitor** within the Azure portal. Select **Monitor** as shown. -- :::image type="content" source="./media/how-to-configure-monitor-site/search-monitor.png" alt-text="Screenshot that shows searching for monitor within the Azure portal."::: --1. On the **Monitor** overview, select **Alerts** in either the navigation menu or the boxes shown in the primary screen. -- :::image type="content" source="./media/how-to-configure-monitor-site/select-alerts-monitor.png" alt-text="Screenshot that shows selecting the Alerts option on the Monitor overview."::: --1. On the **Alerts** page, you can manage existing alerts or create new ones. -- Select **Alert rules** to see all of the alerts currently in effect in your subscription. -- Select **Create** to create an alert rule for a specific resource. If a resource is managed as part of a site, any alerts triggered via its rule appear in the site manager overview. -- :::image type="content" source="./media/how-to-configure-monitor-site/create-alert-monitor.png" alt-text="Screenshot that shows the Create and Alert rules actions on the Alerts page."::: --Whether you use existing alert rules or create new ones, once a rule is in place for resources supported by site manager, any alerts triggered on those resources are visible on the site's overview tab. --## Next steps --To learn how to view alerts triggered from Azure Monitor for supported resources within site manager, see [How to view alerts in site manager](./how-to-view-alerts.md). |
azure-arc | How To Crud Site | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/how-to-crud-site.md | - Title: "How to create and manage an Azure Arc site" -description: "Describes how to create, view, delete, or modify an Azure Arc site in the Azure portal using site manager." ----- Previously updated : 04/18/2024--#customer intent: As a site admin, I want to know how to create, delete, and modify sites so that I can manage my site. ----# Create and manage sites --This article guides you through how to create, modify, and delete a site using Azure Arc site manager (preview). --## Prerequisites --* An Azure subscription. If you don't have a service subscription, create a [free trial account in Azure](https://azure.microsoft.com/free/). -* Azure portal access -* Internet connectivity -* A resource group or subscription in Azure with at least one resource for a site. For more information, see [Supported resource types](./overview.md#supported-resource-types). --## Open Azure Arc site manager --In the [Azure portal](https://portal.azure.com), search for and select **Azure Arc**. Select **Site manager (preview)** from the Azure Arc navigation menu. ---Alternatively, you can also search for Azure Arc site manager directly in the Azure portal using terms such as **site**, **Arc Site**, **site manager** and so on. --## Create a site --Create a site to manage geographically related resources. --1. From the main **Site manager** page in **Azure Arc**, select the blue **Create a site** button. -- :::image type="content" source="./media/how-to-crud-site/create-a-site-button.png" alt-text="Screenshot that shows creating a site from the site manager overview."::: --1. Provide the following information about your site: -- | Parameter | Description | - |--|--| - | **Site name** | Custom name for site. | - | **Display name** | Custom display name for site. | - | **Site scope** | Either **Subscription** or **Resource group**. The scope can only be defined at the time of creating a site and can't be modified later. All the resources in the scope can be viewed and managed from site manager. | - | **Subscription** | Subscription for the site to be created under. | - | **Resource group** | The resource group for the site, if the scope was set to resource group. | - | **Address** | Physical address for a site. | --1. Once these details are provided, select **Review + create**. -- :::image type="content" source="./media/how-to-crud-site/create-a-site-page-los-angeles.png" alt-text="Screenshot that shows all the site details filled in to create a site and then select review + create."::: --1. On the summary page, review and confirm the site details then select **Create** to create your site. -- :::image type="content" source="./media/how-to-crud-site/final-create-screen-arc-site.png" alt-text="Screenshot that shows the validation and review page for a new site and then select create."::: --If a site is created from a resource group or subscription that contains resources that are supported by site, these resources will automatically be visible within the created site. --## View and modify a site --Once you create a site, you can access it and its managed resources through site manager. --1. From the main **Site manager** page in **Azure Arc**, select **Sites** to view all existing sites. -- :::image type="content" source="./media/how-to-crud-site/sites-button-from-site-manager.png" alt-text="Screenshot that shows selecting Sites to view all sites."::: --1. 
On the **Sites** page, you can view all existing sites. Select the name of the site that you want to view or modify. -- :::image type="content" source="./media/how-to-crud-site/los-angeles-site-select.png" alt-text="Screenshot that shows selecting a site to manage from the list of sites."::: --1. On a specific site's resource page, you can: -- * View resources - * Modify resources (modifications affect the resources elsewhere as well) - * View connectivity status - * View update status - * View alerts - * Add new resources --Currently, only some aspects of a site can be modified. These are as follows: --| Site attribute | Supported modification | -|--|--| -| Display name | Update the display name of a site to a new unique name. | -| Address | Update the address of a site to an existing or new address. | --## Delete a site --Deleting a site doesn't affect the resources, resource group, or subscription in its scope. After a site is deleted, the resources of that site will still exist but can't be viewed or managed from site manager. You can create a new site for the resource group or the subscription after the original site is deleted. --1. From the main **Site manager** page in **Azure Arc**, select **Sites** to view all existing sites. --1. On the **Sites** page, you can view all existing sites. Select the name of the site that you want to delete. --1. On the site's resource page, select **Delete**. -- :::image type="content" source="./media/how-to-crud-site/los-angeles-site-main-page-delete.png" alt-text="Screenshot that shows selecting Delete on the details page of a site."::: |
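Because a site automatically picks up supported resources in its scope, it can help to check what actually exists in that scope before or after creating the site. The following Azure CLI sketch lists the resources in a resource group with their types; the resource group name is a placeholder.

```azurecli
# List the resources in a site's scope (here, a resource group) together with
# their types, so you can compare them against the supported resource types.
az resource list \
  --resource-group "LA_10001" \
  --query "[].{name:name, type:type, location:location}" \
  --output table
```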
azure-arc | How To View Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/how-to-view-alerts.md | - Title: "How to view alerts for a site" -description: "How to view and create alerts for a site" ----- Previously updated : 04/18/2024----# How to view alert status for an Azure Arc site --This article details how to view the alert status for an Azure Arc site. A site's alert status reflects the status of the underlying resources. From the site status view, you can find detailed status information for the supported resources as well. --## Prerequisites --* An Azure subscription. If you don't have a service subscription, create a [free trial account in Azure](https://azure.microsoft.com/free/). -* Azure portal access -* Internet connectivity -* A resource group or subscription in Azure with at least one resource for a site. For more information, see [Supported resource types](./overview.md#supported-resource-types). -* A site created for the associated resource group or subscription. If you don't have one, see [Create and manage sites](./how-to-crud-site.md). --## Alert status colors and meanings --In the Azure portal, status is indicated using color. --* Green: **Up to Date** -* Blue: **Info** -* Purple: **Verbose** -* Yellow: **Warning** -* Orange: **Error** -* Red: **Critical** --## View alert status --View alert status for an Arc site from the main page of Azure Arc site manager (preview). --1. From the [Azure portal](https://portal.azure.com), navigate to **Azure Arc** and select **Site manager (preview)** to open site manager. --1. From Azure Arc site manager, navigate to the **Overview** page. -- :::image type="content" source="./media/how-to-view-alerts/overview-sites-page.png" alt-text="Screenshot that shows selecting the overview page from site manager."::: --1. On the **Overview** page, you can view the summarized alert statuses of all sites. This site-level alert status is an aggregation of all the alert statuses of the resources in that site. In the following example, sites are shown with different statuses. -- :::image type="content" source="./media/how-to-view-alerts/site-manager-overview-alerts.png" alt-text="Screenshot that shows viewing the alert status on the site manager overview page." lightbox="./media/how-to-view-alerts/site-manager-overview-alerts.png"::: --1. To understand which site has which status, select either the **Sites** tab or the blue status text. -- :::image type="content" source="./media/how-to-view-alerts/site-manager-overview-alerts-details.png" alt-text="Screenshot of site manager overview page directing to the sites page to view more details." lightbox="./media/how-to-view-alerts/site-manager-overview-alerts-details.png"::: --1. The **sites** page shows the top-level status for each site, which reflects the most significant status for the site. -- :::image type="content" source="./media/how-to-view-alerts/site-manager-overview-alerts-details-status-site-page.png" alt-text="Screenshot that shows the top level alerts status for each site." lightbox="./media/how-to-view-alerts/site-manager-overview-alerts-details-status-site-page.png"::: --1. If there's an alert, select the status text to open details for a given site. You can also select the name of the site to open its details. --1. On a site's resource page, you can view the alert status for each resource within the site, including the resource responsible for the top-level most significant status. 
-- :::image type="content" source="./media/how-to-view-alerts/site-manager-overview-alerts-details-status-los-angeles.png" alt-text="Screenshot that shows the site detail page with alert status for each resource." lightbox="./media/how-to-view-alerts/site-manager-overview-alerts-details-status-los-angeles.png"::: |
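If you prefer to check alert coverage outside the portal, a quick Azure CLI sketch like the following lists the metric alert rules defined in a site's resource group; the resource group name is a placeholder. It shows only the rules, not whether they have fired, which is what site manager aggregates.

```azurecli
# List metric alert rules defined in the resource group backing a site.
az monitor metrics alert list --resource-group "LA_10001" --output table
```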
azure-arc | How To View Connectivity Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/how-to-view-connectivity-status.md | - Title: "How to view connectivity status" -description: "How to view the connectivity status of an Arc Site and all of its managed resources through the Azure portal." ----- Previously updated : 04/18/2024--# As a site admin, I want to know how to view update status so that I can use my site. ---# How to view connectivity status for an Arc site --This article details how to view the connectivity status for an Arc site. A site's connectivity status reflects the status of the underlying resources. From the site status view, you can find detailed status information for the supported resources as well. --## Prerequisites --* An Azure subscription. If you don't have a service subscription, create a [free trial account in Azure](https://azure.microsoft.com/free/). -* Azure portal access -* Internet connectivity -* A resource group or subscription in Azure with at least one resource for a site. For more information, see [Supported resource types](./overview.md#supported-resource-types). -* A site created for the associated resource group or subscription. If you don't have one, see [Create and manage sites](./how-to-crud-site.md). --## Connectivity status colors and meanings --In the Azure portal, status is indicated using color. --* Green: **Connected** -* Yellow: **Not Connected Recently** -* Red: **Needs Attention** --## View connectivity status --You can view connectivity status for an Arc site as a whole from the main page of Azure Arc site manager (preview). --1. From the [Azure portal](https://portal.azure.com), navigate to **Azure Arc** and select **Site manager (preview)** to open site manager. --1. From Azure Arc site manager, navigate to the **Overview** page. -- :::image type="content" source="./media/how-to-view-connectivity-status/overview-sites-page.png" alt-text="Screenshot that shows selecting the Overview page in site manager."::: --1. On the **Overview** page, you can see a summary of the connectivity statuses of all your sites. The connectivity status of a given site is an aggregation of the connectivity status of its resources. In the following example, sites are shown with different statuses. -- :::image type="content" source="./media/how-to-view-connectivity-status/site-connection-overview.png" alt-text="Screenshot that shows the connectivity view in the sites overview page." lightbox="./media/how-to-view-connectivity-status/site-connection-overview.png"::: --1. To understand which site has which status, select either the **sites** tab or the blue colored status text to be directed to the **sites** page. -- :::image type="content" source="./media/how-to-view-connectivity-status/click-connectivity-status-site-details.png" alt-text="Screenshot that shows selecting the Sites tab to get more detail about connectivity status." lightbox="./media/how-to-view-connectivity-status/click-connectivity-status-site-details.png"::: --1. On the **Sites** page, you can view the top-level status for each site. This site-level status reflects the most significant resource-level status for the site. --1. Select the **Needs attention** link to view the resource details. -- :::image type="content" source="./media/how-to-view-connectivity-status/site-connectivity-status-from-sites-page.png" alt-text="Screenshot that shows selecting the connectivity status for a site to see the resource details." 
lightbox="./media/how-to-view-connectivity-status/site-connectivity-status-from-sites-page.png"::: --1. On the site's resource page, you can view the connectivity status for each resource within the site, including the resource responsible for the top-level most significant status. -- :::image type="content" source="./media/how-to-view-connectivity-status/los-angeles-resource-status-connectivity.png" alt-text="Screenshot that shows using the site details page to identify resources with connectivity issues." lightbox="./media/how-to-view-connectivity-status/los-angeles-resource-status-connectivity.png"::: |
azure-arc | How To View Update Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/how-to-view-update-status.md | - Title: "How to view update status for site" -description: "How to view update status for site" ----- Previously updated : 04/18/2024--# As a site admin, I want to know how to view update status for sites so that I can use my site. ----# How to view update status for an Arc site --This article details how to view update status for an Arc site. A site's update status reflects the status of the underlying resources. From the site status view, you can find detailed status information for the supported resources as well. --## Prerequisites --* An Azure subscription. If you don't have a service subscription, create a [free trial account in Azure](https://azure.microsoft.com/free/). -* Azure portal access -* Internet connectivity -* A resource group or subscription in Azure with at least one resource for a site. For more information, see [Supported resource types](./overview.md#supported-resource-types). -* A site created for the associated resource group or subscription. If you don't have one, see [Create and manage sites](./how-to-crud-site.md). --## Update status colors and meanings --In the Azure portal, status is indicated using color. --* Green: **Up to Date** -* Blue: **Update Available** -* Yellow: **Update In Progress** -* Red: **Needs Attention** --This update status comes from the resources within each site and is provided by Azure Update Manager. --## View update status --You can view update status for an Arc site as a whole from the main page of Azure Arc site manager (preview). --1. From the [Azure portal](https://portal.azure.com), navigate to **Azure Arc** and select **Site manager (preview)** to open site manager. --1. From Azure Arc site manager, navigate to the **Overview** page. -- :::image type="content" source="./media/how-to-view-update-status/overview-sites-page.png" alt-text="Screenshot that shows selecting the Overview page in site manager."::: --1. On the **Overview** page, you can view the summarized update statuses of your sites. This site-level status is aggregated from the statuses of its managed resources. In the following example, sites are shown with different statuses. -- :::image type="content" source="./media/how-to-view-update-status/site-manager-update-status-overview-page.png" alt-text="Screenshot that shows the update status summary on the view page." lightbox="./media/how-to-view-update-status/site-manager-update-status-overview-page.png"::: --1. To understand which site has which status, select either the **sites** tab or the blue colored status text to be directed to the **sites** page. -- :::image type="content" source="./media/how-to-view-update-status/click-update-status-site-details.png" alt-text="Screenshot that shows selecting the Sites tab to get more detail about update status." lightbox="./media/how-to-view-update-status/click-update-status-site-details.png"::: --1. On the **Sites** page, you can view the top-level status for each site. This site-level status reflects the most significant resource-level status for the site. --1. Select the **Needs attention** link to view the resource details. -- :::image type="content" source="./media/how-to-view-update-status/site-update-status-from-sites-page.png" alt-text="Screenshot that shows selecting the update status for a site to see the resource details." 
lightbox="./media/how-to-view-update-status/site-update-status-from-sites-page.png" ::: --1. On the site's resource page, you can view the update status for each resource within the site, including the resource responsible for the top-level most significant status. -- :::image type="content" source="./media/how-to-view-update-status/los-angeles-resource-status-updates.png" alt-text="Screenshot that shows using the site details page to identify resources with pending or in progress updates." lightbox="./media/how-to-view-update-status/los-angeles-resource-status-updates.png" ::: |
azure-arc | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/known-issues.md | - Title: Known issues -description: "Known issues in site manager" ----- Previously updated : 04/18/2024--#customer intent: As a customer, I want to understand how to resolve known issues I experience in site manager. -----# Known issues in Azure Arc site manager (preview) --This article lists the known issues in Azure Arc site manager and, when applicable, their workarounds. --This page is updated continuously as new known issues are discovered. --> [!IMPORTANT] -> Azure Arc site manager is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Known issues --|Feature |Issue |Workaround | -|||| -| Filtering | When you select sites with connectivity issues, you can't filter the sites list view to show only those with connectivity issues. A similar issue exists at the resource level. | This is a known issue with no current workaround. | -| Microsoft.Edge Resource Provider | A "Not Registered" error occurs. If a user doesn't have the right permissions to register the Microsoft.Edge resource provider, they run into issues with the monitoring areas within sites. | Request that your subscription administrator register the Microsoft.Edge resource provider. | -| Site Creation | During site creation, the resource group is greyed out and can't be selected. | This is by design; a resource group can't be associated with more than one site. It indicates that the resource group is already associated with a site. Locate that site and make the desired changes to it. | -| Site Creation | Error: "Site already associated with subscription scope" occurs during site creation. | This is by design; a subscription can't be associated with more than one site. It indicates that the subscription is already associated with a site. Locate that site and make the desired changes to it. | -| Sites tab view | In the sites tab view, a resource isn't visible. | Ensure that the resource is a supported resource type for sites. This issue usually occurs because the resource type isn't currently supported for sites. | -| Site manager | Site manager isn't displayed, isn't searchable, or isn't visible anywhere in the Azure portal. | Check the URL being used in the Azure portal; extra text in the URL can prevent site manager from displaying or being searchable. Restart your Azure portal session and ensure your URL doesn't have any extra text. | -| Resource status in site manager | Connectivity, alerts, and/or update status aren't showing. | Site manager can't display status for resources in the following regions: Brazil South, UAE North, South Africa North. | -- |
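For the Microsoft.Edge resource provider issue above, a subscription administrator can register the provider and confirm its state with the Azure CLI, for example:

```azurecli
# Register the Microsoft.Edge resource provider on the current subscription,
# then confirm that the registration completed.
az provider register --namespace Microsoft.Edge
az provider show --namespace Microsoft.Edge --query registrationState --output tsv
```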
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/overview.md | - Title: "What is Azure Arc site manager (preview)" -description: "Describes how you can use Azure Arc sites and site manager to monitor and manage physical and logical resources, focused on edge scenarios." ----- Previously updated : 04/18/2024-----# What is Azure Arc site manager (preview)? --Azure Arc site manager allows you to manage and monitor your on-premises environments as Azure Arc *sites*. Arc sites are scoped to an Azure resource group or subscription and enable you to track connectivity, alerts, and updates across your environment. The experience is tailored for on-premises scenarios where infrastructure is often managed within a common physical boundary, such as a store, restaurant, or factory. --> [!IMPORTANT] -> Azure Arc site manager is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Set an Arc site scope --When you create a site, you scope it to either a resource group or a subscription. The site automatically pulls in any supported resources within its scope. --Arc sites currently have a 1:1 relationship with resource groups and subscriptions. Any given Arc site can only be associated to one resource group or subscription, and vice versa. --You can create a hierarchy of sites by creating one site for a subscription and more sites for the resource groups within the subscription. The following screenshot shows an example of a hierarchy, with sites for **Los Angeles**, **San Francisco**, and **New York** nested within the site **United States**. ---With site manager, customers who manage on-premises infrastructure can view resources based on their physical site or location. Sites don't logically have to be associated with a physical grouping. You can use sites in whatever way supports your scenario. For example, you could create a site that groups resources by function or type rather than location. 
--## Supported resource types --Currently, site manager supports the following Azure resources with the following capabilities: --| Resource | Inventory | Connectivity status | Updates | Alerts | -| -- | | - | - | | -| Azure Stack HCI | ![Checkmark icon - Inventory status supported for Azure Stack HCI.](./media/overview/yes-icon.svg) | ![Checkmark icon - Connectivity status supported for Azure Stack HCI.](./media/overview/yes-icon.svg) | ![Checkmark icon - Updates supported for Azure Stack HCI.](./media/overview/yes-icon.svg) (Minimum OS required: HCI 23H2) | ![Checkmark icon - Alerts supported for Azure Stack HCI.](./media/overview/yes-icon.svg) | -| Arc-enabled Servers | ![Checkmark icon - Inventory status supported for Arc for Servers.](./media/overview/yes-icon.svg) | ![Checkmark icon - Connectivity status supported for Arc for Servers.](./media/overview/yes-icon.svg) | ![Checkmark icon - Updates supported for Arc for Servers.](./media/overview/yes-icon.svg) | ![Checkmark icon - Alerts supported for Arc for Servers.](./media/overview/yes-icon.svg) | -| Arc-enabled VMs | ![Checkmark icon - Inventory status supported for Arc VMs.](./media/overview/yes-icon.svg) | ![Checkmark icon - Connectivity status supported for Arc VMs.](./media/overview/yes-icon.svg) | ![Checkmark icon - Update status supported for Arc VMs.](./media/overview/yes-icon.svg) | ![Checkmark icon - Alerts supported for Arc VMs.](./media/overview/yes-icon.svg) | -| Arc-enabled Kubernetes | ![Checkmark icon - Inventory status supported for Arc enabled Kubernetes.](./media/overview/yes-icon.svg) | ![Checkmark icon - Connectivity status supported for Arc enabled Kubernetes.](./media/overview/yes-icon.svg) | | ![Checkmark icon - Alerts supported for Arc enabled Kubernetes.](./media/overview/yes-icon.svg) | -| Azure Kubernetes Service (AKS) hybrid | ![Checkmark icon - Inventory status supported for AKS.](./media/overview/yes-icon.svg) | ![Checkmark icon - Connectivity status supported for AKS.](./media/overview/yes-icon.svg) | ![Checkmark icon - Update status supported for AKS.](./media/overview/yes-icon.svg) (only provisioned clusters) | ![Checkmark icon - Alerts supported for AKS.](./media/overview/yes-icon.svg) | -| Assets | ![Checkmark icon - Inventory status supported for Assets.](./media/overview/yes-icon.svg) | | | | --Site manager only provides status aggregation for the supported resource types. Site manager doesn't manage resources of other types that exist in the resource group or subscription, but those resources continue to function normally otherwise. --## Regions --Site manager supports resources that exist in [supported regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc®ions=all), with a few exceptions. For the following regions, connectivity and update status aren't supported for Arc-enabled machines or Arc-enabled Kubernetes clusters: --* Brazil South -* UAE North -* South Africa North --## Pricing --Site manager is free to use, but integrates with other Azure services that have their own pricing models. For your managed resources and monitoring configuration, including Azure Monitor alerts, refer to the individual service's pricing page. --## Next steps --[Quickstart: Create a site in Azure Arc site manager (preview)](./quickstart.md) |
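To check whether any of your Arc-enabled machines or Kubernetes clusters sit in the regions with limited status support listed above, you can list them by location. A simple Azure CLI sketch:

```azurecli
# List Arc-enabled machines and connected Kubernetes clusters with their regions.
az resource list --resource-type "Microsoft.HybridCompute/machines" --query "[].{name:name, location:location}" --output table
az resource list --resource-type "Microsoft.Kubernetes/connectedClusters" --query "[].{name:name, location:location}" --output table
```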
azure-arc | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/quickstart.md | - Title: "Quickstart: Create an Arc site" -description: "Describes how to create an Arc site" ----- Previously updated : 04/18/2024--#customer intent: As a admin who manages my sites as resource groups in Azure, I want to represent them as Arc sites and so that I can benefit from logical representation and extended functionality in Arc for my resources under my resource groups. --- -# Quickstart: Create a site in Azure Arc site manager (preview) --In this quickstart, you will create an Azure Arc site for resources grouped within a single resource group. Once you create your first Arc site, you're ready to view your resources within Arc and take actions on the resources, such as viewing inventory, connectivity status, updates, and alerts. --## Prerequisites --* An Azure subscription. If you don't have a service subscription, create a [free trial account in Azure](https://azure.microsoft.com/free/). -* Azure portal access -* Internet connectivity -* At least one supported resource in your Azure subscription or a resource group. For more information, see [Supported resource types](./overview.md#supported-resource-types). -- >[!TIP] - >We recommend that you give the resource group a name that represents the real site function. For the example in this article, the resource group is named **LA_10001** to reflect resources in Los Angeles. --## Create a site --Create a site to manage geographically related resources. --1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Arc**. Select **Site manager (preview)** from the Azure Arc navigation menu. -- :::image type="content" source="./media/quickstart/arc-portal-main.png" alt-text="Screenshot that shows selecting Site manager from the Azure Arc overview."::: --1. From the main **Site manager** page in **Azure Arc**, select the blue **Create a site** button. -- :::image type="content" source="./media/quickstart/create-a-site-button.png" alt-text="Screenshot that shows creating a site from the site manager overview."::: --1. Provide the following information about your site: -- | Parameter | Description | - |--|--| - | **Site name** | Custom name for site. | - | **Display name** | Custom display name for site. | - | **Site scope** | Either **Subscription** or **Resource group**. The scope can only be defined at the time of creating a site and can't be modified later. All the resources in the scope can be viewed and managed from site manager. | - | **Subscription** | Subscription for the site to be created under. | - | **Resource group** | The resource group for the site, if the scope was set to resource group. | - | **Address** | Physical address for a site. | --1. Once these details are provided, select **Review + create**. -- :::image type="content" source="./media/quickstart/create-a-site-page-los-angeles.png" alt-text="Screenshot that shows all the site details filled in to create a site and then select review + create."::: --1. On the summary page, review and confirm the site details then select **Create** to create your site. -- :::image type="content" source="./media/quickstart/final-create-screen-arc-site.png" alt-text="Screenshot that shows the validation and review page for a new site and then select create."::: --## View your new site --Once you create a site, you can access it and its managed resources through site manager. --1. 
From the main **Site manager (preview)** page in **Azure Arc**, select **Sites** to view all existing sites. -- :::image type="content" source="./media/quickstart/sites-button-from-site-manager.png" alt-text="Screenshot that shows selecting Sites to view all sites."::: --1. On the **Sites** page, you can view all existing sites. Select the name of the site that you created. -- :::image type="content" source="./media/quickstart/los-angeles-site-select.png" alt-text="Screenshot that shows selecting a site to manage from the list of sites."::: --1. On a specific site's resource page, you can: -- * View resources - * Modify resources (modifications affect the resources elsewhere as well) - * View connectivity status - * View update status - * View alerts - * Add new resources --## Delete your site --You can delete a site from within the site's resource details page. ---Deleting a site doesn't affect the resources or the resource group and subscription in its scope. After a site is deleted, the resources of that site can't be viewed or managed from site manager. --A new site can be created for the resource group or the subscription after the original site is deleted. |
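If you're starting from scratch, you can pre-create the resource group that will back the site, named after the physical location as the tip in the quickstart suggests. The name and region below are examples only.

```azurecli
# Create a resource group whose name reflects the physical site.
az group create --name "LA_10001" --location "westus"
```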
azure-arc | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/site-manager/troubleshooting.md | - Title: Troubleshooting -description: "Troubleshooting in site manager" ----- Previously updated : 04/18/2024--#customer intent: As a customer, I want to understand how to resolve known issues I experience in site manager. ----# Troubleshooting in Azure Arc site manager public preview --This article identifies scenarios in Azure Arc site manager that are prone to issues and, when applicable, their troubleshooting steps. --| Scenario | Troubleshooting suggestions | -||| -| Error adding resource to site | Site manager only supports specific resources. For more information, see [Supported resource types](./overview.md#supported-resource-types).<br><br>It also might not be possible to create the resource in the resource group or subscription associated with the site.<br><br>Your permissions might not allow you to modify the resources within the resource group or subscription associated with the site. Work with your admin to ensure your permissions are correct and try again. | -| Permissions error (role-based access control, or RBAC) | Ensure that you have the correct permissions to create new sites under your subscription or resource group. Work with your admin to confirm that you have permission to create sites. | -| Resource not visible in site | It's likely that the resource isn't supported by site manager. For more information, see [Supported resource types](./overview.md#supported-resource-types). | -| A site page, the overview page, or the get started page in site manager isn't loading or isn't showing any information | 1. Check the URL being used in the Azure portal; extra text in the URL can prevent site manager or pages within site manager from displaying or being searchable. Restart your Azure portal session and ensure your URL doesn't have any extra text.<br><br>2. Ensure that your subscription or resource group is within a region that is supported. For more information, see [supported regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all). | - |
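For the RBAC scenarios in the table above, it can help to confirm which role assignments you actually hold on the scope that backs the site. A minimal Azure CLI sketch; the user and scope values are placeholders.

```azurecli
# List the role assignments a user holds on the resource group backing a site.
az role assignment list \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/LA_10001" \
  --output table
```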
azure-arc | Administer Arc Scvmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/administer-arc-scvmm.md | - Title: Perform ongoing administration for Arc-enabled System Center Virtual Machine Manager -description: Learn how to perform administrator operations related to Azure Arc-enabled System Center Virtual Machine Manager - Previously updated : 12/04/2023---------# Perform ongoing administration for Arc-enabled System Center Virtual Machine Manager --In this article, you learn how to perform various administrative operations related to Azure Arc-enabled System Center Virtual Machine Manager (SCVMM): --- Upgrade the Azure Arc resource bridge manually-- Update the SCVMM account credentials-- Collect logs from the Arc resource bridge--Each of these operations requires either SSH key to the resource bridge VM or the kubeconfig file that provides access to the Kubernetes cluster on the resource bridge VM. --## Upgrade the Arc resource bridge manually --Azure Arc-enabled SCVMM requires the Arc resource bridge to connect your SCVMM environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates. The Arc resource bridge can be manually upgraded from the SCVMM server. You must meet all upgrade [prerequisites](../resource-bridge/upgrade.md#prerequisites) before attempting to upgrade. The SCVMM server must have the kubeconfig and appliance configuration files stored locally. If the SCVMM account credentials changed after the initial deployment of the resource bridge, [update the new account credentials](administer-arc-scvmm.md#update-the-scvmm-account-credentials-using-a-new-password-or-a-new-scvmm-account-after-onboarding) before attempting manual upgrade. --> [!NOTE] -> The manual upgrade feature is available for resource bridge version 1.0.14 and later. Resource bridges below version 1.0.14 must [perform the recovery option](./disaster-recovery.md) to upgrade to version 1.0.15 or later. --The manual upgrade generally takes between 30-90 minutes, depending on the network speed. The upgrade command takes your Arc resource bridge to the immediate next version, which might not be the latest available version. Multiple upgrades could be needed to reach a [supported version](../resource-bridge/upgrade.md#supported-versions). You can check your resource bridge version by checking the Azure resource of your Arc resource bridge. --To manually upgrade your Arc resource bridge, make sure you've installed the latest `az arcappliance` CLI extension by running the extension upgrade command from the SCVMM server: --```azurecli -az extension add --upgrade --name arcappliance -``` --To manually upgrade your resource bridge, use the following command: --```azurecli -az arcappliance upgrade scvmm --config-file <file path to ARBname-appliance.yaml> -``` --## Update the SCVMM account credentials (using a new password or a new SCVMM account after onboarding) --Azure Arc-enabled SCVMM uses the SCVMM account credentials you provided during the onboarding to communicate with your SCVMM management server. These credentials are only persisted locally on the Arc resource bridge VM. --As part of your security practices, you might need to rotate credentials for your SCVMM accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled SCVMM. 
You can also use the same steps in case you need to use a different SCVMM account after onboarding. You must ensure the new account also has all the [required SCVMM permissions](quickstart-connect-system-center-virtual-machine-manager-to-arc.md#prerequisites). --There are two different sets of credentials stored on the Arc resource bridge. You can use the same account credentials for both. --- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade.-- **Account for SCVMM cluster extension**. This account is used to discover inventory and perform all the VM operations through Azure Arc-enabled SCVMM.--To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands. Run the commands from a workstation that can locally access the cluster configuration IP address of the Arc resource bridge: --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance update-infracredentials scvmm --kubeconfig kubeconfig -``` -For more information on the commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials scvmm`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-scvmm). ---To update the credentials used by the SCVMM cluster extension on the resource bridge, run the following command. It can be run from anywhere with the `connectedscvmm` CLI extension installed. --```azurecli -az connectedscvmm scvmm connect --custom-location <name of the custom location> --location <Azure region> --name <name of the SCVMM resource in Azure> --resource-group <resource group for the SCVMM resource> --username <username for the SCVMM account> --password <password to the SCVMM account> -``` --## Collect logs from the Arc resource bridge --For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs#az-arcappliance-logs-scvmm) command. --To save the logs to a destination folder, run the following commands. These commands need connectivity to the cluster configuration IP address. --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance logs scvmm --kubeconfig kubeconfig --out-dir <path to specified output directory> -``` --If the Kubernetes cluster on the resource bridge isn't in a functional state, you can use the following commands. These commands require connectivity to the IP address of the Azure Arc resource bridge VM via SSH. --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance logs scvmm --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX -``` --## Next steps --- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md).-- [Understand disaster recovery operations for resource bridge](./disaster-recovery.md). |
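As noted in the upgrade section, you can check the resource bridge version from its Azure resource. The following is a sketch using the `arcappliance` extension; the names are placeholders and the property path for the version may vary by extension version.

```azurecli
# Show the Arc resource bridge resource and read its reported version.
az arcappliance show \
  --resource-group "<resource group name>" \
  --name "<name of the appliance>" \
  --query "version" \
  --output tsv
```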
azure-arc | Agent Overview Scvmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/agent-overview-scvmm.md | - Title: Overview of Azure Connected Machine agent to manage Windows and Linux machines -description: This article provides an overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 11/15/2023-----ms. -----# Overview of Azure Connected Machine agent to manage Windows and Linux machines --When you [enable guest management](enable-guest-management-at-scale.md) on SCVMM VMs, the Azure Arc agent is installed on the VMs. The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of the Azure Connected Machine agent. --## Agent components ---The Azure Connected Machine agent package contains several logical components bundled together: --* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity. --* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance. -- Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine: -- * An Azure Policy assignment that targets disconnected machines is unaffected. - * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied. - * Assignments are deleted after 14 days, and aren't reassigned to the machine after the 14-day period. --* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`. -->[!NOTE] -> The [Azure Monitor agent (AMA)](/azure/azure-monitor/agents/azure-monitor-agent-overview) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. --## Agent resources --The following information describes the directories and user accounts used by the Azure Connected Machine agent. --### Windows agent installation details --The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). -Installing the Connected Machine agent for Windows applies the following system-wide configuration changes: --* The installation process creates the following folders during setup. 
-- | Directory | Description | - |--|-| - | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.| - | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.| - | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.| - | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| - | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| - | %SYSTEMDRIVE%\packages | Extension package executables | --* Installing the agent creates the following Windows services on the target machine. -- | Service name | Display name | Process name | Description | - |--|--|--|-| - | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens | - | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. | - | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. | --* Agent installation creates the following virtual service account. -- | Virtual Account | Description | - ||-| - | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. | -- > [!TIP] - > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function. --* Agent installation creates the following local security group. -- | Security group name | Description | - ||-| - | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity | --* Agent installation creates the following environmental variables -- | Name | Default value | Description | - |||| - | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | - | IMDS_ENDPOINT | `http://localhost:40342` | --* There are several log files available for troubleshooting, described in the following table. -- | Log | Description | - |--|-| - | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. | - | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. | - | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. | - | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). | - | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. | --* The process creates the local security group **Hybrid agent extension applications**. --* After uninstalling the agent, the following artifacts remain. 
-- * %ProgramData%\AzureConnectedMachineAgent\Log - * %ProgramData%\AzureConnectedMachineAgent - * %ProgramData%\GuestConfig - * %SystemDrive%\packages --### Linux agent installation details --The Connected Machine agent for Linux is provided in the preferred package format for the distribution (`.rpm` or `.deb`) and is hosted in the Microsoft [package repository](https://packages.microsoft.com/). The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent. --Installing, upgrading, or removing the Connected Machine agent doesn't require you to restart your server. --Installing the Connected Machine agent for Linux applies the following system-wide configuration changes. --* Setup creates the following installation folders. -- | Directory | Description | - |--|-| - | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. | - | /opt/GC_Ext/ | Extension service executables. | - | /opt/GC_Service/ | Guest configuration (policy) service executables. | - | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| - | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| --* Installing the agent creates the following daemons. -- | Service name | Display name | Process name | Description | - |--|--|--|-| - | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.| - | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. | - | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. | --* There are several log files available for troubleshooting, described in the following table. -- | Log | Description | - |--|-| - | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. | - | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. | - | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. | - | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). | - | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. | --* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`. -- | Name | Default value | Description | - |||-| - | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | - | IMDS_ENDPOINT | `http://localhost:40342` | --* After uninstalling the agent, the following artifacts remain. -- * /var/opt/azcmagent - * /var/lib/GuestConfig --## Agent resource governance --The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions: --* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies. -* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. 
The following exceptions apply: -- | Extension type | Operating system | CPU limit | - | -- | - | | - | AzureMonitorLinuxAgent | Linux | 60% | - | AzureMonitorWindowsAgent | Windows | 100% | - | AzureSecurityLinuxAgent | Linux | 30% | - | LinuxOsUpdateExtension | Linux | 60% | - | MDE.Linux | Linux | 60% | - | MicrosoftDnsAgent | Windows | 100% | - | MicrosoftMonitoringAgent | Windows | 60% | - | OmsAgentForLinux | Windows | 60%| --During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources: --| | Windows | Linux | -| | - | -- | -| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% | -| **Memory usage** | 57 MB | 42 MB | --The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. Actual agent performance and resource consumption will vary based on the hardware and software configuration of your servers. --## Instance metadata --Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically: --* Operating system name, type, and version -* Computer name -* Computer manufacturer and model -* Computer fully qualified domain name (FQDN) -* Domain name (if joined to an Active Directory domain) -* Active Directory and DNS fully qualified domain name (FQDN) -* UUID (BIOS ID) -* Connected Machine agent heartbeat -* Connected Machine agent version -* Public key for managed identity -* Policy compliance status and details (if using guest configuration policies) -* SQL Server installed (Boolean value) -* Cluster resource ID (for Azure Stack HCI nodes) -* Hardware manufacturer -* Hardware model -* CPU family, socket, physical core and logical core counts -* Total physical memory -* Serial number -* SMBIOS asset tag -* Cloud provider --The agent requests the following metadata information from Azure: --* Resource location (region) -* Virtual machine ID -* Tags -* Microsoft Entra managed identity certificate -* Guest configuration policy assignments -* Extension requests - install, update, and delete. --> [!NOTE] -> Azure Arc-enabled servers doesn't store/process customer data outside the region the customer deploys the service instance in. --## Next steps --- [Connect your SCVMM server to Azure Arc](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc).-- [Install Arc agent at scale for your SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale).-- [Install Arc agent using a script for SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script). |
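To see what the extension manager described above has actually installed on a given machine, you can list its extensions. This is a minimal sketch assuming the `connectedmachine` Azure CLI extension; the names are placeholders.

```azurecli
# List the VM extensions installed on an Arc-enabled machine.
az connectedmachine extension list \
  --machine-name "<machine-name>" \
  --resource-group "<resource-group-name>" \
  --output table
```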
azure-arc | Create Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/create-virtual-machine.md | - Title: Create a virtual machine on System Center Virtual Machine Manager using Azure Arc -description: This article helps you create a virtual machine using Azure portal. Previously updated : 07/01/2024--ms. -----keywords: "VMM, Arc, Azure" ----# Create a virtual machine on System Center Virtual Machine Manager using Azure Arc --Once your administrator has connected an SCVMM management server to Azure, represented VMM resources such as private clouds, VM templates in Azure, and provided you the required permissions on those resources, you'll be able to create a virtual machine in Azure. --## Prerequisites --- An Azure subscription and resource group where you have *Arc SCVMM VM Contributor* role.-- A cloud resource on which you have *Arc SCVMM Private Cloud Resource User* role.-- A virtual machine template resource on which you have *Arc SCVMM Private Cloud Resource User* role.-- A virtual network resource on which you have *Arc SCVMM Private Cloud Resource User* role.--## How to create a VM in Azure portal --1. Go to Azure portal. -2. You can initiate the creation of a new VM in either of the following two ways: - - Select **Azure Arc** as the service and then select **SCVMM management servers** under **Host environments** from the left blade. Search and select your SCVMM management server. Select **Virtual machines** under **SCVMM inventory** from the left blade and select **Add**. - Or - - Select **Azure Arc** as the service and then select **Machine** under **Azure Arc resources** from the left blade. Select **Add/Create** and select **Create a machine in a connected host environment** from the dropdown. -1. Once the **Create an Azure Arc virtual machine** page opens, under **Basics** > **Project details**, select the **Subscription** and **Resource group** where you want to deploy the VM. -1. Under **Instance details**, provide the following details: - - **Virtual machine name** - Specify the name of the virtual machine. - - **Custom location** - Select the custom location that your administrator has shared with you. - - **Virtual machine kind** - Select **System Center Virtual Machine Manager**. - - **Cloud** - Select the target VMM private cloud. - - **Availability set** - (Optional) Use availability sets to identify virtual machines that you want VMM to keep on separate hosts for improved continuity of service. -1. Under **Template details**, provide the following details: - - **Template** - Choose the VM template for deployment. - - **Override template defaults** - Select the checkbox to override the default CPU cores and memory on the VM templates. - - Specify computer name for the VM if the VM template has computer name associated with it. -1. Keep the **Enable Guest Management** checkbox selected to automatically install Azure connected machine agent immediately after the creation of the VM. [Azure connected machine agent (Arc agent)](../servers/agent-overview.md) is required if you're planning to use Azure management services to govern, patch, monitor, and secure your VM through Azure. -1. Under **Administrator account**, provide the following details and select **Next : Disks >**. - - Username - - Password - - Confirm password -1. Under **Disks**, you can optionally change the disks configured in the template. You can add more disks or update existing disks. -1. 
Under **Networking**, you can optionally change the network interfaces configured in the template. You can add Network interface cards (NICs) or update the existing NICs. You can also change the network that this NIC will be attached to provided you have appropriate permissions to the network resource. -1. Under **Advanced**, enable processor compatibility mode if required. -1. Under **Tags**, you can optionally add tags to the VM resource. -1. Under **Review + create**, review all the properties and select **Create**. The VM will be created in a few minutes. |
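The portal flow above can also be approximated from the command line with the `scvmm` Azure CLI extension. The following is only a sketch from memory: the parameter names and required values may differ, so verify them with `az scvmm vm create --help` before use; all values are placeholders.

```azurecli
# Hypothetical CLI equivalent of creating an SCVMM VM through Azure Arc.
# Verify parameter names with: az scvmm vm create --help
az scvmm vm create \
  --name "<vm-name>" \
  --resource-group "<resource-group-name>" \
  --location "<azure-region>" \
  --custom-location "<custom-location-resource-id>" \
  --cloud "<cloud-resource-id>" \
  --vm-template "<vm-template-resource-id>"
```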
azure-arc | Deliver Esus For System Center Virtual Machine Manager Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/deliver-esus-for-system-center-virtual-machine-manager-vms.md | - Title: Deliver ESUs for SCVMM VMs through Arc -description: Deliver ESUs for SCVMM VMs through Azure Arc. Previously updated : 12/05/2023--ms. -----keywords: "VMM, Arc, Azure" ---# Deliver ESUs for SCVMM VMs through Arc --Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) allows you to enroll all the Windows Server 2012/2012 R2 VMs managed by your SCVMM server in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview) at scale. --ESUs allow you to leverage cost flexibility in the form of pay-as-you-go Azure billing and enhanced delivery experience in the form of built-in inventory and keyless delivery. In addition, ESUs enabled by Azure Arc give you access to Azure management services such as [Azure Update Manager](/azure/update-manager/overview?tabs=azure-vms), [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2), and [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) at no additional cost. --This article provides the steps to procure and deliver ESUs to WS 2012 and 2012 R2 SCVMM VMs onboarded to Azure Arc-enabled SCVMM. -->[!Note] -> - Through Azure Arc-enabled SCVMM, you can procure and deliver ESUs only for the SCVMM managed VMs and not for your hosts. -> - To purchase ESUs, you must have Software Assurance through Volume Licensing Programs such as an Enterprise Agreement (EA), Enterprise Agreement Subscription (EAS), Enrollment for Education Solutions (EES), or Server and Cloud Enrollment (SCE). Alternatively, if your Windows Server 2012/2012 R2 machines are licensed through SPLA or with a Server Subscription, Software Assurance isn't required to purchase ESUs. --## Prerequisites --- The user account must have an Owner/Contributor role in a Resource Group in Azure to create and assign ESUs to SCVMM VMs. -- The SCVMM server managing the WS 2012 and 2012 R2 VMs, for which the ESUs are to be applied, should be [onboarded to Azure Arc](./quickstart-connect-system-center-virtual-machine-manager-to-arc.md). After onboarding, the WS 2012 and 2012 R2 VMs, for which the ESUs are to be applied, should be [Azure-enabled](enable-scvmm-inventory-resources.md) and [guest management enabled](./enable-guest-management-at-scale.md). --## Create Azure Arc ESUs --1. Sign in to the [Azure portal](https://portal.azure.com/). -2. On the **Azure Arc** page, select **Extended Security Updates** in the left pane. Here, you can view and create ESU Licenses and view eligible resources for ESUs. -3. The **Licenses** tab displays Azure Arc WS 2012 licenses that are available. Select an existing license to apply or create a new license. -- :::image type="content" source="media/deliver-esus-for-scvmm-vms/select-or-create-license.png" alt-text="Screenshot of how to create a new license." lightbox="media/deliver-esus-for-scvmm-vms/select-or-create-license.png"::: --4. To create a new WS 2012 license, select **Create**, and then provide the information required to configure the license on the page. 
For detailed information on how to complete this step, see [License provisioning guidelines for Extended Security Updates for Windows Server 2012](../servers/license-extended-security-updates.md). -5. Review the information provided and select **Create**. The license you created appears in the list, and you can link it to one or more Arc-enabled SCVMM VMs by following the steps in the next section. -- :::image type="content" source="media/deliver-esus-for-scvmm-vms/new-license-created.png" alt-text="Screenshot showing the successful creation of a new license." lightbox="media/deliver-esus-for-scvmm-vms/new-license-created.png"::: --## Link ESU licenses to Arc-enabled SCVMM VMs --You can select one or more Arc-enabled SCVMM VMs to link to an ESU license. Once you've linked a VM to an activated ESU license, the VM is eligible to receive Windows Server 2012 and 2012 R2 ESUs. -->[!Note] -> You have the flexibility to configure your patching solution of choice to receive these updates – whether it's [Azure Update Manager](/azure/update-center/overview), [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus), Microsoft Updates, [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), or a third-party patch management solution. --1. Select the **Eligible Resources** tab to view a list of all your Arc-enabled server machines running Windows Server 2012 and 2012 R2, including SCVMM machines that are guest management enabled. The **ESUs status** column indicates whether the machine is ESUs enabled. - - :::image type="content" source="media/deliver-esus-for-scvmm-vms/view-arc-enabled-machines.png" alt-text="Screenshot of arc-enabled server machines running Windows Server 2012 and 2012 R2 under the eligible resources tab." lightbox="media/deliver-esus-for-scvmm-vms/view-arc-enabled-machines.png"::: --2. To enable ESUs for one or more machines, select them in the list, and then select **Enable ESUs**. -3. On the **Enable Extended Security Updates** page, you can see the number of machines selected to enable ESUs and the WS 2012 licenses available to apply. Select a license to link to the selected machine(s) and select **Enable**. -- :::image type="content" source="media/deliver-esus-for-scvmm-vms/enable-license.png" alt-text="Screenshot of how to select and enable license." lightbox="media/deliver-esus-for-scvmm-vms/enable-license.png"::: --4. The **ESUs status** column value of the selected machines changes to **Enabled**. -- >[!Note] - > - See [Troubleshoot delivery of Extended Security Updates for Windows Server 2012](../servers/troubleshoot-extended-security-updates.md) to troubleshoot any problems that occur during the enablement process.<br> - > - Review the [additional scenarios](../servers/deliver-extended-security-updates.md#additional-scenarios) in which you may be eligible to receive ESU patches at no additional cost. --## Next steps --[Programmatically deploy and manage Azure Arc Extended Security Updates licenses](../servers/api-extended-security-updates.md). |
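The portal flow above can also be scripted. The following is a minimal sketch of creating an activated WS 2012 ESU license with `az rest`; the `Microsoft.HybridCompute/licenses` resource type, the API version, and the license property values shown are assumptions, so verify them against the programmatic ESU article linked under **Next steps** before use.

```azurecli
# Sketch only: create an activated WS 2012 ESU license through the ARM REST API.
# The resource type, API version, and license properties below are assumptions;
# confirm them in the programmatic ESU guidance before running.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.HybridCompute/licenses/<licenseName>?api-version=2023-10-03-preview" \
  --body '{
    "location": "<azure-region>",
    "properties": {
      "licenseType": "ESU",
      "licenseDetails": {
        "state": "Activated",
        "target": "Windows Server 2012",
        "edition": "Standard",
        "type": "vCore",
        "processors": 8
      }
    }
  }'
```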
azure-arc | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/disaster-recovery.md | - Title: Recover from accidental deletion of resource bridge VM -description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled System Center Virtual Machine Manager disaster scenarios. -- Previously updated : 12/28/2023-ms. -------# Recover from accidental deletion of resource bridge virtual machine --In this article, you learn how to recover the Azure Arc resource bridge connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail. --## Recover the Arc resource bridge in case of virtual machine deletion --To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps. -->[!Note] -> This note is applicable only if you're performing this recovery operation to upgrade your Arc resource bridge.<br><br> -> If you have VMs that are still in the older version, i.e., have *Enabled (Deprecated)* set under the *Virtual hardware operations* column in the Virtual Machines inventory of your SCVMM server in Azure, switch them to the new version by following the steps in [this article](./switch-to-the-new-version-scvmm.md#switch-to-the-new-version-existing-customer) before proceeding with the steps for resource bridge recovery. -->[!Note] -> DHCP-based Arc Resource Bridge deployment is no longer supported.<br><br> -If you had deployed Arc Resource Bridge earlier using DHCP, you must clean up your deployment by removing your resources from Azure and do a [fresh onboarding](./quickstart-connect-system-center-virtual-machine-manager-to-arc.md). -> -## Prerequisites --1. The disaster recovery script must be run from the same folder where the config (.yaml) files are present. The config files are present on the machine used to run the script to deploy Arc resource bridge. --1. The machine being used to run the script must have bidirectional connectivity to the Arc resource bridge VM on port 6443 (Kubernetes API server) and 22 (SSH), and outbound connectivity to the Arc resource bridge VM on port 443 (HTTPS). ---### Recover Arc resource bridge from a Windows machine --1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and SCVMM management server Azure resources. --2. Download [this script](https://download.microsoft.com/download/a/a/8/aa8687e4-1a30-485f-9de4-4f15fc576724/arcvmm-windows-dr.ps1) and update the following section in the script using the same information as the original resources in Azure. -- ```powershell - $location = <Azure region of the original Arc resource bridge> - $applianceSubscriptionId = <subscription-id> - $applianceResourceGroupName = <resource-group-name> - $applianceName = <resource-bridge-name> -- $customLocationSubscriptionId = <subscription-id> - $customLocationResourceGroupName = <resource-group-name> - $customLocationName = <custom-location-name> -- $vmmserverSubscriptionId = <subscription-id> - $vmmserverResourceGroupName = <resource-group-name> - $vmmserverName= <SCVMM-name-in-azure> - ``` - -3. Run the updated script from the same location where the config YAML files are stored after the initial onboarding. 
This is most likely the same folder from where you ran the initial onboarding script unless the config files were moved later to a different location. [Provide the inputs](quickstart-connect-system-center-virtual-machine-manager-to-arc.md#script-runtime) as prompted. --4. Once the script is run successfully, the old Resource Bridge is recovered, and the connection is re-established to the existing Azure-enabled SCVMM resources. --## Next steps --[Troubleshoot Azure Arc resource bridge issues](../resource-bridge/troubleshoot-resource-bridge.md) --If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support: --- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).-- Connect with [@AzureSupport](https://x.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.-- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
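After the recovery script completes, you can optionally confirm from Azure CLI that the resource bridge and custom location resources are healthy again. This is a minimal sketch that assumes the `arcappliance` and `customlocation` CLI extensions are installed; the names are the same values you supplied to the recovery script.

```azurecli
# Sketch: verify the recovered resource bridge and custom location.
# Assumes the 'arcappliance' and 'customlocation' Azure CLI extensions are installed.
az arcappliance show --resource-group <resource-group-name> --name <resource-bridge-name>
az customlocation show --resource-group <resource-group-name> --name <custom-location-name>
```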
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md | - Title: Install Arc agent at scale for your SCVMM VMs -description: Learn how to enable guest management at scale for Arc-enabled SCVMM VMs. ------ Previously updated : 03/27/2024-keywords: "VMM, Arc, Azure" --#Customer intent: As an IT infrastructure admin, I want to install arc agents to use Azure management services for SCVMM VMs. ---# Install Arc agents at scale for Arc-enabled SCVMM VMs --In this article, you learn how to install Arc agents at scale for SCVMM VMs and use Azure management capabilities. -->[!IMPORTANT] ->We recommend maintaining the SCVMM management server and the SCVMM console in the same Long-Term Servicing Channel (LTSC) and Update Rollup (UR) version. -->[!NOTE] ->This article is applicable only if you are running: ->- SCVMM 2022 UR1 or later versions of SCVMM server or console ->- SCVMM 2019 UR5 or later versions of SCVMM server or console ->- VMs running Windows Server 2012 R2, 2016, 2019, 2022, Windows 10, and Windows 11 ->For other SCVMM versions, Linux VMs or Windows VMs running WS 2012 or earlier, [install Arc agents through the script](install-arc-agents-using-script.md). --## Prerequisites --Ensure the following before you install Arc agents at scale for SCVMM VMs: --- The resource bridge must be in a running state.-- The SCVMM management server must be in a connected state.-- The user account must have permissions listed in Azure Arc SCVMM Administrator role.-- All the target machines are:- - Powered on and the resource bridge has network connectivity to the host running the VM. - - Running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). - - Able to connect through the firewall to communicate over the internet and [these URLs](../servers/network-requirements.md?tabs=azure-cloud#urls) aren't blocked. --## Install Arc agents at scale from portal --An admin can install agents for multiple machines from the Azure portal if the machines share the same administrator credentials. --1. Navigate to the **SCVMM management servers** blade on [Azure Arc Center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview), and select the SCVMM management server resource. -2. Select all the machines and choose the **Enable in Azure** option. -3. Select **Enable guest management** checkbox to install Arc agents on the selected machine. -4. If you want to connect the Arc agent via proxy, provide the proxy server details. -5. If you want to connect Arc agent via private endpoint, follow these [steps](../servers/private-link-security.md) to set up Azure private link. -- >[!Note] - > Private endpoint connectivity is only available for Arc agent to Azure communications. For Arc resource bridge to Azure connectivity, Azure Private link isn't supported. --6. Provide the administrator username and password for the machine. -- >[!Note] - > For Windows VMs, the account must be part of the local administrator group; and for Linux VM, it must be a root account. --## Next steps --[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md). |
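After the agents are installed, each guest-managed VM is also represented by an Azure Arc connected machine resource. As a quick spot check, a sketch like the following lists the machines and their connection status; it assumes the `connectedmachine` Azure CLI extension is installed, and the resource group name is a placeholder.

```azurecli
# Sketch: confirm the Arc agents are reporting in after at-scale installation.
# Assumes the 'connectedmachine' Azure CLI extension is installed.
az connectedmachine list --resource-group <resource-group-name> --output table
```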
azure-arc | Enable Scvmm Inventory Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-scvmm-inventory-resources.md | - Title: Enable SCVMM inventory resources in Azure Arc center -description: This article helps you enable SCVMM inventory resources from Azure portal ----- Previously updated : 11/15/2023--keywords: "VMM, Arc, Azure" ---# Enable SCVMM inventory resources from Azure portal --The article describes how you can view SCVMM management servers and enable SCVMM inventory from Azure portal, after connecting to the SCVMM management server. --## View SCVMM management servers --You can view all the connected SCVMM management servers under **SCVMM management servers** in Azure Arc center. ---In the inventory view, you can browse the virtual machines (VMs), VMM clouds, VM network, and VM templates. -Under each inventory, you can select and enable one or more SCVMM resources in Azure to create an Azure resource representing your SCVMM resource. --You can further use the Azure resource to assign permissions or perform management operations. --## Enable SCVMM cloud, VM templates and VM networks in Azure --To enable the SCVMM inventory resources, follow these steps: --1. From Azure home > **Azure Arc** center, go to **SCVMM management servers** blade and go to inventory resources blade. -- :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-server-blade-inline.png" alt-text="Screenshot of how to go to SCVMM management servers blade." lightbox="media/enable-scvmm-inventory-resources/scvmm-server-blade-expanded.png"::: --1. Select the resource(s) you want to enable and select **Enable in Azure**. -- :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-enable-azure-inline.png" alt-text="Screenshot of how to enable in Azure option." lightbox="media/enable-scvmm-inventory-resources/scvmm-enable-azure-expanded.png"::: --1. In **Enable in Azure**, select your **Azure subscription** and **Resource Group** and select **Enable**. -- :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-select-sub-resource-inline.png" alt-text="Screenshot of how to select subscription and resource group." lightbox="media/enable-scvmm-inventory-resources/scvmm-select-sub-resource-expanded.png"::: -- The deployment is initiated and it creates a resource in Azure, representing your SCVMM resources. It allows you to manage the access to these resources through the Azure role-based access control (RBAC) granularly. -- Repeat the above steps for one or more VM networks and VM template resources. --## Enable existing virtual machines in Azure --To enable the existing virtual machines in Azure, follow these steps: --1. From Azure home > **Azure Arc** center, go to **SCVMM management servers** blade and go to inventory resources blade. --1. Go to **SCVMM inventory** resource blade, select **Virtual machines** and then select the VMs you want to enable and select **Enable in Azure**. -- :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-enable-existing-vm-inline.png" alt-text="Screenshot of how to enable existing virtual machines in Azure." lightbox="media/enable-scvmm-inventory-resources/scvmm-enable-existing-vm-expanded.png"::: --1. Select your **Azure subscription** and **Resource group**. --1. Select **Enable** to start the deployment of the VM represented in Azure. 
-->[!NOTE] ->Moving SCVMM resources between Resource Groups and Subscriptions is currently not supported. --## Next steps --[Connect virtual machines to Arc](quickstart-connect-system-center-virtual-machine-manager-to-arc.md) |
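Each inventory item you enable is projected into Azure under the `Microsoft.ScVmm` resource provider (the same namespace used in the resource IDs elsewhere in these articles). As a sketch, you can list the projected resources with the generic `az resource list` command; the specific resource type names shown below are assumptions.

```azurecli
# Sketch: list the Azure resources created when SCVMM inventory items are enabled.
# The Microsoft.ScVmm resource type names below are assumptions; adjust as needed.
az resource list --resource-group <resource-group-name> --resource-type "Microsoft.ScVmm/vmmServers" --output table
az resource list --resource-group <resource-group-name> --resource-type "Microsoft.ScVmm/clouds" --output table
az resource list --resource-group <resource-group-name> --resource-type "Microsoft.ScVmm/virtualMachineTemplates" --output table
```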
azure-arc | Enable Virtual Hardware Scvmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-virtual-hardware-scvmm.md | - Title: Enable virtual hardware and VM CRUD capabilities in an SCVMM machine with Arc agent installed -description: Enable virtual hardware and VM CRUD capabilities in an SCVMM machine with Arc agent installed - Previously updated : 01/05/2024--------# Enable virtual hardware and VM CRUD capabilities in an SCVMM machine with Arc agent installed --In this article, you learn how to enable virtual hardware management and VM CRUD operational ability on an SCVMM VM that has Arc agents installed via the Arc-enabled Servers route. -->[!IMPORTANT] -> This article is applicable only if you've installed Arc agents directly in SCVMM machines before onboarding to Azure Arc-enabled SCVMM by deploying Arc resource bridge. --## Prerequisites --- An Azure subscription and resource group where you have *Arc ScVmm VM Administrator* role. -- Your SCVMM management server instance must be [onboarded](quickstart-connect-system-center-virtual-machine-manager-to-arc.md) to Azure Arc.--## Enable virtual hardware management and self-service access to SCVMM VMs with Arc agent installed --1. From your browser, go to [Azure portal](https://portal.azure.com/). --1. Navigate to the Virtual machines inventory page of your SCVMM management servers. The virtual machines that have Arc agent installed via the Arc-enabled Servers route will have **Link to SCVMM management server** status under virtual hardware management. --1. Select **Link to SCVMM management server** to view the pane with the list of all the machines under SCVMM management server with Arc agent installed but not linked to the SCVMM management server in Azure Arc. --1. Choose all the machines that need to be enabled in Azure, and select **Link** to link the machines to SCVMM management server. --1. After you link to SCVMM management server, the virtual hardware status will reflect as **Enabled** for all the VMs, and you can perform virtual hardware operations. --## Next steps --[Set up and manage self-service access to SCVMM resources](set-up-and-manage-self-service-access-scvmm.md). - |
azure-arc | Install Arc Agents Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script.md | - Title: Install Arc agent using a script for SCVMM VMs -description: Learn how to enable guest management using a script for Arc-enabled SCVMM VMs. - Previously updated : 12/01/2023-------#Customer intent: As an IT infrastructure admin, I want to install arc agents to use Azure management services for SCVMM VMs. ----# Install Arc agents using a script --In this article, you learn how to install Arc agents on Azure-enabled SCVMM VMs using a script. --## Prerequisites --Ensure the following before you install Arc agents using a script for SCVMM VMs: --- The resource bridge must be in a running state.-- The SCVMM management server must be in a connected state.-- The user account must have permissions listed in Azure Arc SCVMM Administrator role.-- The target machine:- - Is powered on and the resource bridge has network connectivity to the host running the VM. - - Is running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). - - Is able to connect through the firewall to communicate over the Internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. - - Has Azure CLI [installed](/cli/azure/install-azure-cli). - - Has the Arc agent installation script downloaded from [here](https://download.microsoft.com/download/7/1/6/7164490e-6d8c-450c-8511-f8191f6ec110/arcscvmm-enable-guest-management.ps1) for a Windows VM or from [here](https://download.microsoft.com/download/0/9/b/09bd9ef4-a7af-49e5-ad5f-9e8f85fae75b/arcscvmm-enable-guest-management.sh) for a Linux VM. -->[!NOTE] ->- If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo`, and add `<username> ALL=(ALL) NOPASSWD:ALL` at the end of the file. Ensure you replace `<username>`. ->- If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. --## Steps to install Arc agents using a script --1. Sign in to the target VM as an administrator. -2. Verify that Azure CLI is installed by running the `az` command from either the Windows Command Prompt or PowerShell. -3. Sign in to your Azure account in Azure CLI using `az login --use-device-code`. -4. Run the downloaded script *arcscvmm-enable-guest-management.ps1* or *arcscvmm-enable-guest-management.sh*, as applicable, using the following commands. The `vmmServerId` parameter should denote your VMM server's ARM ID. -- **For a Windows VM:** -- ```powershell - ./arcscvmm-enable-guest-management.ps1 -vmmServerId '/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.ScVmm/vmmServers/<vmmServerName>' - ``` -- **For a Linux VM:** -- ```bash - ./arcscvmm-enable-guest-management.sh -vmmServerId '/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.ScVmm/vmmServers/<vmmServerName>' - ``` --## Next steps --[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md). |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md | - Title: Overview of the Azure Connected System Center Virtual Machine Manager -description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager. Previously updated : 07/11/2024--ms. -----keywords: "VMM, Arc, Azure" ----# Overview of Arc-enabled System Center Virtual Machine Manager --Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) empowers System Center customers to connect their VMM environment to Azure and perform VM self-service operations from Azure portal. Azure Arc-enabled SCVMM extends the Azure control plane to SCVMM managed infrastructure, enabling the use of Azure security, governance, and management capabilities consistently across System Center managed estate and Azure. --Azure Arc-enabled System Center Virtual Machine Manager also allows you to manage your hybrid environment consistently and perform self-service VM operations through Azure portal. For Microsoft Azure Pack customers, this solution is intended as an alternative to perform VM self-service operations. --Arc-enabled System Center VMM allows you to: --- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on SCVMM managed VMs directly from Azure.-- Empower developers and application teams to self-serve VM operations on demand using [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview).-- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.-- Discover and onboard existing SCVMM managed VMs to Azure.-- Install the Azure connected machine agent at scale on SCVMM VMs to [govern, protect, configure, and monitor them](../servers/overview.md#supported-cloud-operations).--> [!NOTE] -> For more information regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md). --## Onboard resources to Azure management at scale --Azure services such as Microsoft Defender for Cloud, Azure Monitor, Azure Update Manager, and Azure Policy provide a rich set of capabilities to secure, monitor, patch, and govern off-Azure resources via Arc. --By using Arc-enabled SCVMM's capabilities to discover your SCVMM managed estate and install the Arc agent at scale, you can simplify onboarding your entire System Center estate to these services. --## How does it work? --To Arc-enable a System Center VMM management server, deploy [Azure Arc resource bridge](../resource-bridge/overview.md) in the VMM environment. Arc resource bridge is a virtual appliance that connects VMM management server to Azure. Azure Arc resource bridge enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and do various operations on them. --## Architecture --The following image shows the architecture for the Arc-enabled SCVMM: ---## How is Arc-enabled SCVMM different from Arc-enabled Servers --- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. 
Since Arc-enabled servers also support bare-metal machines, there might, in fact, not even be a host hypervisor in some cases.-- Azure Arc-enabled SCVMM is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on an SCVMM VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. Azure Arc-enabled SCVMM also provides guest operating system management, in fact, it uses the same components as Azure Arc-enabled servers.--You have the flexibility to start with either option, and incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience. --### Supported scenarios --The following scenarios are supported in Azure Arc-enabled SCVMM: --- SCVMM administrators can connect a VMM instance to Azure and browse the SCVMM virtual machine inventory in Azure.-- Administrators can use the Azure portal to browse SCVMM inventory and register SCVMM cloud, virtual machines, VM networks, and VM templates into Azure.-- Administrators can provide app teams/developers fine-grained permissions on those SCVMM resources through Azure RBAC.-- App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).-- Administrators can install Arc agents on SCVMM VMs at-scale and install corresponding extensions to use Azure management services like Microsoft Defender for Cloud, Azure Update Manager, Azure Monitor, etc. -->[!NOTE] -> Azure Arc-enabled SCVMM doesn't support VMware vCenter VMs managed by SCVMM. To onboard VMware VMs to Azure Arc, we recommend you to use [Azure Arc-enabled VMware vSphere](../vmware-vsphere/overview.md). --### Supported VMM versions --Azure Arc-enabled SCVMM works with VMM 2019 and 2022 versions and supports SCVMM management servers with a maximum of 15,000 VMs. --### Supported regions --Azure Arc-enabled SCVMM is currently supported in the following regions: --- East US-- East US 2-- West US 2-- West US 3-- Central US-- South Central US-- UK South-- North Europe-- West Europe-- Sweden Central-- Southeast Asia-- Australia East--## Data Residency --Azure Arc-enabled SCVMM doesn't store/process customer data outside the region the customer deploys the service instance in. --## Next steps ---- Plan your Arc-enabled SCVMM deployment by reviewing the [support matrix](support-matrix-for-system-center-virtual-machine-manager.md).-- Once ready, [connect your SCVMM management server to Azure Arc using the onboarding script](quickstart-connect-system-center-virtual-machine-manager-to-arc.md). |
azure-arc | Perform Vm Ops On Scvmm Through Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/perform-vm-ops-on-scvmm-through-azure.md | - Title: Perform VM operations on SCVMM VMs through Azure -description: Learn how to manage SCVMM VMs in Azure through Arc-enabled SCVMM. - Previously updated : 03/12/2024--------# Manage SCVMM VMs in Azure through Arc-enabled SCVMM --In this article, you learn how to perform various operations on the Azure Arc-enabled SCVMM VMs such as: --- Start, stop, and restart a VM--- Control access and add Azure tags--- Add, remove, and update network interfaces--- Add, remove, and update disks and update VM size (CPU cores, memory)--- Enable guest management--- Install extensions (enabling guest management is required). All the [extensions](../servers/manage-vm-extensions.md#extensions) that are available with Arc-enabled Servers are supported.---To perform guest OS operations on Arc-enabled SCVMM VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM. --## Enable guest management --Before you can install an extension, you must enable guest management on the SCVMM VM. --1. Make sure your target machine: -- - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). -- - can connect through the firewall to communicate over the internet and [these URLs](../servers/network-requirements.md#urls) aren't blocked. -- - has SCVMM tools installed and running. -- - is powered on and the resource bridge has network connectivity to the host running the VM. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. Search for and select the SCVMM VM for which you want to enable guest management and select **Configuration**. --3. Select **Enable guest management** and provide the administrator username and password to enable guest management. Then select **Apply**. --For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group. -->[!Note] ->You can install Arc agents at scale on Arc-enabled SCVMM VMs through Azure Portal only if you are running: ->- SCVMM 2022 UR1 or later versions of SCVMM server and console ->- SCVMM 2019 UR5 or later versions of SCVMM server and console ->- VMs running Windows Server 2012 R2, 2016, 2019, 2022, Windows 10, and Windows 11 <br> -> For other SCVMM versions, Linux VMs, or Windows VMs running WS 2012 or earlier, [install Arc agents through the script](./install-arc-agents-using-script.md). --## Delete a VM --If you no longer need the VM, you can delete it. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. Search for and select the VM you want to delete. --3. In the selected VM's Overview page, select **Delete**. --4. When prompted, confirm that you want to delete it. -->[!NOTE] ->This also deletes the VM on your SCVMM managed on-premises host. --## Next steps --[Create a Virtual Machine on SCVMM managed on-premises hosts](./create-virtual-machine.md). |
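Once guest management is enabled, extensions can also be deployed outside the portal. The following is a minimal sketch that installs the Azure Monitor Agent on a guest-managed Windows VM; it assumes the `connectedmachine` Azure CLI extension is installed, and the machine name, resource group, and region are placeholders.

```azurecli
# Sketch: install the Azure Monitor Agent extension on a guest-managed SCVMM VM.
# Assumes the 'connectedmachine' Azure CLI extension is installed.
az connectedmachine extension create \
  --machine-name <vm-name> \
  --resource-group <resource-group-name> \
  --location <azure-region> \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor \
  --type AzureMonitorWindowsAgent
```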
azure-arc | Quickstart Connect System Center Virtual Machine Manager To Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md | - Title: Quickstart for Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) -description: In this Quickstart, you learn how to use the helper script to connect your System Center Virtual Machine Manager management server to Azure Arc. -----ms. - Previously updated : 08/12/2024---# Customer intent: As a VI admin, I want to connect my VMM management server to Azure Arc. ---# Quickstart: Connect your System Center Virtual Machine Manager management server to Azure Arc --Before you can start using the Azure Arc-enabled SCVMM features, you need to connect your VMM management server to Azure Arc. --This Quickstart shows you how to connect your SCVMM management server to Azure Arc using a helper script. The script deploys a lightweight Azure Arc appliance (called Azure Arc resource bridge) as a virtual machine running in your VMM environment and installs an SCVMM cluster extension on it to provide a continuous connection between your VMM management server and Azure Arc. --## Prerequisites -->[!Note] -> - If VMM server is running on Windows Server 2016 machine, ensure that [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) and tar are installed. To install tar, you can copy tar.exe and archiveint.dll from any Windows 11 or Windows Server 2019/2022 machine to *C:\Windows\System32* path on your VMM server machine. -> - If you deploy an older version of appliance (version lesser than 0.2.25), Arc operation fails with the error *Appliance cluster is not deployed with Microsoft Entra ID authentication*. To fix this issue, download the latest version of the onboarding script and deploy the resource bridge again. -> - Azure Arc Resource Bridge deployment using private link is currently not supported. --| **Requirement** | **Details** | -| | | -| **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. | -| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. The supported storage configurations are hybrid storage (flash and HDD) and all-flash storage (SSDs or NVMe). <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported; Dynamic IP allocation using DHCP isn't supported. Static IP allocation can be performed by one of the following approaches:<br><br> 1. **VMM IP Pool**: Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least three IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br> <br> 2. **Custom IP range**: Ensure that your VM network has three continuous free IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP range and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. 
If the VM network is configured with a VLAN, the VLAN ID is required as an input. Azure Arc Resource Bridge requires internal and external DNS resolution to the required sites and the on-premises management machine for the Static gateway IP and the IP address(es) of your DNS server(s) are needed. <br/><br/> A library share with write permission for the SCVMM admin account through which Resource Bridge deployment is going to be performed.| -| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. If the SCVMM server is installed in a High Availability configuration, the user should be a part of the local administrator accounts in all the SCVMM cluster nodes. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc Resource bridge VM. | -| **Workstation** | The workstation will be used to run the helper script. Ensure you have [64-bit Azure CLI installed](/cli/azure/install-azure-cli) on the workstation.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. | --## Prepare SCVMM management server --- Create an SCVMM private cloud if you don't have one. The private cloud should have a reservation of at least 32 GB of RAM and 4 vCPUs. It should also have at least 100 GB of disk space.-- Ensure that SCVMM administrator account has the appropriate permissions.--## Download the onboarding script --1. Go to [Azure portal](https://aka.ms/SCVMM/MgmtServers). -1. Search and select **Azure Arc**. -1. In the **Overview** page, select **Add resources** under **Manage resources across environments**. -- :::image type="content" source="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure.png" alt-text="Screenshot of how to select Add your infrastructure for free." lightbox="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure.png"::: --1. In the **Host environments** section, in **System Center VMM** select **Add**. -- :::image type="content" source="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm.png" alt-text="Screenshot of how to select System Center V M M platform." lightbox="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm.png"::: --1. Select **Create a new resource bridge** and select **Next : Basics >**. -1. Provide a name for **Azure Arc resource bridge**. For example: *contoso-nyc-resourcebridge*. -1. Select a subscription and resource group where you want to create the resource bridge. -1. Under **Region**, select an Azure location where you want to store the resource metadata. The currently supported regions are **East US** and **West Europe**. -1. Provide a name for **Custom location**. - This is the name that you'll see when you deploy virtual machines. Name it for the datacenter or the physical location of your datacenter. For example: *contoso-nyc-dc.* --1. Leave the option **Use the same subscription and resource group as your resource bridge** selected. -1. 
Provide a name for your **SCVMM management server instance** in Azure. For example: *contoso-nyc-scvmm.* -1. Select **Next: Tags >**. -1. Assign Azure tags to your resources in **Value** under **Physical location tags**. You can add additional tags to help you organize your resources to facilitate administrative tasks using custom tags. -1. Select **Next: Download and run script >**. -1. If your subscription isn't registered with all the required resource providers, select **Register** to proceed to next step. -1. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the workstation. -1. To see the status of your onboarding after you run the script on your workstation, select **Next:Verification**. The onboarding isn't affected when you close this page. --### Windows --Follow these instructions to run the script on a Windows machine. --1. Open a new PowerShell window as Administrator and verify if Azure CLI is successfully installed in the workstation, and use the following command: - ```azurepowershell-interactive - az - ``` -1. Navigate to the folder where you've downloaded the PowerShell script: - *cd C:\Users\ContosoUser\Downloads* --1. Run the following command to allow the script to run since it's an unsigned script (if you close the session before you complete all the steps, run this command again in the new PowerShell Administrator session): - ```azurepowershell-interactive - Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass - ``` -1. Run the script: - ```azurepowershell-interactive - ./resource-bridge-onboarding-script.ps1 - ``` -### Linux --Follow these instructions to run the script on a Linux machine: --1. Open the terminal and navigate to the folder where you've downloaded the Bash script. -2. Execute the script using the following command: -- ```sh - bash resource-bridge-onboarding-script.sh - ``` --## Script runtime -The script execution will take up to half an hour and you'll be prompted for various details. See the following table for related information: --| **Parameter** | **Details** | -| | | -| **Azure login** | You would be asked to sign in to Azure by visiting [this site](https://www.microsoft.com/devicelogin) and pasting the prompted code. | -| **SCVMM management server FQDN/Address** | FQDN for the VMM server (or an IP address). </br> Provide role name if it’s a Highly Available VMM deployment. </br> For example: nyc-scvmm.contoso.com or 10.160.0.1 | -| **SCVMM Username**</br> (domain\username) | Username for the SCVMM administrator account. The required permissions for the account are listed in the prerequisites above.</br> Example: contoso\contosouser | -| **SCVMM password** | Password for the SCVMM admin account. | -| **Deployment location selection** | Select if you want to deploy the Arc resource bridge VM in an SCVMM Cloud or an SCVMM Host Group. | -| **Private cloud/Host group selection** | Select the name of the private cloud or the host group where the Arc resource bridge VM should be deployed. | -| **Virtual Network selection** | Select the name of the virtual network to which *Arc resource bridge VM* needs to be connected. This network should allow the appliance to talk to the VMM management server and the Azure endpoints (or internet). | -| **Resource Bridge IP Inputs** | If you have a VMM IP Pool configured, select the VMM static IP pool that will be used to allot the IP address. 
</br></br> If you would like to enter Custom IP range, enter the Static IP address prefix, start range IP, end range IP, VM Network VLAN ID, static gateway IP, and the IP address(es) of DNS server(s), in that order. **Note**: If you don't have a VLAN ID configured with the VM Network, enter 0 as the VLAN ID. | -| **Control Plane IP** | Provide a reserved IP address in the same subnet as the static IP pool used for Resource Bridge deployment. This IP address should be outside of the range of static IP pool used for Resource Bridge deployment and shouldn't be assigned to any other machine on the network. | -| **Appliance proxy settings** | Enter *Y* if there's a proxy in your appliance network, else enter *N*.| -| **http** | Address of the HTTP proxy server. | -| **https** | Address of the HTTPS proxy server.| -| **NoProxy** | Addresses to be excluded from proxy.| -|**CertificateFilePath** | For SSL based proxies, provide the path to the certificate. | --Once the command execution is completed, your setup is complete, and you can try out the capabilities of Azure Arc-enabled SCVMM. -->[!IMPORTANT] ->After the successful installation of Azure Arc Resource Bridge, it's recommended to retain a copy of the resource bridge config (.yaml) files in a secure place that facilitates easy retrieval. These files are needed later to run commands to perform management operations (e.g. [az arcappliance upgrade](/cli/azure/arcappliance/upgrade#az-arcappliance-upgrade-vmware)) on the resource bridge. You can find the three config files (.yaml files) in the same folder where you ran the onboarding script. ---### Retry command - Windows --If for any reason, the appliance creation fails, you need to retry it. Run the command with ```-Force``` to clean up and onboard again. --```powershell-interactive - ./resource-bridge-onboarding-script.ps1 -Force -Subscription <Subscription> -ResourceGroup <ResourceGroup> -AzLocation <AzLocation> -ApplianceName <ApplianceName> -CustomLocationName <CustomLocationName> -VMMservername <VMMservername> -``` -->[!Note] ->You can find the values for *Subscription*, *ResourceGroup*, *Azlocation*, *ApplianceName*, *CustomLocationName*, and *VMMservername* parameters from the onboarding script. -- ### Retry command - Linux --If for any reason, the appliance creation fails, you need to retry it. Run the command with ```--force``` to clean up and onboard again. -- ```sh - bash resource-bridge-onboarding-script.sh --force - ``` ->[!IMPORTANT] -> After the successful installation of Azure Arc Resource Bridge, it's recommended to retain a copy of the resource bridge config.yaml files in a place that facilitates easy retrieval. These files could be needed later to run commands to perform management operations (e.g. [az arcappliance upgrade](/cli/azure/arcappliance/upgrade#az-arcappliance-upgrade-vmware)) on the resource bridge. You can find the three .yaml files (config files) in the same folder where you ran the script. -->[!NOTE] -> - After successful deployment, we recommend maintaining the state of **Arc Resource Bridge VM** as *online*. -> - Intermittently appliance might become unreachable when you shut down and restart the VM. -> - After the execution of command, your setup is complete, and you can try out the capabilities of Azure Arc-enabled SCVMM. --## Next steps --- [Browse and enable SCVMM resources through Azure RBAC](enable-scvmm-inventory-resources.md).-- [Create a VM using Azure Arc-enabled SCVMM](create-virtual-machine.md). |
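If you prefer to register the required resource providers from the CLI instead of using the **Register** option in the portal, a sketch like the following works. The provider list shown is an assumption; the portal's Register step remains the authoritative source for the exact set your subscription needs.

```azurecli
# Sketch: register resource providers commonly needed for Arc-enabled SCVMM.
# The namespace list is an assumption; the portal's Register step shows the exact set.
for ns in Microsoft.ScVmm Microsoft.HybridCompute Microsoft.HybridConnectivity \
          Microsoft.ExtendedLocation Microsoft.ResourceConnector Microsoft.KubernetesConfiguration; do
  az provider register --namespace "$ns"
  az provider show --namespace "$ns" --query registrationState --output tsv
done
```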
azure-arc | Remove Scvmm From Azure Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/remove-scvmm-from-azure-arc.md | - Title: Remove your SCVMM environment from Azure Arc -description: This article explains the steps to cleanly remove your SCVMM environment from Azure Arc and delete related Azure Arc resources from Azure. ---- Previously updated : 03/18/2024----# Customer intent: As an infrastructure admin, I want to cleanly remove my SCVMM environment from Azure Arc. ---# Remove your SCVMM environment from Azure Arc --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --In this article, you learn how to cleanly remove your SCVMM managed environment from Azure Arc-enabled SCVMM. For SCVMM environments that you no longer want to manage with Azure Arc-enabled SCVMM, follow the steps in the article to: --1. Remove guest management from SCVMM virtual machines -2. Remove your SCVMM environment from Azure Arc -3. Remove Arc resource bridge related items in your SCVMM management server --## 1. Remove guest management from SCVMM virtual machines -To prevent continued billing of Azure management services after you remove the SCVMM environment from Azure Arc, you must first cleanly remove guest management from all Arc-enabled SCVMM virtual machines where it was enabled. When you enable guest management on Arc-enabled SCVMM virtual machines, the Arc connected machine agent is installed on them. --Once guest management is enabled, you can install VM extensions on them and use Azure management services like the Log Analytics on them. To cleanly remove guest management, you must follow the steps below to remove any VM extensions from the virtual machine, disconnect the agent, and uninstall the software from your virtual machine. It's important to complete each of the three steps to fully remove all related software components from your virtual machines. --### Step 1: Remove VM extensions --If you have deployed Azure VM extensions to an Azure Arc-enabled SCVMM VM, you must uninstall the extensions before disconnecting the agent or uninstalling the software. Uninstalling the Azure Connected Machine agent doesn't automatically remove extensions, and they won't be recognized if you later connect the VM to Azure Arc again. Uninstall extensions using the following steps: --1. Go to [Azure Arc center in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) --2. Select **SCVMM management servers**. --3. Search and select the VMM management server you want to remove from Azure Arc. --4. Select Virtual machines under SCVMM inventory. --5. Search and select the virtual machine where you have Guest Management enabled. --6. Select Extensions. --7. Select the extensions and select Uninstall. --### Step 2: Disconnect the agent from Azure Arc --Disconnecting the agent clears the local state of the agent and removes agent information from our systems. To disconnect the agent, sign-in and run the following command as an administrator/root account on the virtual machine. --```powershell - azcmagent disconnect --force-local-only -``` --### Step 3: Uninstall the agent --#### For Windows virtual machines --To uninstall the Windows agent from the machine, do the following: --1. 
Sign in to the computer with an account that has administrator permissions. -2. In Control Panel, select Programs and Features. -3. In Programs and Features, select Azure Connected Machine Agent, select Uninstall, and then select Yes. -4. Delete the `C:\Program Files\AzureConnectedMachineAgent` folder. --#### For Linux virtual machines --To uninstall the Linux agent, the command to use depends on the Linux operating system. You must have `root` access permissions or your account must have elevated rights using sudo. --- For Ubuntu, run the following command:-- ```bash - sudo apt purge azcmagent - ``` --- For RHEL, CentOS, and Oracle Linux, run the following command:-- ```bash - sudo yum remove azcmagent - ``` --- For SLES, run the following command:-- ```bash - sudo zypper remove azcmagent - ``` --## 2. Remove your SCVMM environment from Azure Arc --You can remove your SCVMM resources from Azure Arc using either the deboarding script or manually. --### Remove SCVMM managed resources from Azure Arc using deboarding script --Download the [deboarding script](https://download.microsoft.com/download/a/d/b/adb5650c-5c90-4e94-8a93-2a4707c2020a/arcscvmm-deboard-windows.ps1) to do a full cleanup of all the Arc-enabled SCVMM resources. The script removes all the Azure resources, including SCVMM management server, custom location, virtual machines, virtual templates, hosts, clusters, resource pools, datastores, virtual networks, Azure Resource Manager (ARM) resource of Appliance, and the appliance VM running on the SCVMM management server. --#### Run the script --To run the deboarding script, follow these steps: --##### Windows -1. Open a PowerShell window as an Administrator and go to the folder where you've downloaded the PowerShell script. --2. Run the following command to allow the script to run because it's an unsigned script. (If you close the session before you complete all the steps, run this command again for the new session.) -- ```powershell-interactive - Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass - ``` -3. Run the script. -- ```powershell-interactive - ./arcscvmm-deboard-windows.ps1 - ``` --#### Inputs for the script --- **vmmServerId**: The Azure resource ID of the SCVMM management server resource. </br> For example: */subscriptions/204898ee-cd13-4332-1111-88ca5c11111c/resourceGroups/Synthetics/providers/Microsoft.ScVmm/VMMServers/scvmmserverresource*--- **ApplianceConfigFilePath (optional)**: Path to kubeconfig, output from deploy command. Providing ApplianceConfigFilePath also deletes the appliance VM running on the SCVMM management server.--- **Force**: Using the Force flag deletes all the Azure resources without reaching the resource bridge. Use this option if the resource bridge VM isn't in a running state.--### Remove SCVMM managed resources from Azure manually --If you aren't using the deboarding script, follow these steps to remove the SCVMM resources manually: -->[!NOTE] ->When you enable SCVMM resources in Azure, an Azure resource representing them is created. Before you can delete the SCVMM management server resource in Azure, you must delete all the Azure resources that represent your related SCVMM resources. --1. Go to [Azure Arc center in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) --2. Select **SCVMM management servers**. --3. Search and select the SCVMM management server you want to remove from Azure Arc. --4. Select **Virtual machines** under **SCVMM inventory**. --5.
Select all the VMs that have **Virtual hardware management** value as **Enabled**. --6. Select **Remove from Azure**. -- This action only removes these resource representations from Azure. The resources continue to remain in your SCVMM management server. --7. Do the steps 4, 5, and 6 for **Clouds**, **VM networks**, and **VM templates** by performing **Remove from Azure** operation for resources with **Azure Enabled** value as **Yes**. --8. Once the deletion is complete, select **Overview**. --9. Note the **Custom location** and the **Azure Arc Resource bridge** resource in the **Essentials** section. --10. Select **Remove from Azure** to remove the SCVMM management server resource from Azure. --11. Go to the noted **Custom location** resource and select **Delete** --12. Go to the noted **Azure Arc Resource bridge** resource and select **Delete** --At this point, all your Arc-enabled SCVMM resources are removed from Azure. --## 3. Remove Arc resource bridge related items in your VMM management server --During onboarding, to create a connection between your SCVMM management server and Azure, an Azure Arc resource bridge was deployed in your SCVMM managed environment. As the last step, you must delete the resource bridge VM and the VM template created during the onboarding. --You can find both the virtual machine and the template on the resource pool/cluster/host/cloud that you provided during [Azure Arc-enabled SCVMM onboarding](./quickstart-connect-system-center-virtual-machine-manager-to-arc.md). --## Next steps --[Connect your System Center Virtual Machine Manager management server to Azure Arc again](./quickstart-connect-system-center-virtual-machine-manager-to-arc.md). |
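Step 1 above (removing VM extensions) can also be performed from the CLI before you disconnect and uninstall the agent. This is a minimal sketch that assumes the `connectedmachine` Azure CLI extension is installed; the machine, resource group, and extension names are placeholders.

```azurecli
# Sketch: list and remove extensions from a guest-managed SCVMM VM before deboarding.
# Assumes the 'connectedmachine' Azure CLI extension is installed.
az connectedmachine extension list --machine-name <vm-name> --resource-group <resource-group-name> --output table
az connectedmachine extension delete --machine-name <vm-name> --resource-group <resource-group-name> --name <extension-name>
```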
azure-arc | Set Up And Manage Self Service Access Scvmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/set-up-and-manage-self-service-access-scvmm.md | - Title: Set up and manage self-service access to SCVMM resources -description: This article describes how to use built-in roles to manage granular access to SCVMM resources through Azure Role-based Access Control (RBAC). ------ Previously updated : 11/15/2023-keywords: "VMM, Arc, Azure" ---# Set up and manage self-service access to SCVMM resources --Once your SCVMM resources are enabled in Azure, as a final step, provide your teams with the required access for a self-service experience. This article describes how to use built-in roles to manage granular access to SCVMM resources through Azure Role-based Access Control (RBAC) and allow your teams to deploy and manage VMs. --## Prerequisites --- Your SCVMM instance must be connected to Azure Arc.-- Your SCVMM resources such as virtual machines, clouds, VM networks, and VM templates must be Azure enabled.-- You must have **User Access Administrator** or **Owner** role at the scope (resource group/subscription) to assign roles to other users.--## Provide access to use Arc-enabled SCVMM resources --To provision SCVMM VMs and change their size, add disks, change network interfaces, or delete them, your users need to have permission on the compute, network, storage, and to the VM template resources that they will use. These permissions are provided by the built-in Azure Arc SCVMM Private Cloud User role. --You must assign this role to an individual cloud, VM network, and VM template that a user or a group needs to access. --1. Go to the [SCVMM management servers](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/scVmmManagementServer) list in Arc center. -2. Search and select your SCVMM management server. -3. Navigate to the **Clouds** in **SCVMM inventory** section in the table of contents. -4. Find and select the cloud for which you want to assign permissions. - This will take you to the Arc resource representing the SCVMM Cloud. -1. Select **Access control (IAM)** in the table of contents. -1. Select **Add role assignments** on the **Grant access to this resource**. -1. Select **Azure Arc ScVmm Private Cloud User** role and select **Next**. -1. Select **Select members** and search for the Microsoft Entra user or group that you want to provide access to. -1. Select the Microsoft Entra user or group name. Repeat this for each user or group to which you want to grant this permission. -1. Select **Review + assign** to complete the role assignment. -1. Repeat steps 3-9 for each VM network and VM template that you want to provide access to. --If you have organized your SCVMM resources into a resource group, you can provide the same role at the resource group scope. --Your users now have access to SCVMM cloud resources. However, your users will also need to have permission on the subscription/resource group where they would like to deploy and manage VMs. --## Provide access to subscription or resource group where VMs will be deployed --In addition to having access to SCVMM resources through the **Azure Arc ScVmm Private Cloud User** role, your users must have permissions on the subscription and resource group where they deploy and manage VMs. --The **Azure Arc ScVmm VM Contributor** role is a built-in role that provides permissions to conduct all SCVMM virtual machine operations. --1. 
Go to the [Azure portal](https://ms.portal.azure.com/#home). -2. Search and navigate to the subscription or resource group to which you want to provide access. -3. Select **Access control (IAM)** from the table of contents on the left. -4. Select **Add role assignments** on the **Grant access to this resource**. -5. Select **Azure Arc ScVmm VM Contributor** role and select **Next**. -6. Select the option **Select members**, and search for the Microsoft Entra user or group that you want to provide access to. -7. Select the Microsoft Entra user or group name. Repeat this for each user or group to which you want to grant this permission. -8. Select on **Review + assign** to complete the role assignment. --## Next steps --[Create an Azure Arc VM](create-virtual-machine.md). |
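The same role assignments can be created from the CLI. The sketch below grants a user the **Azure Arc ScVmm Private Cloud User** role on an enabled cloud (or VM network/VM template) and the **Azure Arc ScVmm VM Contributor** role on the resource group where VMs are deployed; the assignee and scope values are placeholders.

```azurecli
# Sketch: grant self-service access with the built-in Arc-enabled SCVMM roles.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Arc ScVmm Private Cloud User" \
  --scope "<resource-id-of-enabled-cloud-vm-network-or-template>"

az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Arc ScVmm VM Contributor" \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<rgName>"
```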
azure-arc | Support Matrix For System Center Virtual Machine Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/support-matrix-for-system-center-virtual-machine-manager.md | - Title: Support matrix for Azure Arc-enabled System Center Virtual Machine Manager -description: Learn about the support matrix for Arc-enabled System Center Virtual Machine Manager. ------ Previously updated : 08/12/2024-keywords: "VMM, Arc, Azure" --# Customer intent: As a VI admin, I want to understand the support matrix for System Center Virtual Machine Manager. ---# Support matrix for Azure Arc-enabled System Center Virtual Machine Manager --This article documents the prerequisites and support requirements for using [Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)](overview.md) to manage your SCVMM managed on-premises VMs through Azure Arc. --To use Arc-enabled SCVMM, you must deploy an Azure Arc Resource Bridge in your SCVMM managed environment. The Resource Bridge provides an ongoing connection between your SCVMM management server and Azure. Once you've connected your SCVMM management server to Azure, components on the Resource Bridge discover your SCVMM management server inventory. You can [enable them in Azure](enable-scvmm-inventory-resources.md) and start performing virtual hardware and guest OS operations on them using Azure Arc. --## System Center Virtual Machine Manager requirements --The following requirements must be met in order to use Arc-enabled SCVMM. --### Supported SCVMM versions --Azure Arc-enabled SCVMM works with VMM 2019 and 2022 versions and supports SCVMM management servers with a maximum of 15,000 VMs. --> [!NOTE] -> If VMM server is running on Windows Server 2016 machine, ensure that [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed. -> If you deploy an older version of appliance (version lesser than 0.2.25), Arc operation fails with the error *Appliance cluster is not deployed with AAD authentication*. To fix this issue, download the latest version of the onboarding script and deploy the Resource Bridge again. -> Azure Arc Resource Bridge deployment using private link is currently not supported. --| **Requirement** | **Details** | -| | | -| **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. | -| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. The supported storage configurations are hybrid storage (flash and HDD) and all-flash storage (SSDs or NVMe). <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported; Dynamic IP allocation using DHCP isn't supported. Static IP allocation can be performed by one of the following approaches:<br><br> 1. **VMM IP Pool**: Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least three IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br> <br> 2. 
**Custom IP range**: Ensure that your VM network has three continuous free IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP range and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. If the VM network is configured with a VLAN, the VLAN ID is required as an input. Azure Arc Resource Bridge requires internal and external DNS resolution to the required sites and to the on-premises management machine; the static gateway IP and the IP address(es) of your DNS server(s) are also needed as inputs. <br/><br/> A library share with write permission for the SCVMM admin account through which the Resource Bridge deployment is performed. | -| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be a part of the local administrator account on the SCVMM server. If the SCVMM server is installed in a High Availability configuration, the user should be a part of the local administrator accounts in all the SCVMM cluster nodes. <br/><br/>This account is used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc Resource Bridge VM. | -| **Workstation** | The workstation is used to run the helper script. Ensure you have the [64-bit Azure CLI installed](/cli/azure/install-azure-cli) on the workstation.<br/><br/> When you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. | --### Resource Bridge networking requirements --The following firewall URL exceptions are required for the Azure Arc Resource Bridge VM: --->[!Note] -> To configure SSL proxy and to view the exclusion list for no proxy, see [Additional network requirements](../resource-bridge/network-requirements.md#azure-arc-resource-bridge-network-requirements). --In addition, SCVMM requires the following exception: --| **Service** | **Port** | **URL** | **Direction** | **Notes**| -| | | | | | -| SCVMM Management Server | 443 | URL of the SCVMM management server. | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. | -| WinRM | WinRM Port numbers (Default: 5985 and 5986). | URL of the WinRM service. | IPs in the IP Pool used by the Appliance VM and control plane need connection with the VMM server. | Used by the SCVMM server to communicate with the Appliance VM. | ---For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md). 
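Relating to the **Workstation** requirement above, the following is a minimal sketch for confirming the Azure CLI prerequisites before you run the onboarding helper script. The `arcappliance` and `scvmm` extension names are assumptions based on the extensions the Arc-enabled SCVMM tooling uses, so verify them against the onboarding quickstart.

```azurecli
# Confirm the Azure CLI is present (a 64-bit build is required) and up to date.
az version

# Add or upgrade the CLI extensions typically used during onboarding.
az extension add --upgrade --name arcappliance
az extension add --upgrade --name scvmm
```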
--### Azure role/permission requirements --The minimum Azure roles required for operations related to Arc-enabled SCVMM are as follows: --| **Operation** | **Minimum role required** | **Scope** | -| | | | -| Onboarding your SCVMM Management Server to Arc | Azure Arc SCVMM Private Clouds Onboarding | On the subscription or resource group into which you want to onboard | -| Administering Arc-enabled SCVMM | Azure Arc SCVMM Administrator | On the subscription or resource group where SCVMM management server resource is created | -| VM Provisioning | Azure Arc SCVMM Private Cloud User | On the subscription or resource group that contains the SCVMM cloud, datastore, and virtual network resources, or on the resources themselves | -| VM Provisioning | Azure Arc SCVMM VM Contributor | On the subscription or resource group where you want to provision VMs | -| VM Operations | Azure Arc SCVMM VM Contributor | On the subscription or resource group that contains the VM, or on the VM itself | --Any roles with higher permissions on the same scope, such as Owner or Contributor, will also allow you to perform the operations listed above. --### Azure connected machine agent (Guest Management) requirements --Ensure the following before you install Arc agents at scale for SCVMM VMs: --- The Resource Bridge must be in a running state.-- The SCVMM management server must be in a connected state.-- The user account must have permissions listed in Azure Arc-enabled SCVMM Administrator role.-- All the target machines are:- - Powered on and the resource bridge has network connectivity to the host running the VM. - - Running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). - - Able to connect through the firewall to communicate over the Internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. --### Supported SCVMM versions --Azure Arc-enabled SCVMM supports direct installation of Arc agents in VMs managed by: --- SCVMM 2022 UR1 or later versions of SCVMM server or console-- SCVMM 2019 UR5 or later versions of SCVMM server or console--For VMs managed by other SCVMM versions, [install Arc agents through the script](install-arc-agents-using-script.md). -->[!Important] ->We recommend maintaining the SCVMM management server and the SCVMM console in the same Long-Term Servicing Channel (LTSC) and Update Rollup (UR) version. --### Supported operating systems --Azure Arc-enabled SCVMM supports direct installation of Arc agents in VMs running Windows Server 2022, 2019, 2016, 2012R2, Windows 10, and Windows 11 operating systems. For other Windows and Linux operating systems, [install Arc agents through the script](install-arc-agents-using-script.md). --### Software requirements --Windows operating systems: --* Microsoft recommends running the latest version, [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616). 
--Linux operating systems: --* systemd -* wget (to download the installation script) -* openssl -* gnupg (Debian-based systems, only) --### Networking requirements --The following firewall URL exceptions are required for the Azure Arc agents: --| **URL** | **Description** | -| | | -| `aka.ms` | Used to resolve the download script during installation | -| `packages.microsoft.com` | Used to download the Linux installation package | -| `download.microsoft.com` | Used to download the Windows installation package | -| `login.windows.net` | Microsoft Entra ID | -| `login.microsoftonline.com` | Microsoft Entra ID | -| `pas.windows.net` | Microsoft Entra ID | -| `management.azure.com` | Azure Resource Manager - to create or delete the Arc server resource | -| `*.his.arc.azure.com` | Metadata and hybrid identity services | -| `*.guestconfiguration.azure.com` | Extension management and guest configuration services | -| `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | Notification service for extension and connectivity scenarios | -| `azgn*.servicebus.windows.net` | Notification service for extension and connectivity scenarios | -| `*.servicebus.windows.net` | For Windows Admin Center and SSH scenarios | -| `*.blob.core.windows.net` | Download source for Azure Arc-enabled servers extensions | -| `dc.services.visualstudio.com` | Agent telemetry | --## Next steps --[Connect your System Center Virtual Machine Manager management server to Azure Arc](quickstart-connect-system-center-virtual-machine-manager-to-arc.md). |
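To validate the agent firewall exceptions listed above from a machine that already has the Azure Connected Machine agent installed, you can use the agent's built-in connectivity check. This is a sketch; the region value is a placeholder for the Azure region you plan to connect to.

```azurecli
# Run on a machine with the Azure Connected Machine agent installed.
# Verifies outbound connectivity to the required Azure Arc endpoints.
azcmagent check --location "eastus"
```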
azure-arc | Switch To The New Version Scvmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/switch-to-the-new-version-scvmm.md | - Title: Switch to the new version of Arc-enabled SCVMM -description: Learn how to switch to the new version and use its capabilities. ------ Previously updated : 02/29/2024-keywords: "VMM, Arc, Azure" --#Customer intent: As a VI admin, I want to switch to the new version of Arc-enabled SCVMM and leverage the associated capabilities ---# Switch to the new version of Arc-enabled SCVMM --On September 22, 2023, we rolled out major changes to **Azure Arc-enabled System Center Virtual Machine Manager**. By switching to the new version, you can use all the Azure management services that are available for Arc-enabled Servers. --If you onboarded to Azure Arc-enabled SCVMM before **September 22, 2023**, and your VMs were Azure-enabled, you'll no longer be able to perform any operations on the VMs, except the **Remove from Azure** operation. --To continue using these machines, follow these instructions to switch to the new version. -->[!Note] ->If you're new to Arc-enabled SCVMM, you'll be able to leverage the new capabilities by default. To get started, see [Quick Start for Azure Arc-enabled System Center Virtual Machine Manager](quickstart-connect-system-center-virtual-machine-manager-to-arc.md). --## Switch to the new version (Existing customer) --If you onboarded to Arc-enabled SCVMM before September 22, 2023, for VMs that are Azure-enabled, follow these steps to switch to the new version: -->[!Note] -> If you had enabled guest management on any of the VMs, [disconnect](/azure/azure-arc/servers/manage-agent?tabs=windows#step-2-disconnect-the-server-from-azure-arc) and [uninstall agents](/azure/azure-arc/servers/manage-agent?tabs=windows#step-3a-uninstall-the-windows-agent). --1. From your browser, go to the SCVMM management servers blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the SCVMM management server resource. -2. Select all the virtual machines that are Azure enabled with the older version. The virtual machines in the older version will have *Enabled (Deprecated)* set under the Virtual hardware management column. -3. Select **Remove from Azure**. - :::image type="Virtual Machines" source="media/switch-to-the-new-version-scvmm/virtual-machines.png" alt-text="Screenshot of virtual machines."::: -4. After successful removal from Azure, enable the same resources again in Azure. -5. Once the resources are re-enabled, the VMs are auto switched to the new version. The VM resources will now be represented as **Machine - Azure Arc (SCVMM)**. - :::image type="Overview" source="media/switch-to-the-new-version-scvmm/overview.png" alt-text="Screenshot of Overview page."::: -## Next steps --[Create a virtual machine on System Center Virtual Machine Manager using Azure Arc](quickstart-connect-system-center-virtual-machine-manager-to-arc.md). |
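As called out in the note above, guest management must be disconnected and the agent uninstalled before you remove the VMs from Azure. A minimal sketch of the disconnect step, run on the VM itself (you're prompted to sign in to Azure); the uninstall step follows the linked manage-agent article.

```azurecli
# Run on the VM to delete its Azure Arc-enabled servers resource and local agent configuration.
azcmagent disconnect
```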
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/validation-program/overview.md | - Title: Azure Arc-enabled services validation overview -description: Explains the Azure Arc validation process to conform to the Azure Arc-enabled Kubernetes, Data Services, and cluster extensions. Previously updated : 01/08/2024----# Overview of Azure Arc-enabled service validation --Microsoft recommends running Azure Arc-enabled services on validated platforms whenever possible. This article explains how various Azure Arc-enabled components are validated. --Currently, validated solutions are available from partners for [Azure Arc-enabled Kubernetes](../kubernetes/overview.md) and [Azure Arc-enabled data services](../dat). --## Validated Azure Arc-enabled Kubernetes distributions --Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. The Azure Arc team worked with key industry Kubernetes offering providers to [validate Azure Arc-enabled Kubernetes with their Kubernetes distributions](../kubernetes/validation-program.md?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json). Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc-enabled Kubernetes. --## Validated data services solutions --The Azure Arc team worked with original equipment manufacturer (OEM) partners and storage providers to [validate Azure Arc-enabled data services solutions](../dat?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json). This includes partner solutions, versions, Kubernetes versions, SQL engine versions, and PostgreSQL server versions that have been verified to support the data services. --## Validation process --For more details about the validation process, see the [Azure Arc validation process](https://github.com/Azure/azure-arc-validation/) in GitHub. Here you find information about how offerings are validated with Azure Arc, the test harness, strategy, and more. --## Next steps --* Learn about [Validated Kubernetes distributions](../kubernetes/validation-program.md?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json) -* Learn about [validated solutions for data services](../dat?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json) |
azure-arc | Administer Arc Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/administer-arc-vmware.md | - Title: Perform ongoing administration for Arc-enabled VMware vSphere -description: Learn how to perform administrator operations related to Azure Arc-enabled VMware vSphere - Previously updated : 12/05/2023---------# Perform ongoing administration for Arc-enabled VMware vSphere --In this article, you learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere: --- Upgrading the Azure Arc resource bridge-- Updating the credentials-- Collecting logs from the Arc resource bridge--Each of these operations requires either SSH key to the resource bridge VM or the kubeconfig that provides access to the Kubernetes cluster on the resource bridge VM. --## Upgrade the Arc resource bridge manually --Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your vSphere environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates. The Arc resource bridge can be manually upgraded from the vCenter server. You must meet all upgrade [prerequisites](../resource-bridge/upgrade.md#prerequisites) before attempting to upgrade. The vCenter server must have the kubeconfig and appliance configuration files stored locally. If the vSphere account credentials changed after the initial deployment of the resource bridge, [update the new account credentials](administer-arc-vmware.md#updating-the-vsphere-account-credentials-using-a-new-password-or-a-new-vsphere-account-after-onboarding) before attempting manual upgrade. --The manual upgrade generally takes between 30-90 minutes, depending on the network speed. The upgrade command takes your Arc resource bridge to the immediate next version, which might not be the latest available version. Multiple upgrades could be needed to reach a [supported version](../resource-bridge/upgrade.md#supported-versions). You can check your resource bridge version by checking the Azure resource of your Arc resource bridge. --To manually upgrade your Arc resource bridge, make sure you've installed the latest `az arcappliance` CLI extension by running the extension upgrade command from the vCenter server: --```azurecli -az extension add --upgrade --name arcappliance -``` --To manually upgrade your resource bridge, use the following command: --```azurecli -az arcappliance upgrade vmware --config-file <file path to ARBname-appliance.yaml> -``` --## Updating the vSphere account credentials (using a new password or a new vSphere account after onboarding) --Azure Arc-enabled VMware vSphere uses the vSphere account credentials you provided during the onboarding to communicate with your vCenter server. These credentials are only persisted locally on the Arc resource bridge VM. --As part of your security practices, you might need to rotate credentials for your vCenter accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled VMware services. You can also use the same steps in case you need to use a different vSphere account after onboarding. You must ensure the new account also has all the [required vSphere permissions](support-matrix-for-arc-enabled-vmware-vsphere.md#required-vsphere-account-privileges). --There are two different sets of credentials stored on the Arc resource bridge. You can use the same account credentials for both. 
--- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade.-- **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere.--To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands. Run the commands from a workstation that can locally access the cluster configuration IP address of the Arc resource bridge: --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance update-infracredentials vmware --kubeconfig kubeconfig -``` -For more details on the commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). ---To update the credentials used by the VMware cluster extension on the resource bridge, run the following command. It can be run from anywhere with the `connectedvmware` CLI extension installed. --```azurecli -az connectedvmware vcenter connect --custom-location <name of the custom location> --location <Azure region> --name <name of the vCenter resource in Azure> --resource-group <resource group for the vCenter resource> --username <username for the vSphere account> --password <password to the vSphere account> -``` --## Collecting logs from the Arc resource bridge --For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command. --To save the logs to a destination folder, run the following commands. These commands need connectivity to the cluster configuration IP address. --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory> -``` --If the Kubernetes cluster on the resource bridge isn't in a functional state, you can use the following commands. These commands require connectivity to the IP address of the Azure Arc resource bridge VM via SSH. --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance logs vmware --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX -``` --## Next steps --- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)-- [Understand disaster recovery operations for resource bridge](recover-from-resource-bridge-deletion.md) |
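For reference, a filled-in example of the credential update command shown earlier in this article; all resource names and account values are placeholders.

```azurecli
az connectedvmware vcenter connect \
  --custom-location "contoso-custom-location" \
  --location "eastus" \
  --name "contoso-vcenter" \
  --resource-group "contoso-rg" \
  --username "arc-svc@vsphere.local" \
  --password "<vsphere-account-password>"
```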
azure-arc | Azure Arc Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md | - Title: Azure Arc agent -description: Learn about Azure Arc agent - Previously updated : 08/13/2024--------# Azure Arc agent --When you [enable guest management](enable-guest-management-at-scale.md) on VMware VMs, the Azure Connected Machine agent is installed on the VMs. This is the same agent Arc-enabled servers use. The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of the Azure Connected Machine agent. --## Agent components ---The Azure Connected Machine agent package contains several logical components bundled together: --* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity. --* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance. -- Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine: -- * An Azure Policy assignment that targets disconnected machines is unaffected. - * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied. - * Assignments are deleted after 14 days and aren't reassigned to the machine after the 14-day period. --* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`. -->[!NOTE] -> The [Azure Monitor agent (AMA)](/azure/azure-monitor/agents/azure-monitor-agent-overview) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. --## Agent resources --The following information describes the directories and user accounts used by the Azure Connected Machine agent. --### Windows agent installation details --The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). -Installing the Connected Machine agent for Windows applies the following system-wide configuration changes: --* The installation process creates the following folders during setup. 
-- | Directory | Description | - |--|-| - | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.| - | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.| - | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.| - | %ProgramData%\AzureConnectedMachineAgent | Configuration, log, and identity token files for azcmagent CLI and instance metadata service.| - | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| - | %SYSTEMDRIVE%\packages | Extension package executables. | --* Installing the agent creates the following Windows services on the target machine. -- | Service name | Display name | Process name | Description | - |--|--|--|-| - | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens | - | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. | - | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. | --* Agent installation creates the following virtual service account. -- | Virtual Account | Description | - ||-| - | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. | -- > [!TIP] - > This account requires the *Log on as a service* right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to **NT SERVICE\\himds** or **NT SERVICE\\ALL SERVICES** to allow the agent to function. --* Agent installation creates the following local security group. -- | Security group name | Description | - ||-| - | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity | --* Agent installation creates the following environment variables. -- | Name | Default value | - ||| - | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | - | IMDS_ENDPOINT | `http://localhost:40342` | --* There are several log files available for troubleshooting, described in the following table. -- | Log | Description | - |--|-| - | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. | - | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. | - | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. | - | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). | - | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. | --* The process creates the local security group **Hybrid agent extension applications**. 
--* After uninstalling the agent, the following artifacts remain: -- * %ProgramData%\AzureConnectedMachineAgent\Log - * %ProgramData%\AzureConnectedMachineAgent - * %ProgramData%\GuestConfig - * %SystemDrive%\packages --### Linux agent installation details --The Connected Machine agent for Linux is provided in the preferred package format for the distribution (`.rpm` or `.deb`) and is hosted in the Microsoft [package repository](https://packages.microsoft.com/). The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent. --Installing, upgrading, and removing the Connected Machine agent doesn't require you to restart the server. --Installing the Connected Machine agent for Linux applies the following system-wide configuration changes. --* Setup creates the following installation folders. -- | Directory | Description | - |--|-| - | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. | - | /opt/GC_Ext/ | Extension service executables. | - | /opt/GC_Service/ | Guest configuration (policy) service executables. | - | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| - | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| --* Installing the agent creates the following daemons. -- | Service name | Display name | Process name | Description | - |--|--|--|-| - | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.| - | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. | - | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. | --* There are several log files available for troubleshooting, described in the following table. -- | Log | Description | - |--|-| - | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. | - | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. | - | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. | - | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). | - | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. | --* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`. -- | Name | Default value | - ||| - | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | - | IMDS_ENDPOINT | `http://localhost:40342` | --* After uninstalling the agent, the following artifacts remain: -- * /var/opt/azcmagent - * /var/lib/GuestConfig --## Agent resource governance --The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions: --* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies. -* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. 
The following exceptions apply: -- | Extension type | Operating system | CPU limit | - | -- | - | | - | AzureMonitorLinuxAgent | Linux | 60% | - | AzureMonitorWindowsAgent | Windows | 100% | - | AzureSecurityLinuxAgent | Linux | 30% | - | LinuxOsUpdateExtension | Linux | 60% | - | MDE.Linux | Linux | 60% | - | MicrosoftDnsAgent | Windows | 100% | - | MicrosoftMonitoringAgent | Windows | 60% | - | OmsAgentForLinux | Linux | 60% | --During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources: --| | Windows | Linux | -| | - | -- | -| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% | -| **Memory usage** | 57 MB | 42 MB | --The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. The actual agent performance and resource consumption vary based on the hardware and software configuration of your servers. --## Instance metadata --Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers, specifically: --* Operating system name, type, and version -* Computer name -* Computer manufacturer and model -* Computer fully qualified domain name (FQDN) -* Domain name (if joined to an Active Directory domain) -* Active Directory and DNS fully qualified domain name (FQDN) -* UUID (BIOS ID) -* Connected Machine agent heartbeat -* Connected Machine agent version -* Public key for managed identity -* Policy compliance status and details (if using guest configuration policies) -* SQL Server installed (Boolean value) -* Cluster resource ID (for Azure Stack HCI nodes) -* Hardware manufacturer -* Hardware model -* CPU family, socket, physical core, and logical core counts -* Total physical memory -* Serial number -* SMBIOS asset tag -* Cloud provider -* Amazon Web Services (AWS) metadata, when running in AWS: - * Account ID - * Instance ID - * Region -* Google Cloud Platform (GCP) metadata, when running in GCP: - * Instance ID - * Image - * Machine type - * Project ID - * Project number - * Service accounts - * Zone --The agent requests the following metadata information from Azure: --* Resource location (region) -* Virtual machine ID -* Tags -* Microsoft Entra managed identity certificate -* Guest configuration policy assignments -* Extension requests - install, update, and delete. --> [!NOTE] -> Azure Arc-enabled servers don't store/process customer data outside the region the customer deploys the service instance in. --## Next steps --- [Connect VMware vCenter Server to Azure Arc](quick-start-connect-vcenter-to-arc-using-script.md).-- [Install Arc agent at scale for your VMware VMs](enable-guest-management-at-scale.md). |
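To see the connection status and the locally reported metadata that the agent components described above expose, you can query the agent CLI on the machine; a minimal sketch follows.

```azurecli
# Run on a connected machine to display agent version, Azure resource ID,
# connection status, and the metadata the agent reports to Azure.
azcmagent show
```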
azure-arc | Browse And Enable Vcenter Resources In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md | - Title: Enable your VMware vCenter resources in Azure -description: Learn how to browse your vCenter inventory and represent a subset of your VMware vCenter resources in Azure to enable self-service. - Previously updated : 12/15/2023-------# Customer intent: As a VI admin, I want to represent a subset of my vCenter resources in Azure to enable self-service. ---# Enable your VMware vCenter resources in Azure --After you've connected your VMware vCenter to Azure, you can browse your vCenter inventory from the Azure portal. ---Visit the VMware vCenter blade in Azure Arc center to view all the connected vCenters. From there, you'll browse your virtual machines (VMs), resource pools, templates, and networks. From the inventory of your vCenter resources, you can select and enable one or more resources in Azure. When you enable a vCenter resource in Azure, it creates an Azure resource that represents your vCenter resource. You can use this Azure resource to assign permissions or conduct management operations. --## Enable resource pools, clusters, hosts, datastores, networks, and VM templates in Azure --In this section, you will enable resource pools, networks, and other non-VM resources in Azure. -->[!NOTE] ->Enabling Azure Arc on a VMware vSphere resource is a read-only operation on vCenter. That is, it doesn't make changes to your resource in vCenter. -->[!NOTE] -> To enable VM templates, VMware tools must be installed on them. If not installed, the **Enable in Azure** option will be grayed out. --1. From your browser, go to the vCenters blade on [Azure Arc Center](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) and navigate to your inventory resources blade. --2. Select the resource or resources you want to enable and then select **Enable in Azure**. --3. Select your Azure Subscription and Resource Group and then select **Enable**. -- This starts a deployment and creates a resource in Azure, creating representations for your VMware vSphere resources. It allows you to manage who can access those resources through Azure role-based access control (RBAC) granularly. --4. Repeat these steps for one or more network, resource pool, and VM template resources. --## Enable existing virtual machines in Azure --1. From your browser, go to the vCenters blade on [Azure Arc Center](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) and navigate to your vCenter. -- :::image type="content" source="media/browse-and-enable-vcenter-resources-in-azure/enable-guest-management.png" alt-text="Screenshot of how to enable an existing virtual machine in the Azure portal." lightbox="media/browse-and-enable-vcenter-resources-in-azure/enable-guest-management.png"::: --1. Navigate to the VM inventory resource blade, select the VMs you want to enable, and then select **Enable in Azure**. --1. Select your Azure Subscription and Resource Group. --1. (Optional) Select **Install guest agent** and then provide the Administrator username and password of the guest operating system. -- The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. 
For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc-enabled VMware vSphere](perform-vm-ops-through-azure.md). --1. Select **Enable** to start the deployment of the VM represented in Azure. --For information on the capabilities enabled by a guest agent, see [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md). -->[!NOTE] ->Moving VMware vCenter resources between Resource Groups and Subscriptions is currently not supported. - -## Next steps --[Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md). |
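If you prefer the CLI over the portal flow above, an existing vCenter VM can also be enabled in Azure with the `connectedvmware` extension. This is a sketch; the custom location ID, inventory item ID, and resource names are placeholders, and you should verify the parameter set with `az connectedvmware vm create --help`.

```azurecli
az connectedvmware vm create \
  --resource-group "contoso-rg" \
  --name "contoso-vm01" \
  --location "eastus" \
  --custom-location "/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso-cl" \
  --inventory-item "<inventory-item-id>"
```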
azure-arc | Deliver Extended Security Updates For Vmware Vms Through Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/deliver-extended-security-updates-for-vmware-vms-through-arc.md | - Title: Deliver ESUs for VMware VMs through Arc -description: Deliver ESUs for VMware VMs through Azure Arc. Previously updated : 12/06/2023--ms. -----keywords: "VMware, Arc, Azure" ---# Deliver ESUs for VMware VMs through Arc --Azure Arc-enabled VMware vSphere allows you to enroll all the Windows Server 2012/2012 R2 VMs managed by your vCenter in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview) at scale. --ESUs allow you to leverage cost flexibility in the form of pay-as-you-go Azure billing and enhanced delivery experience in the form of built-in inventory and keyless delivery. In addition, ESUs enabled by Azure Arc give you access to Azure management services such as [Azure Update Manager](/azure/update-manager/overview?tabs=azure-vms), [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2), and [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) at no additional cost. --This article provides the steps to procure and deliver ESUs to WS 2012 and 2012 R2 VMware VMs onboarded to Azure Arc-enabled VMware vSphere. -->[!Note] -> - To purchase ESUs, you must have Software Assurance through Volume Licensing Programs such as an Enterprise Agreement (EA), Enterprise Agreement Subscription (EAS), Enrollment for Education Solutions (EES), or Server and Cloud Enrollment (SCE). Alternatively, if your Windows Server 2012/2012 R2 machines are licensed through SPLA or with a Server Subscription, Software Assurance isn't required to purchase ESUs. --## Prerequisites --- The user account must have an Owner/Contributor role in a Resource Group in Azure to create and assign ESUs to VMware VMs. -- The vCenter managing the WS 2012 and 2012 R2 VMs, for which the ESUs are to be applied, should be [onboarded to Azure Arc](./quick-start-connect-vcenter-to-arc-using-script.md). After onboarding, the WS 2012 and 2012 R2 VMs, for which the ESUs are to be applied, should be [Azure-enabled](./browse-and-enable-vcenter-resources-in-azure.md) and [guest management enabled](./enable-guest-management-at-scale.md). --## Create Azure Arc ESUs --1. Sign in to the [Azure portal](https://portal.azure.com/). -2. On the **Azure Arc** page, select **Extended Security Updates** in the left pane. Here, you can view and create ESU Licenses and view eligible resources for ESUs. -3. The **Licenses** tab displays Azure Arc WS 2012 licenses that are available. Select an existing license to apply or create a new license. -- :::image type="content" source="media/deliver-esus-for-vmware-vms/select-or-create-license.png" alt-text="Screenshot of how to create a new license." lightbox="media/deliver-esus-for-vmware-vms/select-or-create-license.png"::: --4. To create a new WS 2012 license, select **Create**, and then provide the information required to configure the license on the page. For detailed information on how to complete this step, see [License provisioning guidelines for Extended Security Updates for Windows Server 2012](../servers/license-extended-security-updates.md). -5. Review the information provided and select **Create**. 
The license you created appears in the list, and you can link it to one or more Arc-enabled VMware vSphere VMs by following the steps in the next section. -- :::image type="content" source="media/deliver-esus-for-vmware-vms/new-license-created.png" alt-text="Screenshot showing the successful creation of a new license." lightbox="media/deliver-esus-for-vmware-vms/new-license-created.png"::: --## Link ESU licenses to Arc-enabled VMware vSphere VMs --You can select one or more Arc-enabled VMware vSphere VMs to link to an ESU license. Once you've linked a VM to an activated ESU license, the VM is eligible to receive Windows Server 2012 and 2012 R2 ESUs. -->[!Note] -> You have the flexibility to configure your patching solution of choice to receive these updates, whether it's [Azure Update Manager](/azure/update-center/overview), [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus), Microsoft Updates, [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), or a third-party patch management solution. --1. Select the **Eligible Resources** tab to view a list of all your Arc-enabled server machines running Windows Server 2012 and 2012 R2, including VMware machines that are guest management enabled. The **ESUs status** column indicates whether the machine is ESUs enabled. - - :::image type="content" source="media/deliver-esus-for-vmware-vms/view-arc-enabled-machines.png" alt-text="Screenshot of arc-enabled server machines running Windows Server 2012 and 2012 R2 under the eligible resources tab." lightbox="media/deliver-esus-for-vmware-vms/view-arc-enabled-machines.png"::: --2. To enable ESUs for one or more machines, select them in the list, and then select **Enable ESUs**. -3. On the **Enable Extended Security Updates** page, you can see the number of machines selected to enable ESUs and the WS 2012 licenses available to apply. Select a license to link to the selected machine(s) and select **Enable**. -- :::image type="content" source="media/deliver-esus-for-vmware-vms/enable-license.png" alt-text="Screenshot of how to select and enable license." lightbox="media/deliver-esus-for-vmware-vms/enable-license.png"::: --4. The **ESUs status** column value of the selected machines changes to **Enabled**. -- >[!Note] - > - See [Troubleshoot delivery of Extended Security Updates for Windows Server 2012](../servers/troubleshoot-extended-security-updates.md) to troubleshoot any problems that occur during the enablement process.<br> - > - Review the [additional scenarios](../servers/deliver-extended-security-updates.md#additional-scenarios) in which you may be eligible to receive ESU patches at no additional cost. --## Next steps --[Programmatically deploy and manage Azure Arc Extended Security Updates licenses](../servers/api-extended-security-updates.md). |
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-guest-management-at-scale.md | - Title: Install Arc agent at scale for your VMware VMs -description: Learn how to enable guest management at scale for Arc enabled VMware vSphere VMs. - Previously updated : 07/18/2024-------#Customer intent: As an IT infra admin, I want to install arc agents to use Azure management services for VMware VMs. ---# Install Arc agents at scale for your VMware VMs --In this article, you learn how to install Arc agents at scale for VMware VMs and use Azure management capabilities. --## Prerequisites --Ensure the following before you install Arc agents at scale for VMware VMs: --- The resource bridge must be in running state.-- The vCenter must be in connected state.-- The user account must have permissions listed in Azure Arc VMware Administrator role.-- All the target machines are:- - Powered on and the resource bridge has network connectivity to the host running the VM. - - Running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). - - VMware tools are installed on the machines. If VMware tools aren't installed, enable guest management operation is grayed out in the portal. - >[!Note] - >You can use the [out-of-band method](./enable-guest-management-at-scale.md#approach-d-install-arc-agents-at-scale-using-out-of-band-approach) to install Arc agents if VMware tools aren't installed. - - Able to connect through the firewall to communicate over the internet, and [these URLs](../servers/network-requirements.md#urls) aren't blocked. -- > [!NOTE] - > If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo`, and add `<username> ALL=(ALL) NOPASSWD:ALL` at the end of the file. Ensure you replace `<username>`. <br> <br>If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. --## Approach A: Install Arc agents at scale from portal --An admin can install agents for multiple machines from the Azure portal if the machines share the same administrator credentials. --1. Navigate to **Azure Arc center** and select **vCenter resource**. --2. Select all the machines and choose **Enable in Azure** option. --3. Select **Enable guest management** checkbox to install Arc agents on the selected machine. --4. If you want to connect the Arc agent via proxy, provide the proxy server details. --5. If you want to connect Arc agent via private endpoint, follow these [steps](../servers/private-link-security.md) to set up Azure private link. -- >[!Note] - > Private endpoint connectivity is only available for Arc agent to Azure communications. For Arc resource bridge to Azure connectivity, Azure private link isn't supported. --6. Provide the administrator username and password for the machine. --> [!NOTE] -> For Windows VMs, the account must be part of local administrator group; and for Linux VM, it must be a root account. --## Approach B: Install Arc agents using AzCLI commands --The following Azure CLI commands can be used to install Arc agents. 
--```azurecli -az connectedvmware vm guest-agent enable --resource-group <resource group name> --vm-name <name of the VM> --username <username> --password <password> [--https-proxy <proxy URL>] [--no-wait] -``` --## Approach C: Install Arc agents at scale using helper script --Arc agent installation can be automated using the helper script built using the AzCLI command provided [here](./enable-guest-management-at-scale.md#approach-b-install-arc-agents-using-azcli-commands). Download this [helper script](https://aka.ms/arcvmwarebatchenable) to enable VMs and install Arc agents at scale. In a single ARM deployment, the helper script can enable and install Arc agents on 200 VMs. --### Features of the script --- Creates a log file (vmware-batch.log) for tracking its operations.--- Generates a list of Azure portal links to all the deployments created `(all-deployments-<timestamp>.txt)`. --- Creates ARM deployment files `(vmw-dep-<timestamp>-<batch>.json)`.--- Can enable up to 200 VMs in a single ARM deployment if guest management is enabled; otherwise, up to 400 VMs. --- Supports running as a cron job to enable all the VMs in a vCenter. --- Allows for service principal authentication to Azure for automation. --Before running this script, install the Azure CLI and the `connectedvmware` extension. --### Prerequisites --Before running this script, install: --- Azure CLI from [here](/cli/azure/install-azure-cli).--- The `connectedvmware` extension for Azure CLI: Install it by running `az extension add --name connectedvmware`. --### Usage --1. Download the script to your local machine. --2. Open a PowerShell terminal and navigate to the directory containing the script. --3. Run the following command to allow the script to run, as it's an unsigned script (if you close the session before you complete all the steps, run this command again for the new session): `Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass`. --4. Run the script with the required parameters. For example, `.\arcvmware-batch-enablement.ps1 -VCenterId "<vCenterId>" -EnableGuestManagement -VMCountPerDeployment 3 -DryRun`. Replace `<vCenterId>` with the ARM ID of your vCenter. --### Parameters --- `VCenterId`: The ARM ID of the vCenter where the VMs are located. --- `EnableGuestManagement`: If this switch is specified, the script will enable guest management on the VMs. --- `VMCountPerDeployment`: The number of VMs to enable per ARM deployment. The maximum value is 200 if guest management is enabled; otherwise, it's 400. --- `DryRun`: If this switch is specified, the script will only create the ARM deployment files. Otherwise, the script also deploys the ARM deployments. --### Running as a Cron Job --You can set up this script to run as a cron job using the Windows Task Scheduler. Here's a sample script to create a scheduled task: --```powershell -$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File "C:\Path\To\vmware-batch-enable.ps1" -VCenterId "<vCenterId>" -EnableGuestManagement -VMCountPerDeployment 3 -DryRun' -$trigger = New-ScheduledTaskTrigger -Daily -At 3am -Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "EnableVMs" -``` --Replace `<vCenterId>` with the ARM ID of your vCenter. --To unregister the task, run the following command: --```powershell -Unregister-ScheduledTask -TaskName "EnableVMs" -``` --## Approach D: Install Arc agents at scale using out-of-band approach --Arc agents can be installed directly on machines without relying on VMware tools or APIs. 
By following the out-of-band approach, first onboard the machines as Arc-enabled Server resources with Resource type as Microsoft.HybridCompute/machines. After that, perform **Link to vCenter** operation to update the machine's Kind property as VMware, enabling virtual lifecycle operations. --1. **Connect the machines as Arc-enabled Server resources:** Install Arc agents using Arc-enabled Server scripts. -- You can use any of the following automation approaches to install Arc agents at scale: -- - [Install Arc agents at scale using a Service Principal](../servers/onboard-service-principal.md). - - [Install Arc agents at scale using Configuration Manager script](../servers/onboard-configuration-manager-powershell.md). - - [Install Arc agents at scale with a Configuration Manager custom task sequence](../servers/onboard-configuration-manager-custom-task.md). - - [Install Arc agents at scale using Group policy](../servers/onboard-group-policy-powershell.md). - - [Install Arc agents at scale using Ansible playbook](../servers/onboard-ansible-playbooks.md). --2. **Link Arc-enabled Server resources to the vCenter:** The following commands will update the Kind property of Hybrid Compute machines as **VMware**. Linking the machines to vCenter will enable virtual lifecycle operations and power cycle operations (start, stop, etc.) on the machines. -- - The following command scans all the Arc for Server machines that belong to the vCenter in the specified subscription and links the machines with that vCenter. -- [!INCLUDE [azure-cli-subscription](./includes/azure-cli-subscription.md)] -- - The following command scans all the Arc for Server machines that belong to the vCenter in the specified Resource Group and links the machines with that vCenter. -- [!INCLUDE [azure-cli-all](./includes/azure-cli-all.md)] -- - The following command can be used to link an individual Arc for Server resource to vCenter. -- [!INCLUDE [azure-cli-specified-arc](./includes/azure-cli-specified-arc.md)] --## Next steps --[Set up and manage self-service access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md). |
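For reference, a filled-in form of the Approach B command earlier in this article; the resource names and credentials are placeholders, and `--https-proxy` is only needed when the agent connects through a proxy.

```azurecli
az connectedvmware vm guest-agent enable \
  --resource-group "contoso-rg" \
  --vm-name "contoso-vm01" \
  --username "Administrator" \
  --password "<vm-admin-password>" \
  --https-proxy "http://proxy.contoso.com:3128"
```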
azure-arc | Enable Virtual Hardware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-virtual-hardware.md | - Title: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter -description: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter. - Previously updated : 07/18/2024---------# Enable additional capabilities on Arc-enabled Server machines by linking to vCenter --If you have VMware machines connected to Azure via Arc-enabled Servers route, you can seamlessly get additional capabilities by deploying resource bridge and connecting vCenter to Azure. The additional capabilities include the ability to perform virtual machine lifecycle operations, such as create, resize, and power cycle operations such as start, stop, and so on. You can get additional capabilities without any disruption, retaining the VM extensions configured on the Arc-enabled Server machines. --Follow these steps [here](./quick-start-connect-vcenter-to-arc-using-script.md) to deploy the Arc Resource Bridge and connect vCenter to Azure. -->[!IMPORTANT] -> This article applies only if you've directly installed Arc agents on the VMware machines, and those machines are onboarded as *Microsoft.HybridCompute/machines* ARM resources before connecting vCenter to Azure by deploying Resource Bridge. --## Prerequisites --- An Azure subscription and resource group where you have *Azure Arc VMware Administrator role*. -- Your vCenter instance must be [onboarded](quick-start-connect-vcenter-to-arc-using-script.md) to Azure Arc.-- Arc-enabled Servers machines and vCenter resource must be in the same Azure region.--## Link Arc-enabled Servers machines to vCenter from Azure portal --1. Navigate to the Virtual machines inventory page of your vCenter in the Azure portal. --2. The Virtual machines that have Arc agent installed via Arc-enabled Servers route have **Link to vCenter** status under virtual hardware management. --3. Select **Link to vCenter** to open a pane that lists all the machines under vCenter with Arc agent installed but not linked to vCenter in Azure Arc. --4. Choose all the machines and select the option to link machines to vCenter. -- :::image type="content" source="media/enable-virtual-hardware/link-machine-to-vcenter.png" alt-text="Screenshot that shows the Link to vCenter page." lightbox="media/enable-virtual-hardware/link-machine-to-vcenter.png"::: --5. After linking to vCenter, the virtual hardware status reflects as **Enabled** for all the VMs, and you can perform [virtual hardware operations](./perform-vm-ops-through-azure.md). -- :::image type="content" source="media/enable-virtual-hardware/perform-virtual-hardware-operations.png" alt-text="Screenshot that shows the page for performing virtual hardware operations." lightbox="media/enable-virtual-hardware/perform-virtual-hardware-operations.png"::: -- After linking to vCenter, virtual lifecycle operations and power cycle operations are enabled on the machines, and the kind property of Hybrid Compute Machine is updated as VMware. --## Link Arc-enabled Server machines to vCenter using Azure CLI --Use the following az commands to link Arc-enabled Server machines to vCenter at scale. 
--**Create VMware resource from the specified Arc for Server machine in the vCenter** ---**Create VMware resources from all Arc for Server machines in the specified resource group belonging to that vCenter** ---**Create VMware resources from all Arc for Server machines in the specified subscription belonging to that vCenter** ---### Required Parameters --**--vcenter-id -v** --ARM ID of the vCenter to which the machines will be linked. --### Optional Parameters --**--ids** --One or more resource IDs (space-delimited). It must be a complete resource ID containing all the information of *Resource Id* arguments. You must provide either *--ids* or other *Resource Id* arguments. --**--name -n** --Name of the Microsoft.HybridCompute Machine resource. Provide this parameter if you want to convert a single machine to a VMware VM. --**--resource-group -g** --Name of the resource group that will be scanned for HCRP machines. -->[!NOTE] ->The default group configured using `az configure --defaults group=` is not used, and it must be specified explicitly. --**--subscription** --Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`. --#### Known issue - -During the first scan of the vCenter inventory after onboarding to Azure Arc-enabled VMware vSphere, Arc-enabled Servers machines will be discovered under vCenter inventory. If the Arc-enabled Server machines aren't discovered and you try to perform the **Enable in Azure** operation, you'll encounter the following error:<br> --*A machine '/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXXX/resourceGroups/rg-contoso/providers/Microsoft.HybridCompute/machines/testVM1' already exists with the specified virtual machine MoRefId: 'vm-4441'. The existing machine resource can be extended with private cloud capabilities by creating the VirtualMachineInstance resource under it.* --When you encounter this error message, you'll be able to perform the **Link to vCenter** operation in 10 minutes. Alternatively, you can use any of the Azure CLI commands listed above to link an existing Arc-enabled Server machine to vCenter. --## Next steps --[Set up and manage self-service access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md). |
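The commands referenced above come from the `connectedvmware` Azure CLI extension. As an illustration only (the command name `create-from-machines` is an assumption; confirm it with `az connectedvmware vm --help`), a sketch that links a single Arc-enabled Server machine to a vCenter using the parameters documented above:

```azurecli
# Assumed command name; parameters follow the Required/Optional parameters listed above.
az connectedvmware vm create-from-machines \
  --resource-group "contoso-rg" \
  --name "contoso-machine01" \
  --vcenter-id "/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/vcenters/contoso-vcenter"
```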
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md | - Title: What is Azure Arc-enabled VMware vSphere? -description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. - Previously updated : 08/08/2024---------# What is Azure Arc-enabled VMware vSphere? --Azure Arc-enabled VMware vSphere is an [Azure Arc](../overview.md) service that helps you simplify management of hybrid IT estate distributed across VMware vSphere and Azure. It does so by extending the Azure control plane to VMware vSphere infrastructure and enabling the use of Azure security, governance, and management capabilities consistently across VMware vSphere and Azure. --Arc-enabled VMware vSphere allows you to: --- Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Arc at scale. --- Perform various virtual machine (VM) operations directly from Azure, such as create, resize, delete, and power cycle operations such as start/stop/restart on VMware VMs consistently with Azure. --- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control](../../role-based-access-control/overview.md) (RBAC).--- Install the Azure connected machine agent at scale on VMware VMs to [govern, protect, configure, and monitor](../servers/overview.md#supported-cloud-operations) them.--- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.--> [!NOTE] -> For more information regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md). --## Onboard resources to Azure management at scale --Azure services such as Microsoft Defender for Cloud, Azure Monitor, Azure Update Manager, and Azure Policy provide a rich set of capabilities to secure, monitor, patch, and govern off-Azure resources via Arc. --By using Arc-enabled VMware vSphere's capabilities to discover your VMware estate and install the Arc agent at scale, you can simplify onboarding your entire VMware vSphere estate to these services. --## Set up self-service access for your teams to use vSphere resources using Azure Arc --Arc-enabled VMware vSphere extends Azure's control plane (Azure Resource Manager) to VMware vSphere infrastructure. This enables you to use Microsoft Entra ID-based identity management, granular Azure RBAC, and Azure Resource Manager (ARM) templates to help your app teams and developers get self-service access to provision and manage VMs on VMware vSphere environment, providing greater agility. --1. Virtualized Infrastructure Administrators/Cloud Administrators can connect a vCenter instance to Azure. --2. Administrators can then use the Azure portal to browse VMware vSphere inventory and register virtual machines resource pools, networks, and templates into Azure. --3. Administrators can provide app teams/developers fine-grained permissions on those VMware resources through Azure RBAC. --4. App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart). --5. 
App teams can use Azure Resource Manager (ARM) templates/Bicep (Infrastructure as Code) to deploy VMs as part of CI/CD pipelines. --## How does it work? --Arc-enabled VMware vSphere provides these capabilities by integrating with your VMware vCenter Server. To connect your VMware vCenter Server to Azure Arc, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) in your vSphere environment. Azure Arc resource bridge is a virtual appliance that hosts the components that communicate with your vCenter Server and Azure. --When a VMware vCenter Server is connected to Azure, the inventory of vSphere resources is automatically discovered. This inventory data is continuously kept in sync with the vCenter Server. --All guest OS-based capabilities are provided by enabling guest management (installing the Arc agent) on the VMs. Once guest management is enabled, VM extensions can be installed to use the Azure management capabilities. You can perform virtual hardware operations such as resizing, deleting, adding disks, and power cycling without guest management enabled. --## How is Arc-enabled VMware vSphere different from Arc-enabled Servers --The easiest way to think of this is as follows: --- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, in some cases there might not even be a host hypervisor.--- Azure Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. Azure Arc-enabled VMware vSphere also provides guest operating system management; in fact, it uses the same components as Azure Arc-enabled servers. --You have the flexibility to start with either option and incorporate the other one later without any disruption. With both options, you get the same consistent experience. --## Supported VMware vSphere versions --Azure Arc-enabled VMware vSphere currently works with vCenter Server versions 7 and 8. --> [!NOTE] -> Azure Arc-enabled VMware vSphere supports vCenters with a maximum of 9,500 VMs. If your vCenter has more than 9,500 VMs, we don't recommend using Arc-enabled VMware vSphere with it at this point. --If you're trying to enable Arc for Azure VMware Solution (AVS) private cloud, see [Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). --## Supported regions --You can use Azure Arc-enabled VMware vSphere in these supported regions: --- East US-- East US 2-- West US 2-- West US 3-- Central US-- North Central US-- South Central US-- Canada Central-- UK West-- UK South-- North Europe-- West Europe-- Sweden Central-- Japan East-- East Asia-- Southeast Asia-- Central India-- Australia East--For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see the [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page. 
--## Data Residency --Azure Arc-enabled VMware vSphere doesn't store/process customer data outside the region the customer deploys the service instance in. --## Azure Kubernetes Service (AKS) Arc on VMware (preview) --Starting March 2024, Azure Kubernetes Service (AKS) enabled by Azure Arc on VMware is available for preview. AKS Arc on VMware enables you to use Azure Arc to create new Kubernetes clusters on VMware vSphere. For more information, see [What is AKS enabled by Arc on VMware?](/azure/aks/hybrid/aks-vmware-overview). --The following capabilities are available in the AKS Arc on VMware preview: --- **Simplified infrastructure deployment on Arc-enabled VMware vSphere**: Onboard VMware vSphere to Azure using a single-step process with the AKS Arc extension installed.-- **Azure CLI**: A consistent command-line experience, with [AKS Arc on Azure Stack HCI 23H2](/azure/aks/hybrid/aks-create-clusters-cli), for creating and managing Kubernetes clusters. Note that the preview only supports a limited set of commands.-- **Cloud-based management**: Use familiar tools such as Azure CLI to create and manage Kubernetes clusters on VMware.-- **Support for managing and scaling node pools and clusters**.--## Next steps --- Plan your resource bridge deployment by reviewing the [support matrix for Arc-enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md).-- Once ready, [connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).-- Try out Arc-enabled VMware vSphere by using the [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_vsphere). |
azure-arc | Perform Vm Ops Through Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/perform-vm-ops-through-azure.md | - Title: Perform VM operations on VMware VMs through Azure -description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent. - Previously updated : 03/12/2024--------# Manage VMware VMs in Azure through Arc-enabled VMware vSphere --In this article, you learn how to perform various operations on the Azure Arc-enabled VMware vSphere VMs such as: --- Start, stop, and restart a VM--- Control access and add Azure tags--- Add, remove, and update network interfaces--- Add, remove, and update disks and update VM size (CPU cores, memory)--- Enable guest management--- Install extensions (enabling guest management is required). All the [extensions](../servers/manage-vm-extensions.md#extensions) that are available with Arc-enabled Servers are supported.---To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM. --## Enable guest management --Before you can install an extension, you must enable guest management on the VMware VM. --1. Make sure your target machine: -- - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). -- - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/network-requirements.md#urls) aren't blocked. -- - has VMware tools installed and running. -- - is powered on and the resource bridge has network connectivity to the host running the VM. -- >[!NOTE] - >If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` to the end of the file. Make sure to replace `<username>`. - > - >If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. Search for and select the VMware VM for which you want to enable guest management and select **Configuration**. --3. Select **Enable guest management** and provide the administrator username and password to enable guest management. Then select **Apply**. -- For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group. --## Delete a VM --If you no longer need the VM, you can delete it. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. Search for and select the VM you want to delete. --3. In the selected VM's Overview page, select **Delete**. --4. When prompted, confirm that you want to delete it. -->[!NOTE] ->This also deletes the VM in your VMware vCenter. --## Next steps --[Tutorial - Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). |
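The steps above use the Azure portal. Guest management can typically also be enabled from Azure CLI; the sketch below assumes the `connectedvmware` CLI extension and the `az connectedvmware vm guest-agent enable` command, so verify the command and its parameters with `az connectedvmware vm guest-agent enable --help` in your environment before relying on it.

```bash
# Hedged sketch - assumes the connectedvmware Azure CLI extension is available
az extension add --name connectedvmware

# Enable guest management (installs the Arc connected machine agent) on a VM
az connectedvmware vm guest-agent enable \
  --resource-group "<resource-group>" \
  --vm-name "<vm-name>" \
  --username "Administrator" \
  --password "<admin-password>"
```

For Linux VMs, pass the root account; for Windows, use an account that is a member of the Local Administrators group, as noted above.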
azure-arc | Quick Start Connect Vcenter To Arc Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md | - Title: Connect VMware vCenter Server to Azure Arc by using the helper script -description: In this quickstart, you learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. -- Previously updated : 09/04/2024-------# Customer intent: As a VI admin, I want to connect my vCenter Server instance to Azure to enable self-service through Azure Arc. ---# Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script --To start using the Azure Arc-enabled VMware vSphere features, you need to connect your VMware vCenter Server instance to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server instance to Azure Arc by using a helper script. --First, the script deploys a virtual appliance called [Azure Arc resource bridge](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc. --> [!IMPORTANT] -> This article describes a way to connect a generic vCenter Server to Azure Arc. If you're trying to enable Arc for Azure VMware Solution (AVS) private cloud, follow this guide instead - [Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). With the Arc for AVS onboarding process you need to provide fewer inputs and Arc capabilities are better integrated into the AVS private cloud portal experience. --## Prerequisites --### Azure --- An Azure subscription.--- A resource group in the subscription where you have the *Owner*, *Contributor*, or *Azure Arc VMware Private Clouds Onboarding* role for onboarding.--### Azure Arc Resource Bridge --- Azure Arc resource bridge IP needs access to the URLs listed [here](../vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md#resource-bridge-networking-requirements).--### vCenter Server --- vCenter Server version 7 or 8.--- A virtual network that can provide internet access, directly or through a proxy. It must also be possible for VMs on this network to communicate with the vCenter server on TCP port (usually 443).--- At least three free static IP addresses on the above network.--- A resource pool or a cluster with a minimum capacity of 8 GB of RAM and 4 vCPUs.--- A datastore with a minimum of 200 GB of free disk space or 400 GB for High Availability deployment, available through the resource pool or cluster.--> [!NOTE] -> Azure Arc-enabled VMware vSphere supports vCenter Server instances with a maximum of 9,500 virtual machines (VMs). If your vCenter Server instance has more than 9,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point. --### vSphere account --You need a vSphere account that can: -- Read all inventory. -- Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc.--> [!IMPORTANT] -> As part of the Azure Arc-enabled VMware onboarding script, you will be prompted to provide a vSphere account to deploy the Azure Arc resouce bridge VM on the ESXi host. This account will be stored locally within the Azure Arc resource bridge VM and encrypted as a Kubernetes secret at rest. The vSphere account allows Azure Arc-enabled VMware to interact with VMware vSphere. 
If your organization practices routine credential rotation, you must [update the credentials in Azure Arc-enabled VMware](administer-arc-vmware.md#updating-the-vsphere-account-credentials-using-a-new-password-or-a-new-vsphere-account-after-onboarding) to maintain the connection between Azure Arc-enabled VMware and VMware vSphere. ---### Workstation --You need a Windows or Linux machine that can access both your vCenter Server instance and the internet, directly or through a proxy. The workstation must also have outbound network connectivity to the ESXi host backing the datastore. Datastore connectivity is needed for uploading the Arc resource bridge image to the datastore as part of the onboarding. --## Prepare vCenter Server --1. Create a resource pool with a reservation of at least 16 GB of RAM and four vCPUs. It should also have access to a datastore with at least 100 GB of free disk space. --2. Ensure that the vSphere accounts have the appropriate permissions. --## Download the onboarding script --1. Go to the Azure portal. --2. Search for **Azure Arc** and select it. --3. On the **Overview** page, select **Add** under **Add your infrastructure for free** or move to the **Infrastructure** tab. --4. In the **Platform** section, select **Add** under **VMware vCenter**. -- :::image type="content" source="media/quick-start-connect-vcenter-to-arc-using-script/add-vmware-vcenter.png" alt-text="Screenshot that shows how to add VMware vCenter through Azure Arc."::: --5. Select **Create a new resource bridge**, and then select **Next**. --6. Provide a name of your choice for the Azure Arc resource bridge. For example: **contoso-nyc-resourcebridge**. --7. Select a subscription and resource group where the resource bridge will be created. --8. Under **Region**, select an Azure location where the resource metadata will be stored. Currently, the supported regions are **East US**, **West Europe**, **Australia East**, and **Canada Central**. --9. Provide a name for **Custom location**. You'll see this name when you deploy VMs. Name it for the datacenter or the physical location of your datacenter. For example: **contoso-nyc-dc**. --10. Leave **Use the same subscription and resource group as your resource bridge** selected. --11. Provide a name for your vCenter Server instance in Azure. For example: **contoso-nyc-vcenter**. --12. You can choose to **Enable Kubernetes Service on VMware [Preview]**. If you choose to do so, please ensure you update the namespace of your custom location to "default" in the onboarding script: $customLocationNamespace = ("default".ToLower() -replace '[^a-z0-9-]', ''). For more details about this update, refer to the [known issues from AKS on VMware (preview)](/azure/aks/hybrid/aks-vmware-known-issues) --13. Select **Next: Download and run script**. --14. If your subscription isn't registered with all the required resource providers, a **Register** button will appear. Select the button before you proceed to the next step. -- :::image type="content" source="media/quick-start-connect-vcenter-to-arc-using-script/register-arc-vmware-providers.png" alt-text="Screenshot that shows the button to register required resource providers during vCenter onboarding to Azure Arc."::: --15. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the [workstation](#prerequisites). --16. If you want to see the status of your onboarding after you run the script on your workstation, select **Next: Verification**. 
Closing this page won't affect the onboarding. --## Run the script --Use the following instructions to run the script, depending on which operating system your machine is using. --### Windows --1. Open a PowerShell window as an Administrator and go to the folder where you've downloaded the PowerShell script. -- > [!NOTE] - > On Windows workstations, the script must be run in PowerShell window and not in PowerShell Integrated Script Editor (ISE) as PowerShell ISE doesn't display the input prompts from Azure CLI commands. If the script is run on PowerShell ISE, it could appear as though the script is stuck while it is waiting for input. --2. Run the following command to allow the script to run, because it's an unsigned script. (If you close the session before you complete all the steps, run this command again for the new session.) -- ``` powershell-interactive - Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass - ``` --3. Run the script: -- ``` powershell-interactive - ./resource-bridge-onboarding-script.ps1 - ``` --### Linux --1. Open the terminal and go to the folder where you've downloaded the Bash script. --2. Run the script by using the following command: -- ``` sh - bash resource-bridge-onboarding-script.sh - ``` --## Inputs for the script --A typical onboarding that uses the script takes 30 to 60 minutes. During the process, you're prompted for the following details: --| **Requirement** | **Details** | -| | | -| **Azure login** | When you're prompted, go to the [device sign-in page](https://www.microsoft.com/devicelogin), enter the authorization code shown in the terminal, and sign in to Azure. | -| **vCenter FQDN/Address** | Enter the fully qualified domain name for the vCenter Server instance (or an IP address). For example: **10.160.0.1** or **nyc-vcenter.contoso.com**. | -| **vCenter Username** | Enter the username for the vSphere account. The required permissions for the account are listed in the [prerequisites](#prerequisites). | -| **vCenter password** | Enter the password for the vSphere account. | -| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge VM should be deployed. | -| **Network selection** | Select the name of the virtual network or segment to which the Azure Arc resource bridge VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). | -| **Static IP** | Arc Resource Bridge requires static IP address assignment and DHCP isn't supported. </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. 
</br>| -| **Control Plane IP address** | Azure Arc resource bridge runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by IP address prefix. <br> - If you're using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). | -| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge VM will be deployed. | -| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge VM. | -| **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. | -| **Appliance proxy settings** | Enter **y** if there's a proxy in your appliance network. Otherwise, enter **n**. </br> You need to populate the following boxes when you have a proxy set up: </br> 1. **Http**: Address of the HTTP proxy server. </br> 2. **Https**: Address of the HTTPS proxy server. </br> 3. **NoProxy**: Addresses to be excluded from the proxy. </br> 4. **CertificateFilePath**: For SSL-based proxies, the path to the certificate to be used. --After the command finishes running, your setup is complete. You can now use the capabilities of Azure Arc-enabled VMware vSphere. --> [!IMPORTANT] -> After the successful installation of Azure Arc Resource Bridge, it's recommended to retain a copy of the resource bridge config.yaml files in a place that facilitates easy retrieval. These files could be needed later to run commands to perform management operations (e.g. [az arcappliance upgrade](/cli/azure/arcappliance/upgrade#az-arcappliance-upgrade-vmware)) on the resource bridge. You can find the three .yaml files (config files) in the same folder where you ran the script. --## Recovering from failed deployments --If the Azure Arc resource bridge deployment fails, consult the [Azure Arc resource bridge troubleshooting document](../resource-bridge/troubleshoot-resource-bridge.md). While there can be many reasons why the Azure Arc resource bridge deployment fails, one of them is KVA timeout error. For more information about the KVA timeout error and how to troubleshoot it, see [KVA timeout error](../resource-bridge/troubleshoot-resource-bridge.md#kva-timeout-error). --To clean up the installation and retry the deployment, use the following commands. --### Retry command - Windows --Run the command with ```-Force``` to clean up the installation and onboard again. --```powershell-interactive -./resource-bridge-onboarding-script.ps1 -Force -``` --### Retry command - Linux --Run the command with ```--force``` to clean up the installation and onboard again. -```bash -bash resource-bridge-onboarding-script.sh --force -``` --## Next steps --- [Browse and enable VMware vCenter resources in Azure](browse-and-enable-vcenter-resources-in-azure.md) |
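Beyond checking the script output, you can confirm that onboarding created healthy Azure resources. The commands below are a minimal sketch: the resource names reuse the examples from this quickstart (`contoso-nyc-resourcebridge`, `contoso-nyc-vcenter`), and the queried property names (`status`, `connectionStatus`) are assumptions to verify against the actual output of the `show` commands.

```bash
# Hedged sketch - confirm the resource bridge and vCenter resources after onboarding
az arcappliance show \
  --resource-group "<resource-group>" \
  --name "contoso-nyc-resourcebridge" \
  --query "status"

az connectedvmware vcenter show \
  --resource-group "<resource-group>" \
  --name "contoso-nyc-vcenter" \
  --query "connectionStatus"
```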
azure-arc | Quick Start Create A Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md | - Title: Create a virtual machine on VMware vCenter using Azure Arc -description: In this quickstart, you learn how to create a virtual machine on VMware vCenter using Azure Arc - Previously updated : 08/29/2024--zone_pivot_groups: vmware-portal-bicep-terraform ------# Customer intent: As a self-service user, I want to provision a VM using vCenter resources through Azure so that I can deploy my code ---# Create a virtual machine on VMware vCenter using Azure Arc ---This article describes how to provision a VM using vCenter resources from Azure portal. --## Create a VM in the Azure portal --Once your administrator has connected a VMware vCenter to Azure, represented VMware vCenter resources in Azure, and provided you with permissions on those resources, you'll create a virtual machine. --### Prerequisites --- An Azure subscription and resource group where you have an Arc VMware VM contributor role.-- A resource pool/cluster/host on which you have Arc Private Cloud Resource User Role.-- A virtual machine template resource on which you have Arc Private Cloud Resource User Role.-- A virtual network resource on which you have Arc Private Cloud Resource User Role.--Follow these steps to create VM in the Azure portal: --1. From your browser, go to the [Azure portal](https://portal.azure.com). Navigate to virtual machines browse view. You'll see a unified browse experience for Azure and Arc virtual machines. -- :::image type="content" source="media/quick-start-create-a-vm/browse-virtual-machines.png" alt-text="Screenshot showing the unified browse experience for Azure and Arc virtual machines." lightbox="media/quick-start-create-a-vm/browse-virtual-machines.png"::: --2. Select **Add** and then select **Azure Arc machine** from the drop-down. -- :::image type="content" source="media/quick-start-create-a-vm/create-azure-arc-virtual-machine.png" alt-text="Screenshot showing the Basic tab for creating an Azure Arc virtual machine." lightbox="media/quick-start-create-a-vm/create-azure-arc-virtual-machine.png"::: --3. Select the **Subscription** and **Resource group** where you want to deploy the VM. --4. Provide the **Virtual machine name** and then select a **Custom location** that your administrator has shared with you. -- If multiple kinds of VMs are supported, select **VMware** from the **Virtual machine kind** drop-down. --5. Select the **Resource pool/cluster/host** into which the VM should be deployed. --6. Select the **datastore** that you want to use for storage. --7. Select the **Template** based on which you'll create the VM. -- >[!TIP] - >You can override the template defaults for **CPU Cores** and **Memory**. -- If you selected a Windows template, provide a **Username**, **Password** for the **Administrator account**. --8. (Optional) Change the disks configured in the template. For example, you can add more disks or update existing disks. All the disks and VM will be on the datastore selected in step 6. --9. (Optional) Change the network interfaces configured in the template. For example, you can add network interface (NIC) cards or update existing NICs. You can also change the network to which this NIC will be attached, provided you have appropriate permissions to the network resource. --10. (Optional) Add tags to the VM resource if necessary. --11. Select **Create** after reviewing all the properties. 
It should take a few minutes to create the VM. ----This article describes how to provision a VM using vCenter resources using a Bicep template. --## Create an Arc VMware machine using Bicep template --The following bicep template can be used to create an Arc VMware machine. [Here](/azure/templates/microsoft.connectedvmwarevsphere/2023-12-01/virtualmachineinstances?pivots=deployment-language-arm-template) is the list of available Azure Resource Manager (ARM), Bicep, and Terraform templates for Arc-enabled VMware resources. To trigger any other Arc operation, convert the corresponding [ARM template to Bicep template](/azure/azure-resource-manager/bicep/decompile#decompile-from-json-to-bicep). --```bicep -// Parameters -param vmName string = 'contoso-vm' -param vmAdminPassword string = 'examplepassword!#' -param vCenterId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/vcenters/contoso-vcenter' -param templateId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/VirtualMachineTemplates/contoso-template-win22' -param resourcePoolId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/ResourcePools/contoso-respool' -param datastoreId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/Datastores/contoso-datastore' -param networkId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/VirtualNetworks/contoso-network' -param extendedLocation object = { - type: 'customLocation' - name: '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso-customlocation' -} -param ipSettings object = { - allocationMethod: 'static' - gateway: ['172.24.XXX.1'] - ipAddress: '172.24.XXX.105' - subnetMask: '255.255.255.0' - dnsServers: ['172.24.XXX.9'] -} --resource contosoMachine 'Microsoft.HybridCompute/machines@2023-10-03-preview' = { - name: vmName - location:'westeurope' - kind:'VMware' - properties:{} - tags: { - foo: 'bar' - } -} --resource vm 'Microsoft.ConnectedVMwarevSphere/virtualMachineInstances@2023-12-01' = { - name: 'default' - scope: contosoMachine - extendedLocation: extendedLocation - properties: { - hardwareProfile: { - memorySizeMB: 4096 - numCPUs: 2 - } - osProfile: { - computerName: vmName - adminPassword: vmAdminPassword - } - placementProfile: { - resourcePoolId: resourcePoolId - datastoreId: datastoreId - } - infrastructureProfile: { - templateId: templateId - vCenterId: vCenterId - } - networkProfile: { - networkInterfaces: [ - { - nicType: 'vmxnet3' - ipSettings: ipSettings - networkId: networkId - name: 'VLAN103NIC' - powerOnBoot: 'enabled' - } - ] - } - } -} --// Outputs -output vmId string = vm.id --``` ---This article describes how to provision a VM using vCenter resources using a Terraform template. --## Create an Arc VMware machine with Terraform --### Prerequisites --- **Azure Subscription**: Ensure you have an active Azure subscription.-- **Terraform**: Install Terraform on your machine.-- **Azure CLI**: Install Azure CLI to authenticate and manage resources.--Follow these steps to create an Arc VMware machine using Terraform. The following two scenarios are covered in this article: --1. 
For VMs discovered in the vCenter inventory, perform the **Enable in Azure** operation and install Arc agents. -2. Create a new Arc VMware VM using a template, resource pool, and datastore, and install Arc agents. --### Scenario 1 --For VMs discovered in the vCenter inventory, perform the **Enable in Azure** operation and install Arc agents. --#### Step 1: Define variables in a variables.tf file --Create a file named *variables.tf* and define all the necessary variables. --```terraform -variable "subscription_id" { - description = "The subscription ID for the Azure account." - type = string -} - -variable "resource_group_name" { - description = "The name of the resource group." - type = string -} - -variable "location" { - description = "The location/region where the resources will be created." - type = string -} - -variable "machine_name" { - description = "The name of the machine." - type = string -} - -variable "inventory_item_id" { - description = "The ID of the Inventory Item for the VM." - type = string -} - -variable "custom_location_id" { - description = "The ID of the custom location." - type = string -} - -variable "vm_username" { - description = "The admin username for the VM." - type = string -} - -variable "vm_password" { - description = "The admin password for the VM." - type = string -} --``` -#### Step 2: Create a tfvars file --Create a file named *CreateVMwareVM.tfvars* and provide sample values for the variables. --```terraform -subscription_id = "your-subscription-id" -resource_group_name = "your-resource-group" -location = "eastus" -machine_name = "test_machine0001" -inventory_item_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/VCenters/your-vcenter-id/InventoryItems/your-inventory-item-id" -custom_location_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ExtendedLocation/customLocations/your-custom-location-id" -vm_username = "Administrator" -vm_password = "your-admin-password" --``` --#### Step 3: Modify the configuration to use variables --Create a file named *main.tf* and insert the following code. 
--```terraform -terraform { - required_providers { - azurerm = { - source = "hashicorp/azurerm" - version = ">= 3.0" - } - azapi = { - source = "azure/azapi" - version = ">= 1.0.0" - } - } -} - -# Configure the AzureRM provider with the subscription ID -provider "azurerm" { - features {} - subscription_id = var.subscription_id -} - -# Configure the AzAPI provider with the subscription ID -provider "azapi" { - subscription_id = var.subscription_id -} - -# Retrieve the resource group details -data "azurerm_resource_group" "example" { - name = var.resource_group_name -} - -# Create a VMware machine resource in Azure -resource "azapi_resource" "test_machine0001" { - schema_validation_enabled = false - parent_id = data.azurerm_resource_group.example.id - type = "Microsoft.HybridCompute/machines@2023-06-20-preview" - name = var.machine_name - location = data.azurerm_resource_group.example.location - body = jsonencode({ - kind = "VMware" - identity = { - type = "SystemAssigned" - } - }) -} - -# Create a Virtual Machine instance using the VMware machine and Inventory Item ID -resource "azapi_resource" "test_inventory_vm0001" { - schema_validation_enabled = false - type = "Microsoft.ConnectedVMwarevSphere/VirtualMachineInstances@2023-10-01" - name = "default" - parent_id = azapi_resource.test_machine0001.id - body = jsonencode({ - properties = { - infrastructureProfile = { - inventoryItemId = var.inventory_item_id - } - } - extendedLocation = { - type = "CustomLocation" - name = var.custom_location_id - } - }) - depends_on = [azapi_resource.test_machine0001] -} - -# Install Arc agent on the VM -resource "azapi_resource" "guestAgent" { - type = "Microsoft.ConnectedVMwarevSphere/virtualMachineInstances/guestAgents@2023-10-01" - parent_id = azapi_resource.test_inventory_vm0001.id - name = "default" - body = jsonencode({ - properties = { - credentials = { - username = var.vm_username - password = var.vm_password - } - provisioningAction = "install" - } - }) - schema_validation_enabled = false - ignore_missing_property = false - depends_on = [azapi_resource.test_inventory_vm0001] -} --``` -#### Step 4: Run Terraform commands --Use the -var-file flag to pass the *.tfvars* file during Terraform commands. --1. Initialize Terraform (if not already initialized): -`terraform init` -2. Validate the configuration: -`terraform validate -var-file="CreateVMwareVM.tfvars"` -3. Plan the changes: -`terraform plan -var-file="CreateVMwareVM.tfvars"` -4. Apply the changes: -`terraform apply -var-file="CreateVMwareVM.tfvars"` --Confirm the prompt by entering yes to apply the changes. --### Best practices --- **Use version control**: Keep your Terraform configuration files under version control (for example, Git) to track changes over time.-- **Review plans carefully**: Always review the output of terraform plan before applying changes to ensure that you understand what changes will be made.-- **State management**: Regularly back up your Terraform state files to avoid data loss.--By following these steps, you can effectively create and manage HCRP and Arc VMware VMs on Azure using Terraform and install guest agents on the created VMs. --### Scenario 2 --Create a new Arc VMware VM using templates, Resource pool, Datastore and install Arc agents. --#### Step 1: Define variables in a variables.tf file --Create a file named variables.tf and define all the necessary variables. --```terraform -variable "subscription_id" { - description = "The subscription ID for the Azure account." 
- type = string -} - -variable "resource_group_name" { - description = "The name of the resource group." - type = string -} - -variable "location" { - description = "The location/region where the resources will be created." - type = string -} - -variable "machine_name" { - description = "The name of the machine." - type = string -} - -variable "vm_username" { - description = "The admin username for the VM." - type = string -} - -variable "vm_password" { - description = "The admin password for the VM." - type = string -} - -variable "template_id" { - description = "The ID of the VM template." - type = string -} - -variable "vcenter_id" { - description = "The ID of the vCenter." - type = string -} - -variable "resource_pool_id" { - description = "The ID of the resource pool." - type = string -} - -variable "datastore_id" { - description = "The ID of the datastore." - type = string -} - -variable "custom_location_id" { - description = "The ID of the custom location." - type = string -} --``` --#### Step 2: Create tfvars file --Create a file named *CreateVMwareVM.tfvars* and provide sample values for the variables. --```terraform -subscription_id = "your-subscription-id" -resource_group_name = "your-resource-group" -location = "eastus" -machine_name = "test_machine0002" -vm_username = "Administrator" -vm_password = "*********" -template_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/virtualmachinetemplates/your-template-id" -vcenter_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/VCenters/your-vcenter-id" -resource_pool_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/resourcepools/your-resource-pool-id" -datastore_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/datastores/your-datastore-id" -custom_location_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ExtendedLocation/customLocations/your-custom-location-id" --``` --#### Step 3: Modify the configuration to use variables --Create a file named *main.tf* and insert the following code. 
--```terraform -terraform { - required_providers { - azurerm = { - source = "hashicorp/azurerm" - version = ">= 3.0" - } - azapi = { - source = "azure/azapi" - version = ">= 1.0.0" - } - } -} - -# Configure the AzureRM provider with the subscription ID -provider "azurerm" { - features {} - subscription_id = var.subscription_id -} - -# Configure the AzAPI provider with the subscription ID -provider "azapi" { - subscription_id = var.subscription_id -} - -# Retrieve the resource group details -data "azurerm_resource_group" "example" { - name = var.resource_group_name -} - -# Create a VMware machine resource in Azure -resource "azapi_resource" "test_machine0002" { - schema_validation_enabled = false - parent_id = data.azurerm_resource_group.example.id - type = "Microsoft.HybridCompute/machines@2023-06-20-preview" - name = var.machine_name - location = data.azurerm_resource_group.example.location - body = jsonencode({ - kind = "VMware" - identity = { - type = "SystemAssigned" - } - }) -} - -# Create a Virtual Machine instance using the VMware machine created above -resource "azapi_resource" "test_vm0002" { - schema_validation_enabled = false - type = "Microsoft.ConnectedVMwarevSphere/VirtualMachineInstances@2023-10-01" - name = "default" - parent_id = azapi_resource.test_machine0002.id - body = jsonencode({ - properties = { - infrastructureProfile = { - templateId = var.template_id - vCenterId = var.vcenter_id - } - - placementProfile = { - resourcePoolId = var.resource_pool_id - datastoreId = var.datastore_id - } - - osProfile = { - adminPassword = var.vm_password - } - } - extendedLocation = { - type = "CustomLocation" - name = var.custom_location_id - } - }) - depends_on = [azapi_resource.test_machine0002] -} - -# Create a guest agent for the VM instance -resource "azapi_resource" "guestAgent" { - type = "Microsoft.ConnectedVMwarevSphere/virtualMachineInstances/guestAgents@2023-10-01" - parent_id = azapi_resource.test_vm0002.id - name = "default" - body = jsonencode({ - properties = { - credentials = { - username = var.vm_username - password = var.vm_password - } - provisioningAction = "install" - } - }) - schema_validation_enabled = false - ignore_missing_property = false - depends_on = [azapi_resource.test_vm0002] -} --``` --#### Step 4: Run Terraform commands --Use the -var-file flag to pass the *.tfvars* file during Terraform commands. --1. Initialize Terraform (if not already initialized): -`terraform init` -2. Validate the configuration: -`terraform validate -var-file="CreateVMwareVM.tfvars"` -3. Plan the changes: -`terraform plan -var-file="CreateVMwareVM.tfvars"` -4. Apply the changes: -`terraform apply -var-file="CreateVMwareVM.tfvars"` --Confirm the prompt by entering yes to apply the changes. --### Best practices --- **Use version control**: Keep your Terraform configuration files under version control (for example, Git) to track changes over time.-- **Review plans carefully**: Always review the output of terraform plan before applying changes to ensure that you understand what changes will be made.-- **State management**: Regularly back up your Terraform state files to avoid data loss.---## Next steps --[Perform operations on VMware VMs in Azure](perform-vm-ops-through-azure.md). |
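If you later need to remove the resources created by either Terraform scenario, the standard Terraform workflow applies. This is a minimal sketch assuming you still have the same *.tfvars* file and Terraform state used for `terraform apply`; destroying removes the Azure resources defined in this configuration, including the guest agent resource.

```bash
# Preview what would be removed
terraform plan -destroy -var-file="CreateVMwareVM.tfvars"

# Remove the resources created by this configuration
terraform destroy -var-file="CreateVMwareVM.tfvars"
```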
azure-arc | Recover From Resource Bridge Deletion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion.md | - Title: Perform disaster recovery operations -description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios. -- Previously updated : 11/06/2023--------# Recover from accidental deletion of resource bridge VM --In this article, you learn how to restore the Azure Arc resource bridge connection to a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost, and any operations performed through Arc fail. --## Prerequisites --1. The disaster recovery script must be run from the same folder where the config (.yaml) files are present. The config files are present on the machine used to run the script to deploy Arc resource bridge. --1. The machine being used to run the script must have bidirectional connectivity to the Arc resource bridge VM on ports 6443 (Kubernetes API server) and 22 (SSH), and outbound connectivity to the Arc resource bridge VM on port 443 (HTTPS). --## Recover the Arc resource bridge after VM deletion --To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps. --1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources. --2. Find and delete the old Arc resource bridge template from your vCenter. --3. Download the [onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure. -- ```powershell - $location = <Azure region of the resources> - $applianceSubscriptionId = <subscription-id> - $applianceResourceGroupName = <resource-group-name> - $applianceName = <resource-bridge-name> - - $customLocationSubscriptionId = <subscription-id> - $customLocationResourceGroupName = <resource-group-name> - $customLocationName = <custom-location-name> - - $vCenterSubscriptionId = <subscription-id> - $vCenterResourceGroupName = <resource-group-name> - $vCenterName = <vcenter-name-in-azure> - ``` --4. [Run the onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `-Force` parameter. -- ``` powershell-interactive - ./resource-bridge-onboarding-script.ps1 -Force - ``` --5. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted. --6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources are manageable in Azure again. --## Next steps --[Troubleshoot Azure Arc resource bridge issues](../resource-bridge/troubleshoot-resource-bridge.md) --If the recovery steps mentioned above are unsuccessful in restoring the Arc resource bridge to its original state, try one of the following channels for support: --- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).-- Connect with [@AzureSupport](https://x.com/azuresupport), the official Microsoft Azure account for improving customer experience. 
Azure Support connects the Azure community to answers, support, and experts.-- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
azure-arc | Remove Vcenter From Arc Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md | - Title: Remove your VMware vCenter environment from Azure Arc -description: This article explains the steps to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere and delete related Azure Arc resources from Azure. ---- Previously updated : 03/12/2024----# Customer intent: As an infrastructure admin, I want to cleanly remove my VMware vCenter environment from Azure Arc-enabled VMware vSphere. ----# Remove your VMware vCenter environment from Azure Arc --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --In this article, you learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, follow the steps in the article to: --1. Remove guest management from VMware virtual machines -2. Remove your VMware vCenter environment from Azure Arc -3. Remove Arc resource bridge related items in your vCenter --## 1. Remove guest management from VMware virtual machines --To prevent continued billing of Azure management services after you remove the vSphere environment from Azure Arc, you must first cleanly remove guest management from all Arc-enabled VMware vSphere virtual machines where it was enabled. -When you enable guest management on Arc-enabled VMware vSphere virtual machines, the Arc connected machine agent is installed on them. --Once guest management is enabled, you can install VM extensions on them and use Azure management services like the Log Analytics on them. -To cleanly remove guest management, you must follow the steps below to remove any VM extensions from the virtual machine, disconnect the agent, and uninstall the software from your virtual machine. It's important to complete each of the three steps to fully remove all related software components from your virtual machines. --### Step 1: Remove VM extensions --If you have deployed Azure VM extensions to an Azure Arc-enabled VMware vSphere VM, you must uninstall the extensions before disconnecting the agent or uninstalling the software. Uninstalling the Azure Connected Machine agent doesn't automatically remove extensions, and they won't be recognized if you later connect the VM to Azure Arc again. -Uninstall extensions using the following steps: --1. Go to [Azure Arc center in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) --2. Select **VMware vCenters**. --3. Search and select the vCenter you want to remove from Azure Arc. -- :::image type="content" source="media/remove-vcenter-from-arc-vmware/browse-vmware-inventory.png" alt-text="Screenshot of where to browse your VMware Inventory from Azure portal." lightbox="media/remove-vcenter-from-arc-vmware/browse-vmware-inventory.png"::: --4. Select **Virtual machines** under **vCenter inventory**. --5. Search and select the virtual machine where you have Guest Management enabled. --6. Select **Extensions**. --7. 
Select the extensions and select **Uninstall** --### Step 2: Disconnect the agent from Azure Arc --Disconnecting the agent clears the local state of the agent and removes agent information from our systems. To disconnect the agent, sign-in and run the following command as an administrator/root account on the virtual machine. --```powershell - azcmagent disconnect --force-local-only -``` --### Step 3: Uninstall the agent --#### For Windows virtual machines --To uninstall the Windows agent from the machine, do the following: --1. Sign in to the computer with an account that has administrator permissions. -2. In Control Panel, select Programs and Features. -3. In Programs and Features, select Azure Connected Machine Agent, select Uninstall, and then select Yes. -4. Delete the `C:\Program Files\AzureConnectedMachineAgent` folder --#### For Linux virtual machines --To uninstall the Linux agent, the command to use depends on the Linux operating system. You must have `root` access permissions or your account must have elevated rights using sudo. --- For Ubuntu, run the following command:-- ```bash - sudo apt purge azcmagent - ``` --- For RHEL, CentOS, Oracle Linux run the following command:-- ```bash - sudo yum remove azcmagent - ``` --- For SLES, run the following command:-- ```bash - sudo zypper remove azcmagent - ``` --## 2. Remove your VMware vCenter environment from Azure Arc --You can remove your VMware vSphere resources from Azure Arc using either the deboarding script or manually. --### Remove VMware vSphere resources from Azure Arc using deboarding script --Download the [deboarding script](https://aka.ms/arcvmwaredeboard) to do a full cleanup of all the Arc-enabled VMware resources. The script removes all the Azure resources, including vCenter, custom location, virtual machines, virtual templates, hosts, clusters, resource pools, datastores, virtual networks, Azure Resource Manager (ARM) resource of Appliance, and the appliance VM running on vCenter. --#### Run the script -To run the deboarding script, follow these steps: --##### Windows -1. Open a PowerShell window as an Administrator and go to the folder where you've downloaded the PowerShell script. -- >[!Note] - >On Windows workstations, the script must be run in PowerShell window and not in PowerShell Integrated Script Editor (ISE), as PowerShell ISE doesn't display the input prompts from Azure CLI commands. If the script is run on PowerShell ISE, it can appear as though the script is stuck while it's waiting for input. --2. Run the following command to allow the script to run because it's an unsigned script. (If you close the session before you complete all the steps, run this command again for the new session.) -- ```powershell-interactive - Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass - ``` -3. Run the script. -- ```powershell-interactive - ./arcvmware-deboard.ps1 - ``` --#### Inputs for the script --- **vCenterId**: The Azure resource ID of the VMware vCenter resource. </br> For example: */subscriptions/204898ee-cd13-4332-1111-88ca5c11111c/resourceGroups/Synthetics/providers/Microsoft.ConnectedVMwarevSphere/VCenters/vcenterresource*--- **AVSId**: The Azure resource ID of the AVS instance. Specifying vCenterId or AVSId is mandatory.--- **ApplianceConfigFilePath (optional)**: Path to kubeconfig, output from deploy command. Providing applianceconfigfilepath also deletes the appliance VM running on the vCenter.--- **Force**: Using the Force flag deletes all the Azure resources without reaching resource bridge. 
Use this option if resource bridge VM isn't in running state.--### Remove VMware vSphere resources from Azure manually --If you aren't using the deboarding script, follow these steps to remove the VMware vSphere resources manually: -->[!NOTE] ->When you enable VMware vSphere resources in Azure, an Azure resource representing them is created. Before you can delete the vCenter resource in Azure, you must delete all the Azure resources that represent your related vSphere resources. --1. Go to [Azure Arc center in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) --2. Select **VMware vCenters**. --3. Search and select the vCenter you remove from Azure Arc. --4. Select **Virtual machines** under **vCenter inventory**. --5. Select all the VMs that have **Virtual hardware management** value as **Enabled**. --6. Select **Remove from Azure**. -- This action only removes these resource representations from Azure. The resources continue to remain in your vCenter. --7. Do the steps 4, 5, and 6 for **Clouds**, **VM networks**, and **VM templates** by performing **Remove from Azure** operation for resources with **Azure Enabled** value as **Yes**. --8. Once the deletion is complete, select **Overview**. --9. Note the **Custom location** and the **Azure Arc Resource bridge** resource in the **Essentials** section. --10. Select **Remove from Azure** to remove the vCenter resource from Azure. --11. Go to the **Custom location** resource and select **Delete**. --12. Go to the **Azure Arc Resource bridge** resource and select **Delete**. --At this point, all your Arc-enabled VMware vSphere resources are removed from Azure. --## 3. Remove Arc resource bridge related items in your vCenter --During onboarding, to create a connection between your VMware vCenter and Azure, an Azure Arc resource bridge is deployed in your VMware vSphere environment. As the last step, you must delete the resource bridge VM and the VM template created during the onboarding. --You can find both the virtual machine and the template on the resource pool/cluster/host that you provided during [Azure Arc-enabled VMware vSphere onboarding](quick-start-connect-vcenter-to-arc-using-script.md). --## Next steps --[Connect the vCenter to Azure Arc again](quick-start-connect-vcenter-to-arc-using-script.md). |
azure-arc | Setup And Manage Self Service Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/setup-and-manage-self-service-access.md | - Title: Set up and manage self-service access to VMware resources through Azure RBAC -description: Learn how to manage access to your on-premises VMware resources through Azure role-based access control (Azure RBAC). - Previously updated : 11/06/2023-------# Customer intent: As a VI admin, I want to manage access to my vCenter resources in Azure so that I can keep environments secure ---# Set up and manage self-service access to VMware resources --Once your VMware vSphere resources are enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them with access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure role-based access control (RBAC) and allow your teams to deploy and manage VMs. --## Prerequisites --- Your vCenter must be connected to Azure Arc.-- Your vCenter resources such as Resourcepools/clusters/hosts, networks, templates, and datastores must be Arc-enabled.-- You must have User Access Administrator or Owner role at the scope (resource group/subscription) to assign roles to other users.---## Provide access to use Arc-enabled vSphere resources --To provision VMware VMs and change their size, add disks, change network interfaces, or delete them, your users need to have permissions on the compute, network, storage, and to the VM template resources that they will use. These permissions are provided by the built-in **Azure Arc VMware Private Cloud User** role. --You must assign this role on individual resource pool (or cluster or host), network, datastore, and template that a user or a group needs to access. --1. Go to the [**VMware vCenters** list in Arc center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/vCenter). --2. Search and select your vCenter. --3. Navigate to the **Resourcepools/clusters/hosts** in **vCenter inventory** section in the table of contents. --3. Find and select resourcepool (or cluster or host). This takes you to the Arc resource representing the resourcepool. --4. Select **Access control (IAM)** in the table of contents. --5. Select **Add role assignments** on the **Grant access to this resource**. --6. Select **Azure Arc VMware Private Cloud User** role and select **Next**. --7. Select **Select members** and search for the Microsoft Entra user or group that you want to provide access. --8. Select the Microsoft Entra user or group name. Repeat this for each user or group to which you want to grant this permission. --9. Select **Review + assign** to complete the role assignment. --10. Repeat steps 3-9 for each datastore, network, and VM template that you want to provide access to. --If you have organized your vSphere resources into a resource group, you can provide the same role at the resource group scope. --Your users now have access to VMware vSphere cloud resources. However, your users also need to have permissions on the subscription/resource group where they would like to deploy and manage VMs. --## Provide access to subscription or resource group where VMs will be deployed --In addition to having access to VMware vSphere resources through the **Azure Arc VMware Private Cloud User**, your users must have permissions on the subscription and resource group where they deploy and manage VMs. 
--The **Azure Arc VMware VM Contributor** role is a built-in role that provides permissions to conduct all VMware virtual machine operations. --1. Go to the [Azure portal](https://portal.azure.com/). --2. Search for and navigate to the subscription or resource group to which you want to provide access. --3. Select **Access control (IAM)** in the table of contents on the left. --4. Select **Add role assignments** under **Grant access to this resource**. --5. Select the **Azure Arc VMware VM Contributor** role and select **Next**. --6. Select the option **Select members**, and search for the Microsoft Entra user or group to which you want to provide access. --8. Select the Microsoft Entra user or group name. Repeat this for each user or group to which you want to grant this permission. --9. Select **Review + assign** to complete the role assignment. ---## Next steps --[Tutorial - Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). |
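As an alternative to the portal steps above, both role assignments can be scripted with the Azure CLI. The following is a minimal sketch under the assumption that the target is a Microsoft Entra group and that the vSphere inventory resources and the VM deployment target live in two separate resource groups; the group object ID, subscription ID, and resource group names are placeholders, not values from the article.

```bash
# Placeholder values for illustration only.
GROUP_OBJECT_ID="<entra-group-object-id>"
SUBSCRIPTION_ID="<subscription-id>"
VSPHERE_RG="contoso-vsphere-resources-rg"   # holds resource pools, networks, datastores, templates
VM_RG="contoso-vm-deployments-rg"           # where VMs will be deployed

# Grant access to the Arc-enabled vSphere resources at resource group scope.
az role assignment create \
  --assignee "$GROUP_OBJECT_ID" \
  --role "Azure Arc VMware Private Cloud User" \
  --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$VSPHERE_RG"

# Grant permissions to create and manage VMs in the deployment resource group.
az role assignment create \
  --assignee "$GROUP_OBJECT_ID" \
  --role "Azure Arc VMware VM Contributor" \
  --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$VM_RG"
```

You can also pass the full resource ID of an individual resource pool, network, datastore, or template as the `--scope` value if you want to assign the role at the resource level rather than at the resource group.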
azure-arc | Support Matrix For Arc Enabled Vmware Vsphere | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md | - Title: Plan for deployment -description: Learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements, and more. - Previously updated : 09/04/2024-------# Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere. ---# Support matrix for Azure Arc-enabled VMware vSphere --This article documents the prerequisites and support requirements for using [Azure Arc-enabled VMware vSphere](overview.md) to manage your VMware vSphere VMs through Azure Arc. --To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can enable them in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc. --## VMware vSphere requirements --The following requirements must be met in order to use Azure Arc-enabled VMware vSphere. --### Supported vCenter Server versions --Azure Arc-enabled VMware vSphere works with vCenter Server versions 7 and 8. --> [!NOTE] -> Azure Arc-enabled VMware vSphere currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it's not recommended to use Arc-enabled VMware vSphere with it at this point. --### Required vSphere account privileges --You need a vSphere account that can: --- Read all inventory.-- Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc.-->[!Important] -> As part of the Azure Arc-enabled VMware onboarding script, you will be prompted to provide a vSphere account to deploy the Azure Arc resource bridge VM on the ESXi host. This account will be stored locally within the Azure Arc resource bridge VM and encrypted as a Kubernetes secret at rest. The vSphere account allows Azure Arc-enabled VMware to interact with VMware vSphere. If your organization practices routine credential rotation, you must [update the credentials in Azure Arc-enabled VMware](administer-arc-vmware.md#updating-the-vsphere-account-credentials-using-a-new-password-or-a-new-vsphere-account-after-onboarding) to maintain the connection between Azure Arc-enabled VMware and VMware vSphere. --### Resource bridge resource requirements --For Arc-enabled VMware vSphere, the resource bridge has the following minimum virtual hardware requirements: --- 8 GB of memory-- 4 vCPUs-- An external virtual switch that can provide access to the internet directly or through a proxy. If internet access is through a proxy or firewall, ensure [these URLs](#resource-bridge-networking-requirements) are allow-listed.--### Resource bridge networking requirements ---The following firewall URL exceptions are needed for the Azure Arc resource bridge VM: ---In addition, VMware vSphere requires the following: ---For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md). 
--## Azure role/permission requirements --The minimum Azure roles required for operations related to Arc-enabled VMware vSphere are as follows: --| **Operation** | **Minimum role required** | **Scope** | -| | | | -| Onboarding your vCenter Server to Arc | Azure Arc VMware Private Clouds Onboarding | On the subscription or resource group into which you want to onboard | -| Administering Arc-enabled VMware vSphere | Azure Arc VMware Administrator | On the subscription or resource group where the vCenter Server resource is created | -| VM Provisioning | Azure Arc VMware Private Cloud User | On the subscription or resource group that contains the resource pool/cluster/host, datastore, and virtual network resources, or on the resources themselves | -| VM Provisioning | Azure Arc VMware VM Contributor | On the subscription or resource group where you want to provision VMs | -| VM Operations | Azure Arc VMware VM Contributor | On the subscription or resource group that contains the VM, or on the VM itself | --Any roles with higher permissions on the same scope, such as Owner or Contributor, will also allow you to perform the operations listed above. --## Guest management (Arc agent) requirements --With Arc-enabled VMware vSphere, you can install the Arc connected machine agent on your VMs at scale and use Azure management services on the VMs. There are additional requirements for this capability. --To enable guest management (install the Arc connected machine agent), ensure the following: --- VM is powered on.-- VM has VMware tools installed and running.-- Resource bridge has access to the host on which the VM is running.-- VM is running a [supported operating system](#supported-operating-systems).-- VM has internet connectivity directly or through a proxy. If the connection is through a proxy, ensure [these URLs](#networking-requirements) are allow-listed.--Additionally, be sure that the requirements below are met in order to enable guest management. --### Supported operating systems --Make sure you're using a version of the Windows or Linux [operating systems that are officially supported for the Azure Connected Machine agent](../servers/prerequisites.md#supported-operating-systems). Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, aren't supported operating environments. --### Software requirements --Windows operating systems: --- .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).-- Windows PowerShell 5.1 is required. 
[Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).--Linux operating systems: --- systemd-- wget (to download the installation script)--### Networking requirements --The following firewall URL exceptions are needed for the Azure Arc agents: --| **URL** | **Description** | -| | | -| `aka.ms` | Used to resolve the download script during installation | -| `packages.microsoft.com` | Used to download the Linux installation package | -| `download.microsoft.com` | Used to download the Windows installation package | -| `login.windows.net` | Microsoft Entra ID | -| `login.microsoftonline.com` | Microsoft Entra ID | -| `pas.windows.net` | Microsoft Entra ID | -| `management.azure.com` | Azure Resource Manager - to create or delete the Arc server resource | -| `*.his.arc.azure.com` | Metadata and hybrid identity services | -| `*.guestconfiguration.azure.com` | Extension management and guest configuration services | -| `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | Notification service for extension and connectivity scenarios | -| `azgn*.servicebus.windows.net` | Notification service for extension and connectivity scenarios | -| `*.servicebus.windows.net` | For Windows Admin Center and SSH scenarios | -| `*.blob.core.windows.net` | Download source for Azure Arc-enabled servers extensions | -| `dc.services.visualstudio.com` | Agent telemetry | --## Next steps --- [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md) |
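If you need to confirm that a guest VM can reach the agent endpoints listed above before enabling guest management, a quick reachability probe from the guest OS can help. The sketch below is illustrative only (it checks HTTPS reachability for a few of the listed URLs, not the full allowlist); after installation, the Connected Machine agent also provides an `azcmagent check` command you can run for a more thorough connectivity check.

```bash
#!/usr/bin/env bash
# Illustrative reachability probe for a few of the endpoints listed above.
# Any HTTP status being returned suggests the endpoint isn't blocked by a firewall or proxy.
endpoints=(
  "https://aka.ms"
  "https://packages.microsoft.com"
  "https://login.microsoftonline.com"
  "https://management.azure.com"
)

for url in "${endpoints[@]}"; do
  status=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 10 "$url")
  echo "$url -> HTTP $status"
done
```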
azure-arc | Switch To New Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-version.md | - Title: Switch to the new version -description: Learn how to switch to the new version of Azure Arc-enabled VMware vSphere and use its capabilities. - Previously updated : 03/13/2024-------# Customer intent: As a VI admin, I want to switch to the new version of Arc-enabled VMware vSphere and leverage the associated capabilities. ---# Switch to the new version --On August 21, 2023, we rolled out major changes to **Azure Arc-enabled VMware vSphere**. By switching to the new version, you can use all the Azure management services that are available for Arc-enabled Servers. --If you onboarded to Azure Arc-enabled VMware vSphere before **August 21, 2023**, and your VMs were Azure-enabled, you'll encounter the following breaking changes: --- For the VMs with Arc agents, starting from **February 27, 2024**, you'll no longer be able to perform any Azure management service-related operations. -- From **April 1, 2024**, you'll no longer be able to perform any operations on the VMs, except the **Remove from Azure** operation. --To continue using these machines, follow these instructions to switch to the new version. --> [!NOTE] -> If you're new to Arc-enabled VMware vSphere, you'll be able to leverage the new capabilities by default. To get started with the new version, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). ---## Switch to the new version (Existing customer) --If you onboarded to **Azure Arc-enabled VMware** before August 21, 2023, for VMs that are Azure-enabled, follow these steps to switch to the new version: -->[!Note] ->If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc). --1. From your browser, go to the vCenters blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the vCenter resource. --2. Select all the virtual machines that are Azure enabled with the older version. --3. Select **Remove from Azure**. -- :::image type="VM Inventory view" source="media/switch-to-new-version/vm-inventory-view-inline.png" alt-text="Screenshot of VM Inventory view." lightbox="media/switch-to-new-version/vm-inventory-view-expanded.png"::: --4. After successful removal from Azure, enable the same resources again in Azure. --5. Once the resources are re-enabled, the VMs are auto switched to the new version. The VM resources will now be represented as **Machine - Azure Arc (VMware)**. -- :::image type=" New VM browse view" source="media/switch-to-new-version/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-version/new-vm-browse-view-expanded.png"::: - -## Next steps --[Create a virtual machine on VMware vCenter using Azure Arc](/azure/azure-arc/vmware-vsphere/quick-start-create-a-vm). |
azure-arc | Troubleshoot Guest Management Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md | - Title: Troubleshoot Guest Management Issues -description: Learn how to troubleshoot the guest management issues for Arc-enabled VMware vSphere. - Previously updated : 08/29/2024-------# Customer intent: As a VI admin, I want to understand the troubleshooting process for guest management issues. --# Troubleshoot Guest Management for Linux VMs --> [!CAUTION] -> This article references CentOS, a Linux distribution that is in End Of Life (EOL) status. Consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --This article provides information on how to troubleshoot and resolve the issues that can occur when you enable guest management on Arc-enabled VMware vSphere virtual machines. --## Troubleshoot issues while enabling Guest Management --# [Arc agent installation fails on a domain-joined Linux VM](#tab/linux) --**Error message**: Enabling Guest Management on a domain-joined Linux VM fails with the error message **InvalidGuestLogin: Failed to authenticate to the system with the credentials**. --**Resolution**: Before you enable Guest Management on a domain-joined Linux VM using Active Directory credentials, follow these steps to set the configuration on the VM: --1. In the SSSD configuration file (typically, */etc/sssd/sssd.conf*), add the following under the section for the domain: -- [domain/contoso.com] - ad_gpo_map_batch = +vmtoolsd --2. After making the changes to the SSSD configuration, restart the SSSD process. If SSSD is running as a system process, run `sudo systemctl restart sssd` to restart it. --### Additional information --The parameter `ad_gpo_map_batch` according to the [sssd man page](https://jhrozek.fedorapeople.org/sssd/1.13.4/man/sssd-ad.5.html): --A comma-separated list of Pluggable Authentication Module (PAM) service names for which GPO-based access control is evaluated based on the BatchLogonRight and DenyBatchLogonRight policy settings. --It's possible to add another PAM service name to the default set by using **+service_name** or to explicitly remove a PAM service name from the default set by using **-service_name**. For example, to replace a default PAM service name for this sign in (for example, **crond**) with a custom PAM service name (for example, **my_pam_service**), use this configuration: --`ad_gpo_map_batch = +my_pam_service, -crond` --Default: The default set of PAM service names includes: --- crond:-- `vmtoolsd` PAM is enabled for SSSD evaluation. For any request coming through VMware tools, SSSD is invoked since VMware tools use this PAM for authenticating to the Linux Guest VM. --#### References --- [Invoke VMScript to a domain-joined Ubuntu VM](https://communities.vmware.com/t5/VMware-PowerCLI-Discussions/Invoke-VMScript-to-an-domain-joined-Ubuntu-VM/td-p/2257554).--# [Arc agent installation fails on RHEL Linux distros](#tab/rhel) --**Applies to:**<br> -:heavy_check_mark: RedHat Linux :heavy_check_mark: CentOS :heavy_check_mark: Rocky Linux :heavy_check_mark: Oracle Linux :heavy_check_mark: SUSE Linux :heavy_check_mark: SUSE Linux Enterprise Server :heavy_check_mark: Alma Linux :heavy_check_mark: Fedora --**Error message**: Provisioning of the resource failed with Code: `AZCM0143`; Message: `install_linux_azcmagent.sh: installation error`. 
--**Workaround** --Before you enable the guest agent, follow these steps on the VM: --1. Create a file named `vmtools_unconfined_rpm_script_kcs5347781.te`, and add the following to it: -- ``` - policy_module(vmtools_unconfined_rpm_script_kcs5347781, 1.0) - gen_require(` - type vmtools_unconfined_t; - ') - optional_policy(` - rpm_transition_script(vmtools_unconfined_t,system_r) - ') - ``` --2. Install the package to build the policy module: -- `sudo yum -y install selinux-policy-devel` --3. Compile the module: -- `make -f /usr/share/selinux/devel/Makefile vmtools_unconfined_rpm_script_kcs5347781.pp` --4. Install the module: -- `sudo semodule -i vmtools_unconfined_rpm_script_kcs5347781.pp` --### Additional information --Track the issue through [BZ 1872245 - [VMware][RHEL 8] vmtools is not able to install rpms](https://bugzilla.redhat.com/show_bug.cgi?id=1872245). --When a command is executed using the `vmrun` command, the context of the `yum` or `rpm` command is `vmtools_unconfined_t`. --When `yum` or `rpm` executes scriptlets, the context changes to `rpm_script_t`, which is currently denied because of a missing rule in the SELinux policy. --#### References --- [Executing yum/rpm commands using VMware tools facility (vmrun) fails in error when packages have scriptlets](https://access.redhat.com/solutions/5347781).----## Next steps --If you don't see your problem here or you can't resolve your issue, try one of the following channels for support: --- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).--- Connect with [@AzureSupport](https://x.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.--- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
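Before retrying the agent installation, it can help to confirm that the workarounds above actually took effect. The following sketch is illustrative only: it checks that the custom SELinux policy module is loaded and, for the domain-joined scenario, that SSSD restarted cleanly after the `ad_gpo_map_batch` change.

```bash
# Verify the custom SELinux policy module from the RHEL workaround is installed.
sudo semodule -l | grep vmtools_unconfined_rpm_script_kcs5347781

# For the domain-joined Linux VM scenario, restart SSSD and confirm it's healthy
# after adding "ad_gpo_map_batch = +vmtoolsd" to /etc/sssd/sssd.conf.
sudo systemctl restart sssd
sudo systemctl status sssd --no-pager
```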
azure-cache-for-redis | Cache Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-insights-overview.md | Selecting any of the other tabs for **Performance** or **Operations** opens that ## Pin, export, and expand -To pin any metric section to an [Azure dashboard](../azure-portal/azure-portal-dashboards.md), select the pushpin symbol in the section's upper right. +To pin any metric section to an [Azure dashboard](/azure/azure-portal/azure-portal-dashboards), select the pushpin symbol in the section's upper right. :::image type="content" source="~/reusable-content/ce-skilling/azure/media/cosmos-db/pin.png" alt-text="Screenshot of metrics with the pushpin symbol highlighted."::: |
azure-cache-for-redis | Monitor Cache Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/monitor-cache-reference.md | The following list provides details and more information about the supported Azu - Geo-replication metrics - Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). + Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). The [Geo-Replication Dashboard](cache-insights-overview.md#workbooks) workbook is a simple and easy way to view all Premium-tier geo-replication metrics in the same place. This dashboard pulls together metrics that are only emitted by the geo-primary or geo-secondary, so they can be viewed simultaneously. The following list provides details and more information about the supported Azu - In caches on the Premium tier, this metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value. - This metric might indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning. - A value of 0 doesn't mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.- - If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). + - If the geo-replication link is unhealthy for over an hour, [file a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). - Gets - The number of get operations from the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_get`, `cmdstat_hget`, `cmdstat_hgetall`, `cmdstat_hmget`, `cmdstat_mget`, `cmdstat_getbit`, and `cmdstat_getrange`, and is equivalent to the sum of cache hits and misses during the reporting interval. |
azure-functions | Create First Function Arc Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-cli.md | ms.devlang: azurecli # Create your first function on Azure Arc (preview) -In this quickstart, you create an Azure Functions project and deploy it to a function app running on an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md). To learn more, see [App Service, Functions, and Logic Apps on Azure Arc](../app-service/overview-arc-integration.md). This scenario only supports function apps running on Linux. +In this quickstart, you create an Azure Functions project and deploy it to a function app running on an [Azure Arc-enabled Kubernetes cluster](/azure/azure-arc/kubernetes/overview). To learn more, see [App Service, Functions, and Logic Apps on Azure Arc](../app-service/overview-arc-integration.md). This scenario only supports function apps running on Linux. > [!NOTE] > Support for running functions on an Azure Arc-enabled Kubernetes cluster is currently in preview. |
azure-functions | Create First Function Arc Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-custom-container.md | zone_pivot_groups: programming-languages-set-functions # Create your first containerized Azure Functions on Azure Arc (preview) -In this article, you create a function app running in a Linux container and deploy it to an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md) from a container registry. When you create your own container, you can customize the execution environment for your function app. To learn more, see [App Service, Functions, and Logic Apps on Azure Arc](../app-service/overview-arc-integration.md). +In this article, you create a function app running in a Linux container and deploy it to an [Azure Arc-enabled Kubernetes cluster](/azure/azure-arc/kubernetes/overview) from a container registry. When you create your own container, you can customize the execution environment for your function app. To learn more, see [App Service, Functions, and Logic Apps on Azure Arc](../app-service/overview-arc-integration.md). > [!NOTE] > Support for deploying a custom container to an Azure Arc-enabled Kubernetes cluster is currently in preview. |
azure-functions | Functions Bindings Timer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md | Each field can have one of the following types of values: |A set of values (`,` operator)|<nobr>`5,8,10 * * * * *`</nobr>| Three times a minute - at seconds 5, 8, and 10 during every minute of every hour of each day | |An interval value (`/` operator)|<nobr>`0 */5 * * * *`</nobr>| 12 times an hour - at second 0 of every 5th minute of every hour of each day | #### NCRONTAB examples |
azure-functions | Functions Create Storage Blob Triggered Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-storage-blob-triggered-function.md | Title: Create a function in Azure triggered by Blob storage description: Use Azure Functions to create a serverless function that is invoked by items added to a Blob storage container. ms.assetid: d6bff41c-a624-40c1-bbc7-80590df29ded Previously updated : 12/28/2023 Last updated : 09/18/2024 # Create a function in Azure that's triggered by Blob storage Learn how to create a function triggered when files are uploaded to or updated i [!INCLUDE [Create function app Azure portal](../../includes/functions-create-function-app-portal.md)] -You've successfully created your new function app. ---Next, you create a function in the new function app. +You've successfully created your new function app. Next, you create a function in the new function app. <a name="create-function"></a> Next, you create a function in the new function app. 1. In your function app, select **Overview**, and then select **+ Create** under **Functions**. -1. Under **Select a template**, scroll down and choose the **Azure Blob Storage trigger** template. +1. Under **Select a template**, choose the **Blob trigger** template and select **Next**. 1. In **Template details**, configure the new trigger with the settings as specified in this table, then select **Create**: | Setting | Suggested value | Description | ||||+ | **Job type** | Append to app | You only see this setting for a Python v2 app. | | **New Function** | Unique in your function app | Name of this blob triggered function. | | **Path** | samples-workitems/{name} | Location in Blob storage being monitored. The file name of the blob is passed in the binding as the _name_ parameter. | | **Storage account connection** | AzureWebJobsStorage | You can use the storage account connection already being used by your function app, or create a new one. | - Azure creates the Blob Storage triggered function based on the provided values. --Next, create the **samples-workitems** container. + Azure creates the Blob Storage triggered function based on the provided values. Next, create the **samples-workitems** container. ## Create the container -1. In your function, on the **Overview** page, select your resource group. -- :::image type="content" source="./media/functions-create-storage-blob-triggered-function/functions-storage-resource-group.png" alt-text="Select your Azure portal resource group." border="true"::: --1. Find and select your resource group's storage account. -- :::image type="content" source="./media/functions-create-storage-blob-triggered-function/functions-storage-account-access.png" alt-text="Access the storage account." border="true"::: --1. Choose **Containers**, and then choose **+ Container**. +1. Return to the **Overview** page for your function app, select your **Resource group**, then find and select the storage account in your resource group. - :::image type="content" source="./media/functions-create-storage-blob-triggered-function/functions-storage-add-container.png" alt-text="Add container to your storage account in the Azure portal." border="true"::: +1. In the storage account page, select **Data storage** > **Containers** > **+ Container**. -1. In the **Name** field, type `samples-workitems`, and then select **Create**. +1. In the **Name** field, type `samples-workitems`, and then select **Create** to create a container. 
- :::image type="content" source="./media/functions-create-storage-blob-triggered-function/functions-storage-name-blob-container.png" alt-text="Name the storage container." border="true"::: --Now that you have a blob container, you can test the function by uploading a file to the container. +1. Select the new `samples-workitems` container, which you use to test the function by uploading a file to the container. ## Test the function -1. Back in the Azure portal, browse to your function expand the **Logs** at the bottom of the page and make sure that log streaming isn't paused. -- :::image type="content" source="./media/functions-create-storage-blob-triggered-function/functions-storage-log-expander.png" alt-text="Expand the log in the Azure portal." border="true"::: --1. In a separate browser window, go to your resource group in the Azure portal, and select the storage account. --1. Select **Containers**, and then select the **samples-workitems** container. -- :::image type="content" source="./media/functions-create-storage-blob-triggered-function/functions-storage-container.png" alt-text="Go to your samples-workitems container in the Azure portal." border="true"::: --1. Select **Upload**, and then select the folder icon to choose a file to upload. -- :::image type="content" source="./media/functions-create-storage-blob-triggered-function/functions-storage-manager-upload-file-blob.png" alt-text="Upload a file to the blob container." border="true"::: +1. In a new browser window, return to your function app page and select **Log stream**, which displays real-time logging for your app. -1. Browse to a file on your local computer, such as an image file, choose the file. Select **Open** and then **Upload**. +1. From the `samples-workitems` container page, select **Upload** > **Browse for files**, browse to a file on your local computer (such as an image file), and choose the file. -1. Go back to your function logs and verify that the blob has been read. +1. Select **Open** and then **Upload**. - :::image type="content" source="./media/functions-create-storage-blob-triggered-function/function-app-in-portal-editor.png" alt-text="View message in the logs." border="true"::: +1. Go back to your function app logs and verify that the blob has been read. >[!NOTE]- > When your function app runs in the default Consumption plan, there may be a delay of up to several minutes between the blob being added or updated and the function being triggered. If you need low latency in your blob triggered functions, consider running your function app in an App Service plan. + > When your function app runs in the default Consumption plan, there may be a delay of up to several minutes between the blob being added or updated and the function being triggered. If you need low latency in your blob triggered functions, consider one of these [other blob trigger options](./storage-considerations.md#trigger-on-a-blob-container). ## Clean up resources |
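If you prefer the command line over the portal for the container and test-upload steps, the same flow can be scripted with the Azure CLI. This is a minimal sketch under the assumption that you use the storage account behind `AzureWebJobsStorage`; the account name, resource group, and test file name are placeholders. Watch the **Log stream** page while the upload runs to see the trigger fire.

```bash
STORAGE_ACCOUNT="<your-function-storage-account>"   # account used by AzureWebJobsStorage
RESOURCE_GROUP="<your-resource-group>"

# Retrieve a connection string so the data-plane commands can authenticate.
CONN=$(az storage account show-connection-string \
  --name "$STORAGE_ACCOUNT" --resource-group "$RESOURCE_GROUP" \
  --query connectionString -o tsv)

# Create the container that the trigger watches.
az storage container create --name samples-workitems --connection-string "$CONN"

# Upload a test blob; its name is passed to the function as the {name} binding parameter.
az storage blob upload \
  --container-name samples-workitems \
  --name test-image.png \
  --file ./test-image.png \
  --connection-string "$CONN"
```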
azure-functions | Functions Create Storage Queue Triggered Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-storage-queue-triggered-function.md | Title: Create a function in Azure triggered by queue messages description: Use Azure Functions to create a serverless function that is invoked by messages submitted to a queue in Azure. ms.assetid: 361da2a4-15d1-4903-bdc4-cc4b27fc3ff4 Previously updated : 12/28/2023 Last updated : 09/18/2024 # Create a function triggered by Azure Queue storage Learn how to create a function that is triggered when messages are submitted to [!INCLUDE [Create function app Azure portal](../../includes/functions-create-function-app-portal.md)] - :::image type="content" source="./media/functions-create-storage-queue-triggered-function/function-app-create-success.png" alt-text="Function app successfully created.." border="true"::: - Next, you create a function in the new function app. <a name="create-function"></a> Next, you create a function in the new function app. | Setting | Suggested value | Description | ||||+ | **Job type** | Append to app | You only see this setting for a Python v2 app. | | **Name** | Unique in your function app | Name of this queue triggered function. | | **Queue name** | myqueue-items | Name of the queue to connect to in your Storage account. | | **Storage account connection** | AzureWebJobsStorage | You can use the storage account connection already being used by your function app, or create a new one. | - Azure creates the Queue Storage triggered function based on the provided values --Next, you connect to your Azure storage account and create the **myqueue-items** storage queue. + Azure creates the Queue Storage triggered function based on the provided values. Next, you connect to your Azure storage account and create the **myqueue-items** storage queue. ## Create the queue -1. In your function, on the **Overview** page, select your resource group. -- :::image type="content" source="./media/functions-create-storage-queue-triggered-function/functions-storage-resource-group.png" alt-text="Select your Azure portal resource group." border="true"::: --1. Find and select your resource group's storage account. +1. Return to the **Overview** page for your function app, select your **Resource group**, then find and select the storage account in your resource group. - :::image type="content" source="./media/functions-create-storage-queue-triggered-function/functions-storage-account-access.png" alt-text="Access the storage account." border="true"::: --1. Choose **Queues**, and then choose **+ Queue**. -- :::image type="content" source="./media/functions-create-storage-queue-triggered-function/functions-storage-add-queue.png" alt-text="Add a queue to your storage account in the Azure portal." border="true"::: +1. In the storage account page, select **Data storage** > **Queues** > **+ Queue**. 1. In the **Name** field, type `myqueue-items`, and then select **Create**. - :::image type="content" source="./media/functions-create-storage-queue-triggered-function/functions-storage-name-queue.png" alt-text="Name the queue storage container." border="true"::: --Now that you have a storage queue, you can test the function by adding a message to the queue. +1. Select the new **myqueue-items** queue, which you use to test the function by adding a message to the queue. ## Test the function -1. 
Back in the Azure portal, browse to your function expand the **Logs** at the bottom of the page and make sure that log streaming isn't paused. -- :::image type="content" source="./media/functions-create-storage-queue-triggered-function/functions-queue-storage-log-expander.png" alt-text="Expand the log in the Azure portal." border="true"::: --1. In a separate browser window, go to your resource group in the Azure portal, and select the storage account. --1. Select **Queues**, and then select the **myqueue-items** container. -- :::image type="content" source="./media/functions-create-storage-queue-triggered-function/functions-storage-queue.png" alt-text="Go to your myqueue-items queue in the Azure portal." border="true"::: --1. Select **Add message**, and type "Hello World!" in **Message text**. Select **OK**. -- :::image type="content" source="./media/functions-create-storage-queue-triggered-function/functions-storage-queue-test.png" alt-text="Screenshot shows the Add message button selected and the Message text field highlighted." border="true"::: --1. Wait for a few seconds, then go back to your function logs and verify that the new message has been read from the queue. +1. In a new browser window, return to your function app page and select **Log stream**, which displays real-time logging for your app. + +1. In the **myqueue-items** queue, select **Add message**, type "Hello World!" in **Message text**, and select **OK**. - :::image type="content" source="./media/functions-create-storage-queue-triggered-function/function-app-in-portal-editor.png" alt-text="View message in the logs." border="true"::: +1. Go back to your function app logs and verify that the function ran to process the message from the queue. 1. Back in your storage queue, select **Refresh** and verify that the message has been processed and is no longer in the queue. |
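The queue and test message can likewise be created from the command line instead of the portal. A minimal sketch, assuming the storage account behind `AzureWebJobsStorage`; the account name and resource group are placeholders.

```bash
STORAGE_ACCOUNT="<your-function-storage-account>"   # account used by AzureWebJobsStorage
RESOURCE_GROUP="<your-resource-group>"

CONN=$(az storage account show-connection-string \
  --name "$STORAGE_ACCOUNT" --resource-group "$RESOURCE_GROUP" \
  --query connectionString -o tsv)

# Create the queue that the trigger watches.
az storage queue create --name myqueue-items --connection-string "$CONN"

# Add a test message; the function should process it and remove it from the queue.
az storage message put \
  --queue-name myqueue-items \
  --content "Hello World!" \
  --connection-string "$CONN"
```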
azure-functions | Functions Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md | Title: Getting started with Azure Functions description: Take the first steps toward working with Azure Functions. Previously updated : 12/13/2022 Last updated : 09/18/2024 zone_pivot_groups: programming-languages-set-functions-full Complete one of our quickstart articles to create and deploy your first function ::: zone pivot="programming-language-csharp" You can create C# functions by using one of the following tools: ++ [Azure Developer CLI (azd)](create-first-function-azure-developer-cli.md?pivots=programming-language-csharp)++ [Command line](./create-first-function-cli-csharp.md) + [Visual Studio](./functions-create-your-first-function-visual-studio.md) + [Visual Studio Code](./create-first-function-vs-code-csharp.md)-+ [Command line](./create-first-function-cli-csharp.md) + ::: zone-end ::: zone pivot="programming-language-java" You can create Java functions by using one of the following tools: ++ [Azure Developer CLI (azd)](create-first-function-azure-developer-cli.md?pivots=programming-language-java) + [Eclipse](functions-create-maven-eclipse.md) + [Gradle](functions-create-first-java-gradle.md) + [IntelliJ IDEA](functions-create-maven-intellij.md) You can create Java functions by using one of the following tools: ::: zone pivot="programming-language-javascript" You can create JavaScript functions by using one of the following tools: -+ [Visual Studio Code](./create-first-function-vs-code-node.md) -+ [Command line](./create-first-function-cli-node.md) ++ [Azure Developer CLI (azd)](create-first-function-azure-developer-cli.md?pivots=programming-language-javascript) + [Azure portal](./functions-create-function-app-portal.md#create-a-function-app)++ [Command line](./create-first-function-cli-node.md)++ [Visual Studio Code](./create-first-function-vs-code-node.md) ::: zone-end ::: zone pivot="programming-language-powershell" You can create PowerShell functions by using one of the following tools: -+ [Visual Studio Code](./create-first-function-vs-code-powershell.md) -+ [Command line](./create-first-function-cli-powershell.md) ++ [Azure Developer CLI (azd)](create-first-function-azure-developer-cli.md?pivots=programming-language-powershell) + [Azure portal](./functions-create-function-app-portal.md#create-a-function-app)++ [Command line](./create-first-function-cli-powershell.md)++ [Visual Studio Code](./create-first-function-vs-code-powershell.md) ::: zone-end ::: zone pivot="programming-language-python" You can create Python functions by using one of the following tools: -+ [Visual Studio Code](./create-first-function-vs-code-python.md) ++ [Azure Developer CLI (azd)](create-first-function-azure-developer-cli.md?pivots=programming-language-python)++ [Azure portal](./functions-create-function-app-portal.md#create-a-function-app) + [Command line](./create-first-function-cli-python.md)-+ [Azure portal](./functions-create-function-app-portal.md#create-a-function-app) ++ [Visual Studio Code](./create-first-function-vs-code-python.md) ::: zone-end ::: zone pivot="programming-language-typescript" You can create TypeScript functions by using one of the following tools: -+ [Visual Studio Code](./create-first-function-vs-code-typescript.md) ++ [Azure Developer CLI (azd)](create-first-function-azure-developer-cli.md?pivots=programming-language-typescript) + [Command line](./create-first-function-cli-typescript.md)++ [Visual Studio 
Code](./create-first-function-vs-code-typescript.md) ::: zone-end ::: zone pivot="programming-language-other" Besides the natively supported programming languages, you can use [custom handle ::: zone pivot="programming-language-csharp,programming-language-java,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-typescript" ## Review end-to-end samples +These sites let you browse existing functions reference projects and samples in your desired language: ::: zone-end ::: zone pivot="programming-language-csharp" -The following sites let you browse existing C# functions reference projects and samples: --+ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=csharp&products=azure-functions) ++ [Awesome azd template library](https://azure.github.io/awesome-azd/?tags=functions&tags=dotnetCsharp) + [Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=C%23)-++ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=csharp&products=azure-functions) ::: zone-end ::: zone pivot="programming-language-java" -The following sites let you browse existing Java functions reference projects and samples: --+ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions) ++ [Awesome azd template library](https://azure.github.io/awesome-azd/?tags=functions&tags=java) + [Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Java)-++ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions) ::: zone-end-The following sites let you browse existing Node.js functions reference projects and samples: --+ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript%2ctypescript&products=azure-functions) -+ [Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript%2CTypeScript) - ++ [Awesome azd template library](https://azure.github.io/awesome-azd/?tags=functions&tags=javascript)++ [Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript)++ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript&products=azure-functions) ::: zone-end++ [Awesome azd template library](https://azure.github.io/awesome-azd/?tags=functions&tags=typescript)++ [Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=TypeScript)++ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=typescript&products=azure-functions) ::: zone pivot="programming-language-powershell" -The following sites let you browse existing PowerShell functions reference projects and samples: --+ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=powershell&products=azure-functions) ++ [Awesome azd template library](https://azure.github.io/awesome-azd/?tags=functions&tags=powershell) + [Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=PowerShell) --The following sites let you browse existing Python functions reference projects and samples: --+ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions) ++ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=powershell&products=azure-functions)++ [Awesome azd template library](https://azure.github.io/awesome-azd/?tags=functions&tags=python) + [Azure Community 
Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Python) ++ [Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions) ## Explore an interactive tutorial Complete one of the following interactive training modules to learn more about F To learn even more, see the [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions). -## Next steps +## Related content ::: zone pivot="programming-language-csharp" -If you're already familiar with developing C# functions, consider reviewing one of the following language reference articles: +Learn more about developing functions by reviewing one of these C# reference articles: + [In-process C# class library functions](./functions-dotnet-class-library.md) + [Isolated worker process C# class library functions](./dotnet-isolated-process-guide.md)-+ [C# Script functions](./functions-reference-csharp.md) - ::: zone-end ::: zone pivot="programming-language-java" -If you're already familiar with developing Java functions, consider reviewing the [language reference](./functions-reference-java.md) article. +Learn more about developing functions by reviewing the [Java language reference](./functions-reference-java.md) article. ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-typescript" -If you're already familiar with developing Node.js functions, consider reviewing the [language reference](./functions-reference-node.md) article. +Learn more about developing functions by reviewing the [Node.js language reference](./functions-reference-node.md) article. ::: zone-end ::: zone pivot="programming-language-powershell" -If you're already familiar with developing PowerShell functions, consider reviewing the [language reference](./functions-reference-powershell.md) article. +Learn more about developing functions by reviewing the [PowerShell language reference](./functions-reference-powershell.md) article. ::: zone-end ::: zone pivot="programming-language-python" -If you're already familiar with developing Python functions, consider reviewing the [language reference](./functions-reference-python.md) article. +Learn more about developing functions by reviewing the [Python language reference](./functions-reference-python.md) article. ::: zone-end ::: zone pivot="programming-language-other" -Consider reviewing the [custom handlers](functions-custom-handlers.md) documentation. +Learn more about developing functions using Rust, Go, and other languages by reviewing the [custom handlers](functions-custom-handlers.md) documentation. ::: zone-end -You might also be interested in one of these more advanced articles: +You might also be interested in these articles: + [Deploying Azure Functions](./functions-deployment-technologies.md) + [Monitoring Azure Functions](./functions-monitoring.md) + [Performance and reliability](./functions-best-practices.md) + [Securing Azure Functions](./security-concepts.md)++ [Durable Functions](./durable/durable-functions-overview.md) |
azure-functions | Functions Infrastructure As Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md | When you deploy multiple resources in a single Bicep file or ARM template, the o + This article assumes that you have already created a [managed environment](../container-apps/environment.md) in Azure Container Apps. You need both the name and the ID of the managed environment to create a function app hosted on Container Apps. ::: zone-end ::: zone pivot="azure-arc" -+ This article assumes that you have already created an [App Service-enabled custom location](../app-service/overview-arc-integration.md) on an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md). You need both the custom location ID and the Kubernetes environment ID to create a function app hosted in an Azure Arc custom location. ++ This article assumes that you have already created an [App Service-enabled custom location](../app-service/overview-arc-integration.md) on an [Azure Arc-enabled Kubernetes cluster](/azure/azure-arc/kubernetes/overview). You need both the custom location ID and the Kubernetes environment ID to create a function app hosted in an Azure Arc custom location. ::: zone-end <a name="storage"></a> ## Create storage account |
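The prerequisite IDs called out above can be looked up with the Azure CLI before you author the Bicep file or ARM template. The sketch below assumes the `customlocation` and `containerapp` CLI extensions are installed and uses placeholder names for the resources.

```bash
# ID of the App Service-enabled custom location on the Arc-enabled Kubernetes cluster.
az customlocation show \
  --name "<custom-location-name>" \
  --resource-group "<resource-group>" \
  --query id -o tsv

# ID of the Container Apps managed environment (for the Container Apps hosting option).
az containerapp env show \
  --name "<managed-environment-name>" \
  --resource-group "<resource-group>" \
  --query id -o tsv
```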
azure-functions | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md | To manage the automation method to control the start and stop of your VMs, you c If you need additional schedules, you can duplicate one of the Logic Apps provided using the **Clone** option in the Azure portal. - ## Scheduled start and stop scenario Perform the following steps to configure the scheduled start and stop action for Azure Resource Manager and classic VMs. For example, you can configure the **ststv2_vms_Scheduled_start** schedule to start them in the morning when you are in the office, and stop all VMs across a subscription when you leave work in the evening based on the **ststv2_vms_Scheduled_stop** schedule. |
azure-functions | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md | Last updated 09/23/2022 The Start/Stop VMs v2 feature starts or stops Azure Virtual Machines instances across multiple subscriptions. It starts or stops virtual machines on user-defined schedules, provides insights through [Azure Application Insights](/azure/azure-monitor/app/app-insights-overview), and send optional notifications by using [action groups](/azure/azure-monitor/alerts/action-groups). For most scenarios, Start/Stop VMs can manage virtual machines deployed and managed both by Azure Resource Manager and by Azure Service Manager (classic), which is [deprecated](/azure/virtual-machines/classic-vm-deprecation). -This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version that was available with Azure Automation, but it's designed to take advantage of newer technology in Azure. The Start/Stop VMs v2 relies on multiple Azure services and it will be charged based on the service that are deployed and consumed. +This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version that was available with Azure Automation, but it's designed to take advantage of newer technology in Azure. The Start/Stop VMs v2 relies on multiple Azure services and it will be charged based on the services that are deployed and consumed. ## Important Start/Stop VMs v2 Updates An Azure Storage account, which is required by Functions, is also used by Start/ - Uses Azure Queue Storage to support the Azure Functions queue-based triggers. -All trace logging data from the function app execution is sent to your connected Application Insights instance. You can view the telemetry data stored in Application Insights from a set of pre-defined visualizations presented in a shared [Azure dashboard](../../azure-portal/azure-portal-dashboards.md). -+All trace logging data from the function app execution is sent to your connected Application Insights instance. You can view the telemetry data stored in Application Insights from a set of pre-defined visualizations presented in a shared [Azure dashboard](/azure/azure-portal/azure-portal-dashboards). Email notifications are also sent as a result of the actions performed on the VMs. |
azure-government | Azure Secure Isolation Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md | During the term of your Azure subscription, you always have the ability to acces If your subscription expires or is terminated, Microsoft will preserve your customer data for a 90-day retention period to permit you to extract your data or renew your subscriptions. After this retention period, Microsoft will delete all your customer data within another 90 days, that is, your customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, you can control how long your data is stored by timing when you end the service with Microsoft. It's recommended that you don't terminate your service until you've extracted all data so that the initial 90-day retention period can act as a safety buffer should you later realize you missed something. -If you deleted an entire storage account by mistake, you should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it's permanently deleted. However, when a storage object (for example, blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless you made a backup, deleted storage objects can't be recovered. For Blob storage, you can implement extra protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they're deleted, within a retention period that you specified. To avoid retention of data after storage account or subscription deletion, you can delete storage objects individually before deleting the storage account or subscription. +If you deleted an entire storage account by mistake, you should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. You can [create and manage support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it's permanently deleted. However, when a storage object (for example, blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless you made a backup, deleted storage objects can't be recovered. For Blob storage, you can implement extra protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they're deleted, within a retention period that you specified. To avoid retention of data after storage account or subscription deletion, you can delete storage objects individually before deleting the storage account or subscription. 
For accidental deletion involving Azure SQL Database, you should check backups that the service makes automatically and use point-in-time restore. For example, full database backup is done weekly, and differential database backups are done hourly. Also, individual services (such as Azure DevOps) can have their own policies for [accidental data deletion](/azure/devops/organizations/security/data-protection#mistakes-happen). |
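The blob soft delete protection mentioned above can also be enabled without the portal. A minimal sketch with the Azure CLI, assuming a seven-day retention window as an illustrative value and placeholder account and resource group names:

```bash
# Enable blob soft delete on the storage account with a 7-day retention period (illustrative value).
az storage account blob-service-properties update \
  --account-name "<storage-account-name>" \
  --resource-group "<resource-group>" \
  --enable-delete-retention true \
  --delete-retention-days 7
```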
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | The following Automation **features aren't currently available** in Azure Govern For feature variations and limitations, see [Azure Advisor in sovereign clouds](/azure/advisor/advisor-sovereign-clouds). -### [Azure Lighthouse](../lighthouse/index.yml) +### [Azure Lighthouse](/azure/lighthouse/) The following Azure Lighthouse **features aren't currently available** in Azure Government: |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Microsoft Entra multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | ✅ | ✅ | | [Azure Health Data Services](../../healthcare-apis/azure-api-for-fhir/index.yml) | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** |-| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | ✅ | ✅ | -| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | ✅ | ✅ | +| [Azure Arc-enabled servers](/azure/azure-arc/servers/) | ✅ | ✅ | +| [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/) | ✅ | ✅ | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | ✅ | ✅ | | [Azure Cosmos DB](/azure/cosmos-db/) | ✅ | ✅ | | [Azure Container Apps](../../container-apps/index.yml) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure AI | [Azure AI | [Container Instances](/azure/container-instances/) | ✅ | ✅ |-| [Container Registry](../../container-registry/index.yml) | ✅ | ✅ | +| [Container Registry](/azure/container-registry/) | ✅ | ✅ | | [Content Delivery Network (CDN)](../../cdn/index.yml) | ✅ | ✅ | | [Cost Management and Billing](../../cost-management-billing/index.yml) | ✅ | ✅ | | [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Key Vault](/azure/key-vault/) | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | | [Lab Services](../../lab-services/index.yml) | ✅ | ✅ |-| [Lighthouse](../../lighthouse/index.yml) | ✅ | ✅ | +| [Lighthouse](/azure/lighthouse/) | ✅ | ✅ | | [Load Balancer](../../load-balancer/index.yml) | ✅ | ✅ | | [Logic Apps](../../logic-apps/index.yml) | ✅ | ✅ | | [Machine Learning](/azure/machine-learning/) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Microsoft Entra ID Governance](/entra/) | ✅ | ✅ | | | | | [Microsoft Entra multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | ✅ | ✅ | ✅ | ✅ | |-| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | ✅ | ✅ | ✅ | ✅ | | -| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | ✅ | ✅ | ✅ | ✅ | | +| [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/) | ✅ | ✅ | ✅ | ✅ | | +| [Azure Arc-enabled servers](/azure/azure-arc/servers/) | ✅ | ✅ | ✅ | ✅ | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Cosmos DB](/azure/cosmos-db/) | ✅ | ✅ | ✅ | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure AI | [Azure AI | [Container Instances](/azure/container-instances/)| ✅ | ✅ | ✅ | ✅ | ✅ |-| [Container Registry](../../container-registry/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [Container Registry](/azure/container-registry/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Content Delivery Network (CDN)](../../cdn/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Cost Management and 
Billing](../../cost-management-billing/index.yml) | ✅ | ✅ | ✅ | ✅ | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [IoT Hub](../../iot-hub/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Key Vault](/azure/key-vault/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Lab Services](../../lab-services/index.yml) | ✅ | ✅ | ✅ | ✅ | |-| [Lighthouse](../../lighthouse/index.yml)| ✅ | ✅ | ✅ | ✅ | | +| [Lighthouse](/azure/lighthouse/)| ✅ | ✅ | ✅ | ✅ | | | [Load Balancer](../../load-balancer/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Logic Apps](../../logic-apps/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Machine Learning](/azure/machine-learning/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Media Services](/azure/media-services/) | ✅ | ✅ | ✅ | ✅ | ✅ |-| [Microsoft Azure portal](../../azure-portal/index.yml) | ✅ | ✅ | ✅| ✅ | ✅ | +| [Microsoft Azure portal](/azure/azure-portal/) | ✅ | ✅ | ✅| ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | ✅ | ✅ | ✅| ✅ | | | [Microsoft Defender for Cloud](/azure/defender-for-cloud/) (formerly Azure Security Center) | ✅ | ✅ | ✅ | ✅ | ✅ | |
azure-government | Documentation Government Csp List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md | Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Ingram Micro Inc.](https://usa.ingrammicro.com/)| |[Insight Public Sector Inc](https://www.ips.insight.com/en_US/public-sector.html)| |[Pax8](https://www.pax8.com/en-us/microsoft/)|-|[Synnex](https://www.synnexcorp.com)| -|[Tech Data Corporation](https://www.techdata.com/)| +|[TD Synnex](https://tdsynnex.com/)| ## Approved LSPs |
azure-government | Documentation Government Impact Level 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md | For Containers services availability in Azure Government, see [Products availabl - Azure Container Instances automatically encrypts data related to your containers when it's persisted in the cloud. Data in Container Instances is encrypted and decrypted with 256-bit AES encryption and enabled for all Container Instances deployments. You can rely on Microsoft-managed keys for the encryption of your container data, or you can manage the encryption by using your own keys. For more information, see [Encrypt deployment data](/azure/container-instances/container-instances-encrypt-data). -### [Container Registry](../container-registry/index.yml) +### [Container Registry](/azure/container-registry/) -- When you store images and other artifacts in a Container Registry, Azure automatically encrypts the registry content at rest by using service-managed keys. You can supplement the default encryption with an extra encryption layer by [using a key that you create and manage in Azure Key Vault](../container-registry/tutorial-enable-customer-managed-keys.md).+- When you store images and other artifacts in a Container Registry, Azure automatically encrypts the registry content at rest by using service-managed keys. You can supplement the default encryption with an extra encryption layer by [using a key that you create and manage in Azure Key Vault](/azure/container-registry/tutorial-enable-customer-managed-keys). ## Databases |
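The customer-managed key option for Container Registry mentioned in the entry above can be sketched with the Azure CLI. This is a minimal sketch, not the tutorial's exact steps: the resource names are placeholders, it assumes a user-assigned managed identity and a Key Vault key already exist with the needed key permissions, and the flag names are taken from the customer-managed-keys tutorial (treat them as assumptions and verify against the current CLI).

```azurecli
# Sketch (placeholder names): create a Premium registry whose content is encrypted
# with a customer-managed key from Azure Key Vault.
# Assumes the user-assigned identity and the Key Vault key already exist and that the
# identity has get/wrap/unwrap permissions on the key.
identityId=$(az identity show --resource-group myResourceGroup --name myAcrIdentity --query id -o tsv)
keyId=$(az keyvault key show --vault-name myKeyVault --name myAcrKey --query key.kid -o tsv)

az acr create \
  --resource-group myResourceGroup \
  --name myregistry \
  --sku Premium \
  --identity "$identityId" \
  --key-encryption-key "$keyId"
```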
azure-government | Documentation Government Overview Wwps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md | Inter-region traffic is encrypted using [Media Access Control Security](https:// **Azure is a 24x7 globally operated service; however, support and troubleshooting rarely requires access to your data. If you want extra control over support and troubleshooting scenarios, you can use Customer Lockbox for Azure to approve or deny access requests to your data.** -Microsoft [Azure support](https://azure.microsoft.com/support/options/) is available in markets where Azure is offered. It's staffed globally to accommodate 24x7 access to support engineers via email and phone for technical support. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. As needed, frontline support engineers can escalate your requests to Azure DevOps personnel responsible for Azure service development and operations. These Azure DevOps engineers are also staffed globally. The same production access controls and processes are imposed on all Microsoft engineers, which include support staff comprised of both Microsoft full-time employees and subprocessors/vendors. +Microsoft [Azure support](https://azure.microsoft.com/support/options/) is available in markets where Azure is offered. It's staffed globally to accommodate 24x7 access to support engineers via email and phone for technical support. You can [create and manage support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request) in the Azure portal. As needed, frontline support engineers can escalate your requests to Azure DevOps personnel responsible for Azure service development and operations. These Azure DevOps engineers are also staffed globally. The same production access controls and processes are imposed on all Microsoft engineers, including support staff made up of both Microsoft full-time employees and subprocessors/vendors. As explained in the *[Data encryption at rest](#data-encryption-at-rest)* section, **your data is encrypted at rest** by default when stored in Azure, and you can control your own encryption keys in Azure Key Vault. Moreover, access to your data isn't needed to resolve most customer support requests. Microsoft engineers rely heavily on logs to provide customer support. As described in the *[Insider data access](#insider-data-access)* section, Azure has controls in place to restrict access to your data for support and troubleshooting scenarios should that access be necessary. For example, **Just-in-Time (JIT)** access provisions restrict access to production systems to Microsoft engineers who are authorized to be in that role and were granted temporary access credentials. As part of the support workflow, **Customer Lockbox** puts you in charge of approving or denying access to your data by Microsoft engineers. When combined, these Azure technologies and processes (data encryption, JIT, and Customer Lockbox) provide appropriate risk mitigation to safeguard the confidentiality and integrity of your data. |
azure-government | Documentation Government Stig Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-linux-vm.md | Select the resource group for the virtual machine, then select **Delete**. Confi ## Support -Contact Azure support to get assistance with issues related to STIG solution templates. You can create and manage support requests in the Azure portal. For more information see, [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). Use the following support paths when creating a ticket: +Contact Azure support to get assistance with issues related to STIG solution templates. You can create and manage support requests in the Azure portal. For more information, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Use the following support paths when creating a ticket: Azure -> Virtual Machine running Linux -> Cannot create a VM -> Troubleshoot my ARM template error |
azure-government | Documentation Government Stig Windows Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-windows-vm.md | Select the resource group for the virtual machine, then select **Delete**. Confi ## Support -Contact Azure support to get assistance with issues related to STIG solution templates. You can create and manage support requests in the Azure portal. For more information see, [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). Use the following support paths when creating a ticket: +Contact Azure support to get assistance with issues related to STIG solution templates. You can create and manage support requests in the Azure portal. For more information, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Use the following support paths when creating a ticket: Azure -> Virtual Machine running Windows -> Cannot create a VM -> Troubleshoot my ARM template error |
azure-linux | Concepts Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/concepts-core.md | - Title: Azure Linux Container Host for AKS basic core concepts -description: Learn the basic core concepts that make up the Azure Linux Container Host for AKS. ---- Previously updated : 08/18/2024----# Core concepts for the Azure Linux Container Host for AKS --Microsoft Azure Linux is an open-sourced project maintained by Microsoft, which means that Microsoft is responsible for the entire Azure Linux Container Host stack, from the Linux kernel to the Common Vulnerabilities and Exposures (CVEs) infrastructure, support, and end-to-end validation. Microsoft makes it easy for you to create an AKS cluster with Azure Linux, without worrying about details such as verification and critical security vulnerability patches from a third party distribution. --## CVE infrastructure --One of the responsibilities of Microsoft in maintaining the Azure Linux Container Host is establishing a process for CVEs, such as identifying applicable CVEs and publishing CVE fixes, and adhering to defined Service Level Agreements (SLAs) for package fixes. The Azure Linux team builds and maintains the SLA for package fixes for production purposes. For more information, see the [Azure Linux package repo structure](https://github.com/microsoft/CBL-Mariner/blob/2.0/toolkit/docs/building/building.md#packagesmicrosoftcom-repository-structure). For the packages included in the Azure Linux Container Host, Azure Linux scans for security vulnerabilities twice a day via CVEs in the [National Vulnerability Database (NVD)](https://nvd.nist.gov/). --Azure Linux CVEs are published in the [Security Update Guide (SUG) Common Vulnerability Reporting Framework (CVRF) API](https://github.com/microsoft/MSRC-Microsoft-Security-Updates-API). This allows you to get detailed Microsoft security updates about security vulnerabilities that have been investigated by the [Microsoft Security Response Center (MSRC)](https://www.microsoft.com/msrc). By collaborating with MSRC, Azure Linux can quickly and consistently discover, evaluate, and patch CVEs, and contribute critical fixes back upstream. --High and critical CVEs are taken seriously and may be released out-of-band as a package update before a new AKS node image is available. Medium and low CVEs are included in the next image release. --> [!NOTE] -> At this time, the scan results aren't published publicly. --## Feature additions and upgrades --Given that Microsoft owns the entire Azure Linux Container Host stack, including the CVE infrastructure and other support streams, the process of submitting a feature request is streamlined. You can communicate directly with the Microsoft team that owns the Azure Linux Container Host, which ensures an accelerated process for submitting and implementing feature requests. If you have a feature request, please file an issue on the [AKS GitHub repository](https://github.com/Azure/AKS/issues). --## Testing --Before an Azure Linux node image is released for testing, it undergoes a series of Azure Linux and AKS specific tests to ensure that the image meets AKS's requirements. This approach to quality testing helps catch and mitigate issues before they're deployed to your production nodes. Part of these tests are performance related, testing CPU, network, storage, memory, and cluster metrics such as cluster creation and upgrade times. 
This ensures that the performance of the Azure Linux Container Host doesn't regress as we upgrade the image. --In addition, the Azure Linux packages published to [packages.microsoft.com](https://packages.microsoft.com/cbl-mariner/) are also given an extra degree of confidence and safety through our testing. Both the Azure Linux node image and packages are run through a suite of tests that simulate an Azure environment. This includes Build Verification Tests (BVTs) that validate [AKS extensions and add-ons](/azure/aks/integrations) are supported on each release of the Azure Linux Container Host. Patches are also tested against the current Azure Linux node image before being released to ensure that there are no regressions, significantly reducing the likelihood of a corrupt package being rolled out to your production nodes. --## Next steps --This article covers some of the core Azure Linux Container Host concepts such as CVE infrastructure and testing. For more information on the Azure Linux Container Host concepts, see the following articles: --- [Azure Linux Container Host overview](./intro-azure-linux.md)-- [Azure Linux Container Host for AKS packages](./concepts-packages.md) |
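The core-concepts entry above notes that Azure Linux CVEs are published through the Security Update Guide CVRF API. A minimal sketch of querying that feed; the endpoint path and the month-style document ID are assumptions based on the MSRC Security Updates API, so verify them before relying on the calls.

```console
# Sketch: list available CVRF update documents, then fetch one month's document (ID is a
# placeholder) and count Azure Linux / CBL-Mariner mentions. Endpoint is assumed from the
# MSRC Security Updates API.
curl -s -H "Accept: application/json" "https://api.msrc.microsoft.com/cvrf/v2.0/updates"
curl -s -H "Accept: application/json" "https://api.msrc.microsoft.com/cvrf/v2.0/cvrf/2024-Aug" | grep -i -c "mariner"
```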
azure-linux | Concepts Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/concepts-packages.md | - Title: Azure Linux Container Host for AKS packages -description: Learn about the packages supported by the Azure Linux Container Host for AKS. ---- Previously updated : 08/18/2024----# Packages --The Azure Linux Container Host for AKS is based on the Microsoft Azure Linux distribution, which supports thousands of packages. The container host contains a subset of those packages based on our customers' operating system and Kubernetes needs. This set of curated packages is among the most requested and necessary packages to run container workloads based on feedback from customers and the open-source community. --## List of Azure Linux Container Host packages --The Azure Linux Container Host package list includes all the needed dependencies to run an Azure Linux VM and also pulls in any necessary Azure Kubernetes Service dependencies. A list of all the packages in the Azure Linux Container Host can be viewed [here](https://github.com/Azure/AgentBaker/blob/master/vhdbuilder/release-notes/AKSCBLMariner/gen2/latest.txt). --Whenever a new image is released by AKS, the [AKS Azure Linux release notes folder](https://github.com/Azure/AgentBaker/blob/master/vhdbuilder/release-notes/AKSAzureLinux/gen2/latest.txt) is updated with a new `latest.txt` file, which details the most up-to-date package list. You can also view previous image package lists and the historical versions of each package in the most recent image release in the GitHub repository. For each prior image release, you can find a corresponding `.txt` file with the naming convention `YYYY.MM.DD.txt`, where `YYYY.MM.DD` is the date of each previous image release. ---> [!NOTE] -> Packages on a running Azure Linux Container Host cluster may have been automatically updated to their latest versions as new packages are released on [packages.microsoft.com](https://packages.microsoft.com/). --One of the key benefits of the Azure Linux Container Host package set is the kernel package. The Linux kernel package for the Azure Linux Container Host is patched and updated at least twice a month. This package is managed and owned by an entire Microsoft team, which ensures it's secure and contains all the latest updates for development. --## Determining package versions in a cluster --If you have direct access to the container host, you can query packages from the host itself. --To list all the installed packages and their versions, run the following command: --```console -rpm -qa -``` --To determine when individual packages were installed, run the following command: --```console -cat /var/log/dnf.log -``` --If you don't have direct access to the container host, you can work backwards from the node image version date to determine the package versions in a cluster. --To determine the `nodeImageVersion`, run the following command: --```azurecli -az aks show -g <groupname> -n <clustername> | grep nodeImageVersion -``` --Then, as described above, check the [AKS Azure Linux release notes folder](https://github.com/Azure/AgentBaker/blob/master/vhdbuilder/release-notes/AKSAzureLinux/gen2) for the file that corresponds with the previously determined node image version date. In the file, the *Installed Packages Begin* section lists all the package versions in your cluster. ---## Next steps --This article covers some of the core Azure Linux Container Host components such as packages. 
For more information on the Azure Linux Container Host concepts, see the following articles: --- [Azure Linux Container Host overview](./intro-azure-linux.md)-- [Azure Linux Container Host for AKS core concepts](./concepts-core.md) |
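The package-version workflow in the packages entry above can be combined into a short sketch: read the node image version from the cluster with the Azure CLI, then (with direct access to a node) inspect an individual package. A minimal sketch under those assumptions; the resource group, cluster name, and package name are placeholders, and the `--query` projection is an addition to the article's `grep`-based command.

```console
# Find the node image version for each node pool (placeholder names).
az aks show -g myResourceGroup -n myAzureLinuxCluster \
  --query "agentPoolProfiles[].{pool:name, nodeImage:nodeImageVersion}" -o table

# With direct access to a node: check one package's version and its recent dnf history.
rpm -q kernel
grep -i "kernel" /var/log/dnf.log | tail -n 5
```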
azure-linux | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/faq.md | - - Title: Frequently asked questions about the Azure Linux Container Host for AKS - description: Find answers to some of the common questions about the Azure Linux Container Host for AKS. - - - - - - Last updated 12/12/2023 ---# Frequently asked questions about the Azure Linux Container Host for AKS --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). --This article answers common questions about the Azure Linux Container Host. --## General FAQs --### What is Azure Linux? --The Azure Linux Container Host is an operating system image that's optimized for running container workloads on [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes). Microsoft maintains the Azure Linux Container Host and based it on Azure Linux (also known as *Mariner*), an open-source Linux distribution created by Microsoft. --### What are the benefits of using Azure Linux? --For more information, see the [Azure Linux Container Host key benefits](./intro-azure-linux.md#azure-linux-container-host-key-benefits). --### What's the difference between Azure Linux and Mariner? --Azure Linux and Mariner are the same image with different branding. Please use the Azure Linux OS SKU when referring to the image on AKS. --### Are Azure Linux container images supported on AKS? --The only supported container images are the Microsoft .NET and Open JDK container images based on Azure Linux. All other images are on a best effort community support basis in our [GitHub issues page](https://github.com/microsoft/CBL-Mariner/issues). --### What's the pricing for Azure Linux? --Azure Linux is available at no additional cost. You only pay for the underlying Azure resources, such as virtual machines (VMs) and storage. --### What GPUs does Azure Linux support? --Azure Linux supports the V100 and T4 GPUs. --### What certifications does Azure Linux have? --Azure Linux passes all CIS level 1 benchmarks and offers a FIPS image. For more information, see [Azure Linux Container Host core concepts](./concepts-core.md). --### Is the Microsoft Azure Linux source code released? --Yes. Azure Linux is an open-source project with a thriving community of contributors. You can find the global Azure Linux source code at https://github.com/microsoft/CBL-Mariner. --### What is the Service Level Agreement (SLA) for CVEs? --High and critical CVEs are taken seriously and may be released out-of-band as a package update before a new AKS node image is available. Medium and low CVEs are included in the next image release. --For more information on CVEs, see [Azure Linux Container Host for AKS core concepts](./concepts-core.md#cve-infrastructure). --### How does Microsoft notify users of new Azure Linux versions? --Azure Linux releases can be tracked alongside AKS releases on the [AKS release tracker](/azure/aks/release-tracker). --### Does the Azure Linux Container Host support AppArmor? --No, the Azure Linux Container Host doesn't support AppArmor. Instead, it supports SELinux, which can be manually configured. --### How does Azure Linux read time for time synchronization on Azure? 
--For time synchronization, Azure Linux reads the time from the Azure VM host using [chronyd](/azure/virtual-machines/linux/time-sync#chrony) and the /dev/ptp device. --### How can I get help with Azure Linux? --Submit a [GitHub issue](https://github.com/microsoft/CBL-Mariner/issues/new/choose) to ask a question, provide feedback, or submit a feature request. Please create an [Azure support request](./support-help.md#create-an-azure-support-request) for any issues or bugs. --### How can I stay informed of updates and new releases? --We're hosting public community calls for Azure Linux users to get together and discuss new features, provide feedback, and learn more about how others use Azure Linux. In each session, we will feature a new demo. The schedule for the upcoming community calls is as follows: --| Date | Time | Meeting link | -| | | | -| 9/26/2024 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | -| 11/21/2024 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | -| 1/23/2025 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | -| 3/27/2025 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | -| 5/22/2025 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | --## Cluster FAQs --### Is there a migration tool available to switch from a different distro to Azure Linux on Azure Kubernetes Service (AKS)? --Yes, the migration from another distro to Azure Linux on AKS is straightforward. For more information, see [Tutorial 3 - Migrating to Azure Linux](./tutorial-azure-linux-migration.md). --### Can an existing AKS cluster be updated to use the Azure Linux Container Host, or does a new cluster with the Azure Linux Container Host need to be created? --An existing AKS cluster can add an Azure Linux node pool with the `az aks nodepool add` command and specifying `--os-sku AzureLinux`. Once a node pool starts, it can coexist with another distro and work gets scheduled between both node pools. For detailed instructions see, [Tutorial 2 - Add an Azure Linux node pool to your existing cluster](./tutorial-azure-linux-add-nodepool.md). --### Can I use a specific Azure Linux version indefinitely? --You can decide to opt out of automatic node image upgrades and manually upgrade your node image to control what version of Azure Linux you use. This way, you can use a specific Azure Linux version for as long as you want. 
--### I added a new node pool on an AKS cluster using the Azure Linux Container Host, but the kernel version isn't the same as the one that booted. Is this intended? --The base image that AKS uses to start clusters runs about two weeks behind the latest packages. When the image was built, the latest kernel was booted when the cluster started. However, one of the first things the cluster does is install package updates, which is where the new kernel came from. Most updated packages take effect immediately, but in order for a new kernel to be used the node needs to reboot. --The expected pattern for rebooting is to run a tool like [Kured](https://github.com/weaveworks/kured), which monitors each node, then gracefully reboots the cluster one machine at a time to bring everything up to date. --## Update FAQs --### What is Azure Linux's release cycle? --Azure Linux releases major image versions every ~two years, using the Linux LTS kernel and regularly updating the new stable packages. Monthly updates with CVE fixes are also made. --### How do upgrades from one major Azure Linux version to another work? --When upgrading between major Azure Linux versions, a [SKU migration](./tutorial-azure-linux-migration.md) is required. In the next major Azure Linux version release, the osSKU will be a rolling release. --### When are the latest Azure Linux Container Host image/node image released? --New Azure Linux Container Host base images on AKS are built weekly, but the release cycle may not be as frequent. We spend a week performing end-to-end testing, and the image version may take a few days to roll out to all regions. --### Is it possible to skip multiple Azure Linux minor versions during an upgrade? --If you choose to manually upgrade your node image instead of using automatic node image upgrades, you can skip Azure Linux minor versions during an upgrade. The next manual node image upgrade you perform upgrades you to the latest Azure Linux Container Host for AKS image. --### Some packages (CNCF, K8s) have a more aggressive release cycle, and I don't want to be up to a year behind. Does the Azure Linux Container Host have any plans for more frequent upgrades? --The Azure Linux Container Host adopts newer CNCF packages like K8s with higher cadence and doesn't delay them for annual releases. However, major compiler upgrades or deprecating language stacks like Python 2.7x may be held for major releases. |
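The FAQ above states that an existing AKS cluster can add an Azure Linux node pool with `az aks nodepool add` and `--os-sku AzureLinux`. A minimal sketch of that command, with placeholder resource and pool names:

```azurecli
# Sketch (placeholder names): add a single-node Azure Linux node pool to an existing cluster.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name azlinuxpool \
  --os-sku AzureLinux \
  --node-count 1
```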
azure-linux | How To Install Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/how-to-install-certs.md | - Title: Installing certificates on the Azure Linux Container Host for AKS -description: How to install certificates on the Azure Linux Container Host for AKS. ---ms.editor: schaffererin -- Previously updated : 08/18/2023----# Installing certificates on the Azure Linux Container Host for AKS --By default, the Azure Linux Container Host for AKS image has a minimal set of root certs to trust certain Microsoft resources, such as `packages.microsoft.com`. Not all Microsoft certificates are automatically included in our image, which is consistent with the least-privilege principle and gives you the flexibility to opt in to just the root certificates you need and to customize your image. --The `ca-certificates-base` package is preinstalled in the container host image and contains certificates from a small set of Microsoft-owned CAs. It consists of certificates from Microsoft's root and intermediate CAs. This package allows your container host to trust a minimal set of servers, all of which were verified and had their certificates issued by Microsoft. --The `ca-certificates` package covers the root CAs trusted by Microsoft through the [Microsoft Trusted Root Program](/security/trusted-root/participants-list). --The directory `/etc/pki/ca-trust/source/` contains the CA certificates and trust settings in the PEM file format. The trust settings found here are interpreted with a high priority, higher than the ones found in `/usr/share/pki/ca-trust-source/`. --For more information on the Azure Linux Container Host for AKS image certifications, see the [GitHub documentation](https://github.com/microsoft/CBL-Mariner/blob/2.0/toolkit/docs/security/ca-certificates.md). --## Add a certificate in the PEM or DER file format --You can add individual or multiple certificates to your Azure Linux Container Host for AKS image. To add a certificate in the simple PEM or DER file format to the list of CAs trusted on the system, follow these steps (a short console sketch of these steps follows this entry): --1. Save your certificate under `/etc/pki/ca-trust/source/anchors/`. -1. Run `update-ca-trust` to consolidate CA certificates and associated trust. --## Add a certificate in the extended BEGIN TRUSTED file format --If your certificate is in the extended BEGIN TRUSTED file format (which may contain distrust trust flags or trust flags for usages other than TLS), then follow these steps: --1. Save your certificate under `/etc/pki/ca-trust/source/`. -2. Run `update-ca-trust` to consolidate CA certificates and associated trust. --## Next steps --- Learn more about [Azure Linux Container Host core concepts](./concepts-core.md).-- Follow our tutorial to [Deploy, manage, and update applications](./tutorial-azure-linux-create-cluster.md).-- Get started by [Creating an Azure Linux Container Host for AKS cluster using Azure CLI](./quickstart-azure-cli.md). |
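A minimal console sketch of the two PEM/DER steps from the entry above; the certificate file name is a placeholder, and the commands assume root (or sudo) access on the container host.

```console
# Sketch: trust an additional CA certificate (PEM or DER) on the container host.
sudo cp my-root-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
```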
azure-linux | Intro Azure Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/intro-azure-linux.md | - Title: Introduction to the Azure Linux Container Host for AKS -description: Learn about the Azure Linux Container Host to use the container-optimized OS in your AKS clusters. ----- Previously updated : 12/12/2023---# What is the Azure Linux Container Host for AKS? --The Azure Linux Container Host is an operating system image that's optimized for running container workloads on [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes). Microsoft maintains the Azure Linux Container Host and based it on [CBL-Mariner][cbl-mariner], an open-source Linux distribution created by Microsoft. --The Azure Linux Container Host is lightweight, containing only the packages needed to run container workloads. It's hardened based on significant validation tests and internal usage and is compatible with Azure agents. It provides reliability and consistency from cloud to edge across AKS, AKS for Azure Stack HCI, and Azure Arc. You can deploy Azure Linux node pools in a new cluster, add Azure Linux node pools to your existing clusters, or migrate your existing nodes to Azure Linux nodes. --To learn more about Azure Linux, see the [Azure Linux GitHub repository](https://github.com/microsoft/CBL-Mariner). --## Azure Linux Container Host key benefits --The Azure Linux Container Host offers the following key benefits: --- **Small and lightweight**- - The Azure Linux Container Host only includes the necessary set of packages needed to run container workloads. As a result, it consumes limited disk and memory resources and produces faster cluster operations (create, upgrade, delete, scale, node creation, and pod creation) on AKS. - - Azure Linux has only 500 packages, and as a result takes up the least disk space by up to *5 GB* on AKS. -- **Secure supply chain**- - The Linux and AKS teams at Microsoft build, sign, and validate the [Azure Linux Container Host packages][azure-linux-packages] from source, and host packages and sources in Microsoft-owned and secured platforms. - - Before we release a package, each package runs through a full set of unit tests and end-to-end testing on the existing image to prevent regressions. The extensive testing, in combination with the smaller package count, reduces the chances of disruptive updates to applications. - - Azure Linux has a focus on stability, often backporting fixes in core components like the kernel or openssl. It also limits substantial changes or significant version bumps to major release boundaries (for example, Azure Linux 2.0 to 3.0), which prevents customer outages. -- **Secure by default**- - The Azure Linux Container Host has an emphasis on security. It follows the secure-by-default principles, including using a hardened Linux kernel with Azure cloud optimizations and flags tuned for Azure. It also provides a reduced attack surface and eliminates patching and maintenance of unnecessary packages. - - Microsoft monitors the CVE database and releases security patches monthly and critical updates within days if necessary. - - Azure Linux passes all the [CIS Level 1 benchmarks][cis-benchmarks], making it the only Linux distribution on AKS that does so. - - For more information on Azure Linux Container Host security principles, see the [AKS security concepts](/azure/aks/concepts-security). 
-- **Maintains compatibility with existing workloads**- - All existing and future AKS extensions, add-ons, and open-source projects on AKS support Azure Linux. This includes support for runtime components like Dapr, IaC tools like Terraform, and monitoring solutions like Dynatrace. - - Azure Linux ships with containerd as its container runtime and the upstream Linux kernel, which enables existing containers based on Linux images (like Alpine) to work seamlessly on Azure Linux. --## Azure Linux Container Host supported GPU SKUs --The Azure Linux Container Host supports the following GPU SKUs: --- [NVIDIA V100][nvidia-v100]-- [NVIDIA T4][nvidia-t4]--> [!NOTE] -> Azure Linux doesn't support the NC A100 v4 series. All other VM SKUs that are available on AKS are available with Azure Linux. -> -> If there are any areas you would like to have priority, please file an issue in the [AKS GitHub repository](https://github.com/Azure/AKS/issues). --## Next steps --- Learn more about [Azure Linux Container Host core concepts](./concepts-core.md).-- Follow our tutorial to [Deploy, manage, and update applications](./tutorial-azure-linux-create-cluster.md).-- Get started by [Creating an Azure Linux Container Host for AKS cluster using Azure CLI](./quickstart-azure-cli.md).--<!-- LINKS - internal --> -[nvidia-v100]: /azure/virtual-machines/ncv3-series -[nvidia-t4]: /azure/virtual-machines/nct4-v3-series -[cis-benchmarks]: /azure/aks/cis-azure-linux --<!-- LINKS - external --> -[cbl-mariner]: https://github.com/microsoft/CBL-Mariner -[azure-linux-packages]: https://packages.microsoft.com/cbl-mariner/2.0/prod/ |
azure-linux | Quickstart Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-cli.md | - Title: 'Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using the Azure CLI' -description: Learn how to quickly create an Azure Linux Container Host for AKS cluster using the Azure CLI. ----- Previously updated : 04/18/2023---# Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using the Azure CLI --Get started with the Azure Linux Container Host by using the Azure CLI to deploy an Azure Linux Container Host for AKS cluster. After installing the prerequisites, you will create a resource group, create an AKS cluster, connect to the cluster, and run a sample multi-container application in the cluster. --## Prerequisites --- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]--- Use the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart - Bash](/azure/cloud-shell/quickstart).- :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com"::: -- If you prefer to run CLI reference commands locally, [install](/cli/azure/install-azure-cli) the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).-- - If you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli). -- - When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview). -- - Run [az version](/cli/azure/reference-index?#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az-upgrade). --## Create a resource group --An Azure resource group is a logical group in which Azure resources are deployed and managed. When creating a resource group, it is required to specify a location. This location is: -- The storage location of your resource group metadata.-- Where your resources will run in Azure if you don't specify another region when creating a resource.--To create a resource group named *testAzureLinuxResourceGroup* in the *eastus* region, follow this step: --Create a resource group using the `az group create` command. --```azurecli-interactive -az group create --name testAzureLinuxResourceGroup --location eastus -``` -The following output resembles that your resource group was successfully created: --```json -{ - "id": "/subscriptions/<guid>/resourceGroups/testAzureLinuxResourceGroup", - "location": "eastus", - "managedBy": null, - "name": "testAzureLinuxResourceGroup", - "properties": { - "provisioningState": "Succeeded" - }, - "tags": null -} -``` -> [!NOTE] -> The above example uses *eastus*, but Azure Linux Container Host clusters are available in all regions. 
--## Create an Azure Linux Container Host cluster --Create an AKS cluster using the `az aks create` command with the `--os-sku` parameter to provision the AKS cluster with an Azure Linux image. The following example creates an Azure Linux cluster named *testAzureLinuxCluster* with one node: --```azurecli-interactive -az aks create --name testAzureLinuxCluster --resource-group testAzureLinuxResourceGroup --os-sku AzureLinux -``` -After a few minutes, the command completes and returns JSON-formatted information about the cluster. --## Connect to the cluster --To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/). --1. Configure `kubectl` to connect to your Kubernetes cluster using the `az aks get-credentials` command. -- ```azurecli-interactive - az aks get-credentials --resource-group testAzureLinuxResourceGroup --name testAzureLinuxCluster - ``` --2. Verify the connection to your cluster using the [kubectl get](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command. The command returns a list of the pods. -- ```azurecli-interactive - kubectl get pods --all-namespaces - ``` --## Deploy the application --A [Kubernetes manifest file](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests) defines a cluster's desired state, such as which container images to run. --In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application](https://github.com/Azure-Samples/azure-voting-app-redis). This manifest includes two Kubernetes deployments: --* The sample Azure Vote Python applications. -* A Redis instance. --Two [Kubernetes Services](/azure/aks/concepts-network-services) are also created: --* An internal service for the Redis instance. -* An external service to access the Azure Vote application from the internet. --1. Create a file named `azure-vote.yaml` and copy in the following manifest. -- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system. 
-- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: azure-vote-back - spec: - replicas: 1 - selector: - matchLabels: - app: azure-vote-back - template: - metadata: - labels: - app: azure-vote-back - spec: - nodeSelector: - "kubernetes.io/os": linux - containers: - - name: azure-vote-back - image: mcr.microsoft.com/oss/bitnami/redis:6.0.8 - env: - - name: ALLOW_EMPTY_PASSWORD - value: "yes" - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - ports: - - containerPort: 6379 - name: redis - - apiVersion: v1 - kind: Service - metadata: - name: azure-vote-back - spec: - ports: - - port: 6379 - selector: - app: azure-vote-back - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: azure-vote-front - spec: - replicas: 1 - selector: - matchLabels: - app: azure-vote-front - template: - metadata: - labels: - app: azure-vote-front - spec: - nodeSelector: - "kubernetes.io/os": linux - containers: - - name: azure-vote-front - image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - ports: - - containerPort: 80 - env: - - name: REDIS - value: "azure-vote-back" - - apiVersion: v1 - kind: Service - metadata: - name: azure-vote-front - spec: - type: LoadBalancer - ports: - - port: 80 - selector: - app: azure-vote-front - ``` -- For a breakdown of YAML manifest files, see [Deployments and YAML manifests](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests). --1. Deploy the application using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest: -- ```console - kubectl apply -f azure-vote.yaml - ``` -- The following example resembles output showing the successfully created deployments and -- ```output - deployment "azure-vote-back" created - service "azure-vote-back" created - deployment "azure-vote-front" created - service "azure-vote-front" created - ``` --## Test the application --When the application runs, a Kubernetes service exposes the application front-end to the internet. This process can take a few minutes to complete. --Monitor progress using the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument. --```azurecli-interactive -kubectl get service azure-vote-front --watch -``` --The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*. --```output -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s -``` --Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service: --```output -azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m -``` --To see the Azure Vote app in action, open a web browser to the external IP address of your service. ---## Delete the cluster --If you're not going to continue through the following tutorials, to avoid Azure charges clean up any unnecessary resources. Use the `az group delete` command to remove the resource group and all related resources. --```azurecli-interactive -az group delete --name testAzureLinuxCluster --yes --no-wait -``` --## Next steps --In this quickstart, you deployed an Azure Linux Container Host cluster. 
To learn more about the Azure Linux Container Host, and walk through a complete cluster deployment and management example, continue to the Azure Linux Container Host tutorial. --> [!div class="nextstepaction"] -> [Azure Linux Container Host tutorial](./tutorial-azure-linux-create-cluster.md) |
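After the quickstart above creates the cluster with `--os-sku AzureLinux`, it can be useful to confirm which OS SKU the node pool is actually running. A minimal sketch using the quickstart's resource names; the `osSku` field in the projection is an assumption about the `az aks show` output shape, so adjust if your CLI version reports it differently.

```azurecli
# Sketch: confirm the node pool OS SKU and node image version for the quickstart cluster.
az aks show \
  --resource-group testAzureLinuxResourceGroup \
  --name testAzureLinuxCluster \
  --query "agentPoolProfiles[].{pool:name, osSku:osSku, nodeImage:nodeImageVersion}" -o table
```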
azure-linux | Quickstart Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-powershell.md | - Title: 'Quickstart: Deploy an Azure Linux Container Host for an AKS cluster using Azure PowerShell' -description: Learn how to quickly create an Azure Linux Container Host for an AKS cluster using Azure PowerShell. ----- Previously updated : 11/20/2023---# Quickstart: Deploy an Azure Linux Container Host for an AKS cluster using Azure PowerShell --Get started with the Azure Linux Container Host by using Azure PowerShell to deploy an Azure Linux Container Host for an AKS cluster. After installing the prerequisites, you create a resource group, create an AKS cluster, connect to the cluster, and run a sample multi-container application in the cluster. --## Prerequisites --- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]-- Use the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart](/azure/cloud-shell/quickstart).- :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com"::: -- If you're running PowerShell locally, install the `Az PowerShell` module and connect to your Azure account using the [`Connect-AzAccount`](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].-- The identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](/azure/aks/concepts-identity).--## Create a resource group --An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When creating a resource group, you need to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. --The following example creates resource group named *testAzureLinuxResourceGroup* in the *eastus* region. --- Create a resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.-- ```azurepowershell-interactive - New-AzResourceGroup -Name testAzureLinuxResourceGroup -Location eastus - ``` -- The following example output resembles successful creation of the resource group: -- ```output - ResourceGroupName : testAzureLinuxResourceGroup - Location : eastus - ProvisioningState : Succeeded - Tags : - ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testAzureLinuxResourceGroup - ``` -- > [!NOTE] - > The above example uses *eastus*, but Azure Linux Container Host clusters are available in all regions. --## Create an Azure Linux Container Host cluster --The following example creates a cluster named *testAzureLinuxCluster* with one node. 
--- Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet with the `-NodeOsSKU` flag set to *AzureLinux*.-- ```azurepowershell-interactive - New-AzAksCluster -ResourceGroupName testAzureLinuxResourceGroup -Name testAzureLinuxCluster -NodeOsSKU AzureLinux - ``` -- After a few minutes, the command completes and returns JSON-formatted information about the cluster. --## Connect to the cluster --To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/). `kubectl` is already installed if you use Azure Cloud Shell. --1. Install `kubectl` locally using the `Install-AzAksCliTool` cmdlet. -- ```azurepowershell-interactive - Install-AzAksCliTool - ``` --2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. -- ```azurepowershell-interactive - Import-AzAksCredential -ResourceGroupName testAzureLinuxResourceGroup -Name testAzureLinuxCluster - ``` --3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster pods. -- ```azurepowershell-interactive - kubectl get pods --all-namespaces - ``` --## Deploy the application --A [Kubernetes manifest file](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests) defines a cluster's desired state, such as which container images to run. --In this quickstart, you use a manifest to create all objects needed to run the [Azure Vote application](https://github.com/Azure-Samples/azure-voting-app-redis). This manifest includes two Kubernetes deployments: --- The sample Azure Vote Python applications.-- A Redis instance.--This manifest also creates two [Kubernetes Services](/azure/aks/concepts-network-services): --- An internal service for the Redis instance.-- An external service to access the Azure Vote application from the internet.--1. Create a file named `azure-vote.yaml` and copy in the following manifest. -- - If you use the Azure Cloud Shell, you can create the file using `code`, `vi`, or `nano`. 
-- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: azure-vote-back - spec: - replicas: 1 - selector: - matchLabels: - app: azure-vote-back - template: - metadata: - labels: - app: azure-vote-back - spec: - nodeSelector: - "kubernetes.io/os": linux - containers: - - name: azure-vote-back - image: mcr.microsoft.com/oss/bitnami/redis:6.0.8 - env: - - name: ALLOW_EMPTY_PASSWORD - value: "yes" - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - ports: - - containerPort: 6379 - name: redis - - apiVersion: v1 - kind: Service - metadata: - name: azure-vote-back - spec: - ports: - - port: 6379 - selector: - app: azure-vote-back - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: azure-vote-front - spec: - replicas: 1 - selector: - matchLabels: - app: azure-vote-front - template: - metadata: - labels: - app: azure-vote-front - spec: - nodeSelector: - "kubernetes.io/os": linux - containers: - - name: azure-vote-front - image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - ports: - - containerPort: 80 - env: - - name: REDIS - value: "azure-vote-back" - - apiVersion: v1 - kind: Service - metadata: - name: azure-vote-front - spec: - type: LoadBalancer - ports: - - port: 80 - selector: - app: azure-vote-front - ``` -- For a breakdown of YAML manifest files, see [Deployments and YAML manifests](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests). --2. Deploy the application using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest: -- ```azurepowershell-interactive - kubectl apply -f azure-vote.yaml - ``` -- The following example resembles output showing the successfully created deployments and -- ```output - deployment "azure-vote-back" created - service "azure-vote-back" created - deployment "azure-vote-front" created - service "azure-vote-front" created - ``` --## Test the application --When the application runs, a Kubernetes service exposes the application frontend to the internet. This process can take a few minutes to complete. --1. Monitor progress using the [`kubectl get service`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument. -- ```azurepowershell-interactive - kubectl get service azure-vote-front --watch - ``` -- The **EXTERNAL-IP** output for the `azure-vote-front` service initially shows as *pending*. -- ```output - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s - ``` --2. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service: -- ```output - azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m - ``` --3. Open a web browser to the external IP address of your service to see the application in action. -- :::image type="content" source="./media/azure-voting-application.png" alt-text="Screenshot of browsing to Azure Vote sample application."::: --## Delete the cluster --If you don't plan on continuing through the following tutorials, remove the created resources to avoid incurring Azure charges. 
--- Remove the resource group and all related resources using the [`RemoveAzResourceGroup`][remove-azresourcegroup] cmdlet.-- ```azurepowershell-interactive - Remove-AzResourceGroup -Name testAzureLinuxResourceGroup - ``` --## Next steps --In this quickstart, you deployed an Azure Linux Container Host AKS cluster. To learn more about the Azure Linux Container Host and walk through a complete cluster deployment and management example, continue to the Azure Linux Container Host tutorial. --> [!div class="nextstepaction"] -> [Azure Linux Container Host tutorial](./tutorial-azure-linux-create-cluster.md) --<!-- LINKS - internal --> -[install-azure-powershell]: /powershell/azure/install-az-ps -[azure-resource-group]: ../azure-resource-manager/management/overview.md -[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup -[new-azakscluster]: /powershell/module/az.aks/new-azakscluster -[import-azakscredential]: /powershell/module/az.aks/import-azakscredential -[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get -[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup |
azure-linux | Quickstart Azure Resource Manager Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-resource-manager-template.md | - Title: 'Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using an ARM template' -description: Learn how to quickly create an Azure Linux Container Host for AKS cluster using an Azure Resource Manager template. ----- Previously updated : 04/18/2023---# Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using an ARM template --Get started with the Azure Linux Container Host by using an Azure Resource Manager (ARM) template to deploy an Azure Linux Container Host cluster. After installing the prerequisites, you'll create a SSH key pair, review the template, deploy the template and validate it, and then deploy an application. ---## Prerequisites --- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]--- Use the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart - Bash](/azure/cloud-shell/quickstart).- :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com"::: -- If you prefer to run CLI reference commands locally, [install](/cli/azure/install-azure-cli) the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).-- - If you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli). -- - When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview). -- - Run [az version](/cli/azure/reference-index?#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az-upgrade). --- If you don't already have kubectl installed, install it through Azure CLI using `az aks install-cli` or follow the [upstream instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/).-- To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.-- The identity you're using to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](/azure/aks/concepts-identity).-- To deploy a Bicep file or ARM template, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a virtual machine, you need Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/* permissions. 
For a list of roles and permissions, see [Azure built-in roles](../../articles/role-based-access-control/built-in-roles.md).--### Create an SSH key pair --To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command overwrites any SSH key pair with the same name already existing in the given location. --1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser. --1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096: -- ```console - ssh-keygen -t rsa -b 4096 - ``` --For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure](/azure/virtual-machines/linux/create-ssh-keys-detailed). --## Review the template --The following deployment uses an ARM template from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.kubernetes/aks-azure-linux). --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.1", - "parameters": { - "clusterName": { - "type": "string", - "defaultValue": "azurelinuxakscluster", - "metadata": { - "description": "The name of the Managed Cluster resource." - } - }, - "location": { - "type": "string", - "defaultValue": "[resourceGroup().location]", - "metadata": { - "description": "The location of the Managed Cluster resource." - } - }, - "dnsPrefix": { - "type": "string", - "metadata": { - "description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN." - } - }, - "osDiskSizeGB": { - "type": "int", - "defaultValue": 0, - "minValue": 0, - "maxValue": 1023, - "metadata": { - "description": "Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize." - } - }, - "agentCount": { - "type": "int", - "defaultValue": 3, - "minValue": 1, - "maxValue": 50, - "metadata": { - "description": "The number of nodes for the cluster." - } - }, - "agentVMSize": { - "type": "string", - "defaultValue": "Standard_DS2_v2", - "metadata": { - "description": "The size of the Virtual Machine." - } - }, - "linuxAdminUsername": { - "type": "string", - "metadata": { - "description": "User name for the Linux Virtual Machines." - } - }, - "sshRSAPublicKey": { - "type": "string", - "metadata": { - "description": "Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm'" - } - }, - "osType": { - "type": "string", - "defaultValue": "Linux", - "allowedValues": [ - "Linux" - ], - "metadata": { - "description": "The type of operating system." - } - }, - "osSKU": { - "type": "string", - "defaultValue": "AzureLinux", - "allowedValues": [ - "AzureLinux", - "Ubuntu" - ], - "metadata": { - "description": "The Linux SKU to use." 
- } - } - }, - "resources": [ - { - "type": "Microsoft.ContainerService/managedClusters", - "apiVersion": "2021-03-01", - "name": "[parameters('clusterName')]", - "location": "[parameters('location')]", - "properties": { - "dnsPrefix": "[parameters('dnsPrefix')]", - "agentPoolProfiles": [ - { - "name": "agentpool", - "mode": "System", - "osDiskSizeGB": "[parameters('osDiskSizeGB')]", - "count": "[parameters('agentCount')]", - "vmSize": "[parameters('agentVMSize')]", - "osType": "[parameters('osType')]", - "osSKU": "[parameters('osSKU')]", - "storageProfile": "ManagedDisks" - } - ], - "linuxProfile": { - "adminUsername": "[parameters('linuxAdminUsername')]", - "ssh": { - "publicKeys": [ - { - "keyData": "[parameters('sshRSAPublicKey')]" - } - ] - } - } - }, - "identity": { - "type": "SystemAssigned" - } - } - ], - "outputs": { - "controlPlaneFQDN": { - "type": "string", - "value": "[reference(parameters('clusterName')).fqdn]" - } - } -} -``` --To add Azure Linux to an existing ARM template, you need to add `"osSKU": "AzureLinux"` and `"mode": "System"` to `agentPoolProfiles` and set the apiVersion to 2021-03-01 or newer (`"apiVersion": "2021-03-01"`). ---## Deploy the template --1. Select the following button to sign in to Azure and open a template. -- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks-mariner%2Fazuredeploy.json"::: --1. Select or enter the following values. -- For this quickstart, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, *OS Type*, and *Kubernetes Version*. Provide your own values for the following template parameters: -- * **Subscription**: Select an Azure subscription. - * **Resource group**: Select **Create new**. Enter a unique name for the resource group, such as *testAzureLinuxResourceGroup*, then choose **OK**. - * **Location**: Select a location, such as **East US**. - * **Cluster name**: Enter a unique name for the AKS cluster, such as *testAzureLinuxCluster*. - * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myAzureLinuxCluster*. - * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureUser*. - * **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*). -- :::image type="content" source="./media/create-aks-cluster-using-template-portal.png" alt-text="Screenshot of Resource Manager template to create an Azure Kubernetes Service cluster in the portal."::: --1. Select **Review + Create**. --It takes a few minutes to create the Azure Linux Container Host cluster. Wait for the cluster to be successfully deployed before you move on to the next step. --## Validate the deployment --### Connect to the cluster --To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/reference/kubectl/). --1. Install `kubectl` locally using the `az aks install-cli` command: -- ```azurecli - az aks install-cli - ``` --1. Configure `kubectl` to connect to your Kubernetes cluster using the `az aks get-credentials` command. This command downloads credentials and configures the Kubernetes CLI to use them. 
-- ```azurecli-interactive - az aks get-credentials --resource-group testAzureLinuxResourceGroup --name testAzureLinuxCluster - ``` --1. Verify the connection to your cluster using the `kubectl get` command. This command returns a list of the cluster nodes. -- ```console - kubectl get nodes - ``` -- The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*: -- ```output - NAME STATUS ROLES AGE VERSION - aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6 - aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6 - aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6 - ``` --### Deploy the application --A [Kubernetes manifest file](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests) defines a cluster's desired state, such as which container images to run. --In this quickstart, you use a manifest to create all objects needed to run the [Azure Vote application](https://github.com/Azure-Samples/azure-voting-app-redis). This manifest includes two Kubernetes deployments: --* The sample Azure Vote Python applications. -* A Redis instance. --Two [Kubernetes Services](/azure/aks/concepts-network-services) are also created: --* An internal service for the Redis instance. -* An external service to access the Azure Vote application from the internet. --1. Create a file named `azure-vote.yaml`. - * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system -1. Copy in the following YAML definition: -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: azure-vote-back - spec: - replicas: 1 - selector: - matchLabels: - app: azure-vote-back - template: - metadata: - labels: - app: azure-vote-back - spec: - nodeSelector: - "kubernetes.io/os": linux - containers: - - name: azure-vote-back - image: mcr.microsoft.com/oss/bitnami/redis:6.0.8 - env: - - name: ALLOW_EMPTY_PASSWORD - value: "yes" - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - ports: - - containerPort: 6379 - name: redis - - apiVersion: v1 - kind: Service - metadata: - name: azure-vote-back - spec: - ports: - - port: 6379 - selector: - app: azure-vote-back - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: azure-vote-front - spec: - replicas: 1 - selector: - matchLabels: - app: azure-vote-front - template: - metadata: - labels: - app: azure-vote-front - spec: - nodeSelector: - "kubernetes.io/os": linux - containers: - - name: azure-vote-front - image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - ports: - - containerPort: 80 - env: - - name: REDIS - value: "azure-vote-back" - - apiVersion: v1 - kind: Service - metadata: - name: azure-vote-front - spec: - type: LoadBalancer - ports: - - port: 80 - selector: - app: azure-vote-front - ``` -- For a breakdown of YAML manifest files, see [Deployments and YAML manifests](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests). --1. 
Deploy the application using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest: -- ```console - kubectl apply -f azure-vote.yaml - ``` -- The following example output shows the successfully created deployments and services: -- ```output - deployment "azure-vote-back" created - service "azure-vote-back" created - deployment "azure-vote-front" created - service "azure-vote-front" created - ``` --### Test the application --When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. --Monitor progress using the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument. --```console -kubectl get service azure-vote-front --watch -``` --The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*. --```output -NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE -azure-vote-front   LoadBalancer   10.0.37.27   <pending>     80:30572/TCP   6s -``` --Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service: --```output -azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m -``` --To see the Azure Vote app in action, open a web browser to the external IP address of your service. ---## Delete the cluster --If you're not going to continue through the following tutorials, clean up any unnecessary resources to avoid Azure charges. Use the `az group delete` command to remove the resource group and all related resources. --```azurecli-interactive -az group delete --name testAzureLinuxResourceGroup --yes --no-wait -``` --## Next steps --In this quickstart, you deployed an Azure Linux Container Host cluster. To learn more about the Azure Linux Container Host and walk through a complete cluster deployment and management example, continue to the Azure Linux Container Host tutorial. --> [!div class="nextstepaction"] -> [Azure Linux Container Host tutorial](./tutorial-azure-linux-create-cluster.md) |
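If you prefer to deploy the same quickstart template from the command line instead of the portal, the following sketch uses `az deployment group create`. The raw template URL and the parameter names are assumptions based on the template reviewed earlier, so adjust them to match the template you actually deploy.

```azurecli-interactive
# Deploy the quickstart template into an existing resource group (URL and parameters assumed)
az deployment group create \
    --resource-group testAzureLinuxResourceGroup \
    --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.kubernetes/aks-azure-linux/azuredeploy.json \
    --parameters clusterName=testAzureLinuxCluster \
                 dnsPrefix=myAzureLinuxCluster \
                 linuxAdminUsername=azureUser \
                 sshRSAPublicKey="$(cat ~/.ssh/id_rsa.pub)"
```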
azure-linux | Quickstart Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-terraform.md | - Title: 'Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using Terraform' -description: Learn how to quickly create an Azure Linux Container Host for AKS cluster using Terraform. -----ms.editor: schaffererin - Previously updated : 06/27/2023---# Quickstart: Deploy an Azure Linux Container Host for AKS cluster using Terraform --Get started with the Azure Linux Container Host using Terraform to deploy an Azure Linux Container Host cluster. After installing the prerequisites, you implement the Terraform code, initialize Terraform, and create and apply a Terraform execution plan. --[Terraform](https://www.terraform.io/) enables the definition, preview, and deployment of cloud infrastructure. With Terraform, you create configuration files using [HCL syntax](https://developer.hashicorp.com/terraform/language/syntax/configuration). The HCL syntax allows you to specify the cloud provider and elements that make up your cloud infrastructure. After you create your configuration files, you create an execution plan that allows you to preview your infrastructure changes before they're deployed. Once you verify the changes, you apply the execution plan to deploy the infrastructure. --> [!NOTE] -> The example code in this article is located in the [Microsoft Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/201-k8s-cluster-with-tf-and-aks). --## Prerequisites --- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]--- If you haven't already configured Terraform, you can do so using one of the following options:- - [Azure Cloud Shell with Bash](/azure/developer/terraform/get-started-cloud-shell-bash?tabs=bash) - - [Azure Cloud Shell with PowerShell](/azure/developer/terraform/get-started-cloud-shell-powershell?tabs=bash) - - [Windows with Bash](/azure/developer/terraform/get-started-windows-bash?tabs=bash) - - [Windows with PowerShell](/azure/developer/terraform/get-started-windows-powershell?tabs=bash) -- If you don't have an Azure service principal, [create a service principal](/azure/developer/terraform/authenticate-to-azure?tabs=bash#create-a-service-principal). Make note of the `appId`, `display_name`, `password`, and `tenant`.-- You need the Kubernetes command-line tool `kubectl`. If you don't have it, [download kubectl](https://kubernetes.io/releases/download/).--### Create an SSH key pair --To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command overwrites any SSH key pair with the same name already existing in the given location. --1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser. -2. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096: -- ```console - ssh-keygen -t rsa -b 4096 - ``` --For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure](/azure/virtual-machines/linux/create-ssh-keys-detailed). --## Implement the Terraform code --1. Create a directory in which to test the sample Terraform code and make it the current directory. -2. 
Create a file named `providers.tf` and insert the following code: -- ```terraform - terraform { - required_version = ">=1.0" - - required_providers { - azurerm = { - source = "hashicorp/azurerm" - version = "~>3.0" - } - random = { - source = "hashicorp/random" - version = "~>3.0" - } - } - } - - provider "azurerm" { - features {} - } - ``` --3. Create a file named `main.tf` and insert the following code: -- ```terraform - # Generate random resource group name - resource "random_pet" "rg_name" { - prefix = var.resource_group_name_prefix - } - - resource "azurerm_resource_group" "rg" { - location = var.resource_group_location - name = random_pet.rg_name.id - } - - resource "random_id" "log_analytics_workspace_name_suffix" { - byte_length = 8 - } - - resource "azurerm_log_analytics_workspace" "test" { - location = var.log_analytics_workspace_location - # The WorkSpace name has to be unique across the whole of azure; - # not just the current subscription/tenant. - name = "${var.log_analytics_workspace_name}-${random_id.log_analytics_workspace_name_suffix.dec}" - resource_group_name = azurerm_resource_group.rg.name - sku = var.log_analytics_workspace_sku - } - - resource "azurerm_log_analytics_solution" "test" { - location = azurerm_log_analytics_workspace.test.location - resource_group_name = azurerm_resource_group.rg.name - solution_name = "ContainerInsights" - workspace_name = azurerm_log_analytics_workspace.test.name - workspace_resource_id = azurerm_log_analytics_workspace.test.id - - plan { - product = "OMSGallery/ContainerInsights" - publisher = "Microsoft" - } - } - - resource "azurerm_kubernetes_cluster" "k8s" { - location = azurerm_resource_group.rg.location - name = var.cluster_name - resource_group_name = azurerm_resource_group.rg.name - dns_prefix = var.dns_prefix - tags = { - Environment = "Development" - } - - default_node_pool { - name = "azurelinuxpool" - vm_size = "Standard_D2_v2" - node_count = var.agent_count - os_sku = "AzureLinux" - } - linux_profile { - admin_username = "azurelinux" - - ssh_key { - key_data = file(var.ssh_public_key) - } - } - network_profile { - network_plugin = "kubenet" - load_balancer_sku = "standard" - } - service_principal { - client_id = var.aks_service_principal_app_id - client_secret = var.aks_service_principal_client_secret - } - } - ``` -- Similarly, you can specify the Azure Linux `os_sku` in [azurerm_kubernetes_cluster_node_pool](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster_node_pool#os_sku). --4. Create a file named `variables.tf` and insert the following code: -- ```terraform - variable "agent_count" { - default = 3 - } - - # The following two variable declarations are placeholder references. - # Set the values for these variable in terraform.tfvars - variable "aks_service_principal_app_id" { - default = "" - } - - variable "aks_service_principal_client_secret" { - default = "" - } - - variable "cluster_name" { - default = "k8stest" - } - - variable "dns_prefix" { - default = "k8stest" - } - - # Refer to https://azure.microsoft.com/global-infrastructure/services/?products=monitor for available Log Analytics regions. 
- variable "log_analytics_workspace_location" { - default = "eastus" - } - - variable "log_analytics_workspace_name" { - default = "testLogAnalyticsWorkspaceName" - } - - # Refer to https://azure.microsoft.com/pricing/details/monitor/ for Log Analytics pricing - variable "log_analytics_workspace_sku" { - default = "PerGB2018" - } - - variable "resource_group_location" { - default = "eastus" - description = "Location of the resource group." - } - - variable "resource_group_name_prefix" { - default = "rg" - description = "Prefix of the resource group name that's combined with a random ID so name is unique in your Azure subscription." - } - - variable "ssh_public_key" { - default = "~/.ssh/id_rsa.pub" - } - ``` --5. Create a file named `outputs.tf` and insert the following code: -- ```terraform - output "client_certificate" { - value = azurerm_kubernetes_cluster.k8s.kube_config[0].client_certificate - sensitive = true - } - - output "client_key" { - value = azurerm_kubernetes_cluster.k8s.kube_config[0].client_key - sensitive = true - } - - output "cluster_ca_certificate" { - value = azurerm_kubernetes_cluster.k8s.kube_config[0].cluster_ca_certificate - sensitive = true - } - - output "cluster_password" { - value = azurerm_kubernetes_cluster.k8s.kube_config[0].password - sensitive = true - } - - output "cluster_username" { - value = azurerm_kubernetes_cluster.k8s.kube_config[0].username - sensitive = true - } - - output "host" { - value = azurerm_kubernetes_cluster.k8s.kube_config[0].host - sensitive = true - } - - output "kube_config" { - value = azurerm_kubernetes_cluster.k8s.kube_config_raw - sensitive = true - } - - output "resource_group_name" { - value = azurerm_resource_group.rg.name - } - ``` --6. Create a file named `terraform.tfvars` and insert the following code: -- ```terraform - aks_service_principal_app_id = "<service_principal_app_id>" - aks_service_principal_client_secret = "<service_principal_password>" - ``` --## Initialize Terraform and create an execution plan --1. Initialize Terraform and download the Azure modules required to manage your Azure resources using the [`terraform init`](https://developer.hashicorp.com/terraform/cli/commands/init) command. -- ```console - terraform init - ``` --2. Create a Terraform execution plan using the [`terraform plan`](https://developer.hashicorp.com/terraform/cli/commands/plan) command. -- ```console - terraform plan -out main.tfplan - ``` -- The `terraform plan` command creates an execution plan, but doesn't execute it. Instead, it determines what actions are necessary to create the configuration specified in your configuration files. This pattern allows you to verify whether the execution plan matches your expectations before making any changes to actual resources. -- The optional `-out` parameter allows you to specify an output file for the plan. Using the `-out` parameter ensures that the plan you reviewed is exactly what is applied. -- To read more about persisting execution plans and security, see the [security warnings](https://developer.hashicorp.com/terraform/cli/commands/plan#security-warning). --3. Apply the Terraform execution plan using the [`terraform apply`](https://developer.hashicorp.com/terraform/cli/commands/apply) command. -- ```console - terraform apply main.tfplan - ``` -- The `terraform apply` command above assumes you previously ran `terraform plan -out main.tfplan`. If you specified a different file name for the `-out` parameter, use that same file name in the call to `terraform apply`. 
If you didn't use the `-out` parameter, call `terraform apply` without any parameters. --## Verify the results --1. Get the resource group name using the following `echo` command. -- ```console - echo "$(terraform output resource_group_name)" - ``` --2. Browse to the [Azure portal](https://portal.azure.com). -3. Under **Azure services**, select **Resource groups** and locate your new resource group to see the following resources created in this demo: - - **Solution:** By default, the demo names this solution **ContainerInsights**. The portal shows the solution's workspace name in parenthesis. - - **Kubernetes service:** By default, the demo names this service **k8stest**. (A managed Kubernetes cluster is also known as an AKS/Azure Kubernetes Service.) - - **Log Analytics Workspace:** By default, the demo names this workspace with a prefix of **TestLogAnalyticsWorkspaceName-** followed by a random number. -4. Get the Kubernetes configuration from the Terraform state and store it in a file that kubectl can read using the following `echo` command. - - ```console - echo "$(terraform output kube_config)" > ./azurek8s - ``` --5. Verify the previous command didn't add an ASCII EOT character using the following `cat` command. -- ```console - cat ./azurek8s - ``` -- If you see `<< EOT` at the beginning and `EOT` at the end, remove these characters from the file. Otherwise, you could receive the following error message: `error: error loading config file "./azurek8s": yaml: line 2: mapping values are not allowed in this context`. --6. Set an environment variable so kubectl picks up the correct config using the following `export` command. -- ```console - export KUBECONFIG=./azurek8s - ``` --7. Verify the health of the cluster using the `kubectl get nodes` command. -- ```console - kubectl get nodes - ``` -- When the Azure Linux Container Host cluster was created, monitoring was enabled to capture health metrics for both the cluster nodes and pods. These health metrics are available in the Azure portal. For more information on container health monitoring, see [Monitor Azure Kubernetes Service health](/azure/azure-monitor/insights/container-insights-overview). -- Several key values were output when you applied the Terraform execution plan. For example, the host address, Azure Linux Container Host cluster username, and Azure Linux Container Host cluster password are output. -- To view all of the output values, run `terraform output`. To view a specific output value, run `echo "$(terraform output <output_value_name>)"`. --## Clean up resources --### Delete AKS resources --When you no longer need the resources created with Terraform, you can remove them using the following steps. --1. Run the [`terraform plan`](https://developer.hashicorp.com/terraform/cli/commands/plan) command and specify the `destroy` flag. -- ```console - terraform plan -destroy -out main.destroy.tfplan - ``` --2. Remove the execution plan using the [`terraform apply`](https://www.terraform.io/docs/commands/apply.html) command. -- ```console - terraform apply main.destroy.tfplan - ``` --### Delete service principal --> [!CAUTION] -> Delete the service principal you used in this demo only if you're not using it for anything else. --1. Get the object ID of the service principal using the [`az ad sp list`][az-ad-sp-list] command -- ```azurecli - az ad sp list --display-name "<display_name>" --query "[].{\"Object ID\":id}" --output table - ``` --2. Delete the service principal using the [`az ad sp delete`][az-ad-sp-delete] command. 
-- ```azurecli - az ad sp delete --id <service_principal_object_id> - ``` --## Troubleshoot Terraform on Azure --[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot). --## Next steps --In this quickstart, you deployed an Azure Linux Container Host cluster. To learn more about the Azure Linux Container Host and walk through a complete cluster deployment and management example, continue to the Azure Linux Container Host tutorial. --> [!div class="nextstepaction"] -> [Azure Linux Container Host tutorial](./tutorial-azure-linux-create-cluster.md) --<!-- LINKS - internal --> -[az-ad-sp-list]: /cli/azure/ad/sp#az_ad_sp_list -[az-ad-sp-delete]: /cli/azure/ad/sp#az_ad_sp_delete |
azure-linux | Support Cycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/support-cycle.md | - Title: Azure Linux Container Host for AKS support lifecycle -description: Learn about the support lifecycle for the Azure Linux Container Host for AKS. ----- Previously updated : 09/29/2023---# Azure Linux Container Host support lifecycle --This article describes the support lifecycle for the Azure Linux Container Host for AKS. --> [!IMPORTANT] -> Microsoft is committed to meeting this support lifecycle and reserves the right to make changes to the support agreement and new scenarios that require modifications at any time with proper notice to customers and partners. --## Image releases --### Minor releases --At the beginning of each month, Mariner releases a minor image version containing medium, high, and critical package updates from the previous month. This release also includes minor kernel updates and bug fixes. --For more information on the CVE service level agreement (SLA), see [CVE infrastructure](./concepts-core.md#cve-infrastructure). --### Major releases --About every two years, Azure Linux releases a major image version containing new packages and package versions, an updated kernel, and enhancements to security, tooling, performance, and developer experience. Azure Linux releases a beta version of the major release about three months before the general availability (GA) release. --Azure Linux supports previous releases for six months following the GA release of the major image version. This support window enables a smooth migration between major releases while providing stable security and support. --> [!NOTE] -> The preview version of Azure Linux 3.0 is expected to release in March 2024. --## Next steps --- Learn more about [Azure Linux Container Host support](./support-help.md). |
azure-linux | Support Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/support-help.md | - Title: Azure Linux Container Host for AKS support and help options -description: How to obtain help and support for questions or problems when you create solutions using the Azure Linux Container Host. ----- Previously updated : 03/28/2024---# Support and help for the Azure Linux Container Host for AKS --This article covers where you can get help when developing your solutions with the Azure Linux Container Host. --## Self help troubleshooting ---We have supporting documentation explaining how to determine, diagnose, and fix issues that you might encounter when using the Azure Linux Container Host. Use this article to troubleshoot deployment failures, security-related problems, connection issues and more. --For a full list of self help troubleshooting content, see the Azure Linux Container Host troubleshooting documentation: --- [Package upgrade](./troubleshoot-packages.md)-- [Kernel versioning](./troubleshoot-kernel.md)-- [Troubleshoot common issues](/troubleshoot/azure/azure-kubernetes/troubleshoot-common-mariner-aks)--## Create an Azure support request ---Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal. --- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).--- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you.--## Create a GitHub issue ---Submit a [GitHub issue](https://github.com/microsoft/CBL-Mariner/issues/new/choose) to ask a question, provide feedback, or submit a feature request. Create an [Azure support request](#create-an-azure-support-request) for any issues or bugs. --## Stay connected with Azure Linux --We're hosting public community calls for Azure Linux users to get together and discuss new features, provide feedback, and learn more about how others use Azure Linux. In each session, we will feature a new demo. --Azure Linux published a [feature roadmap](https://github.com/orgs/microsoft/projects/970/views/2) that contains features that are in development and available for GA and public preview. This feature roadmap will be reviewed in each community call. We welcome you to leave feedback or ask questions on feature items. --The schedule for the upcoming community calls is as follows: --| Date | Time | Meeting link | -| | | | -| 9/26/2024 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | -| 11/21/2024 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). 
| -| 1/23/2025 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | -| 3/27/2025 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | -| 5/22/2025 | 8-9 AM PST | [Click to join](https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d). | --## Next steps --Learn more about [Azure Linux Container Host](./index.yml). |
azure-linux | Troubleshoot Kernel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/troubleshoot-kernel.md | - Title: Troubleshooting Azure Linux Container Host for AKS kernel version issues -description: How to troubleshoot Azure Linux Container Host for AKS kernel version issues. ----- Previously updated : 04/18/2023---# Troubleshoot outdated kernel versions in Azure Linux Container Host node images -During migration or when adding new node pools to your Azure Linux Container Host, you may encounter issues with outdated kernel versions. [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) releases a new Azure Linux node-image every week, which is used for new node pools and as the starting image for scaling up. However, older node pools may not be updating their kernel versions as expected. --To check the KERNEL-VERSION of your node pools, run: --```azurecli-interactive - kubectl get nodes -o wide -``` --Then, compare the kernel version of your node pools with the latest kernel published on [packages.microsoft.com](https://packages.microsoft.com/cbl-mariner/). --## Symptom --A common symptom of this issue is: -- Azure Linux nodes aren't using the latest kernel version.--## Causes --There are two primary causes for this issue: -1. Automatic node-image upgrades weren't enabled when the node pool was created. -1. The base image that AKS uses to start clusters runs two weeks behind the latest kernel versions due to its rollout procedure. --## Solution --You can enable automatic upgrades using [GitHub Actions](/azure/aks/node-upgrade-github-actions) and reboot the nodes to resolve this issue. --### Enable automatic node-image upgrades by using Azure CLI --To enable automatic node-image upgrades when deploying a cluster from the Azure CLI, add the parameter `--auto-upgrade-channel node-image`. --```azurecli-interactive -az aks create --name testAzureLinuxCluster --resource-group testAzureLinuxResourceGroup --os-sku AzureLinux --auto-upgrade-channel node-image -``` --### Enable automatic node-image upgrades by using ARM templates --To enable automatic node-image upgrades when using an ARM template, you can set the [upgradeChannel](/azure/templates/microsoft.containerservice/managedclusters?tabs=bicep&pivots=deployment-language-bicep#managedclusterautoupgradeprofile) property in `autoUpgradeProfile` to `node-image`. --```json - autoUpgradeProfile: { - upgradeChannel: 'node-image' - } -``` --<!--### Enable automatic node-image upgrades by using Terraform --To enable automatic node-image upgrades when using a Terraform template, you can set the [automatic_channel_upgrade](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#automatic_channel_upgrade) property in `azurerm_kubernetes_cluster` to `node-image`. --```json - resource "azurerm_kubernetes_cluster" "example" { - name = "example-azurelinuxaks1" - [...] - automatic_channel_upgrade = "node-image" - [...] - } -``` >-### Reboot the nodes --When updating the kernel version, you need to reboot the node to use the new kernel version. We recommend that you set up the [kured daemonset](/azure/aks/node-updates-kured). [Kured](https://github.com/kubereboot/kured) monitors your nodes for the `/var/run/reboot-required` file, drains the workload, and reboots the nodes.
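As a starting point, you can install kured with Helm. The following commands are a minimal sketch; the chart repository URL, release name, and namespace are illustrative assumptions, so confirm them against the kured project documentation before you rely on them.

```console
# Add the kured chart repository and install the daemonset into kube-system
helm repo add kubereboot https://kubereboot.github.io/charts
helm repo update
helm install kured kubereboot/kured --namespace kube-system
```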
--## Workaround: Manual upgrades -If you need a quick workaround, you can manually upgrade the node-image on a cluster using [az aks nodepool upgrade](/azure/aks/node-image-upgrade#upgrade-a-specific-node-pool). You can do this by running the following command: --```azurecli -az aks nodepool upgrade \ - --resource-group testAzureLinuxResourceGroup \ - --cluster-name testAzureLinuxCluster \ - --name myAzureLinuxNodepool \ - --node-image-only -``` --## Next steps --If the preceding steps don't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/). |
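After a manual node-image upgrade such as the one above completes, you may want to confirm that the node pool picked up a newer image. The following command is a sketch; the `nodeImageVersion` property name is an assumption about the shape of the CLI output, so adjust the query if your output differs.

```azurecli
# Show the node image version currently used by the node pool (property name assumed)
az aks nodepool show \
    --resource-group testAzureLinuxResourceGroup \
    --cluster-name testAzureLinuxCluster \
    --name myAzureLinuxNodepool \
    --query nodeImageVersion \
    --output tsv
```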
azure-linux | Troubleshoot Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/troubleshoot-packages.md | - Title: Troubleshooting Azure Linux Container Host for AKS package upgrade issues -description: How to troubleshoot Azure Linux Container Host for AKS package upgrade issues. ----- Previously updated : 08/18/2024---# Troubleshoot issues with package upgrades on the Azure Linux Container Host --The Azure Linux Container Host for AKS has `dnf-automatic` enabled by default, a systemd service that runs daily and automatically installs any recently published updated packages. This ensures that packages in the Azure Linux Container Host are automatically updated when a fix is published. Note that for some settings of [Node OS Upgrade Channel](/azure/aks/auto-upgrade-node-image), `dnf-automatic` will be disabled by default. --## Symptoms --Sometimes the packages in the Azure Linux Container Host fail to receive automatic upgrades, which can lead to the following symptoms: -- Error messages while referencing or using an updated package.-- Packages not functioning as expected.-- Outdated versions of packages are displayed when checking the Azure Linux Container Host package list. You can verify if the packages on your image are synchronized with the recently published packages by visiting the repository on [packages.microsoft.com](https://packages.microsoft.com/cbl-mariner/) or checking the release notes in the [Azure Linux GitHub](https://github.com/microsoft/CBL-Mariner/releases) repository.--## Cause --Some packages, such as the Linux kernel, require a reboot for the updates to take effect. To facilitate automatic reboots, the Azure Linux VM runs the check-restart service, which creates the `/var/run/reboot-required` file when a package update requires a reboot. --## Solution --To ensure that Kubernetes acts on the request for a reboot, we recommend setting up the [kured daemonset](/azure/aks/node-updates-kured). [Kured](https://github.com/kubereboot/kured) monitors your nodes for the `/var/run/reboot-required` file and, when it's found, drains the work off the node and reboots it. --## Next steps --If the preceding steps do not resolve the issue, open a [support ticket](https://azure.microsoft.com/support/). |
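If you want to inspect a node directly before setting up kured, you can open a privileged debug session on it. The following commands are a sketch under a few assumptions: the container image is an Azure Linux base image chosen for illustration, and the node's root filesystem is mounted at `/host` inside a `kubectl debug node` session.

```console
# Start an interactive debug pod on the node (image chosen for illustration)
kubectl debug node/<node-name> -it --image=mcr.microsoft.com/cbl-mariner/base/core:2.0

# Inside the debug pod: check whether the host has a pending reboot
ls -l /host/var/run/reboot-required

# Inspect the dnf-automatic timer on the host
chroot /host systemctl status dnf-automatic.timer
```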
azure-linux | Tutorial Azure Linux Add Nodepool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-add-nodepool.md | - Title: Azure Linux Container Host for AKS tutorial - Add an Azure Linux node pool to your existing AKS cluster -description: In this Azure Linux Container Host for AKS tutorial, you learn how to add an Azure Linux node pool to your existing cluster. ----- Previously updated : 06/06/2023---# Tutorial: Add an Azure Linux node pool to your existing AKS cluster --In AKS, nodes with the same configurations are grouped together into node pools. Each pool contains the VMs that run your applications. In the previous tutorial, you created an Azure Linux Container Host cluster with a single node pool. To meet the varying compute or storage requirements of your applications, you can create additional user node pools. --In this tutorial, part two of five, you learn how to: --> [!div class="checklist"] -> -> * Add an Azure Linux node pool. -> * Check the status of your node pools. --In later tutorials, you learn how to migrate nodes to Azure Linux and enable telemetry to monitor your clusters. --## Prerequisites --* In the previous tutorial, you created and deployed an Azure Linux Container Host cluster. If you haven't done these steps and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md). -* You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --## 1 - Add an Azure Linux node pool --To add an Azure Linux node pool to your existing cluster, use the `az aks nodepool add` command and specify `--os-sku AzureLinux`. The following example creates a node pool named *alnodepool* that runs three nodes in the *testAzureLinuxCluster* cluster in the *testAzureLinuxResourceGroup* resource group: --```azurecli-interactive -az aks nodepool add \ - --resource-group testAzureLinuxResourceGroup \ - --cluster-name testAzureLinuxCluster \ - --name alnodepool \ - --node-count 3 \ - --os-sku AzureLinux -``` --> [!NOTE] -> The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters. For Linux node pools, the length must be between 1 and 12 characters. --## 2 - Check the node pool status --To see the status of your node pools, use the `az aks nodepool list` command and specify your resource group and cluster name. --```azurecli-interactive -az aks nodepool list --resource-group testAzureLinuxResourceGroup --cluster-name testAzureLinuxCluster -``` --## Next steps --In this tutorial, you added an Azure Linux node pool to your existing cluster. You learned how to: --> [!div class="checklist"] -> -> * Add an Azure Linux node pool. -> * Check the status of your node pools. --In the next tutorial, you learn how to migrate existing nodes to Azure Linux. --> [!div class="nextstepaction"] -> [Migrating to Azure Linux](./tutorial-azure-linux-migration.md) |
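To confirm that the new node pool is running Azure Linux, you can query its OS SKU. The following command is a sketch; the `osSku` property name is an assumption about the shape of the CLI output, so adjust the query if your output differs.

```azurecli-interactive
# Show the OS SKU of the new node pool (property name assumed)
az aks nodepool show \
    --resource-group testAzureLinuxResourceGroup \
    --cluster-name testAzureLinuxCluster \
    --name alnodepool \
    --query osSku \
    --output tsv
```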
azure-linux | Tutorial Azure Linux Create Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-create-cluster.md | - Title: Azure Linux Container Host for AKS tutorial - Create a cluster -description: In this Azure Linux Container Host for AKS tutorial, you will learn how to create an AKS cluster with Azure Linux. ----- Previously updated : 04/18/2023---# Tutorial: Create a cluster with the Azure Linux Container Host for AKS --To create a cluster with the Azure Linux Container Host, you will use: -1. Azure resource groups, a logical container into which Azure resources are deployed and managed. -1. [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes), a hosted Kubernetes service that allows you to quickly create a production ready Kubernetes cluster. --In this tutorial, part one of five, you will learn how to: --> [!div class="checklist"] -> * Install the Kubernetes CLI, `kubectl`. -> * Create an Azure resource group. -> * Create and deploy an Azure Linux Container Host cluster. -> * Configure `kubectl` to connect to your Azure Linux Container Host cluster. --In later tutorials, you'll learn how to add an Azure Linux node pool to an existing cluster and migrate existing nodes to Azure Linux. --## Prerequisites --- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]-- You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).--## 1 - Install the Kubernetes CLI --Use the Kubernetes CLI, kubectl, to connect to the Kubernetes cluster from your local computer. --If you don't already have kubectl installed, install it through Azure CLI using `az aks install-cli` or follow the [upstream instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/). --```azurecli-interactive -az aks install-cli -``` --## 2 - Create a resource group --When creating a resource group, it is required to specify a location. This location is: -- The storage location of your resource group metadata.-- Where your resources will run in Azure if you don't specify another region when creating a resource.--Create a resource group with the `az group create` command. To create a resource group named *testAzureLinuxResourceGroup* in the *eastus* region, follow this step: --```azurecli-interactive -az group create --name testAzureLinuxResourceGroup --location eastus -``` -> [!NOTE] -> The above example uses *eastus*, but Azure Linux Container Host clusters are available in all regions. --## 3 - Create an Azure Linux Container Host cluster --Create an AKS cluster using the `az aks create` command with the `--os-sku` parameter to provision the Azure Linux Container Host with an Azure Linux image. The following example creates an Azure Linux Container Host cluster named *testAzureLinuxCluster* using the *testAzureLinuxResourceGroup* resource group created in the previous step: --```azurecli-interactive -az aks create --name testAzureLinuxCluster --resource-group testAzureLinuxResourceGroup --os-sku AzureLinux -``` -After a few minutes, the command completes and returns JSON-formatted information about the cluster. --## 4 - Connect to the cluster using kubectl --To configure `kubectl` to connect to your Kubernetes cluster, use the `az aks get-credentials` command. 
The following example gets credentials for the Azure Linux Container Host cluster named *testAzureLinuxCluster* in the *testAzureLinuxResourceGroup* resource group: --```azurecli -az aks get-credentials --resource-group testAzureLinuxResourceGroup --name testAzureLinuxCluster -``` --To verify the connection to your cluster, run the [kubectl get nodes](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes: --```azurecli-interactive -kubectl get nodes -``` --## Next steps --In this tutorial, you created and deployed an Azure Linux Container Host cluster. You learned how to: --> [!div class="checklist"] -> * Install the Kubernetes CLI, `kubectl`. -> * Create an Azure resource group. -> * Create and deploy an Azure Linux Container Host cluster. -> * Configure `kubectl` to connect to your Azure Linux Container Host cluster. --In the next tutorial, you'll learn how to add an Azure Linux node pool to an existing cluster. --> [!div class="nextstepaction"] -> [Add an Azure Linux node pool](./tutorial-azure-linux-add-nodepool.md) |
azure-linux | Tutorial Azure Linux Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-migration.md | - Title: Azure Linux Container Host for AKS tutorial - Migrating to Azure Linux -description: In this Azure Linux Container Host for AKS tutorial, you learn how to migrate your nodes to Azure Linux nodes. ------ Previously updated : 01/19/2024---# Tutorial: Migrate nodes to Azure Linux --In this tutorial, part three of five, you migrate your existing nodes to Azure Linux. You can migrate your existing nodes to Azure Linux using one of the following methods: --* Remove existing node pools and add new Azure Linux node pools. -* In-place OS SKU migration. --If you don't have any existing nodes to migrate to Azure Linux, skip to the [next tutorial](./tutorial-azure-linux-telemetry-monitor.md). In later tutorials, you learn how to enable telemetry and monitoring in your clusters and upgrade Azure Linux nodes. --## Prerequisites --* In previous tutorials, you created and deployed an Azure Linux Container Host for AKS cluster. To complete this tutorial, you need to add an Azure Linux node pool to your existing cluster. If you haven't done this step and would like to follow along, start with [Tutorial 2: Add an Azure Linux node pool to your existing AKS cluster](./tutorial-azure-linux-add-nodepool.md). -- > [!NOTE] - > When adding a new Azure Linux node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing node pool. --* You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --## Add Azure Linux node pools and remove existing node pools --1. Add a new Azure Linux node pool using the `az aks nodepool add` command. This command adds a new node pool to your cluster with the `--mode System` flag, which makes it a system node pool. System node pools are required for Azure Linux clusters. -- ```azurecli-interactive - az aks nodepool add --resource-group <resource-group-name> --cluster-name <cluster-name> --name <node-pool-name> --mode System --os-sku AzureLinux - ``` --2. Remove your existing nodes using the `az aks nodepool delete` command. -- ```azurecli-interactive - az aks nodepool delete --resource-group <resource-group-name> --cluster-name <cluster-name> --name <node-pool-name> - ``` --## In-place OS SKU migration --You can now migrate your existing Ubuntu node pools to Azure Linux by changing the OS SKU of the node pool, which rolls the cluster through the standard node image upgrade process. This new feature doesn't require the creation of new node pools. --### Limitations --There are several settings that can block the OS SKU migration request. To ensure a successful migration, review the following guidelines and limitations: --* The OS SKU migration feature isn't available through PowerShell or the Azure portal. -* The OS SKU migration feature isn't able to rename existing node pools. -* Ubuntu and Azure Linux are the only supported Linux OS SKU migration targets. -* An Ubuntu OS SKU with `UseGPUDedicatedVHD` enabled can't perform an OS SKU migration. -* An Ubuntu OS SKU with CVM 20.04 enabled can't perform an OS SKU migration. -* Node pools with Kata enabled can't perform an OS SKU migration. -* Windows OS SKU migration isn't supported. --### Prerequisites --* An existing AKS cluster with at least one Ubuntu node pool. 
-* We recommend that you ensure your workloads configure and run successfully on the Azure Linux container host before attempting to use the OS SKU migration feature by [deploying an Azure Linux cluster](./quickstart-azure-cli.md) in dev/prod and verifying your service remains healthy. -* Ensure the migration feature is working for you in test/dev before using the process on a production cluster. -* Ensure that your pods have enough [Pod Disruption Budget](/azure/aks/operator-best-practices-scheduler#plan-for-availability-using-pod-disruption-budgets) to allow AKS to move pods between VMs during the upgrade. -* You need Azure CLI version [2.61.0](/cli/azure/release-notes-azure-cli#may-21-2024) or higher. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). -* If you are using Terraform, you must have [v3.111.0](https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v3.111.0) or greater of the AzureRM Terraform module. --### [Azure CLI](#tab/azure-cli) --#### Migrate the OS SKU of your Ubuntu node pool --* Migrate the OS SKU of your node pool to Azure Linux using the `az aks nodepool update` command. This command updates the OS SKU for your node pool from Ubuntu to Azure Linux. The OS SKU change triggers an immediate upgrade operation, which takes several minutes to complete. -- ```azurecli-interactive - az aks nodepool update --resource-group <resource-group-name> --cluster-name <cluster-name> --name <node-pool-name> --os-sku AzureLinux - ``` -- > [!NOTE] - > If you experience issues during the OS SKU migration, you can [roll back to your previous OS SKU](#rollback). --### [ARM template](#tab/arm-template) --#### Example ARM templates --##### 0base.json --```json - { - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "resources": [ - { - "type": "Microsoft.ContainerService/managedClusters", - "apiVersion": "2023-07-01", - "name": "akstestcluster", - "location": "[resourceGroup().location]", - "tags": { - "displayname": "Demo of AKS Nodepool Migration" - }, - "identity": { - "type": "SystemAssigned" - }, - "properties": { - "enableRBAC": true, - "dnsPrefix": "testcluster", - "agentPoolProfiles": [ - { - "name": "testnp", - "count": 3, - "vmSize": "Standard_D4a_v4", - "osType": "Linux", - "osSku": "Ubuntu", - "mode": "System" - } - ] - } - } - ] -} -``` --##### 1mcupdate.json --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "resources": [ - { - "type": "Microsoft.ContainerService/managedClusters", - "apiVersion": "2023-07-01", - "name": "akstestcluster", - "location": "[resourceGroup().location]", - "tags": { - "displayname": "Demo of AKS Nodepool Migration" - }, - "identity": { - "type": "SystemAssigned" - }, - "properties": { - "enableRBAC": true, - "dnsPrefix": "testcluster", - "agentPoolProfiles": [ - { - "name": "testnp", - "osType": "Linux", - "osSku": "AzureLinux", - "mode": "System" - } - ] - } - } - ] -} -``` --##### 2apupdate.json --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "resources": [ - { - "apiVersion": "2023-07-01", - "type": "Microsoft.ContainerService/managedClusters/agentPools", - "name": "akstestcluster/testnp", - "location": "[resourceGroup().location]", - "properties": { - "osType": "Linux", - "osSku": "Ubuntu", - 
"mode": "System" - } - } - ] -} -``` --#### Deploy a test cluster --1. Create a resource group for the test cluster using the `az group create` command. -- ```azurecli-interactive - az group create --name testRG --location eastus - ``` --2. Deploy a baseline Ubuntu OS SKU cluster with three nodes using the `az deployment group create` command and the [0base.json example ARM template](#0basejson). -- ```azurecli-interactive - az deployment group create --resource-group testRG --template-file 0base.json - ``` --3. Migrate the OS SKU of your system node pool to Azure Linux using the `az deployment group create` command. -- ```azurecli-interactive - az deployment group create --resource-group testRG --template-file 1mcupdate.json - ``` --4. Migrate the OS SKU of your system node pool back to Ubuntu using the `az deployment group create` command. -- ```azurecli-interactive - az deployment group create --resource-group testRG --template-file 2apupdate.json - ``` --### [Terraform](#tab/terraform) --#### Example Terraform template --1. Confirm that your `providers.tf` file is updated to pick up the required version of the Azure provider. --##### providers.tf --```terraform -terraform { - required_version = ">=1.0" -- required_providers { - azurerm = { - source = "hashicorp/azurerm" - version = "~>3.111.0" - } - random = { - source = "hashicorp/random" - version = "~>3.0" - } - } - } -- provider "azurerm" { - features {} - } -``` --2. For brevity, only the snippet of the Terraform template that is of interest is displayed below. In this initial configuration, an AKS cluster with a nodepool of **os_sku** with **Ubuntu** is deployed. --##### base.tf --```terraform -resource "azurerm_kubernetes_cluster" "k8s" { - location = azurerm_resource_group.rg.location - name = var.cluster_name - resource_group_name = azurerm_resource_group.rg.name - dns_prefix = var.dns_prefix - tags = { - Environment = "Development" - } -- default_node_pool { - name = "azurelinuxpool" - vm_size = "Standard_D2_v2" - node_count = var.agent_count - os_sku = "Ubuntu" - } - linux_profile { - admin_username = "azurelinux" -- ssh_key { - key_data = file(var.ssh_public_key) - } - } - network_profile { - network_plugin = "kubenet" - load_balancer_sku = "standard" - } - service_principal { - client_id = var.aks_service_principal_app_id - client_secret = var.aks_service_principal_client_secret - } -} -``` --3. To run an in-place OS SKU migration, just replace the **os_sku** to **AzureLinux** and re-apply the Terraform plan. --##### update.tf --```terraform -resource "azurerm_kubernetes_cluster" "k8s" { - location = azurerm_resource_group.rg.location - name = var.cluster_name - resource_group_name = azurerm_resource_group.rg.name - dns_prefix = var.dns_prefix - tags = { - Environment = "Development" - } -- default_node_pool { - name = "azurelinuxpool" - vm_size = "Standard_D2_v2" - node_count = var.agent_count - os_sku = "AzureLinux" - } - linux_profile { - admin_username = "azurelinux" -- ssh_key { - key_data = file(var.ssh_public_key) - } - } - network_profile { - network_plugin = "kubenet" - load_balancer_sku = "standard" - } - service_principal { - client_id = var.aks_service_principal_app_id - client_secret = var.aks_service_principal_client_secret - } -} -``` ----### Verify the OS SKU migration --Once the migration is complete on your test clusters, you should verify the following to ensure a successful migration: --* If your migration target is Azure Linux, run the `kubectl get nodes -o wide` command. 
The output should show `CBL-Mariner/Linux` as your OS image and `.cm2` at the end of your kernel version. -* Run the `kubectl get pods -o wide -A` command to verify that all of your pods and daemonsets are running on the new node pool. -* Run the `kubectl get nodes --show-labels` command to verify that all of the node labels in your upgraded node pool are what you expect. --> [!TIP] -> We recommend monitoring the health of your service for a couple of weeks before migrating your production clusters. --### Run the OS SKU migration on your production clusters --1. Update your existing templates to set `OSSKU=AzureLinux`. In ARM templates, you use `"osSku": "AzureLinux"` in the `agentPoolProfile` section. In Bicep, you use `osSku: "AzureLinux"` in the `agentPoolProfile` section. Lastly, for Terraform, you use `os_sku = "AzureLinux"` in the `default_node_pool` section. Make sure that your `apiVersion` is set to `2023-07-01` or later. -2. Redeploy your ARM, Bicep, or Terraform template for the cluster to apply the new `OSSKU` setting. During this deployment, your cluster behaves as if it's taking a node image upgrade. Your cluster surges capacity, and then reboots your existing nodes one by one into the latest AKS image from your new OS SKU. --### Rollback --If you experience issues during the OS SKU migration, you can roll back to your previous OS SKU. To do this, you need to change the OS SKU field in your template and resubmit the deployment, which triggers another upgrade operation and restores the node pool to its previous OS SKU. --* Roll back to your previous OS SKU using the `az aks nodepool update` command. This command updates the OS SKU for your node pool from Azure Linux back to Ubuntu. -- ```azurecli-interactive - az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name mynodepool --os-sku Ubuntu - ``` --## Next steps --In this tutorial, you migrated existing nodes to Azure Linux using one of the following methods: --* Remove existing node pools and add new Azure Linux node pools. -* In-place OS SKU migration. --In the next tutorial, you learn how to enable telemetry to monitor your clusters. --> [!div class="nextstepaction"] -> [Enable telemetry and monitoring](./tutorial-azure-linux-telemetry-monitor.md) |
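In addition to the `kubectl` checks described in the verification section, you can confirm the migration at the resource level by querying the node pool's OS SKU. This is a minimal sketch, assuming the same placeholder resource names used in the migration commands above; the `osSku` query path is based on the node pool properties returned by the CLI.

```azurecli-interactive
# Confirm that the node pool now reports the expected OS SKU (for example, AzureLinux).
az aks nodepool show \
    --resource-group <resource-group-name> \
    --cluster-name <cluster-name> \
    --name <node-pool-name> \
    --query "osSku" \
    --output tsv
```

If the command still returns `Ubuntu` after the upgrade operation completes, re-check which node pool you targeted before attempting a rollback.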
azure-linux | Tutorial Azure Linux Telemetry Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-telemetry-monitor.md | - Title: Azure Linux Container Host for AKS tutorial - Enable telemetry and monitoring for the Azure Linux Container Host -description: In this Azure Linux Container Host for AKS tutorial, you'll learn how to enable telemetry and monitoring for the Azure Linux Container Host. ----- Previously updated : 08/18/2024---# Tutorial: Enable telemetry and monitoring for your Azure Linux Container Host cluster --In this tutorial, part four of five, you'll set up Container Insights to monitor an Azure Linux Container Host cluster. You'll learn how to: --> [!div class="checklist"] -> * Enable monitoring for an existing cluster. -> * Verify that the agent is deployed successfully. -> * Verify that the solution is enabled. --In the next and last tutorial, you'll learn how to upgrade your Azure Linux nodes. --## Prerequisites --- In previous tutorials, you created and deployed an Azure Linux Container Host cluster. To complete this tutorial, you need an existing cluster. If you haven't done this step and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md).-- If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../articles/azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).-- You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).--## 1 - Enable monitoring --### Option 1: Use a default Log Analytics workspace --The following step enables monitoring for your Azure Linux Container Host cluster using Azure CLI. In this example, you aren't required to precreate or specify an existing workspace. This command simplifies the process for you by creating a default workspace in the default resource group of the AKS cluster subscription. If one doesn't already exist in the region, the default workspace created will resemble the format *DefaultWorkspace-< GUID >-< Region >*. --```azurecli -az aks enable-addons -a monitoring -n testAzureLinuxCluster -g testAzureLinuxResourceGroup -``` --The first few lines of the output should contain the following in the `addonProfiles` configuration : --```output -{ - "aadProfile": null, - "addonProfiles": { - "omsagent": { - "config": { - "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/DefaultResourceGroup-EUS2/providers/Microsoft.OperationalInsights/workspaces/DefaultWorkspace-<WorkspaceSubscription>-EUS2", - "useAADAuth": "true" - }, - "enabled": true, - "identity": null - } - }, -} -``` --### Option 2: Specify a Log Analytics workspace --In this example, you can specify a Log Analytics workspace to enable monitoring of your Azure Linux Container Host cluster. The resource ID of the workspace will be in the form `"/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"`. 
--```azurecli -az aks enable-addons -a monitoring -n testAzureLinuxCluster -g testAzureLinuxResourceGroup --workspace-resource-id <workspace-resource-id> -``` --The output will resemble the following example: --```output -provisioningState : Succeeded -``` --## 2 - Verify agent and solution deployment --Run the following command to verify that the agent is deployed successfully. --``` -kubectl get ds ama-logs --namespace=kube-system -``` --The output should resemble the following example, which indicates that it was deployed properly: --```output -User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system -NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE -ama-logs 3 3 3 3 3 <none> 3m22s -``` --To verify deployment of the solution, run the following command: --``` -kubectl get deployment ama-logs-rs -n=kube-system -``` --The output should resemble the following example, which indicates that it was deployed properly: --```output -User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -ama-logs-rs 1 1 1 1 3h -``` --## 3 - Verify solution configuration --Use the `aks show` command to find out whether the solution is enabled or not, what the Log Analytics workspace resource ID is, and summary information about the cluster. --```azurecli -az aks show -g testAzureLinuxResourceGroup -n testAzureLinuxCluster -``` --After a few minutes, the command completes and returns JSON-formatted information about the solution. The results of the command should show the monitoring add-on profile and resemble the following example output: --```output -"addonProfiles": { - "omsagent": { - "config": { - "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>" - }, - "enabled": true - } - } -``` --## Next steps --In this tutorial, you enabled telemetry and monitoring for your Azure Linux Container Host cluster. You learned how to: --> [!div class="checklist"] -> * Enable monitoring for an existing cluster. -> * Verify that the agent is deployed successfully. -> * Verify that the solution is enabled. --In the next tutorial, you'll learn how to upgrade your Azure Linux nodes. --> [!div class="nextstepaction"] -> [Upgrade Azure Linux nodes](./tutorial-azure-linux-upgrade.md) |
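If you only need a quick confirmation that the monitoring add-on is enabled, rather than scanning the full JSON returned by `az aks show`, you can narrow the output with a query. This is a minimal sketch using the example cluster and resource group names from this tutorial; the JMESPath expression is an assumption based on the `addonProfiles` structure shown in the output above.

```azurecli-interactive
# Returns "true" when the monitoring (omsagent) add-on is enabled on the cluster.
az aks show \
    --resource-group testAzureLinuxResourceGroup \
    --name testAzureLinuxCluster \
    --query "addonProfiles.omsagent.enabled" \
    --output tsv
```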
azure-linux | Tutorial Azure Linux Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-upgrade.md | - Title: Azure Linux Container Host for AKS tutorial - Upgrade Azure Linux Container Host nodes -description: In this Azure Linux Container Host for AKS tutorial, you learn how to upgrade Azure Linux Container Host nodes. ----- Previously updated : 08/18/2024---# Tutorial: Upgrade Azure Linux Container Host nodes --The Azure Linux Container Host ships updates through two mechanisms: updated Azure Linux node images and automatic package updates. --As part of the application and cluster lifecycle, we recommend keeping your clusters up to date and secured by enabling upgrades for your cluster. You can enable automatic node-image upgrades to ensure your clusters use the latest Azure Linux Container Host image when they scale up. You can also manually upgrade the node-image on a cluster. --In this tutorial, part five of five, you learn how to: --> [!div class="checklist"] -> -> * Manually upgrade the node-image on a cluster. -> * Automatically upgrade an Azure Linux Container Host cluster. -> * Deploy Kured in an Azure Linux Container Host cluster. --> [!NOTE] -> Any upgrade operation, whether performed manually or automatically, upgrades the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker](/azure/aks/release-tracker). --## Prerequisites --* In previous tutorials, you created and deployed an Azure Linux Container Host cluster. To complete this tutorial, you need an existing cluster. If you haven't done this step and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md). -* You need the latest version of Azure CLI. Find the version using the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --## Manually upgrade your cluster --To manually upgrade the node-image on a cluster, you can run `az aks nodepool upgrade`: --```azurecli -az aks nodepool upgrade \ - --resource-group testAzureLinuxResourceGroup \ - --cluster-name testAzureLinuxCluster \ - --name myAzureLinuxNodepool \ - --node-image-only -``` --## Automatically upgrade your cluster --Auto-upgrade provides a set-once-and-forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest Azure Linux Container Host features or patches from AKS and upstream Kubernetes. --Automatically completed upgrades are functionally the same as manual upgrades. The selected channel determines the timing of upgrades. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. --To set the auto-upgrade channel on an existing cluster, update the `--auto-upgrade-channel` parameter, similar to the following example, which automatically upgrades the cluster to the latest supported patch release of a previous minor version. --```azurecli-interactive -az aks update --resource-group testAzureLinuxResourceGroup --name testAzureLinuxCluster --auto-upgrade-channel stable -``` --For more information on upgrade channels, see [Using cluster auto-upgrade](/azure/aks/auto-upgrade-cluster). 
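To confirm which channel is currently configured on the cluster, you can query its auto-upgrade profile. This is a minimal sketch using the example resource names from this tutorial; the query path is an assumption based on the managed cluster's `autoUpgradeProfile` property.

```azurecli-interactive
# Shows the configured auto-upgrade channel (for example, "stable").
az aks show \
    --resource-group testAzureLinuxResourceGroup \
    --name testAzureLinuxCluster \
    --query "autoUpgradeProfile.upgradeChannel" \
    --output tsv
```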
--## Enable automatic package upgrades --Similar to setting your clusters to auto-upgrade, you can use the same set-once-and-forget mechanism for package upgrades by enabling the node-os upgrade channel. If automatic package upgrades are enabled, the `dnf-automatic` systemd service runs daily and installs any updated packages that have been published. --To set the node-os upgrade channel on an existing cluster, update the `--node-os-upgrade-channel` parameter, similar to the following example, which automatically enables package upgrades. --```azurecli-interactive -az aks update --resource-group testAzureLinuxResourceGroup --name testAzureLinuxCluster --node-os-upgrade-channel Unmanaged -``` --## Enable an automatic reboot daemon --To protect your clusters, security updates are automatically applied to Azure Linux nodes. These updates include OS security fixes, kernel updates, and package upgrades. Some of these updates require a node reboot to complete the process. AKS doesn't automatically reboot these nodes to complete the update process. --We recommend enabling an automatic reboot daemon, such as [Kured](https://kured.dev/docs/), so that your cluster can reboot nodes that have taken kernel updates. To deploy the Kured DaemonSet in an Azure Linux Container Host cluster, see [Deploy Kured in an AKS cluster](/azure/aks/node-updates-kured#deploy-kured-in-an-aks-cluster). --## Clean up resources --As this tutorial is the last part of the series, you may want to delete your Azure Linux Container Host cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don't use the cluster. Use the `az group delete` command to remove the resource group and all related resources. --```azurecli-interactive -az group delete --name testAzureLinuxResourceGroup --yes --no-wait -``` --## Next steps --In this tutorial, you upgraded your Azure Linux Container Host cluster. You learned how to: --> [!div class="checklist"] -> -> * Manually upgrade the node-image on a cluster. -> * Automatically upgrade an Azure Linux Container Host cluster. -> * Deploy Kured in an Azure Linux Container Host cluster. --For more information on the Azure Linux Container Host, see the [Azure Linux Container Host overview](./intro-azure-linux.md). |
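After you enable the node-os upgrade channel described in this tutorial, you can confirm the setting on the cluster before relying on it. This is a minimal sketch using the same example resource names; the `nodeOsUpgradeChannel` query path is an assumption based on the cluster's auto-upgrade profile.

```azurecli-interactive
# Shows the configured node OS upgrade channel (for example, "Unmanaged").
az aks show \
    --resource-group testAzureLinuxResourceGroup \
    --name testAzureLinuxCluster \
    --query "autoUpgradeProfile.nodeOsUpgradeChannel" \
    --output tsv
```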
azure-netapp-files | Azure Netapp Files Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-register.md | To use the Azure NetApp Files service, you need to register the NetApp Resource ## Next steps * [Create a NetApp account](azure-netapp-files-create-netapp-account.md)-* [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md) +* [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) |
azure-netapp-files | Faq Application Resilience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md | Boomi recommends that SMB file share is used with Windows VMs; for NFS, Boomi re ## Next steps -- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Networking FAQs](faq-networking.md) - [Security FAQs](faq-security.md) - [Performance FAQs](faq-performance.md) |
azure-netapp-files | Faq Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-backup.md | The system makes 10 retries when processing a scheduled backup job. If the job f ## Next steps -- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Networking FAQs](faq-networking.md) - [Security FAQs](faq-security.md) - [Performance FAQs](faq-performance.md) |
azure-netapp-files | Faq Capacity Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-capacity-management.md | When deleting a very large amount of data in a volume (which can include snapsho ## Next steps -- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Networking FAQs](faq-networking.md) - [Security FAQs](faq-security.md) - [Performance FAQs](faq-performance.md) |
azure-netapp-files | Faq Data Migration Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-data-migration-protection.md | No. Azure Import/Export service does not support Azure NetApp Files currently. ## Next steps -- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Azure Data Box](../databox/index.yml) - [Networking FAQs](faq-networking.md) - [Security FAQs](faq-security.md) |
azure-netapp-files | Faq Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-integration.md | No, [Azure Databricks](/azure/databricks/) does not support mounting any NFS vol ## Next steps -- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Networking FAQs](faq-networking.md) - [Security FAQs](faq-security.md) - [Performance FAQs](faq-performance.md) |
azure-netapp-files | Faq Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-networking.md | For further flexibility, see [How to use DFS Namespaces with Azure NetApp Files] - [Microsoft Azure ExpressRoute FAQs](../expressroute/expressroute-faqs.md) - [Microsoft Azure Virtual Network FAQ](../virtual-network/virtual-networks-faq.md)-- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Security FAQs](faq-security.md) - [Performance FAQs](faq-performance.md) - [NFS FAQs](faq-nfs.md) |
azure-netapp-files | Faq Nfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-nfs.md | When using NFSv4.1, dNFS won't work with multiple paths. If you need multiple pa - [Microsoft Azure ExpressRoute FAQs](../expressroute/expressroute-faqs.md) - [Microsoft Azure Virtual Network FAQ](../virtual-network/virtual-networks-faq.md)-- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Azure Data Box](../databox/index.yml) - [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md) - [Networking FAQs](faq-networking.md) |
azure-netapp-files | Faq Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-performance.md | Jumbo frames aren't supported with Azure virtual machines. - [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md) - [Performance benchmarks for Linux](performance-benchmarks-linux.md) - [Performance impact of Kerberos on NFSv4.1 volumes](performance-impact-kerberos.md)-- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Networking FAQs](faq-networking.md) - [Security FAQs](faq-security.md) - [NFS FAQs](faq-nfs.md) |
azure-netapp-files | Faq Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md | The AD Connector credentials are stored in the Azure NetApp Files control plane ## Next steps -- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Networking FAQs](faq-networking.md) - [Performance FAQs](faq-performance.md) - [NFS FAQs](faq-nfs.md) |
azure-netapp-files | Faq Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md | SMB/CIFS oplocks (opportunistic locks) enable the redirector on a SMB/CIFS clien ## Next steps - [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md)-- [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)+- [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) - [Networking FAQs](faq-networking.md) - [Security FAQs](faq-security.md) - [Performance FAQs](faq-performance.md) |
azure-portal | Azure Portal Add Remove Sort Favorites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md | - Title: Manage favorites in Azure portal -description: Learn how to add or remove services from the Favorites list. Previously updated : 03/04/2024----# Manage favorites --The **Favorites** list in the Azure portal lets you quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you may want to customize it by adding or removing items. You're the only one who sees the changes you make to **Favorites**. --You can view your **Favorites** list in the Azure portal menu, or from the **Favorites** section within **All services**. --## Add a favorite service --Items that are listed under **Favorites** are selected from **All services**. Within **All services**, you can hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears in the **Favorites** list. If the star icon isn't filled in for a service, select the star icon to add it to your **Favorites** list. --In this example, we'll add **Cost Management + Billing** to the **Favorites** list. --1. Select **All services** from the Azure portal menu. -- :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png" alt-text="Screenshot showing All services in the Azure portal menu."::: --1. Enter the word "cost" in the **Filter services** field near the top of the **All services** pane. Services that have "cost" in the title or that have "cost" as a keyword are shown. -- :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png" alt-text="Screenshot showing a search in All services in the Azure portal."::: --1. Hover over the service name to display the **Cost Management + Billing** information card. Select the star icon. -- :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-add.png" alt-text="Screenshot showing the star icon to add a service to Favorites in the Azure portal."::: --1. **Cost Management + Billing** is now added as the last item in your **Favorites** list. --## Rearrange your favorite services --When you add a new service to your **Favorites** list, it appears as the last item in the list. To move it to a different position, select the new service, then drag and drop it to the desired location. --You can continue to drag and drop any service in your **Favorites** list to place them in the order you choose. --## Remove an item from Favorites --You can remove items directly from the **Favorites** list. --1. In the **Favorites** section of the portal menu, or within the **Favorites** section of **All services**, hover over the name of the service you want to remove. -- :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png" alt-text="Screenshot showing how to remove a service from Favorites in the Azure portal."::: --2. On the information card, select the star so that it changes from filled to unfilled. --The service is then removed from your **Favorites** list. 
--## Next steps --- Learn how to [manage your settings and preferences in the Azure portal](set-preferences.md).-- To create a project-focused workspace, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).- |
azure-portal | Azure Portal Dashboard Share Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboard-share-access.md | - Title: Share Azure portal dashboards by using Azure role-based access control -description: This article explains how to share a dashboard in the Azure portal by using Azure role-based access control. - Previously updated : 09/05/2023---# Share Azure dashboards by using Azure role-based access control --After configuring a dashboard, you can publish it and share it with other users in your organization. When you share a dashboard, you can control who can view it by using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.yml) to assign roles to either a single user or a group of users. You can select a role that allows them only to view the published dashboard, or a role that also allows them to modify it. --> [!TIP] -> Within a dashboard, individual tiles enforce their own access control requirements based on the resources they display. You can share any dashboard broadly, even if some data on specific tiles might not be visible to all users. --## Understand access control for dashboards --From an access control perspective, dashboards are no different from other resources, such as virtual machines or storage accounts. Published dashboards are implemented as Azure resources. Each dashboard exists as a manageable item contained in a resource group within your subscription. --Azure RBAC lets you assign users to roles at four different [levels of scope](/azure/role-based-access-control/scope-overview): management group, subscription, resource group, or resource. Azure RBAC permissions are inherited from higher levels down to the individual resource. In many cases, you may already have users assigned to roles for the subscription that will give them access to the published dashboard. --For example, users who have the [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role for a subscription can list, view, create, modify, or delete dashboards within the subscription. Users with a [custom role](/azure/role-based-access-control/custom-roles) that includes the `Microsoft.Portal/Dashboards/Write` permission can also perform these tasks. --Users with the [Reader](/azure/role-based-access-control/built-in-roles#reader) role for the subscription (or a custom role with `Microsoft.Portal/Dashboards/Read` permission) can list and view dashboards within that subscription, but they can't modify or delete them. These users are able to make private copies of dashboards for themselves. They can also make local edits to a published dashboard for their own use, such as when troubleshooting an issue, but they can't publish those changes back to the server. --To expand access to a dashboard beyond the access granted at the subscription level, you can assign permissions to an individual dashboard, or to a resource group that contains several dashboards. For example, if a user should have limited permissions across the subscription, but needs to be able to edit one particular dashboard, you can assign a different role with more permissions (such as [Contributor](/azure/role-based-access-control/built-in-roles#contributor)) for that dashboard only. 
--> [!IMPORTANT] -> Since individual tiles within a dashboard can enforce their own access control requirements, some users with access to view or edit a dashboard may not be able to see information within specific tiles. To ensure that users can see data within a certain tile, be sure that they have the appropriate permissions for the underlying resources accessed by that tile. --## Publish a dashboard --To share access to a dashboard, you must first publish it. When you do so, other users in your organization will be able to access and modify the dashboard based on their Azure RBAC roles. --1. In the dashboard, select **Share**. -- :::image type="content" source="media/azure-portal-dashboard-share-access/share-dashboard-for-access-control.png" alt-text="Screenshot showing the Share option for an Azure portal dashboard."::: --1. In **Sharing + access control**, select **Publish**. -- :::image type="content" source="media/azure-portal-dashboard-share-access/publish-dashboard-for-access-control.png" alt-text="Screenshot showing how to publish an Azure portal dashboard."::: -- By default, sharing publishes your dashboard to a resource group named **dashboards**. To select a different resource group, clear the checkbox. --1. To [add optional tags](../azure-resource-manager/management/tag-resources.md) to the dashboard, enter one or more name/value pairs. --1. Select **Publish**. --Your dashboard is now published. If the permissions that users inherit from the subscription are sufficient, you don't need to do anything more. Otherwise, read on to learn how to expand access to specific users or groups. --## Assign access to a dashboard --For each dashboard that you have published, you can assign Azure RBAC built-in roles to groups of users (or to individual users). This lets them use that role on the dashboard, even if their subscription-level permissions wouldn't normally allow it. --1. After publishing the dashboard, select **Manage sharing**, then select **Access control**. -- :::image type="content" source="media/azure-portal-dashboard-share-access/manage-sharing-dashboard.png" alt-text="Screenshot showing the Access control option for an Azure portal dashboard."::: --1. In **Access Control**, select **Role assignments** to see existing users that are already assigned a role for this dashboard. --1. To add a new user or group, select **Add** then **Add role assignment**. -- :::image type="content" source="media/azure-portal-dashboard-share-access/manage-users-existing-users.png" alt-text="Screenshot showing how to add a role assignment for an Azure portal dashboard."::: --1. Select the role you want to grant, such as [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Reader](/azure/role-based-access-control/built-in-roles#reader), and then select **Next**. --1. Select **Select members**, then select one or more Microsoft Entra groups and/or users. If you don't see the user or group you're looking for in the list, use the search box. When you have finished, choose **Select**. --1. Select **Review + assign** to complete the assignment. --> [!TIP] -> As noted above, individual tiles within a dashboard can enforce their own access control requirements based on the resources that the tile displays. If users need to see data for a specific tile, be sure that they have the appropriate permissions for the underlying resources accessed by that tile. --## Next steps --* View the list of [Azure built-in roles](../role-based-access-control/built-in-roles.md). 
-* Learn about [managing groups in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). -* Learn more about [managing Azure resources by using the Azure portal](../azure-resource-manager/management/manage-resources-portal.md). -* [Create a dashboard](azure-portal-dashboards.md) in the Azure portal. |
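To audit who can see or edit a published dashboard, you can also list its role assignments from the command line. A minimal sketch with placeholder values; the scope format mirrors the dashboard resource ID used elsewhere in this article.

```azurecli-interactive
# List role assignments scoped to a single published dashboard (placeholder values).
az role assignment list \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Portal/dashboards/<dashboard-name>" \
    --output table
```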
azure-portal | Azure Portal Dashboards Create Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards-create-programmatically.md | - Title: Programmatically create Azure Dashboards -description: Use a dashboard in the Azure portal as a template to programmatically create Azure Dashboards. Includes JSON reference. -- Previously updated : 09/05/2023---# Programmatically create Azure dashboards --This article walks you through the process of programmatically creating and publishing Azure dashboards. The sample dashboard shown below is referenced throughout the document, but you can use this process with any dashboard. ---## Overview --Shared dashboards in the [Azure portal](https://portal.azure.com) are [resources](../azure-resource-manager/management/overview.md), just like virtual machines and storage accounts. You can manage resources programmatically by using the [REST APIs](/rest/api/?view=Azure&preserve-view=true), the [Azure CLI](/cli/azure), and [Azure PowerShell commands](/powershell/azure/get-started-azureps). --Many features build on these APIs to make resource management easier. Each of these APIs and tools offers ways to create, list, retrieve, modify, and delete resources. Since dashboards are resources, you can pick your favorite API or tool to use. --Whichever tools you use, to create a dashboard programmatically, you construct a JSON representation of your dashboard object. This object contains information about the tiles on the dashboard. It includes sizes, positions, resources they're bound to, and any user customizations. --The most practical way to generate this JSON document is to use the Azure portal to create an initial dashboard with the tiles you want. Then export the JSON and create a template from the result that you can modify further and use in scripts, programs, and deployment tools. --## Fetch the JSON representation of a dashboard --We'll start by downloading the JSON representation of an existing dashboard. Open the dashboard that you want to start with. Select **Export** and then select **Download**. ---You can also retrieve information about the dashboard resource programmatically by using [REST APIs](/rest/api/resources/Resources/Get) or other methods. --## Create a template from the JSON --The next step is to create a template from the downloaded JSON. You'll be able to use the template programmatically with the appropriate resource management APIs, command-line tools, or within the portal. --In most cases, you want to preserve the structure and configuration of each tile. Then parameterize the set of Azure resources that the tiles point to. You don't have to fully understand the [dashboard JSON structure](azure-portal-dashboards-structure.md) to create a template. --In your exported JSON dashboard, find all occurrences of Azure resource IDs. Our example dashboard has multiple tiles that all point at a single Azure virtual machine. That's because our dashboard only looks at this single resource. If you search the sample JSON included at the end of the document for "/subscriptions", you'll find several occurrences of this ID. --`/subscriptions/6531c8c8-df32-4254-d717-b6e983273e5d/resourceGroups/contoso/providers/Microsoft.Compute/virtualMachines/myVM1` --To publish this dashboard for any virtual machine in the future, parameterize every occurrence of this string within the JSON. --## Create a dashboard template --Azure offers the ability to orchestrate the deployment of multiple resources. 
You create a deployment template that expresses the set of resources to deploy and the relationships between them. For more information, see [Deploy resources with Resource Manager templates and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md). --The JSON format of each deployed template resource is the same as if you were creating them individually by uploading an exported dashboard, except that the template language adds a few concepts like variables, parameters, basic functions, and more. This extended syntax is only supported in the context of a template deployment. For more information, see [Understand the structure and syntax of ARM templates](../azure-resource-manager/templates/syntax.md). --Parameterization should be done using the template's parameter syntax. You replace all instances of the resource ID we found earlier as shown here. --Example JSON property with hard-coded resource ID: --```json -id: "/subscriptions/6531c8c8-df32-4254-d717-b6e983273e5d/resourceGroups/contoso/providers/Microsoft.Compute/virtualMachines/myVM1" -``` --Example JSON property converted to a parameterized version based on template parameters --```json -id: "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" -``` --Declare required template metadata and the parameters at the top of the JSON template like this: --```json --{ - "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "virtualMachineName": { - "type": "string" - }, - "virtualMachineResourceGroup": { - "type": "string" - }, - "dashboardName": { - "type": "string" - } - }, - "variables": {}, - "resources": [ - ... rest of template omitted ... - ] -} -``` --Once you've configured your template, deploy it using any of the following methods: --- [REST APIs](/rest/api/resources/deployments)-- [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)-- [Azure CLI](/cli/azure/deployment/group#az-deployment-group-create)-- [The Azure portal template deployment page](https://portal.azure.com/#create/Microsoft.Template)--Next you'll see two versions of our example dashboard JSON. The first is the version that we exported from the portal that was already bound to a resource. The second is the template version that can be programmatically bound to any virtual machine and deployed using Azure Resource Manager. --## Example JSON representation exported from dashboard --This example is similar to what you'll see when you export a dashboard similar to the example at the beginning of this article. The hard-coded resource identifiers show that this dashboard is pointing at a specific Azure virtual machine. 
--```json -{ - "properties": { - "lenses": { - "0": { - "order": 0, - "parts": { - "0": { - "position": { - "x": 0, - "y": 0, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/MarkdownPart", - "settings": { - "content": { - "settings": { - "content": "## Azure Virtual Machines Overview\r\nNew team members should watch this video to get familiar with Azure Virtual Machines.", - "markdownUri": null - } - } - } - } - }, - "1": { - "position": { - "x": 3, - "y": 0, - "colSpan": 8, - "rowSpan": 4 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/MarkdownPart", - "settings": { - "content": { - "settings": { - "content": "This is the team dashboard for the test VM we use on our team. Here are some useful links:\r\n\r\n1. [Create a Linux virtual machine](https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal)\r\n1. [Create a Windows virtual machine](https://docs.microsoft.com/azure/virtual-machines/windows/quick-create-portal)\r\n1. [Create a virtual machine scale set](https://docs.microsoft.com/azure/virtual-machine-scale-sets/quick-create-portal)", - "title": "Test VM Dashboard", - "subtitle": "Contoso", - "markdownUri": null - } - } - } - } - }, - "2": { - "position": { - "x": 0, - "y": 2, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/VideoPart", - "settings": { - "content": { - "settings": { - "src": "https://www.youtube.com/watch?v=rOiSRkxtTeU", - "autoplay": false - } - } - } - } - }, - "3": { - "position": { - "x": 0, - "y": 4, - "colSpan": 11, - "rowSpan": 3 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Percentage CPU", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "4": { - "position": { - "x": 0, - "y": 7, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Disk Read Operations/Sec", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - }, - { - "name": "Disk Write Operations/Sec", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "5": { - "position": { - "x": 3, - "y": 7, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - 
"chartType": 0, - "metrics": [ - { - "name": "Disk Read Bytes", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - }, - { - "name": "Disk Write Bytes", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "6": { - "position": { - "x": 6, - "y": 7, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Network In Total", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - }, - { - "name": "Network Out Total", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "7": { - "position": { - "x": 9, - "y": 7, - "colSpan": 2, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "id", - "value": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ], - "type": "Extension/HubsExtension/PartType/ResourcePart", - "asset": { - "idInputName": "id", - "type": "VirtualMachine" - } - } - } - } - } - }, - "metadata": { - "model": { - "timeRange": { - "value": { - "relative": { - "duration": 24, - "timeUnit": 1 - } - }, - "type": "MsPortalFx.Composition.Configuration.ValueTypes.TimeRange" - } - } - } - }, - "name": "Simple VM Dashboard", - "type": "Microsoft.Portal/dashboards", - "location": "INSERT LOCATION", - "tags": { - "hidden-title": "Simple VM Dashboard" - }, - "apiVersion": "2015-08-01-preview" -} -``` --## Example dashboard template representation --The templatized version of the example dashboard has defined three parameters called `virtualMachineName`, `virtualMachineResourceGroup`, and `dashboardName`. The parameters let you point this dashboard at a different Azure virtual machine every time you deploy. This dashboard can be programmatically configured and deployed to point to any Azure virtual machine. To test this feature, copy the following template and paste it into the [Azure portal template deployment page](https://portal.azure.com/#create/Microsoft.Template). --This example deploys a dashboard by itself, but the template language lets you deploy multiple resources, and bundle one or more dashboards alongside them. 
--```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "virtualMachineName": { - "type": "string", - "metadata": { - "description": "Name of the existing virtual machine to show in the dashboard" - } - }, - "virtualMachineResourceGroup": { - "type": "string", - "metadata": { - "description": "Name of the resource group that contains the virtual machine" - } - }, - "dashboardName": { - "type": "string", - "defaultValue": "[guid(parameters('virtualMachineName'), parameters('virtualMachineResourceGroup'))]", - "metadata": { - "Description": "Resource name that Azure portal uses for the dashboard" - } - }, - "dashboardDisplayName": { - "type": "string", - "defaultValue": "Simple VM Dashboard", - "metadata": { - "description": "Name of the dashboard to display in Azure portal" - } - }, - "location": { - "type": "string", - "defaultValue": "[resourceGroup().location]" - } - }, - "resources": [ - { - "type": "Microsoft.Portal/dashboards", - "apiVersion": "2020-09-01-preview", - "name": "[parameters('dashboardName')]", - "location": "[parameters('location')]", - "tags": { - "hidden-title": "[parameters('dashboardDisplayName')]" - }, - "properties": { - "lenses": [ - { - "order": 0, - "parts": [ - { - "position": { - "x": 0, - "y": 0, - "rowSpan": 2, - "colSpan": 3 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/MarkdownPart", - "settings": { - "content": { - "settings": { - "content": "## Azure Virtual Machines Overview\r\nNew team members should watch this video to get familiar with Azure Virtual Machines." - } - } - } - } - }, - { - "position": { - "x": 3, - "y": 0, - "rowSpan": 4, - "colSpan": 8 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/MarkdownPart", - "settings": { - "content": { - "settings": { - "content": "This is the team dashboard for the test VM we use on our team. Here are some useful links:\r\n\r\n1. [Create a Linux virtual machine](https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal)\r\n1. [Create a Windows virtual machine](https://docs.microsoft.com/azure/virtual-machines/windows/quick-create-portal)\r\n1. 
[Create a virtual machine scale set](https://docs.microsoft.com/azure/virtual-machine-scale-sets/quick-create-portal)", - "title": "Test VM Dashboard", - "subtitle": "Contoso" - } - } - } - } - }, - { - "position": { - "x": 0, - "y": 2, - "rowSpan": 2, - "colSpan": 3 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/VideoPart", - "settings": { - "content": { - "settings": { - "src": "https://www.youtube.com/watch?v=rOiSRkxtTeU", - "autoplay": false - } - } - } - } - }, - { - "position": { - "x": 0, - "y": 4, - "rowSpan": 3, - "colSpan": 11 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]", - "chartType": 0, - "metrics": [ - { - "name": "Percentage CPU", - "resourceId": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart" - } - }, - { - "position": { - "x": 0, - "y": 7, - "rowSpan": 2, - "colSpan": 3 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]", - "chartType": 0, - "metrics": [ - { - "name": "Disk Read Operations/Sec", - "resourceId": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - }, - { - "name": "Disk Write Operations/Sec", - "resourceId": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart" - } - }, - { - "position": { - "x": 3, - "y": 7, - "rowSpan": 2, - "colSpan": 3 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]", - "chartType": 0, - "metrics": [ - { - "name": "Disk Read Bytes", - "resourceId": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - }, - { - "name": "Disk Write Bytes", - "resourceId": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart" - } - }, - { - "position": { - "x": 6, - "y": 7, - "rowSpan": 2, - "colSpan": 3 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]", - "chartType": 0, - "metrics": [ - { - "name": "Network In Total", - "resourceId": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - }, - { - "name": "Network Out Total", - "resourceId": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - } - 
] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart" - } - }, - { - "position": { - "x": 9, - "y": 7, - "rowSpan": 2, - "colSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "id", - "value": "[resourceId(parameters('virtualMachineResourceGroup'), 'Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - } - ], - "type": "Extension/Microsoft_Azure_Compute/PartType/VirtualMachinePart", - "asset": { - "idInputName": "id", - "type": "VirtualMachine" - } - } - } - ] - } - ] - } - } - ] -} -``` --Now that you've seen an example of using a parameterized template to deploy a dashboard, you can try deploying the template by using the [Azure Resource Manager REST APIs](/rest/api/), the [Azure CLI](quickstart-portal-dashboard-azure-cli.md), or [Azure PowerShell](quickstart-portal-dashboard-powershell.md). --## Next steps --- Learn more about the [structure of Azure dashboards](azure-portal-dashboards-structure.md).-- Learn how to [use markdown tiles on Azure dashboards to show custom content](azure-portal-markdown-tile.md).-- Learn how to [manage access for shared dashboards](azure-portal-dashboard-share-access.md).-- Learn how to [manage Azure portal settings and preferences](set-preferences.md). |
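As a command-line counterpart to the portal template deployment mentioned above, the following sketch deploys the parameterized dashboard template with the Azure CLI. It assumes you saved the template as `dashboard-template.json` (a hypothetical file name) and passes the parameters the template declares; `dashboardName` can be omitted because the template supplies a default value.

```azurecli-interactive
# Deploy the parameterized dashboard template to an existing resource group (placeholder values).
az deployment group create \
    --resource-group <resource-group-name> \
    --template-file dashboard-template.json \
    --parameters virtualMachineName=<vm-name> virtualMachineResourceGroup=<vm-resource-group-name>
```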
azure-portal | Azure Portal Dashboards Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards-structure.md | - Title: The structure of Azure dashboards -description: Walk through the JSON structure of an Azure dashboard using an example dashboard. Includes reference to resource properties. - Previously updated : 09/05/2023---# The structure of Azure dashboards --This document walks through the structure of an Azure dashboard, using the following dashboard as an example: ---Since shared [Azure dashboards are resources](../azure-resource-manager/management/overview.md), this dashboard can be represented as JSON. You can download the JSON representation of a dashboard by selecting **Export** and then **Download** in the Azure portal. --The following JSON represents the dashboard shown above. --```json -{ -{ - "properties": { - "lenses": { - "0": { - "order": 0, - "parts": { - "0": { - "position": { - "x": 0, - "y": 0, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/MarkdownPart", - "settings": { - "content": { - "settings": { - "content": "## Azure Virtual Machines Overview\r\nNew team members should watch this video to get familiar with Azure Virtual Machines.", - "markdownUri": null - } - } - } - } - }, - "1": { - "position": { - "x": 3, - "y": 0, - "colSpan": 8, - "rowSpan": 4 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/MarkdownPart", - "settings": { - "content": { - "settings": { - "content": "This is the team dashboard for the test VM we use on our team. Here are some useful links:\r\n\r\n1. [Create a Linux virtual machine](https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal)\r\n1. [Create a Windows virtual machine](https://docs.microsoft.com/azure/virtual-machines/windows/quick-create-portal)\r\n1. 
[Create a virtual machine scale set](https://docs.microsoft.com/azure/virtual-machine-scale-sets/quick-create-portal)", - "title": "Test VM Dashboard", - "subtitle": "Contoso", - "markdownUri": null - } - } - } - } - }, - "2": { - "position": { - "x": 0, - "y": 2, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [], - "type": "Extension/HubsExtension/PartType/VideoPart", - "settings": { - "content": { - "settings": { - "src": "https://www.youtube.com/watch?v=rOiSRkxtTeU", - "autoplay": false - } - } - } - } - }, - "3": { - "position": { - "x": 0, - "y": 4, - "colSpan": 11, - "rowSpan": 3 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Percentage CPU", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "4": { - "position": { - "x": 0, - "y": 7, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Disk Read Operations/Sec", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - }, - { - "name": "Disk Write Operations/Sec", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "5": { - "position": { - "x": 3, - "y": 7, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Disk Read Bytes", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - }, - { - "name": "Disk Write Bytes", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "6": { - "position": { - "x": 6, - "y": 7, - "colSpan": 3, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Network In Total", - "resourceId": 
"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - }, - { - "name": "Network Out Total", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } - ], - "type": "Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart", - "settings": {} - } - }, - "7": { - "position": { - "x": 9, - "y": 7, - "colSpan": 2, - "rowSpan": 2 - }, - "metadata": { - "inputs": [ - { - "name": "id", - "value": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ], - "type": "Extension/HubsExtension/PartType/ResourcePart", - "asset": { - "idInputName": "id", - "type": "VirtualMachine" - } - } - } - } - } - }, - "metadata": { - "model": { - "timeRange": { - "value": { - "relative": { - "duration": 24, - "timeUnit": 1 - } - }, - "type": "MsPortalFx.Composition.Configuration.ValueTypes.TimeRange" - } - } - } - }, - "name": "Simple VM Dashboard", - "type": "Microsoft.Portal/dashboards", - "location": "INSERT LOCATION", - "tags": { - "hidden-title": "Simple VM Dashboard" - }, - "apiVersion": "2015-08-01-preview" -} -``` --## Common resource properties --LetΓÇÖs break down the relevant sections of the JSON. The common resource properties appear near the end of the example above. These properties are shared across all Azure resource types. They don't relate specifically to the dashboard's content. --### ID --The `ID` represents the dashboard's Azure resource ID, subject to the [naming conventions of Azure resources](/azure/azure-resource-manager/management/resource-name-rules). When the portal creates a dashboard, it generally chooses an ID in the form of a guid, but you can use any valid name when you create a dashboard programmatically. --When you export a dashboard from the Azure portal, the `id` field isn't included. If you create a new dashboard by importing a JSON file that includes the `id` field, the value will be ignored and a new ID value will be assigned to each new dashboard. --### Name --The resource name that Azure portal uses for the dashboard. --### Type --All dashboards are of type `Microsoft.Portal/dashboards`. --### Location --Unlike other resources, dashboards donΓÇÖt have a runtime component. For dashboards, `location`` indicates the primary geographic location that stores the dashboardΓÇÖs JSON representation. The value should be one of the location codes that can be fetched using the [locations API on the subscriptions resource](/rest/api/resources/subscriptions). --### Tags --Tags are a common feature of Azure resources that let you organize your resource by arbitrary name value pairs. Dashboards include one special tag called `hidden-title`. If your dashboard has this property populated, then that value is used as the display name for your dashboard in the portal. This tag gives you a way to have a renamable display name for your dashboard --## Properties --The properties object contains two properties, `lenses` and `metadata`. The `lenses` property contains information about the tiles on the dashboard. The `metadata` property is reserved for potential future features. --### Lenses --The `lenses` property contains the dashboard. The lens object in this example contains a single property called "0". Lenses are a grouping concept that isn't currently implemented. 
For now, all of your dashboards have this single "0" property on the lens object. --### Parts --The object underneath the "0" contains two properties, `order` and `parts`. Currently, `order` is always set to 0. The `parts` property contains an object that defines the individual parts (also referred to as tiles) on the dashboard. --The `parts` object contains a property for each part, where the name of the property is a number. The number isn't significant. --Each individual part object contains `position` and `metadata`. --### Position --The `position` property contains the size and location information for the part expressed as `x`, `y`, `rowSpan`, and `colSpan`. The values are in terms of grid units. These grid units are visible when the dashboard is in customization mode, as shown here. ---For example, if you want a tile to have a width of two grid units, a height of one grid unit, and a location in the top-left corner of the dashboard, then the position object looks like this: --`position: { x: 0, y: 0, colSpan: 2, rowSpan: 1 }` --### Metadata --Each part has a `metadata` property. The metadata object has only one required property: `type`. This string tells the portal which [tile type](azure-portal-dashboards.md#add-tiles-from-the-tile-gallery) to show. Our example dashboard uses these types of tiles: --1. `Extension/Microsoft_Azure_Monitoring/PartType/MetricsChartPart` - Used to show monitoring metrics -1. `Extension/HubsExtension/PartType/MarkdownPart` - Used to show customized markdown content, such as text or images, with basic formatting for lists, links, etc. -1. `Extension/HubsExtension/PartType/VideoPart` - Used to show videos from YouTube, Channel 9, and any other type of video that works in an HTML video tag. --Each type of part has its own options for configuration. The possible configuration properties are called `inputs`, `settings`, and `asset`. --### Inputs --The inputs object generally contains information that binds a tile to a resource instance. --Each `MetricsChartPart` in our example has a single input that expresses the resource to bind to, representing the Azure resource ID of the VM, along with information about the data being displayed. For example, here's the `inputs` object for the tile that shows the **Network In Total** and **Network Out Total** metrics. --```json -"inputs": -[ - { - "name": "queryInputs", - "value": { - "timespan": { - "duration": "PT1H" - }, - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1", - "chartType": 0, - "metrics": [ - { - "name": "Network In Total", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - }, - { - "name": "Network Out Total", - "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SimpleWinVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM1" - } - ] - } - } -] --``` --### Settings --The settings object contains the configurable elements of a part. In our sample dashboard, the Markdown part uses settings to store the custom markdown content, along with a configurable title and subtitle. --```json -"settings": { - "content": { - "settings": { - "content": "This is the team dashboard for the test VM we use on our team. Here are some useful links:\r\n\r\n1. 
[Create a Linux virtual machine](https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal)\r\n1. [Create a Windows virtual machine](https://docs.microsoft.com/azure/virtual-machines/windows/quick-create-portal)\r\n1. [Create a virtual machine scale set](https://docs.microsoft.com/azure/virtual-machine-scale-sets/quick-create-portal)", - "title": "Test VM Dashboard", - "subtitle": "Contoso" - } - } -} -``` --Similarly, the video tile has its own settings that contain a pointer to the video to play, an autoplay setting, and optional title information. --```json --"settings": { - "content": { - "settings": { - "src": "https://www.youtube.com/watch?v=rOiSRkxtTeU", - "autoplay": false - } - } -} -``` --### Asset --Tiles that are bound to first class manageable portal objects (called assets) have this relationship expressed via the `asset` object. In our example dashboard, the virtual machine tile contains this asset description. The `idInputName` property tells the portal that the ID input contains the unique identifier for the asset, in this case the resource ID. Most Azure resource types have assets defined in the portal. --`"asset": { "idInputName": "id", "type": "VirtualMachine" }` --## Next steps --- Learn how to create a dashboard [in the Azure portal](azure-portal-dashboards.md) or [programmatically](azure-portal-dashboards-create-programmatically.md).-- Learn how to [use markdown tiles on Azure dashboards to show custom content](azure-portal-markdown-tile.md). |
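As a compact reference, the full example above can be reduced to the following skeleton, which shows how the pieces described in this article nest together: `properties` holds `lenses`, a lens holds `parts`, and each part has a `position` and `metadata`. This is an illustrative sketch rather than an exported dashboard; the tile content, name, and location are placeholders, and a real dashboard would typically contain several parts.

```json
{
  "properties": {
    "lenses": {
      "0": {
        "order": 0,
        "parts": {
          "0": {
            "position": { "x": 0, "y": 0, "colSpan": 3, "rowSpan": 2 },
            "metadata": {
              "inputs": [],
              "type": "Extension/HubsExtension/PartType/MarkdownPart",
              "settings": {
                "content": {
                  "settings": { "content": "## Placeholder markdown content" }
                }
              }
            }
          }
        }
      }
    },
    "metadata": {
      "model": {
        "timeRange": {
          "value": { "relative": { "duration": 24, "timeUnit": 1 } },
          "type": "MsPortalFx.Composition.Configuration.ValueTypes.TimeRange"
        }
      }
    }
  },
  "name": "Minimal Dashboard",
  "type": "Microsoft.Portal/dashboards",
  "location": "INSERT LOCATION",
  "tags": { "hidden-title": "Minimal Dashboard" },
  "apiVersion": "2015-08-01-preview"
}
```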
azure-portal | Azure Portal Dashboards | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards.md | - Title: Create a dashboard in the Azure portal -description: This article describes how to create and customize a dashboard in the Azure portal. - Previously updated : 09/01/2023---# Create a dashboard in the Azure portal --Dashboards are a focused and organized view of your cloud resources in the Azure portal. Use dashboards as a workspace where you can monitor resources and quickly launch tasks for day-to-day operations. For example, you can build custom dashboards based on projects, tasks, or user roles in your organization. --The Azure portal provides a default dashboard as a starting point. You can edit this default dashboard, and you can create and customize additional dashboards. --All dashboards are private when created, and each user can create up to 100 private dashboards. If you publish and [share a dashboard with other users in your organization](azure-portal-dashboard-share-access.md), the shared dashboard is implemented as an Azure resource in your subscription, and doesn't count towards the private dashboard limit. --## Create a new dashboard --This example shows how to create a new private dashboard with an assigned name. --1. Sign in to the [Azure portal](https://portal.azure.com). --1. From the Azure portal menu, select **Dashboard**. Your default view might already be set to dashboard. -- :::image type="content" source="media/azure-portal-dashboards/portal-menu-dashboard.png" alt-text="Screenshot of the Azure portal with Dashboard selected."::: --1. Select **Create**, then select **Custom**. -- This action opens the **Tile Gallery**, from which you can select tiles that display different types of information. You'll also see an empty grid representing the dashboard layout, where you can arrange the tiles. --1. Select the text in the dashboard label and enter a name that will help you easily identify the custom dashboard. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-name.png" alt-text="Screenshot of an empty grid with the Tile Gallery."::: --1. To save the dashboard as is, select **Save** in the page header. --The dashboard view now shows your new dashboard. Select the arrow next to the dashboard name to see other available dashboards. The list might include dashboards that other users have created and shared. --## Edit a dashboard --Now, let's edit the example dashboard you created to add, resize, and arrange tiles that show your Azure resources or display other helpful information. We'll start by working with the Tile Gallery, then explore other ways to customize dashboards. --### Add tiles from the Tile Gallery --To add tiles to a dashboard by using the Tile Gallery, follow these steps. --1. Select **Edit** from the dashboard's page header. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-edit.png" alt-text="Screenshot of dashboard highlighting the Edit option."::: --1. Browse the **Tile Gallery** or use the search field to find a certain tile. Select the tile you want to add to your dashboard. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-tile-gallery.png" alt-text="Screenshot of the Tile Gallery."::: --1. Select **Add** to add the tile to the dashboard with a default size and location. Or, drag the tile to the grid and place it where you want. --1. To save your changes, select **Save**. 
You can also preview the changes without saving by selecting **Preview**. This preview mode also allows you to see how [filters](#apply-dashboard-filters) affect your tiles. From the preview screen, you can select **Save** to keep the changes, **Cancel** to remove them, or **Edit** to go back to the editing options and make further changes. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-save.png" alt-text="Screenshot of the Save and Preview options for an edited dashboard."::: --### Resize or rearrange tiles --To change the size of a tile, or to rearrange the tiles on a dashboard, follow these steps: --1. Select **Edit** from the page header. --1. Select the context menu in the upper right corner of a tile. Then, choose a tile size. Tiles that support any size also include a "handle" in the lower right corner that lets you drag the tile to the size you want. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-tile-resize.png" alt-text="Screenshot of dashboard with tile size menu open."::: --1. Select a tile and drag it to a new location on the grid to arrange your dashboard. --1. When you're finished, select **Save**. --### Pin content from a resource page --Another way to add tiles to your dashboard is directly from a resource page. --Many resource pages include a pin icon in the page header, which means that you can pin a tile representing the source page. In some cases, a pin icon may also appear by specific content within a page, which means you can pin a tile for that specific content, rather than the entire page. ---Select this icon to pin the tile to an existing private or shared dashboard. You can also create a new dashboard which will include this pin by selecting **Create new**. ---### Copy a tile to a new dashboard --If you want to reuse a tile on a different dashboard, you can copy it from one dashboard to another. To do so, select the context menu in the upper right corner and then select **Copy**. ---You can then select whether to copy the tile to a different private or shared dashboard, or create a copy of the tile within the dashboard you're already working in. You can also create a new dashboard that includes a copy of the tile by selecting **Create new**. --## Modify tile settings --Some tiles might require more configuration to show the information you want. For example, the **Metrics chart** tile has to be set up to display a metric from Azure Monitor. You can also customize tile data to override the dashboard's default time settings and filters, or to change the title and subtitle of a tile. --> [!NOTE] -> The **Markdown** tile lets you display custom, static content on your dashboard. This can be any information you provide, such as basic instructions, an image, a set of hyperlinks, or even contact information. For more information about using markdown tiles, see [Use a markdown tile on Azure dashboards to show custom content](azure-portal-markdown-tile.md). --### Change the title and subtitle of a tile --Some tiles allow you to edit their title and/or subtitle. To do so, select **Configure tile settings** from the context menu. ---Make your changes, then select **Apply**. ---### Complete tile configuration --Any tile that requires configuration displays a banner until you customize the tile. For example, in the **Metrics chart**, the banner reads **Edit in Metrics**. Other banners may use different text, such as **Configure tile**. --To customize the tile: --1. 
If needed, select **Save** or **Cancel** near the top of the page to exit edit mode. --1. Select the banner, then do the required setup. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-configure-tile.png" alt-text="Screenshot of a tile that requires configuration."::: --### Apply dashboard filters --Near the top of your dashboard, you'll see options to set the **Auto refresh** and **Time settings** for data displayed in the dashboard, along with an option to add additional filters. ---To change how often data is refreshed, select **Auto refresh**, then choose a new refresh interval. When you've made your selection, select **Apply**. --The default time settings are **UTC Time**, showing data for the **Past 24 hours**. To change this, select the button and choose a new time range, time granularity, and/or time zone, then select **Apply**. --To apply additional filters, select **Add filter**. The options you'll see will vary depending on the tiles in your dashboard. For example, you may see options to filter data for a specific subscription or location. In some cases, you'll see that no additional filters are available. --If you see additional filter options, select the one you'd like to use and make your selections. The filter will then be applied to your data. --To remove a filter, select the **X** in its button. --### Override dashboard filters for specific tiles --Tiles which support filtering have a ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) filter icon in the top-left corner of the tile. These tiles allow you to override the global filters with filters specific to that tile. --To do so, select **Configure tile settings** from the tile's context menu, or select the filter icon. Then you can change the desired filters for that tile. For example, some tiles provide an option to override the dashboard time settings at the tile level, allowing you to select a different time span to refresh data. --When you apply filters for a particular tile, the left corner of that tile changes to show a double filter icon, indicating that the data in that tile reflects its own filters. ---## Delete a tile --To remove a tile from a dashboard, do one of the following: --- Select the context menu in the upper right corner of the tile, then select **Remove from dashboard**.--- Select **Edit** to enter customization mode. Hover in the upper right corner of the tile, then select the ![delete icon](./media/azure-portal-dashboards/dashboard-delete-icon.png) delete icon to remove the tile from the dashboard.--## Clone a dashboard --To use an existing dashboard as a template for a new dashboard, follow these steps: --1. Make sure that the dashboard view is showing the dashboard that you want to copy. --1. In the page header, select ![clone icon](./media/azure-portal-dashboards/dashboard-clone.png) **Clone**. --1. A copy of the dashboard, named **Clone of *your dashboard name* ** opens in edit mode. You can then rename and customize the dashboard. --## Publish and share a dashboard --When you create a dashboard, it's private by default, which means you're the only one who can see it. To make dashboards available to others, you can publish and share them. For more information, see [Share Azure dashboards by using Azure role-based access control](azure-portal-dashboard-share-access.md). --### Open a shared dashboard --To find and open a shared dashboard, follow these steps. --1. Select the arrow next to the dashboard name. --1. 
Select from the displayed list of dashboards. If the dashboard you want to open isn't listed: -- 1. Select **Browse all dashboards**. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-browse.png" alt-text="Screenshot of dashboard selection menu."::: -- 1. Select the **Type equals** filter, then select **Shared dashboard**. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-browse-all.png" alt-text="Screenshot of all dashboards selection menu."::: -- 1. Select a dashboard from the list of shared dashboards. If you don't see the one you want, use the filters to limit the results shown, such as selecting a specific subscription or filtering by name. --## Delete a dashboard --You can delete your private dashboards, or a shared dashboard that you created or have permissions to modify. --To permanently delete a private or shared dashboard, follow these steps. --1. Select the dashboard you want to delete from the list next to the dashboard name. --1. Select ![delete icon](./media/azure-portal-dashboards/dashboard-delete-icon.png) **Delete** from the page header. --1. For a private dashboard, select **OK** on the confirmation dialog to remove the dashboard. For a shared dashboard, on the confirmation dialog, select the checkbox to confirm that the published dashboard will no longer be viewable by others. Then, select **OK**. -- :::image type="content" source="media/azure-portal-dashboards/dashboard-delete-dash.png" alt-text="Screenshot of delete confirmation."::: --> [!TIP] -> In the global Azure cloud, if you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 14 days of the delete. For more information, see [Recover a deleted dashboard in the Azure portal](recover-shared-deleted-dashboard.md). --## Next steps --- [Share Azure dashboards by using Azure role-based access control](azure-portal-dashboard-share-access.md)-- [Programmatically create Azure dashboards](azure-portal-dashboards-create-programmatically.md) |
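If you later want to recreate a dashboard you built in the portal in another subscription or environment, one common approach (covered in the programmatic-creation article linked above) is to export the dashboard as JSON and wrap it in an Azure Resource Manager template. The outline below is only a sketch of that idea and isn't taken from the linked article: the name and tag are placeholders, the API version follows the exported example in the dashboard structure entry above, and the `lenses` and `metadata` objects are left empty where the exported dashboard's `properties` content would go.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Portal/dashboards",
      "apiVersion": "2015-08-01-preview",
      "name": "team-vm-dashboard",
      "location": "[resourceGroup().location]",
      "tags": { "hidden-title": "Team VM Dashboard" },
      "properties": {
        "lenses": {},
        "metadata": {}
      }
    }
  ]
}
```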
azure-portal | Azure Portal Keyboard Shortcuts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-keyboard-shortcuts.md | - Title: Azure portal keyboard shortcuts -description: The Azure portal supports global keyboard shortcuts to help you perform actions, navigate, and go to locations in the Azure portal. - Previously updated : 04/12/2024----# Keyboard shortcuts in the Azure portal --This article lists the keyboard shortcuts that work throughout the Azure portal. --The letters that appear below represent letter keys on your keyboard. For example, to use **G+N**, hold down the **G** key and then press **N**. --## Actions --|To do this action |Press | -| | | -|Create a resource|G+N| -|Search resources, services, and docs|G+/| -|Search resource menu items|CTRL+/ | -|Move up the selected left sidebar item |ALT+Shift+Up Arrow| -|Move the selected left sidebar item down |ALT+Shift+Down Arrow| --## Navigation --|To do this navigation |Press | -| | | -|Move focus to command bar |G+, | -|Toggle focus between header and left sidebar | G+. | --## Go to --|To go to this location |Press | -| | | -|Go to **Dashboard** |G+D | -|Go to **All resources**|G+A | -|Go to **All services**|G+B| -|Go to **Resource groups**|G+R | -|Open the left sidebar item at this position |G+number| --## Keyboard shortcuts for specific areas --Individual services may have their own additional keyboard shortcuts. Examples include: --- [Azure Resource Graph Explorer](../governance/resource-graph/reference/keyboard-shortcuts.md)-- [Kusto Explorer](/azure/data-explorer/kusto/tools/kusto-explorer-shortcuts)-- [Azure Maps drawing module](../azure-maps/drawing-tools-interactions-keyboard-shortcuts.md)--## Next steps --- [Turn on high contrast or change theme](set-preferences.md#choose-a-theme-or-enable-high-contrast) in the Azure portal.-- Learn about [supported browsers and devices](azure-portal-supported-browsers-devices.md). |
azure-portal | Azure Portal Markdown Tile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-markdown-tile.md | - Title: Use a custom markdown tile on Azure dashboards -description: Learn how to add a markdown tile to an Azure dashboard to display static content Previously updated : 03/27/2023----# Use a markdown tile on Azure dashboards to show custom content --You can add a markdown tile to your Azure dashboards to display custom, static content. For example, you can show basic instructions, an image, or a set of hyperlinks on a markdown tile. --## Add a markdown tile to your dashboard --1. Select **Dashboard** from the Azure portal menu. --1. In the dashboard view, select the dashboard where the custom markdown tile should appear, then select **Edit**. -- :::image type="content" source="media/azure-portal-markdown-tile/azure-portal-dashboard-edit.png" alt-text="Screenshot showing the dashboard edit option in the Azure portal."::: --1. In the **Tile Gallery**, locate the tile called **Markdown** and select **Add**. The tile is added to the dashboard and the **Edit Markdown** pane opens. --1. Enter values for **Title** and **Subtitle**, which display on the tile after you move to another field. -- :::image type="content" source="media/azure-portal-markdown-tile/azure-portal-dashboard-enter-title.png" alt-text="Screenshot showing how to add a title and subtitle to a markdown tile."::: --1. Select one of the options for including markdown content: **Inline editing** or **Insert content using URL**. -- - Select **Inline editing** if you want to enter markdown directly. -- ![Screenshot showing entering inline content](./media/azure-portal-markdown-tile/azure-portal-dashboard-markdown-inline-content.png) -- - Select **Insert content using URL** if you want to use existing markdown content that's hosted online. -- ![Screenshot showing entering URL](./media/azure-portal-markdown-tile/azure-portal-dashboard-markdown-url.png) -- > [!NOTE] - > For added security, create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md). For additional control, configure the encryption with [customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md?tabs=portal). You can then point to the file using the **Insert content using URL** option. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (`https://portal.azure.com/`) can access the markdown file in the blob. --1. Select **Done** to dismiss the **Edit Markdown** pane. Your content appears on the Markdown tile, which you can resize by dragging the handle in the lower right-hand corner. -- :::image type="content" source="media/azure-portal-markdown-tile/azure-portal-custom-markdown-tile.png" alt-text="Screenshot showing the custom markdown tile on a dashboard."::: --## Markdown content capabilities and limitations --You can use any combination of plain text, Markdown syntax, and HTML content on the markdown tile. The Azure portal uses an open-source library called _marked_ to transform your content into HTML that is shown on the tile. The HTML produced by _marked_ is pre-processed by the portal before it's rendered. 
This step helps make sure that your customization won't affect the security or layout of the portal. During that pre-processing, any part of the HTML that poses a potential threat is removed. The following types of content aren't allowed by the portal: --- JavaScript - `<script>` tags and inline JavaScript evaluations are removed.-- iframes - `<iframe>` tags are removed.-- Style - `<style>` tags are removed. Inline style attributes on HTML elements aren't officially supported. You may find that some inline style elements work for you, but if they interfere with the layout of the portal, they could stop working at any time. The Markdown tile is intended for basic, static content that uses the default styles of the portal.--## Next steps --- Learn more about [creating dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).-- Learn how to [share a dashboard by using Azure role-based access control](azure-portal-dashboard-share-access.md). |
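For reference, once a markdown tile is on a dashboard, it appears in the dashboard's exported JSON as a `MarkdownPart`, as described in the dashboard structure entry earlier in this log. The snippet below is a trimmed sketch of that representation; the position values, title, and content are placeholders rather than values from this article.

```json
{
  "position": { "x": 0, "y": 0, "colSpan": 3, "rowSpan": 2 },
  "metadata": {
    "inputs": [],
    "type": "Extension/HubsExtension/PartType/MarkdownPart",
    "settings": {
      "content": {
        "settings": {
          "content": "## Team notes\r\nBasic instructions, images, or hyperlinks go here.",
          "title": "Team notes",
          "subtitle": "Contoso",
          "markdownUri": null
        }
      }
    }
  }
}
```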
azure-portal | Azure Portal Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-overview.md | - Title: What is the Azure portal? -description: The Azure portal is a graphical user interface that you can use to manage your Azure services. Learn how to navigate and find resources in the Azure portal. Previously updated : 07/18/2024----# What is the Azure portal? --The Azure portal is a web-based, unified console that lets you create and manage all your Azure resources. With the Azure portal, you can manage your Azure subscription using a graphical user interface. You can build, manage, and monitor everything from simple web apps to complex cloud deployments in the portal. For example, you can set up a new database, increase the compute power of your virtual machines, and monitor your monthly costs. You can review all available resources, and use guided wizards to create new ones. --The Azure portal is designed for resiliency and continuous availability. It has a presence in every Azure datacenter. This configuration makes the Azure portal resilient to individual datacenter failures and helps avoid network slowdowns by being close to users. The Azure portal updates continuously, and it requires no downtime for maintenance activities. You can access the Azure portal with [any supported browser](azure-portal-supported-browsers-devices.md). --In this article, you learn about the different parts of the Azure portal. --## Home --By default, the first thing you see after you [sign in to the portal](https://portal.azure.com) is **Home**. This page compiles resources that help you get the most from your Azure subscription. Select **Create a resource** to quickly create a new resource in the current subscription, or choose a service to start working in. For quick and easy access to work in progress, we show a list of your most recently visited resources. We also include links to free online courses, documentation, and other useful resources. --## Portal elements and controls --The [portal menu](#portal-menu) and page header are global elements that are always present in the Azure portal. These persistent features are the "shell" for the user interface associated with each individual service or feature. The header provides access to global controls. --The working pane for a resource or service may also have a [service menu](#service-menu) with commands specific to that area. --The illustration below labels the basic elements of the Azure portal, each of which is described in the following table. In this example, the current focus is a virtual machine (VM), but the same elements generally apply, no matter what type of resource or service you're working with. ---|Key|Description | -|::|| -|1|**[Portal menu](#portal-menu)**. This global element can help you to navigate between services. Here, the portal menu is in flyout mode, so it's hidden until you select the menu icon.| -|2|**Breadcrumb**. Use the breadcrumb links to move back a level in your workflow.| -|3|**Page header**. Appears at the top of every portal page and holds global elements.| -|4|**Global search**. Use the search bar in the page header to quickly find a specific resource, a service, or documentation.| -|5|**Copilot**. Provides quick access to [Microsoft Copilot in Azure (preview)](/azure/copilot/).| -|6|**Global controls**. These controls for common tasks persist in the page header: Cloud Shell, Notifications, Settings, Support + Troubleshooting, and Feedback.| -|7|**Your account**. 
View information about your account, switch directories, sign out, or sign in with a different account.| -|8|**Command bar**. A group of controls that are contextual to your current focus.| -|9|**[Service menu](#service-menu)**. A menu with commands that are contextual to the service or resource that you're working with. Sometimes referred to as the resource menu.| -|10|**Working pane**. Displays details about the resource or service that's currently in focus.| --## Portal menu --The Azure portal menu lets you quickly get to key functionality and resource types. It's available from anywhere in the Azure portal. ---Useful commands in the portal menu include: --- **Create a resource**. An easy way to get started creating a new resource in the current subscription.-- **Favorites**. Your list of favorite Azure services. To learn how to customize this list, see [Add, remove, and sort favorites](../azure-portal/azure-portal-add-remove-sort-favorites.md).--In your portal settings, you can [choose a default mode for the portal menu](set-preferences.md#portal-menu-behavior): flyout or docked. --When the portal menu is in flyout mode, it's hidden until you need it. Select the menu icon to open or close the menu. ---If you choose docked mode for the portal menu, it's always visible. You can select the arrows to manually collapse the menu if you want more working space. ---## Service menu --The service menu appears when you're working with an Azure service or resource. Commands in this menu are contextual to the service or resource that you're working with. You can use the search box at the top of the service menu to quickly find commands. --By default, menu items appear collapsed within menu groups. If you prefer to have all menu items expanded by default, you can set **Service menu behavior** to **Expanded** in your [portal settings](set-preferences.md#service-menu-behavior). --When you're working within a service, you can select any top-level menu item to expand it and see the available commands within that menu group. Select that top-level item again to collapse that menu group. --To toggle all folders in a service menu between collapsed and expanded, select the expand/collapse icon near the service icon search box. ---If you use certain service menu commands frequently, you may want to save them as favorites for that service. To do so, hover over the command and then select the star icon. ---When you save a command as a favorite, it appears in a **Favorites** folder near the top of the service menu. ---Your menu group selections are preserved by resource type and throughout sessions. For example, if you add a favorite command while working with a VM, that command will appear in your **Favorites** if you later work with a different VM. Specific menu groups will also appear collapsed or expanded based on your previous selections. --## Dashboard --Dashboards provide a focused view of the resources in your subscription that matter most to you. We give you a default dashboard to get you started. You can customize this dashboard to bring resources you use frequently into a single view, or to display other information. --You can create other dashboards for your own use, or publish customized dashboards and share them with other users in your organization. For more information, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md). 
--As noted earlier, you can [set your startup page to Dashboard](set-preferences.md#choose-a-startup-page) if you want to see your most recently used dashboard when you sign in to the Azure portal. --## Get started --If you're a new subscriber, you'll have to create a resource before there's anything to manage. Select **+ Create a resource** from the portal menu or **Home** page to view the services available in the Azure Marketplace. You'll find hundreds of applications and services from many providers here, all certified to run on Azure. --To view all available services, select **All services** from the sidebar. --> [!TIP] -> Often, the quickest way to get to a resource, service, or documentation is to use *Search* in the global header. --For more help getting started with Azure, explore the [Azure Quickstart Center](azure-portal-quickstart-center.md). --## Next steps --- Take the [Manage services with the Azure portal training module](/training/modules/tour-azure-portal/).-- Stay connected on the go with the [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/). |
azure-portal | Azure Portal Quickstart Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md | - Title: Get started with the Azure Quickstart Center -description: Use the Azure Quickstart Center guided experience to get started with Azure. Learn to set up, migrate, and innovate. Previously updated : 11/15/2023----# Get started with the Azure Quickstart Center --Azure Quickstart Center is a guided experience in the Azure portal, available to anyone who wants to improve their knowledge of Azure. For organizations new to Azure, it's the fastest way to onboard and set up your cloud environment. --## Use Quickstart Center --1. Sign in to the [Azure portal](https://portal.azure.com). --1. In the search bar, type "Quickstart Center", and then select it. -- Or, select **All services** from the Azure portal menu, then select **General** > **Get started** > **Quickstart Center**. --Once you're in Quickstart Center, you'll see three tabs: **Get started**, **Projects and guides**, and **Take an online course**. --## Get started --If you're new to Azure, use the checklist in the **Get started** tab to get familiar with some basic tasks and services. Watch videos and use the links to explore more about topics like using basic account features, estimating costs, and deploying different types of resources. --## Projects and guides --In the **Projects and guides** tab, you'll find two sections: --* **Start a project**: If you're ready to create a resource, this section lets you learn more about your choices before you commit to an option. Select **Start** for any service to see options, learn more about scenarios, explore costs, and identify prerequisites. After making your choices, you can go directly to create. --* **Setup guides**: Designed for the IT admin and cloud architect, our guides introduce key concepts for Azure adoption. Structured steps help you take action as you learn, applying Microsoft's recommended best practices. Our guides walk you through deployment scenarios to help you set up, manage, and secure your Azure environment, including migrating workloads to Azure. --## Take an online course --The **Take an online course** tab of the Azure Quickstart Center highlights free introductory course modules. --Select a tile to launch a course and learn more about cloud concepts and managing resources in Azure. You can also select **Browse** to see all courses, learning paths, and modules. --## Next steps --* Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/). -* Unlock your cloud skills with free [Learn modules](/training/azure/). |
azure-portal | Azure Portal Safelist Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md | - Title: Allow the Azure portal URLs on your firewall or proxy server -description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist. Previously updated : 07/12/2024----# Allow the Azure portal URLs on your firewall or proxy server --To optimize connectivity between your network and the Azure portal and its services, you may want to add specific Azure portal URLs to your allowlist. Doing so can improve performance and connectivity between your local- or wide-area network and the Azure cloud. --Network administrators often deploy proxy servers, firewalls, or other devices, which can help secure and give control over how users access the internet. Rules designed to protect users can sometimes block or slow down legitimate business-related internet traffic. This traffic includes communications between you and Azure over the URLs listed here. --> [!TIP] -> For help diagnosing issues with network connections to these domains, check https://portal.azure.com/selfhelp. --You can use [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md), [Azure Firewall](../firewall/service-tags.md), and user-defined routes. Use service tags in place of fully qualified domain names (FQDNs) or specific IP addresses when you create security rules and routes. --## Azure portal URLs for proxy bypass --The URL endpoints to allow for the Azure portal are specific to the Azure cloud where your organization is deployed. To allow network traffic to these endpoints to bypass restrictions, select your cloud, then add the list of URLs to your proxy server or firewall. We do not recommend adding any additional portal-related URLs aside from those listed here, although you may want to add URLs related to other Microsoft products and services. Depending on which services you use, you may not need to include all of these URLs in your allowlist. --> [!IMPORTANT] -> Including the wildcard symbol (\*) at the start of an endpoint will allow all subdomains. For endpoints with wildcards, we also advise you to add the URL without the wildcard. For example, you should add both `*.portal.azure.com` and `portal.azure.com` to ensure that access to the domain is allowed with or without a subdomain. -> -> Avoid adding a wildcard symbol to endpoints listed here that don't already include one. Instead, if you identify additional subdomains of an endpoint that are needed for your particular scenario, we recommend that you allow only that particular subdomain. --### [Public Cloud](#tab/public-cloud) --> [!TIP] -> The service tags required to access the Azure portal (including authentication and resource listing) are **AzureActiveDirectory**, **AzureResourceManager**, and **AzureFrontDoor.Frontend**. Access to other services may require additional permissions, as described below. -> However, these service tags may also allow traffic beyond what's needed to access the portal. If you need more granular control, use FQDN-based access control, such as Azure Firewall. 
--#### Azure portal authentication --``` -login.microsoftonline.com -*.aadcdn.msftauth.net -*.aadcdn.msftauthimages.net -*.aadcdn.msauthimages.net -*.logincdn.msftauth.net -login.live.com -*.msauth.net -*.aadcdn.microsoftonline-p.com -*.microsoftonline-p.com -``` --#### Azure portal framework --``` -*.portal.azure.com -*.hosting.portal.azure.net -*.reactblade.portal.azure.net -management.azure.com -*.ext.azure.com -*.graph.windows.net -*.graph.microsoft.com -``` --#### Account data --``` -*.account.microsoft.com -*.bmx.azure.com -*.subscriptionrp.trafficmanager.net -*.signup.azure.com -``` --#### General Azure services and documentation --``` -aka.ms (Microsoft short URL) -*.asazure.windows.net (Analysis Services) -*.azconfig.io (AzConfig Service) -*.aad.azure.com (Microsoft Entra) -*.aadconnecthealth.azure.com (Microsoft Entra) -ad.azure.com (Microsoft Entra) -adf.azure.com (Azure Data Factory) -api.aadrm.com (Microsoft Entra) -api.loganalytics.io (Log Analytics Service) -api.azrbac.mspim.azure.com (Microsoft Entra) -*.applicationinsights.azure.com (Application Insights Service) -appservice.azure.com (Azure App Services) -*.arc.azure.net (Azure Arc) -asazure.windows.net (Analysis Services) -bastion.azure.com (Azure Bastion Service) -batch.azure.com (Azure Batch Service) -catalogapi.azure.com (Azure Marketplace) -catalogartifact.azureedge.net (Azure Marketplace) -changeanalysis.azure.com (Change Analysis) -cognitiveservices.azure.com (Cognitive Services) -config.office.com (Microsoft Office) -cosmos.azure.com (Azure Cosmos DB) -*.database.windows.net (SQL Server) -datalake.azure.net (Azure Data Lake Service) -dev.azure.com (Azure DevOps) -dev.azuresynapse.net (Azure Synapse) -digitaltwins.azure.net (Azure Digital Twins) -learn.microsoft.com (Azure documentation) -elm.iga.azure.com (Microsoft Entra) -eventhubs.azure.net (Azure Event Hubs) -functions.azure.com (Azure Functions) -gallery.azure.com (Azure Marketplace) -go.microsoft.com (Microsoft documentation placeholder) -help.kusto.windows.net (Azure Kusto Cluster Help) -identitygovernance.azure.com (Microsoft Entra) -iga.azure.com (Microsoft Entra) -informationprotection.azure.com (Microsoft Entra) -kusto.windows.net (Azure Kusto Clusters) -learn.microsoft.com (Azure documentation) -logic.azure.com (Logic Apps) -marketplacedataprovider.azure.com (Azure Marketplace) -marketplaceemail.azure.com (Azure Marketplace) -media.azure.net (Azure Media Services) -monitor.azure.com (Azure Monitor Service) -*.msidentity.com (Microsoft Entra) -mspim.azure.com (Microsoft Entra) -network.azure.com (Azure Network) -purview.azure.com (Azure Purview) -quantum.azure.com (Azure Quantum Service) -rest.media.azure.net (Azure Media Services) -search.azure.com (Azure Search) -servicebus.azure.net (Azure Service Bus) -servicebus.windows.net (Azure Service Bus) -shell.azure.com (Azure Command Shell) -sphere.azure.net (Azure Sphere) -azure.status.microsoft (Azure Status) -storage.azure.com (Azure Storage) -storage.azure.net (Azure Storage) -vault.azure.net (Azure Key Vault Service) -ux.console.azure.com (Azure Cloud Shell) -``` --### [U.S. 
Government Cloud](#tab/us-government-cloud) --``` -*.applicationinsights.us -*.azure.us -*.azureedge.net -*.loganalytics.us -*.microsoft.us -*.microsoftonline.us -*.msauth.net -*.msidentity.us -*.s-microsoft.com -*.usgovcloudapi.net -*.usgovtrafficmanager.net -*.windowsazure.us -graph.microsoftazure.us -``` --### [Microsoft Azure operated by 21Vianet Cloud](#tab/azure-china-cloud) --``` -aadcdn.msauth.cn -aadcdn.msftauth.cn -login.live.com -catalogartifact.azureedge.net -store-images.s-microsoft.com -*.azure.cn -*.microsoft.cn -*.microsoftonline.cn -*.msidentity.cn -*.chinacloudapi.cn -*.trafficmanager.cn -*.windowsazure.cn -``` ----> [!NOTE] -> Traffic to these endpoints uses standard TCP ports for HTTP (80) and HTTPS (443). |
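To illustrate the service-tag approach mentioned in the tip above, a network security group rule can reference a tag such as **AzureResourceManager** instead of individual URLs or IP addresses. The rule below is only a sketch: the resource names, priority, API version, and port choice are assumptions for illustration, not values from the article.

```json
{
  "type": "Microsoft.Network/networkSecurityGroups/securityRules",
  "apiVersion": "2023-04-01",
  "name": "example-nsg/Allow-AzureResourceManager-Outbound",
  "properties": {
    "description": "Allow outbound HTTPS to Azure Resource Manager endpoints",
    "protocol": "Tcp",
    "sourcePortRange": "*",
    "destinationPortRange": "443",
    "sourceAddressPrefix": "VirtualNetwork",
    "destinationAddressPrefix": "AzureResourceManager",
    "access": "Allow",
    "priority": 200,
    "direction": "Outbound"
  }
}
```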
azure-portal | Azure Portal Supported Browsers Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-supported-browsers-devices.md | - Title: Supported browsers and devices for Azure portal -description: You can use the Azure portal on all modern devices and with the latest browser versions. - Previously updated : 04/10/2024----# Supported devices --The [Azure portal](https://portal.azure.com) is a web-based console that runs in the browser of all modern desktops and tablet devices. To use the portal, you must have JavaScript enabled on your browser. We recommend not using ad blockers in your browser, because they may cause issues with some portal features. --## Recommended browsers --We recommend using the most up-to-date browser that's compatible with your operating system. The following browsers are supported: --* Microsoft Edge (latest version) -* Safari (latest version, Mac only) -* Chrome (latest version) -* Firefox (latest version) --## Mobile app --To manage Azure resources from a mobile device, try the [Azure mobile app](mobile-app/overview.md). It's available for iOS and Android. |
azure-portal | Azure Portal Video Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-video-series.md | - Title: Azure portal how-to video series -description: Find video demos for how to work with Azure services in the portal. View and link directly to the latest how-to videos. Previously updated : 12/06/2023----# Azure portal how-to video series --The [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR) showcases how to work with Azure services in the Azure portal. These interactive demos can help you be more efficient and productive. --## Featured video --In this featured video, we show you how to create a storage account in the Azure portal. --> [!VIDEO https://www.youtube.com/embed/AhuNgBafmUo] --[How to create a storage account](https://www.youtube.com/watch?v=AhuNgBafmUo) --Catch up on these videos you may have missed: --| [How to use search in the Azure portal](https://www.youtube.com/watch?v=PcHF_DzsETA) | [How to check your subscription's secure score](https://www.youtube.com/watch?v=yqb3qvsjqXY) | [How to find and use Translator](https://www.youtube.com/watch?v=6xBHkHkFmZ4) | -| | | | -| [![Image of YouTube video about how to use search in the Azure portal](https://i.ytimg.com/vi/PcHF_DzsETA/hqdefault.jpg)](http://www.youtube.com/watch?v=PcHF_DzsETA) | [![Image of YouTube video about how to check your subscription's secure score](https://i.ytimg.com/vi/yqb3qvsjqXY/hqdefault.jpg)](https://www.youtube.com/watch?v=yqb3qvsjqXY) | [![Image of YouTube video about how to find and use Translator](https://i.ytimg.com/vi/6xBHkHkFmZ4/hqdefault.jpg)](http://www.youtube.com/watch?v=6xBHkHkFmZ4) | --## Video playlist --Explore the [Azure portal how-to series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR) for some great tips on how to make the most of the Azure portal. Subscribe to the channel to be notified when new videos are added. --## Next steps --- Explore hundreds of videos for Azure services in the [video library](https://azure.microsoft.com/resources/videos/index/?tag=microsoft-azure-portal).-- Get an [overview of the Azure portal](azure-portal-overview.md). |
azure-portal | Capture Browser Trace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/capture-browser-trace.md | - Title: Capture a browser trace for troubleshooting -description: Capture network information from a browser trace to help troubleshoot issues with the Azure portal. Previously updated : 05/01/2023----# Capture a browser trace for troubleshooting --If you're troubleshooting an issue with the Azure portal, and you need to contact Microsoft support, you may want to first capture some additional information. For example, it can be helpful to share a browser trace, a step recording, and console output. This information can provide important details about what exactly is happening in the portal when your issue occurs. --> [!WARNING] -> Browser traces often contain sensitive information and might include authentication tokens linked to your identity. Please remove any sensitive information before sharing traces with others. Microsoft support uses these traces for troubleshooting purposes only. --You can capture this information in any [supported browser](azure-portal-supported-browsers-devices.md): Microsoft Edge, Google Chrome, Safari (on Mac), or Firefox. Steps for each browser are shown below. --## Microsoft Edge --The following steps show how to use the developer tools in Microsoft Edge. For more information, see [Microsoft Edge DevTools](/microsoft-edge/devtools-guide-chromium). --> [!NOTE] -> The screenshots below show the DevTools in Focus Mode with a vertical **Activity Bar**. Depending on your settings, your configuration may look different. For more information, see [Simplify DevTools using Focus Mode](/microsoft-edge/devtools-guide-chromium/experimental-features/focus-mode). --1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your account. --1. Start recording the steps you take in the portal, using [Steps Recorder](https://support.microsoft.com/windows/record-steps-to-reproduce-a-problem-46582a9b-620f-2e36-00c9-04e25d784e47). --1. In the portal, navigate to the step prior to where the issue occurs. --1. Press F12 to launch Microsoft Edge DevTools. You can also launch the tools from the toolbar menu under **More tools** > **Developer tools**. --1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page. -- 1. Select the **Console** tab, select **Console settings**, then select **Preserve Log**. -- :::image type="content" source="media/capture-browser-trace/edge-console-preserve-log.png" alt-text="Screenshot that highlights the Preserve log option on the Console tab in Edge."::: -- 1. Select the **Network** tab. If that tab isn't visible, click the **More tools** (+) button and select **Network**. Then, from the **Network** tab, select **Preserve log**. -- :::image type="content" source="media/capture-browser-trace/edge-network-preserve-log.png" alt-text="Screenshot that highlights the Preserve log option on the Network tab in Edge."::: --1. On the **Network** tab, select **Stop recording network log** and **Clear**. -- :::image type="content" source="media/capture-browser-trace/edge-stop-clear-session.png" alt-text="Screenshot showing the Stop recording network log and Clear options on the Network tab in Edge."::: --1. 
Select **Record network log**, then reproduce the issue in the portal. -- :::image type="content" source="media/capture-browser-trace/edge-start-session.png" alt-text="Screenshot showing how to record the network log in Edge."::: -- You'll see session output similar to the following image. -- :::image type="content" source="media/capture-browser-trace/edge-browser-trace-results.png" alt-text="Screenshot showing session output in Edge."::: --1. After you have reproduced the unexpected portal behavior, select **Stop recording network log**, then select **Export HAR** and save the file. -- :::image type="content" source="media/capture-browser-trace/edge-network-export-har.png" alt-text="Screenshot showing how to Export HAR on the Network tab in Edge."::: --1. Stop the Steps Recorder and save the recording. --1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Save as...**, and save the console output to a text file. -- :::image type="content" source="media/capture-browser-trace/edge-console-select.png" alt-text="Screenshot showing how to save the console output in Edge."::: --1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip. --1. Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files). --## Google Chrome --The following steps show how to use the developer tools in Google Chrome. For more information, see [Chrome DevTools](https://developers.google.com/web/tools/chrome-devtools). --1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your account. --1. Start recording the steps you take in the portal, using [Steps Recorder](https://support.microsoft.com/windows/record-steps-to-reproduce-a-problem-46582a9b-620f-2e36-00c9-04e25d784e47). --1. In the portal, navigate to the step prior to where the issue occurs. --1. Press F12 to launch the developer tools. You can also launch the tools from the toolbar menu under **More tools** > **Developer tools**. --1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page: -- 1. Select the **Console** tab, select **Console settings**, then select **Preserve Log**. -- ![Screenshot that highlights the Preserve log option on the Console tab in Chrome.](media/capture-browser-trace/chromium-console-preserve-log.png) -- 1. Select the **Network** tab, then select **Preserve log**. -- ![Screenshot that highlights the Preserve log option on the Network tab in Chrome.](media/capture-browser-trace/chromium-network-preserve-log.png) --1. On the **Network** tab, select **Stop recording network log** and **Clear**. -- ![Screenshot of "Stop recording network log" and "Clear" on the Network tab in Chrome.](media/capture-browser-trace/chromium-stop-clear-session.png) --1. Select **Record network log**, then reproduce the issue in the portal. -- ![Screenshot that shows how to record the network log in Chrome.](media/capture-browser-trace/chromium-start-session.png) -- You'll see session output similar to the following image. 
-- ![Screenshot that shows the session output in Chrome.](media/capture-browser-trace/chromium-browser-trace-results.png) --1. After you have reproduced the unexpected portal behavior, select **Stop recording network log**, then select **Export HAR** and save the file. -- ![Screenshot that shows how to Export HAR on the Network tab in Chrome.](media/capture-browser-trace/chromium-network-export-har.png) --1. Stop the Steps Recorder and save the recording. --1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Save as...**, and save the console output to a text file. -- ![Screenshot that shows how to save the console output in Chrome.](media/capture-browser-trace/chromium-console-select.png) --1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip. --1. Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files). --## Safari --The following steps show how to use the developer tools in Apple Safari on Mac. For more information, see [Safari Developer Tools overview](https://support.apple.com/guide/safari-developer/safari-developer-tools-overview-dev073038698/11.0/mac). --1. Enable the developer tools in Safari: -- 1. Select **Safari**, then select **Preferences**. -- 1. Select the **Advanced** tab, then select **Show Develop menu in menu bar**. -- ![Screenshot of the Safari advanced preferences options.](media/capture-browser-trace/safari-show-develop-menu.png) --1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your account. --1. Start recording the steps you take in the portal. For more information, see [How to record the screen on your Mac](https://support.apple.com/HT208721). --1. In the portal, navigate to the step prior to where the issue occurs. --1. Select **Develop**, then select **Show Web Inspector**. -- ![Screenshot of the "Show Web Inspector" command.](media/capture-browser-trace/safari-show-web-inspector.png) --1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page: -- 1. Select the **Console** tab, then select **Preserve Log**. -- ![Screenshot that shows the Preserve Log on the Console tab.](media/capture-browser-trace/safari-console-preserve-log.png) -- 1. Select the **Network** tab, then select **Preserve Log**. -- ![Screenshot that shows the Preserve Log option on the Network tab.](media/capture-browser-trace/safari-network-preserve-log.png) --1. On the **Network** tab, select **Clear Network Items**. -- ![Screenshot of "Clear Network Items" on the Network tab.](media/capture-browser-trace/safari-clear-session.png) --1. Reproduce the issue in the portal. You'll see session output similar to the following image. -- ![Screenshot that shows the output after you've reproduced the issue.](media/capture-browser-trace/safari-browser-trace-results.png) --1. After you have reproduced the unexpected portal behavior, select **Export** and save the file. -- ![Screenshot of the "Export" command on the Network tab.](media/capture-browser-trace/safari-network-export-har.png) --1. Stop the screen recorder, and save the recording. --1. 
Back in the browser developer tools pane, select the **Console** tab, and expand the window. Place your cursor at the start of the console output then drag and select the entire contents of the output. Use Command-C to copy the output and save it to a text file. -- ![Screenshot that shows where you can view and copy the console output.](media/capture-browser-trace/safari-console-select.png) --1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip. --1. Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files). --## Firefox --The following steps show how to use the developer tools in Firefox. For more information, see [Firefox Developer Tools](https://developer.mozilla.org/docs/Tools). --1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your account. --1. Start recording the steps you take in the portal. Use [Steps Recorder](https://support.microsoft.com/windows/record-steps-to-reproduce-a-problem-46582a9b-620f-2e36-00c9-04e25d784e47) on Windows, or see [How to record the screen on your Mac](https://support.apple.com/HT208721). --1. In the portal, navigate to the step prior to where the issue occurs. --1. Press F12 to launch the developer tools. You can also launch the tools from the toolbar menu under **More tools** > **Web developer tools**. --1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page: -- 1. Select the **Console** tab, select the **Settings** icon, and then select **Persist Logs**. -- :::image type="content" source="media/capture-browser-trace/firefox-console-persist-logs.png" alt-text="Screenshot of the Console setting for Persist Logs."::: -- 1. Select the **Network** tab, select the **Settings** icon, and then select **Persist Logs**. -- :::image type="content" source="media/capture-browser-trace/firefox-network-persist-logs.png" alt-text="Screenshot of the Network setting for Persist Logs."::: --1. On the **Network** tab, select **Clear**. -- ![Screenshot of the "Clear" option on the Network tab.](media/capture-browser-trace/firefox-clear-session.png) --1. Reproduce the issue in the portal. You'll see session output similar to the following image. -- ![Screenshot showing example browser trace results.](media/capture-browser-trace/firefox-browser-trace-results.png) --1. After you have reproduced the unexpected portal behavior, select **Save All As HAR**. -- ![Screenshot of the "Save All As HAR" command on the Network tab.](media/capture-browser-trace/firefox-network-export-har.png) --1. Stop the Steps Recorder on Windows or the screen recording on Mac, and save the recording. --1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Save All Messages to File**, and save the console output to a text file. -- :::image type="content" source="media/capture-browser-trace/firefox-console-select.png" alt-text="Screenshot of the Save All Messages to File command on the Console tab."::: --1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip. --1. 
Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md). --## Next steps --- Read more about the [Azure portal](azure-portal-overview.md).-- Learn how to [open a support request](supportability/how-to-create-azure-support-request.md) in the Azure portal.-- Learn more about [file upload requirements for support requests](supportability/how-to-manage-azure-support-request.md). |
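The packaging step is the same for every browser: collect the HAR file, the console output, and the screen recording into a single archive before you attach it to your support request. The following is a minimal sketch using example file names; substitute the names of the files you actually exported.

```bash
# Hypothetical file names - replace with the files you exported from your browser and recorder.
zip portal-issue-trace.zip network-trace.har console-output.txt screen-recording.mp4

# Confirm the archive contents before uploading it to your support request.
unzip -l portal-issue-trace.zip
```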
azure-portal | Dashboard Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/dashboard-hub.md | - Title: Create and manage dashboards in Dashboard hub -description: This article describes how to create and customize a shared dashboard in Dashboard hub in the Azure portal. - Previously updated : 08/28/2024---# Create and manage dashboards in Dashboard hub (preview) --Dashboards are a focused and organized view of your cloud resources in the Azure portal. The new Dashboard hub (preview) experience offers editing features such as tabs, a rich set of tiles with support for different data sources, and dashboard access in the latest version of the [Azure mobile app](mobile-app/overview.md). --Currently, Dashboard hub can only be used to create and manage shared dashboards. These shared dashboards are implemented as Azure resources in your subscription. They're visible in the Azure portal or the Azure mobile app to all users who have subscription-level access. --> [!IMPORTANT] -> Dashboard hub is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Current limitations --Before using the new Dashboard hub experience, be aware of the following current limitations and make sure that your new dashboard meets your organization's needs. --Private dashboards aren't currently supported in Dashboard hub. Dashboards created in Dashboard hub are shared with all users in a subscription by default. To create a private dashboard, or to share it with only a limited set of users, create your dashboard [from the **Dashboard** view in the Azure portal](azure-portal-dashboards.md) rather than using the new experience. --Some tiles aren't yet available in the Dashboard hub experience. Currently, the following tiles are available: --- **Azure Resource Graph query**-- **Metrics**-- **Resource**-- **Resource Group**-- **Recent Resources**-- **All Resources**-- **Markdown**-- **Policy**--If your dashboard relies on a tile that isn't in this list, we recommend that you don't use the new experience for that dashboard at this time. We'll update this page as we add more tile types to the new experience. --## Create a new dashboard --To create a new shared dashboard with an assigned name, follow these steps. --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Search for **Dashboard hub** and then select it. --1. Under **Dashboards (preview)**, select **Shared dashboards**. Then select **Create**. -- :::image type="content" source="media/dashboard-hub/dashboard-hub-create.png" alt-text="Screenshot of the Create option in the Dashboard hub."::: -- You'll see an empty dashboard with a grid where you can arrange tiles. --1. If you want to use a template to create your dashboard, select **Select Templates**, then choose an available template to start from. Enter a name and any other applicable information. For example, if you select **SQL database health**, you'll need to specify a SQL database resource. When you're finished, select **Submit**. --1. If you aren't using a template, or if you want to add more tiles, select **Add tile** to open the **Tile Gallery**. The **Tile Gallery** features various tiles that display different types of information. Select a tile, then select **Add**. You can also drag tiles from the **Tile Gallery** onto your grid. 
Resize or rearrange the tiles as desired. --1. If you haven't already provided a name, or want to change what you entered, select **Rename dashboard** to enter a name that will help you easily identify your dashboard. -- :::image type="content" source="media/dashboard-hub/dashboard-hub-rename.png" alt-text="Screenshot showing a dashboard being renamed in the Dashboard hub."::: --1. When you're finished, select **Publish dashboardV2** in the command bar. --1. Select the subscription and resource group to which the dashboard will be saved. -1. Enter a name for the dashboard. This name is used for the dashboard resource in Azure, and it can't be changed after publishing. However, you can edit the displayed title of the dashboard later. -1. Select **Submit**. --You'll see a notification confirming that your dashboard has been published. You can continue to [edit your dashboard](#edit-a-dashboard) as needed. --> [!IMPORTANT] -> Since all dashboards in the new experience are shared by default, anyone with access to the subscription will have access to the dashboard resource. For more access control options, see [Understand access control](#understand-access-control). --## Create a dashboard based on an existing dashboard --To create a new shared dashboard with an assigned name, based on an existing dashboard, follow these steps. --> [!TIP] -> Review the [current limitations](#current-limitations) before you proceed. If your dashboard includes tiles that aren't currently supported in the new experience, you can still create a new dashboard based on the original one. However, any tiles that aren't yet available won't be included. --1. Navigate to the dashboard that you want to start with. You can do this by selecting **Dashboard** from the Azure menu, then selecting the desired dashboard. Alternately, in the new Dashboard hub, expand **Dashboards** and then select either **Private dashboards** or **Shared dashboards** to find your dashboard. -1. From that dashboard, select **Try it now**. -- :::image type="content" source="media/dashboard-hub/dashboard-try-it-now.png" alt-text="Screenshot showing the Try it now link for a dashboard."::: -- The dashboard opens in the new Dashboard hub editing experience. Follow the process described in the previous section to publish the dashboard as a new shared dashboard, or read on to learn how to make edits to your dashboard before publishing. --## Edit a dashboard --After you create a dashboard, you can add, resize, and arrange tiles that show your Azure resources or display other helpful information. --To open the editing page for a dashboard, select **Edit** from its command bar. Make changes as described in the sections below, then select **Publish dashboardV2** when you're finished. --### Add tiles from the Tile Gallery --To add tiles to a dashboard by using the Tile Gallery, follow these steps. --1. Select **Add tile** to open the Tile Gallery. -1. Select the tile you want to add to your dashboard, then select **Add**. Alternately, you can drag the tile to the desired location in your grid. -1. To configure the tile, select **Edit** to open the tile editor. -- :::image type="content" source="media/dashboard-hub/dashboard-hub-edit-tile.png" alt-text="Screenshot of the Edit Tile option in the Dashboard hub in the Azure portal."::: --1. Make the desired changes to the tile, including editing its title or changing its configuration. When you're done, select **Apply changes**. 
--### Resize or rearrange tiles --To change the size of a tile, select the arrow in the bottom right corner of the tile, then drag to resize it. If there's not enough grid space to resize the tile, it bounces back to its original size. --To change the placement of a tile, select it and then drag it to a new location on the dashboard. --Repeat these steps as needed until you're happy with the layout of your tiles. --### Delete tiles --To remove a tile from the dashboard, hover in the upper right corner of the tile and then select **Delete**. --### Manage tabs --The new dashboard experience lets you create multiple tabs where you can group information. To create tabs: --1. Select **Manage tabs** from the command bar to open the **Manage tabs** pane. -- :::image type="content" source="media/dashboard-hub/dashboard-hub-manage-tabs.png" alt-text="Screenshot of the Manage tabs page in the Dashboard hub in the Azure portal."::: --1. Enter a name for each tab you want to create. -1. To change the tab order, drag and drop your tabs, or select the checkbox next to a tab and use the **Move up** and **Move down** buttons. -1. When you're finished, select **Apply changes**. --You can then select each tab to make individual edits. --### Apply dashboard filters --To add filters to your dashboard, select **Parameters** from the command bar to open the **Manage parameters** pane. --The options you see depend on the tiles used in your dashboard. For example, you may see options to filter data for a specific subscription or location. --If your dashboard includes the **Metrics** tile, the default parameters are **Time range** and **Time granularity**. ---To edit a parameter, select the pencil icon. --To add a new parameter, select **Add**, then configure the parameter as desired. --To remove a parameter, select the trash can icon. --### Pin content from a resource page --Another way to add tiles to your dashboard is directly from a resource page. --Many resource pages include a pin icon in the command bar, which means that you can pin a tile representing that resource. ---In some cases, a pin icon may also appear by specific content within a page, which means you can pin a tile for that specific content, rather than the entire page. For example, you can pin some resources through the context pane. ---To pin content to your dashboard, select the **Pin to dashboard** option or the pin icon. Be sure to select the **Shared** dashboard type. You can also create a new dashboard that includes this pin by selecting **Create new**. --## Export a dashboard --You can export a dashboard from the Dashboard hub to view its structure programmatically. These exported templates can also be used as the basis for creating future dashboards. --To export a dashboard, select **Export**. Select the option for the format you wish to download: --- **ARM template**: Downloads an ARM template representation of the dashboard.-- **Dashboard**: Downloads a JSON representation of the dashboard.-- **View**: Downloads a declarative view of the dashboard.--After you make your selection, you can view the downloaded version in the editor of your choice. --## Understand access control --Published dashboards are implemented as Azure resources. Each dashboard exists as a manageable item contained in a resource group within your subscription. You can manage access control through the Dashboard hub. 
--Azure role-based access control (Azure RBAC) lets you assign users to roles at different levels of scope: management group, subscription, resource group, or resource. Azure RBAC permissions are inherited from higher levels down to the individual resource. In many cases, you may already have users assigned to roles for the subscription that will give them access to the published dashboard. --For example, users who have the **Owner** or **Contributor** role for a subscription can list, view, create, modify, or delete dashboards within the subscription. Users with a custom role that includes the `Microsoft.Portal/Dashboards/Write` permission can also perform these tasks. --Users with the **Reader** role for the subscription (or a custom role with the `Microsoft.Portal/Dashboards/Read` permission) can list and view dashboards within that subscription, but they can't modify or delete them. These users can make private copies of dashboards for themselves. They can also make local edits to a published dashboard for their own use, such as when troubleshooting an issue, but they can't publish those changes back to the server. These users can also view these dashboards in the Azure mobile app. --To expand access to a dashboard beyond the access granted at the subscription level, you can assign permissions to an individual dashboard, or to a resource group that contains several dashboards. For example, if a user has limited permissions across the subscription, but needs to be able to edit one particular dashboard, you can assign a different role with more permissions (such as Contributor) for that dashboard only. |
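If you prefer to script the per-dashboard role assignment described above, you can scope it to a single dashboard resource with the Azure CLI. This is a minimal sketch; the subscription ID, resource group, dashboard name, and user are placeholders you'd replace with your own values.

```azurecli
# Placeholder values - replace with your own identifiers.
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroup="my-dashboards-rg"
dashboardName="my-shared-dashboard"
userPrincipalName="user@contoso.com"

# Grant Contributor on one dashboard only, rather than on the whole subscription.
az role assignment create \
  --assignee "$userPrincipalName" \
  --role "Contributor" \
  --scope "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Portal/dashboards/$dashboardName"
```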
azure-portal | Get Subscription Tenant Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md | - Title: Get subscription and tenant IDs in the Azure portal -description: Learn how to locate and copy the IDs of Azure tenants and subscriptions. Previously updated : 09/27/2023----# Get subscription and tenant IDs in the Azure portal --A tenant is a [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) entity that typically encompasses an organization. Tenants can have one or more subscriptions, which are agreements with Microsoft to use cloud services, including Azure. Every Azure resource is associated with a subscription. --Each subscription has an ID associated with it, as does the tenant to which a subscription belongs. As you perform different tasks, you may need the ID for a subscription or tenant. You can find these values in the Azure portal. --## Find your Azure subscription --Follow these steps to retrieve the ID for a subscription in the Azure portal. --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Under the Azure services heading, select **Subscriptions**. If you don't see **Subscriptions** here, use the search box to find it. -1. Find the subscription in the list, and note the **Subscription ID** shown in the second column. If no subscriptions appear, or you don't see the right one, you may need to [switch directories](set-preferences.md#switch-and-manage-directories) to show the subscriptions from a different Microsoft Entra tenant. -1. To easily copy the **Subscription ID**, select the subscription name to display more details. Select the **Copy to clipboard** icon shown next to the **Subscription ID** in the **Essentials** section. You can paste this value into a text document or other location. -- :::image type="content" source="media/get-subscription-tenant-id/copy-subscription-id.png" alt-text="Screenshot showing the option to copy a subscription ID in the Azure portal."::: --> [!TIP] -> You can also list your subscriptions and view their IDs programmatically by using [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) (Azure PowerShell) or [az account list](/cli/azure/account#az-account-list) (Azure CLI). --<a name='find-your-azure-ad-tenant'></a> --## Find your Microsoft Entra tenant --Follow these steps to retrieve the ID for a Microsoft Entra tenant in the Azure portal. --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Confirm that you are signed into the tenant for which you want to retrieve the ID. If not, [switch directories](set-preferences.md#switch-and-manage-directories) so that you're working in the right tenant. -1. Under the Azure services heading, select **Microsoft Entra ID**. If you don't see **Microsoft Entra ID** here, use the search box to find it. -1. Find the **Tenant ID** in the **Basic information** section of the **Overview** screen. -1. Copy the **Tenant ID** by selecting the **Copy to clipboard** icon shown next to it. You can paste this value into a text document or other location. 
-- :::image type="content" source="media/get-subscription-tenant-id/copy-tenant-id.png" alt-text="Screenshot showing the option to copy a tenant ID in the Azure portal."::: --> [!TIP] -> You can also find your tenant ID programmatically by using [Azure PowerShell](/azure/active-directory/fundamentals/how-to-find-tenant#find-tenant-id-with-powershell) or [Azure CLI](/azure/active-directory/fundamentals/how-to-find-tenant#find-tenant-id-with-cli). --## Next steps --- Learn more about [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md).-- Learn how to manage Azure subscriptions [with Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli) or [with Azure PowerShell](/powershell/azure/manage-subscriptions-azureps).-- Learn how to [manage Azure portal settings and preferences](set-preferences.md). |
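As a quick illustration of the programmatic options mentioned in the tip above, the following Azure CLI commands list your subscriptions with their IDs and show the tenant ID for the currently selected subscription.

```azurecli
# List subscriptions with their IDs and tenant IDs.
az account list --query "[].{Name:name, SubscriptionId:id, TenantId:tenantId}" --output table

# Show just the tenant ID for the currently selected subscription.
az account show --query tenantId --output tsv
```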
azure-portal | Manage Filter Resource Views | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/manage-filter-resource-views.md | - Title: View and filter Azure resource information -description: Filter information and use different views to better understand your Azure resources. - Previously updated : 04/12/2024---# View and filter Azure resource information --The Azure portal enables you to browse detailed information about resources across your Azure subscriptions. This article shows you how to filter information and use different views to better understand your resources. --This article focuses on filtering information on the **All resources** screen. Screens for individual resource types, such as virtual machines, may have different options. --## Filter resources --Start exploring **All resources** by using filters to focus on a subset of your resources. The following screenshot shows filtering on resource groups, selecting two of the four resource groups in a subscription. ---You can combine filters, including those based on text searches. For example, after selecting specific resource groups, you can enter text in the filter box, or select a different filter option. --To change which columns are included in a view, select **Manage view**, then select **Edit columns**. ---## Save, use, and delete views --You can save views that include the filters and columns you've selected. To save and use a view: --1. Select **Manage view**, then select **Save view**. --1. Enter a name for the view, then select **Save**. The saved view now appears in the **Manage view** menu. -- :::image type="content" source="media/manage-filter-resource-views/simple-view.png" alt-text="Saved view"::: --Try switching between **Default** and one of your own views to see how that affects the list of resources displayed. --You can also select **Choose favorite view** to use one of your views as the default view for **All resources**. --To delete a view you've created: --1. Select **Manage view**, then select **Browse all views for "All resources"**. --1. In the **Saved views** pane, select the **Delete** icon ![Delete view icon](media/manage-filter-resource-views/icon-delete.png) next to the view that you want to delete. Select **OK** to confirm the deletion. --## Export information from a view --You can export the resource information from a view. To export information in CSV format: --1. Select **Export to CSV**. -- :::image type="content" source="media/manage-filter-resource-views/export-csv.png" alt-text="Screenshot of exporting to CSV format"::: --1. Save the file locally, then open the file in Excel or another application that supports the CSV format. --As you move around the portal, you'll see other areas where you can export information, such as an individual resource group. --## Summarize resources with visuals --The views we've looked at so far have been _list views_, but there are also _summary views_ that include visuals. You can save and use these views just like you can with list views. Filters persist between the two types of views. There are standard views, like the **Location** view shown below, as well as views that are relevant to specific services, such as the **Status** view for Azure Storage. ---To save and use a summary view: --1. From the view menu, select **Summary view**. -- :::image type="content" source="media/manage-filter-resource-views/menu-summary-view.png" alt-text="Summary view menu"::: --1. 
The summary view enables you to summarize by different attributes, including **Location** and **Type**. Select a **Summarize by** option and an appropriate visual. The following screenshot shows the **Type summary** with a **Bar chart** visual. -- :::image type="content" source="media/manage-filter-resource-views/type-summary-bar-chart.png" alt-text="Type summary showing a bar chart"::: --1. Select **Manage view**, then select **Save view** to save this view, just like you did with the list view. --In the summary view, you can select an item to view details filtered to that item. Using the previous example, you can select a bar in the chart under **Type summary** to view a list filtered down to one type of resource. ---## Run queries in Azure Resource Graph --Azure Resource Graph provides efficient and performant resource exploration with the ability to query at scale across a set of subscriptions. The **All resources** screen in the Azure portal includes a link to open a Resource Graph query scoped to the current filtered view. --To run a Resource Graph query: --1. Select **Open query**. -- :::image type="content" source="media/manage-filter-resource-views/open-query.png" alt-text="Open Azure Resource Graph query"::: --1. In **Azure Resource Graph Explorer**, select **Run query** to see the results. -- :::image type="content" source="media/manage-filter-resource-views/run-query.png" alt-text="Run Azure Resource Graph query"::: --For more information, see [Run your first Resource Graph query using Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md). --## Next steps --- Read an [overview of the Azure portal](azure-portal-overview.md).-- Learn how to [create and share dashboards in the Azure portal](azure-portal-dashboards.md). |
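If you'd rather run a similar query from the command line instead of the portal, the Azure CLI offers `az graph query` through the resource-graph extension. This is an illustrative sketch, assuming the extension is installed (for example, with `az extension add --name resource-graph`).

```azurecli
# Count resources by type across the subscriptions you have access to,
# similar to opening the filtered view in Azure Resource Graph Explorer.
az graph query -q "Resources | summarize count() by type | order by count_ desc"
```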
azure-portal | Alerts Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/alerts-notifications.md | - Title: Manage alerts and notifications in the Azure mobile app -description: Use Azure mobile app notifications to get up-to-date alerts and information on your resources and services. Previously updated : 11/2/2023----# Manage alerts and notifications in the Azure mobile app --Use Azure mobile app notifications to get up-to-date alerts and information on your resources and services. --Azure mobile app notifications offer users flexibility in how they receive push notifications. --Azure mobile app notifications are a way to monitor and manage your Azure resources from your mobile device. You can use the Azure mobile app to view the status of your resources, get alerts on issues, and take corrective actions. --This article describes different options for configuring your notifications in the Azure mobile app. --## Enable push notifications for Service Health alerts --To enable push notifications for Service Health on specific subscriptions: --1. Open the Azure mobile app and sign in with your Azure account. -1. Select the menu icon on the top left corner, then select **Settings**. -1. Select **Service Health issue alerts**. -- :::image type="content" source="media/alerts-notifications/service-health.png" alt-text="Screenshot showing the Service Health issue alerts section of the Settings page in the Azure mobile app."::: --1. Use the toggle switches to select subscriptions for which you want to receive push notifications. -1. Select **Save** to confirm your changes. --## Enable push notifications for custom alerts --You can enable push notifications in the Azure mobile app for custom alerts that you define. To do so, you first [create a new alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric) in the Azure portal. --1. Sign in to the [Azure portal](https://portal.azure.com) using the same Azure account information that you're using in the Azure mobile app. -1. In the Azure portal, open **Azure Monitor**. -1. Select **Alerts**. -1. Select **Create alert rule** and select the target resource that you want to monitor. -1. Configure the condition, severity, and action group for your alert rule. You can use an existing [action group](/azure/azure-monitor/alerts/action-groups), or create a new one. -1. In the action group, make sure to add a notification type of **Push Notification** and select the Azure mobile app as the destination. This enables notifications in your Azure mobile app. -1. Select **Create alert rule** to save your changes. --## View alerts --There are several ways to view current alerts on the Azure mobile app. --### Notifications list view --Select the **Notifications** icon on the bottom toolbar to see a list view of all current alerts. ---In the list view you have the option to search for specific alerts or utilize the filter option in the top right of the screen to filter by specific subscriptions. ---When you select a specific alert, you'll see an alert details page that will provide more information, including: --- Severity-- Fired time-- App Service plan-- Alert condition-- User response-- Why the alert fired-- Additional details- - Description - - Monitor service - - AlertID - - Suppression status - - Target resource type - - Signal type --You can change the user response by selecting the edit option (pencil icon) next to the current response. 
Select either **New**, **Acknowledged**, or **Closed**, and then select **Done** in the top right corner. You can also select **History** near the top of the screen to view the timeline of events for the alert. ---### Alerts card on Home view --You can also view alerts on the **Alerts** tile on your [Azure mobile app **Home**](home.md). --The **Alerts** tile includes two viewing options: **List** or **Chart**. --The **List** view will show your latest alerts along with top level information including: --- Title-- Alert state-- Severity-- Time--You can select **See All** to display the notifications list view showing all of your alerts. ---Alternately, you can select the **Chart** view to see the severity of the latest alerts on a bar chart. ---## Next steps --- Learn more about the [Azure mobile app](overview.md).-- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/ReferAzureIOSAlertsNotifsMobileAppDocs), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc). |
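The alert rule described earlier can also be created from the command line rather than the portal. The sketch below assumes an existing action group that already includes an Azure mobile app push notification receiver, and uses placeholder resource IDs.

```azurecli
# Placeholder IDs - replace with your own VM and action group.
vmId="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
actionGroupId="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<action-group-name>"

# Fire a Sev 2 alert when average CPU stays above 80 percent.
az monitor metrics alert create \
  --name "high-cpu-alert" \
  --resource-group "<rg>" \
  --scopes "$vmId" \
  --condition "avg Percentage CPU > 80" \
  --severity 2 \
  --action "$actionGroupId"
```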
azure-portal | Cloud Shell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/cloud-shell.md | - Title: Use Cloud Shell in the Azure mobile app -description: Use the Azure mobile app to execute commands in Cloud Shell. Previously updated : 05/21/2024----# Use Cloud Shell in the Azure mobile app --The Azure Cloud Shell feature in the Azure mobile app provides an interactive, authenticated, browser-accessible terminal for managing Azure resources. --You can execute commands in Cloud Shell through either Bash or PowerShell, and you can switch shells anytime. ---## Access Cloud Shell in the Azure mobile app --To launch Cloud Shell from within the Azure mobile app, select the **Cloud Shell** card on the [Azure mobile app **Home**](home.md). If you don't see the **Cloud Shell** card, you may need to scroll down. You can rearrange the order in which cards are displayed by selecting the **Edit** (pencil) icon on Azure mobile app **Home**. ---## Set up storage account --Cloud Shell requires a storage account to be associated with your sessions (or an [ephemeral session](/azure/cloud-shell/get-started/ephemeral)). If you already set up a storage account for Cloud shell, or you opted to use ephemeral sessions, that selection is remembered when you launch Cloud Shell in the Azure mobile app. --If you haven't used Cloud Shell before, you need to create a new storage account for Cloud Shell. When you first launch Cloud Shell, you'll be prompted to select a subscription in which a new storage account will be created. ---## Use toolbar actions --The Cloud Shell toolbar in the Azure mobile app offers several helpful commands: ---- Select **X** to close Cloud Shell and return to **Home**.-- Select the dropdown to switch between Bash and PowerShell.-- Select the **Power** button to restart Cloud Shell with a new session.-- Select the **Clipboard** icon to paste content from your device's clipboard.--## Current limitations --The Cloud Shell feature in the Azure mobile app has certain limitations compared to the same feature in the Azure portal. The following functionalities are currently unavailable in the Azure mobile app: --- Command history-- IntelliSense-- File/script uploading-- Cloud Shell editor-- Port preview-- Retrieve additional tokens-- Reset user settings-- Font changes--## Next steps --- Learn more about the [Azure mobile app](overview.md).-- Learn more about [Azure mobile app **Home**](home.md) and how to customize it.-- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).-- Learn more about [Azure Cloud Shell](/azure/cloud-shell/overview). |
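Anything you can run in Cloud Shell from a browser also works from the mobile app's terminal. A few common checks are sketched below in Bash with the Azure CLI; the resource names are placeholders.

```azurecli
# Confirm which subscription the session is using.
az account show --output table

# List virtual machines with their power state.
az vm list --show-details --query "[].{Name:name, State:powerState}" --output table

# Restart a VM while away from your desk (placeholder names).
az vm restart --resource-group "<rg>" --name "<vm-name>"
```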
azure-portal | Home | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/home.md | - Title: Azure mobile app Home -description: Azure mobile app Home surfaces the most essential information and the resources you use most often. Previously updated : 08/28/2024----# Azure mobile app Home --Azure mobile app **Home** surfaces the most essential information and the resources you use most often. It provides a convenient way to access and manage your Azure resources or your Microsoft Entra tenant from your mobile device. --## Display cards --Azure mobile app **Home** consists of customizable display cards that show information and let you quickly access frequently used resources and services. You can select and organize these cards depending on what's most important for you and how you want to use the app. --Current card options include: --- **Learn**: Explore the [most popular Microsoft Learn modules for Azure](learn-training.md).-- **Resource groups**: Quick access to all your resource groups.-- **Microsoft Entra ID**: Quick access to [Microsoft Entra ID management](microsoft-entra-id.md).-- **Azure services**: Quick access to Virtual machines, Web Apps, SQL databases, and Application Insights.-- **Latest alerts**: A list and chart view of the alerts fired in the last 24 hours and the option to [see all notifications](alerts-notifications.md).-- **Service Health**: A current count of service issues, maintenance, health advisories, and security advisories.-- **Cloud Shell**: Quick access to the [Cloud Shell terminal](cloud-shell.md).-- **Recent resources**: A list of your four most recently viewed resources, with the option to see all.-- **Favorites**: A list of the resources you have added to your favorites, and the option to see all.-- **Dashboards (preview)**: Access to [shared dashboards](../dashboard-hub.md).---## Customize Azure mobile app Home --You can customize the cards displayed on your Azure mobile app **Home** by selecting the :::image type="icon" source="media/edit-icon.png" border="false"::: **Edit** icon in the top right of **Home**. From there, you can select which cards you see by toggling the switch. You can also drag and drop the display cards in the list to reorder how they appear on your **Home**. --For instance, you could rearrange the default order as follows: ---This would result in a **Home** similar to the following image: ---## Global search --The global search button appears at the top left of **Home**. Select this button to search for anything specific you may be looking for in your Azure account. This includes: --- Resources-- Services-- Resource groups-- Subscriptions--You can filter these results by subscription using the **Home** filtering option. --## Filtering --In the top right of **Home**, you'll see a filter option. When you select the filter icon, the app gives you the option to filter the results shown on **Home** by specific subscriptions. This includes results for: --- Resource groups-- Azure services-- Latest alerts-- Service health-- Global search--This filtering option is specific to **Home**, and doesn't filter for the other bottom navigation sections. --## Next steps --- Learn more about the [Azure mobile app](overview.md).-- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).- |
azure-portal | Intune Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/intune-management.md | - Title: Use Microsoft Intune MAM on devices that run the Azure mobile app -description: Learn about setting and enforcing app protection policies on devices that run the Azure mobile app. Previously updated : 06/17/2024--- - build-2024 ---# Use Microsoft Intune mobile application management (MAM) on devices that run the Azure mobile app --[Microsoft Intune mobile application management (MAM)](/mem/intune/apps/app-management) is a cloud-based service that allows an organization to protect its data at the app level on both company devices and users' personal devices, such as smartphones, tablets, and laptops. --Since the Azure mobile app is an Intune-protected app, app protection policies (APP) can be applied and enforced on devices that run the Azure mobile app. --## App protection policies and settings --Intune [app protection policies (APP)](/mem/intune/apps/app-protection-policy) are rules or sets of action that ensure an organization's data remains safe. Administrators use these policies to control how data is accessed and shared. For an overview of how to create an app protection policy, see [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies). --Available app protection settings are continuously being updated and may vary across platforms. For details about the currently available settings, see [Android app protection policy settings in Microsoft Intune](/mem/intune/apps/app-protection-policy-settings-android) and [iOS app protection policy settings](/mem/intune/apps/app-protection-policy-settings-ios) devices. --## User management --With Intune MAM, you can select and assign groups of users to include and exclude in your policies, allowing you to control who has access to your data in Azure Mobile. For more information on user and group assignments, see [Include and exclude app assignments in Microsoft Intune](/mem/intune/apps/apps-inc-exl-assignments). --An Intune license is required in order for app protection policies to apply correctly to a user or group. If an unlicensed user is included in an app protection policy, the rules of that policy won't be applied to that user. --Only Intune-targeted users and groups will be subject to the rules of the app protection policy. To ensure data remains protected, verify that the necessary groups and users were included in your policy during creation. --Users that are out of compliance with their MAM policy or Conditional Access policy may lose access to data and resources, including full access to the Azure mobile app. When a user is marked as out of compliance, the Azure mobile app may initially try automated remediation to regain compliance. If automatic remediation is disabled or unsuccessful, the user is signed out of the app. --You can use [Microsoft Entra Conditional Access policies in combination with Intune compliance policies](/mem/intune/protect/app-based-conditional-access-intune) to ensure that only managed apps and policy-compliant users can access corporate data. --## User experience --When Intune-licensed Azure mobile app users are targeted with an Intune MAM policy, they are subject to all rules and actions dictated by their policy. When these users sign in to the Azure Mobile app, policy rules are retrieved and enacted immediately, before allowing access to any corporate data. 
--For example, a user's MAM policy may specify a 6-digit PIN requirement. When that user first signs into the Azure mobile app, they see a message from Intune MAM that describes their current device state and asks them to set an access PIN. ---After the user sets up their PIN, they'll be prompted to enter that PIN every time they sign in. The PIN must be entered in order to use the Azure mobile app. ---If a user is marked as out of compliance with their policy (following any remediation steps), they'll be signed out of the app. For example, a user might switch to a different policy-protected account that was marked as out of compliance. In this case, the app signs them out and displays a message notifying the user that they must sign back in. ---## Next steps --- Learn more about the [Microsoft Intune](/mem/intune/fundamentals/what-is-intune).-- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc), or [Amazon App Store](https://aka.ms/azureapp/amazon/doc). |
azure-portal | Learn Training | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/learn-training.md | - Title: Learn about Azure in the Azure mobile app -description: The Microsoft Learn features in the Azure mobile app help you learn Azure skills anytime, anywhere. Previously updated : 02/26/2024----# Learn about Azure in the Azure mobile app --The Microsoft Learn features on the Azure mobile app are designed to help you learn anytime, anywhere. Browse and view the most popular Azure modules, with training on fundamentals, security, AI, and more. Access the Azure certification page, Azure Learn landing page, Q&A, and other useful pages. --In this article, we'll walk through some of the features you can use to access training content and grow your Azure skills, right from within the app. With the Azure mobile app, you can learn Azure at your own pace and convenience. --You can access the **Learn** Page from [Azure mobile app **Home**](home.md). --## Most popular lessons --When you arrive to the Learn page on the Azure mobile app, the **Most popular lessons** section shows the most popular lessons. These modules are the highest-viewed Azure content that can be easily completed in a short amount of time. ---Each lesson card shows information about the module, including the title, average time to complete, and user rating. --To see more of the most popular lessons, select **More** in the top right. The current top 10 most popular lessons will be shown. ---To start a lesson, just select it to begin. Remember to sign in to your Microsoft account to save your progress! --## Learn Links --The **Learn Links** section shows buttons that take you to different experiences across Microsoft Learn, including: --- **Azure Learn**: Shows learning paths and other resources to help you build Azure skills.-- **Azure Basics**: Launches the Microsoft Azure Fundamentals learning path with three modules about basic cloud concepts and Azure services.-- **Certifications**: Shows information about available Azure-related Microsoft Certifications.-- **Azure Q&A**: Explore technical questions and answers about Azure.--Select any of these links to explore their content. --## Learn more about Azure AI --The **Learn more about Azure AI** section showcases a few of the most popular learning modules focused on Azure AI. The content you see here will vary, based on popularity and new releases. Select any module to open and begin it. As noted earlier, be sure to sign in with your Microsoft account if you want to save your progress. ---## Next steps --- Learn more about the [Azure mobile app](overview.md).-- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/ReferAzureIOSMSLearnMobileAppDocs), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc). |
azure-portal | Microsoft Copilot In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/microsoft-copilot-in-azure.md | - Title: Use Microsoft Copilot in Azure with the Azure mobile app -description: You can use the Azure mobile app to access Microsoft Copilot in Azure (preview) and benefit from its features. Previously updated : 05/21/2024--- - build-2024 ---# Use Microsoft Copilot in Azure with the Azure mobile app --[Microsoft Copilot in Azure](/azure/copilot/overview) is an AI-powered tool that lets you do more with Azure. Copilot uses Large Language Models (LLMs) and insights about your Azure environment to help you work more efficiently. You can use the Azure mobile app to access Copilot in Azure (preview) and benefit from its features. --With Copilot in Azure, you can explore the range of services and resources that Azure offers and find the best ones for your needs. You can ask questions in natural language and get personalized information based on your own Azure resources and environment. Copilot in Azure can also help you achieve your goals by generating code snippets and suggestions. --To learn more about Copilot in Azure, see [What is Microsoft Copilot in Azure?](/azure/copilot/overview) and [Responsible AI FAQ for Microsoft Copilot in Azure (preview)](/azure/copilot/responsible-ai-faq). --> [!IMPORTANT] -> Microsoft Copilot in Azure is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --## Access Copilot in Azure on the Azure mobile app --In the Azure Mobile app, you can find the **Copilot** icon on the top right of your screen in the navigation bar. The first time you use Copilot in Azure, a welcome screen tells you more about the feature and how to use it. --To start a conversation with Copilot in Azure, just type or speak your query or command in natural language. You can also use the provided prompts that appear above the keyboard in the mobile app. These prompts are based on common tasks and your Azure environment. Copilot in Azure responds with relevant information, code snippets, or suggestions that you can utilize to monitor and manage your resources. When needed, it prompts you to provide more information or select a specific resource. --For tips on how to create prompts that provide the most helpful responses, see [Write effective prompts for Microsoft Copilot in Azure](/azure/copilot/write-effective-prompts). --To provide feedback on any response, select the thumbs up/down icon. This feedback helps us improve Copilot and make it more useful for you. --## Capabilities --To learn more about ways to use Copilot in Azure, see [Microsoft Copilot in Azure capabilities](/azure/copilot/capabilities). Keep in mind that there are some differences in functionality when using Copilot in Azure from within the Azure mobile app. Some key scenarios in the Azure mobile app include: --- Troubleshooting resources-- Generating CLI and PowerShell scripts-- Generating and running Azure Resource Graph queries-- Get information about Azure concepts, services, and offerings--In some cases, Microsoft Copilot in Azure may not be able to complete your request, or may have a limited ability to respond. For more information, see [Current limitations](/azure/copilot/capabilities#current-limitations). 
--## Next steps --- Learn more about the [Azure mobile app](overview.md).-- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc). |
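As an illustration of the script-generation scenario mentioned above, a prompt such as "give me a CLI script that lists my stopped VMs" might produce something along these lines. The exact output varies from session to session; treat this as a sketch of the kind of result to expect, not what Copilot will return verbatim.

```azurecli
# List VMs that are not currently running, including their resource groups.
az vm list --show-details \
  --query "[?powerState!='VM running'].{Name:name, ResourceGroup:resourceGroup, State:powerState}" \
  --output table
```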
azure-portal | Microsoft Entra Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/microsoft-entra-id.md | - Title: Use Microsoft Entra ID with the Azure mobile app -description: Use the Azure mobile app to manage users and groups with Microsoft Entra ID. Previously updated : 04/04/2024----# Use Microsoft Entra ID with the Azure mobile app --The Azure mobile app provides access to Microsoft Entra ID. You can perform tasks such as managing users and updating group memberships from within the app. --To access Microsoft Entra ID, open the Azure mobile app and sign in with your Azure account. From **Home**, scroll down to select the **Microsoft Entra ID** card. --> [!NOTE] -> Your account must have the appropriate permissions in order to perform these tasks. For example, to invite a user to your tenant, you must have a role that includes this permission, such as [Guest Inviter](/entra/identity/role-based-access-control/permissions-reference) or [User Administrator](/entra/identity/role-based-access-control/permissions-reference). --## Invite a user to the tenant --To invite a [guest user](/entra/external-id/what-is-b2b) to your tenant from the Azure mobile app: --1. In **Microsoft Entra ID**, select **Users**, then select the **+** icon in the top right corner. -1. Select **Invite user**, then enter the user's name and email address. You can optionally add a message for the user. -1. Select **Invite** in the top right corner, then select **Save** to confirm your changes. --## Add users to a group --To add one or more users to a group from the Azure mobile app: --1. In **Microsoft Entra ID**, select **Groups**. -1. Search or scroll to find the desired group, then tap to select it. -1. On the **Members** card, select **See All**. The current list of members is displayed. -1. Select the **+** icon in the top right corner. -1. Search or scroll to find users you want to add to the group, then select one or more users by tapping the circle next to their name. -1. Select **Add** in the top right corner to add the selected users to the group. --## Add group memberships for a specified user --You can also add a single user to one or more groups in the **Users** section of **Microsoft Entra ID** in the Azure mobile app. To do so: --1. In **Microsoft Entra ID**, select **Users**, then search or scroll to find and select the desired user. -1. On the **Groups** card, select **See All** to display all current group memberships for that user. -1. Select the **+** icon in the top right corner. -1. Search or scroll to find groups to which this user should be added, then select one or more groups by tapping the circle next to the group name. -1. Select **Add** in the top right corner to add the user to the selected groups. --## Manage authentication methods or reset password for a user --To [manage authentication methods](/entra/identity/authentication/concept-authentication-methods-manage) or [reset a user's password](/entra/fundamentals/users-reset-password-azure-portal): --1. In **Microsoft Entra ID**, select **Users**, then search or scroll to find and select the desired user. -1. On the **Authentication methods** card, select **Manage**. -1. Select **Reset password** to assign a temporary password to the user, or **Authentication methods** to manage authentication methods for self-service password reset. 
--> [!NOTE] -> You won't see the **Authentication methods** card if you don't have the appropriate permissions to manage authentication methods and/or password changes for a user. --## Investigate risky users and sign-ins --[Microsoft Entra ID Protection](/entra/id-protection/overview-identity-protection) provides organizations with reporting they can use to [investigate identity risks in their environment](/entra/id-protection/howto-identity-protection-investigate-risk). --If you have the [necessary permissions and license](/entra/id-protection/overview-identity-protection#required-roles), you'll see details in the **Risky users** and **Risky sign-ins** sections within **Microsoft Entra ID**. You can open these sections to view more information and perform some management tasks. --### Manage risky users --1. In **Microsoft Entra ID**, scroll down to the **Security** card and then select **Risky users**. -1. Search or scroll to find and select a specific risky user. -1. Review basic information for this user, a list of their risky sign-ins, and their risk history. -1. To [take action on the user](/entra/id-protection/howto-identity-protection-investigate-risk), select the three dots near the top of the screen. You can: -- * Reset the user's password - * Confirm user compromise - * Dismiss user risk - * Block the user from signing in (or unblock, if previously blocked) --### Monitor risky sign-ins --1. In **Microsoft Entra ID**, scroll down to the **Security** card and then select **Risky sign-ins**. It may take a minute or two for the list of all risky sign-ins to load. --1. Search or scroll to find and select a specific risky sign-in. --1. Review details about the risky sign-in. --## Activate Privileged Identity Management (PIM) roles --If you have been made eligible for an administrative role through Microsoft Entra Privileged Identity Management (PIM), you must activate the role assignment when you need to perform privileged actions. This activation can be done from within the Azure mobile app. --For more information, see [Activate PIM roles using the Azure mobile app](/entra/id-governance/privileged-identity-management/pim-how-to-activate-role). --## Next steps --- Learn more about the [Azure mobile app](overview.md).-- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/ReferAzureIOSEntraIDMobileAppDocs), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc). |
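The same group-membership change can be scripted when you're back at a terminal. A minimal Azure CLI sketch follows, using placeholder object IDs that you'd replace with your own group and user identifiers.

```azurecli
# Placeholder identifiers - replace with your group's and user's object IDs.
groupId="<group-object-id>"
userId="<user-object-id>"

# Add the user to the group.
az ad group member add --group "$groupId" --member-id "$userId"

# Verify the membership.
az ad group member check --group "$groupId" --member-id "$userId"
```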
azure-portal | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/overview.md | - Title: What is the Azure mobile app? -description: The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device. Previously updated : 06/06/2024----# What is the Azure mobile app? --The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device. You can use the app to view the status, performance, and health of your resources, as well as perform common operations such as starting and stopping virtual machines, web apps, and databases. You can also access Azure Cloud Shell from the app and get push notifications and alerts about your resources. The Azure mobile app is available for iOS and Android devices, and you can download it for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc). --To use the app, you need an Azure account with the appropriate permissions to access your resources. The app supports multiple accounts, and you can switch between them easily. The app also supports Microsoft Entra ID authentication and multifactor authentication for enhanced security. The Azure mobile app is a convenient way to stay connected to your Azure resources and Entra tenant, and manage much more on the go. --## Azure mobile app Home --When you first open the Azure mobile app, **Home** shows an overview of your Azure account. ---View and customize display cards, including: --- Microsoft Entra ID-- Resource groups-- Azure services-- Latest alerts-- Service Health-- Cloud Shell-- Recent resources-- Favorites-- Learn-- Privileged Identity Management--You can select which of these tiles appear on **Home** and rearrange them. --For more information, see [Azure mobile app Home](home.md). --## Hamburger menu --The hamburger menu lets you select the environment, account, and directory (Azure tenant) you want to work in. The hamburger menu also houses several other settings and features, including: --- Billing/Cost management-- Settings-- Help & feedback-- Support requests-- Privacy + Terms--## Navigation --The Azure mobile app provides several areas that allow you to navigate to different sections of the app. On the bottom navigation bar, you'll find **Home**, **Subscriptions**, **Resources**, and **Notifications**. --On the top navigation bar, you'll find the hamburger button to open the hamburger menu, the search magnifying glass to explore your services and resources, the edit button to change the layout of the Azure mobile app **Home**, and the filter button to filter what content currently appears. --If available in your tenant, you can also access [Microsoft Copilot in Azure (preview)](microsoft-copilot-in-azure.md) by selecting the **Copilot** icon from the top navigation bar. --## Download the Azure mobile app --You can download the Azure mobile app today for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc). --## Next steps --- Learn about [Azure mobile app **Home**](home.md) and how to customize it.-- Learn about [alerts and notifications](alerts-notifications.md) in the Azure mobile app. |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | - Title: Built-in policy definitions for Azure portal -description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/06/2024-----# Azure Policy built-in definitions for Azure portal --This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy -definitions for Azure portal. For additional Azure Policy built-ins for other services, see -[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). --The name of each built-in policy definition links to the policy definition in the Azure portal. Use -the link in the **Version** column to view the source on the -[Azure Policy GitHub repo](https://github.com/Azure/azure-policy). --## Azure portal ---## Next steps --- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).-- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../governance/policy/concepts/effects.md). |
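To browse the same built-in definitions from the command line, you can query Azure Policy with the Azure CLI. The example below is a sketch that filters built-in definitions whose display name mentions the portal; adjust the filter to suit your own search.

```azurecli
# List built-in policy definitions whose display name mentions the portal.
az policy definition list \
  --query "[?policyType=='BuiltIn' && contains(displayName, 'portal')].{Name:displayName, Id:name}" \
  --output table
```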
azure-portal | Quick Create Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-bicep.md | - Title: Create an Azure portal dashboard by using a Bicep file -description: Learn how to create an Azure portal dashboard by using a Bicep file. -- Previously updated : 12/11/2023---# Quickstart: Create a dashboard in the Azure portal by using a Bicep file --A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to deploy a Bicep file to create a dashboard. The example dashboard shows the performance of a virtual machine (VM), along with some static information and links. ---## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).--## Review the Bicep file --The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-portal-dashboard/). This Bicep file is too long to show here. To view the Bicep file, see [main.bicep](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/main.bicep). --The Bicep file defines one Azure resource, a [Microsoft.Portal dashboards resource](/azure/templates/microsoft.portal/dashboards?pivots=deployment-language-bicep) that displays data about the VM that you'll create as part of the deployment. --The dashboard created by deploying this Bicep file requires an existing virtual machine. Before deploying the Bicep file, the script deploys an ARM template called [prereq.azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/prereqs/prereq.azuredeploy.json) that creates a virtual machine. --The virtual machine name is hard-coded as **SimpleWinVM** in the ARM template, to match what's used in the `main.bicep` file that creates the dashboard. You'll need to create your own administration username and password for this VM. This is a new username and password (not the account you use to sign in to Azure). The password must be complex. For more information, see [username requirements](/azure/virtual-machines/windows/faq#what-are-the-username-requirements-when-creating-a-vm-) -and [password requirements](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-). ---## Deploy the Bicep file --1. Save the Bicep file as **main.bicep** to your local computer. -1. Deploy the Bicep file using either Azure CLI or Azure PowerShell, using the script shown here. Replace the following values in the script: -- - <admin-user-name>: specify an administrator username. - - <admin-password>: specify an administrator password. - - <dns-label-prefix>: specify a DNS prefix. 
-- # [CLI](#tab/CLI) -- ```azurecli - $resourceGroupName = 'SimpleWinVmResourceGroup' - $location = 'eastus' - $adminUserName = '<admin-user-name>' - $adminPassword = '<admin-password>' - $dnsLabelPrefix = '<dns-label-prefix>' - $virtualMachineName = 'SimpleWinVM' - $vmTemplateUri = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/prereqs/prereq.azuredeploy.json' -- az group create --name $resourceGroupName --location $location - az deployment group create --resource-group $resourceGroupName --template-uri $vmTemplateUri --parameters adminUsername=$adminUserName adminPassword=$adminPassword dnsLabelPrefix=$dnsLabelPrefix - az deployment group create --resource-group $resourceGroupName --template-file main.bicep --parameters virtualMachineName=$virtualMachineName virtualMachineResourceGroup=$resourceGroupName - ``` -- # [PowerShell](#tab/PowerShell) -- ```azurepowershell - $resourceGroupName = 'SimpleWinVmResourceGroup' - $location = 'eastus' - $adminUserName = '<admin-user-name>' - $adminPassword = '<admin-password>' - $dnsLabelPrefix = '<dns-label-prefix>' - $virtualMachineName = 'SimpleWinVM' - $vmTemplateUri = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/prereqs/prereq.azuredeploy.json' -- $encrypted = ConvertTo-SecureString -string $adminPassword -AsPlainText -- New-AzResourceGroup -Name $resourceGroupName -Location $location - New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $vmTemplateUri -adminUsername $adminUserName -adminPassword $encrypted -dnsLabelPrefix $dnsLabelPrefix - New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile ./main.bicep -virtualMachineName $virtualMachineName -virtualMachineResourceGroup $resourceGroupName - ``` -- --After the deployment finishes, you should see a message indicating the deployment succeeded. --## Review deployed resources ---## Clean up resources --If you want to remove the VM and associated dashboard, delete the resource group that contains them. --1. In the Azure portal, search for **SimpleWinVmResourceGroup**, then select it in the search results. --1. On the **SimpleWinVmResourceGroup** page, select **Delete resource group**, enter the resource group name to confirm, then select **Delete**. --> [!CAUTION] -> Deleting a resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted. --## Next steps --For more information about dashboards in the Azure portal, see: --> [!div class="nextstepaction"] -> [Create and share dashboards in the Azure portal](azure-portal-dashboards.md) |
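The Bicep quickstart above verifies and cleans up through the Azure portal. As a rough sketch, the same checks can be done with Azure CLI, assuming the `SimpleWinVmResourceGroup` resource group from the scripts above and the Azure CLI `portal` extension (which provides the `az portal dashboard` commands).

```azurecli
# List the dashboards deployed to the quickstart's resource group (requires the "portal" extension)
az portal dashboard list --resource-group SimpleWinVmResourceGroup --output table

# Remove the resource group, the VM, and the dashboard when you're done
az group delete --name SimpleWinVmResourceGroup --yes
```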
azure-portal | Quick Create Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-template.md | - Title: Create an Azure portal dashboard by using an Azure Resource Manager template -description: Learn how to create an Azure portal dashboard by using an Azure Resource Manager template. -- Previously updated : 12/11/2023---# Quickstart: Create a dashboard in the Azure portal by using an ARM template --A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to deploy an Azure Resource Manager template (ARM template) to create a dashboard. The example dashboard shows the performance of a virtual machine (VM), along with some static information and links. ---If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal, where you can edit the details (such as the VM used in the dashboard) before you deploy. ---## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- A virtual machine. The dashboard you create in the next part of this quickstart requires an existing VM called `myVM1` located in a resource group called `SimpleWinVmResourceGroup`. You can create this VM by following these steps:-- 1. In the Azure portal, select **Cloud Shell** from the global controls at the top of the page. -- :::image type="content" source="media/quick-create-template/cloud-shell.png" alt-text="Screenshot showing the Cloud Shell option in the Azure portal."::: -- 1. In the **Cloud Shell** window, select **PowerShell**. -- :::image type="content" source="media/quick-create-template/powershell.png" alt-text="Screenshot showing the PowerShell option in Cloud Shell."::: -- 1. Copy the following command and enter it at the command prompt to create a resource group. -- ```powershell - New-AzResourceGroup -Name SimpleWinVmResourceGroup -Location EastUS - ``` -- 1. Next, copy the following command and enter it at the command prompt to create a VM in your new resource group. -- ```powershell - New-AzVm ` - -ResourceGroupName "SimpleWinVmResourceGroup" ` - -Name "myVM1" ` - -Location "East US" - ``` -- 1. Enter a username and password for the VM. This is a new username and password (not the account you use to sign in to Azure). The password must be complex. For more information, see [username requirements](/azure/virtual-machines/windows/faq#what-are-the-username-requirements-when-creating-a-vm-) and [password requirements](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-). -- After the VM has been created, move on to the next section. --## Review the template --The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-portal-dashboard/). This template file is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json). The template defines one Azure resource, a dashboard that displays data about your VM. --## Deploy the template --This example uses the Azure portal to deploy the template. 
You can also use other methods to deploy ARM templates, such as [Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), or [REST API](../azure-resource-manager/templates/deploy-rest.md). --1. Select the following image to sign in to Azure and open a template. -- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.portal%2Fazure-portal-dashboard%2Fazuredeploy.json"::: --1. Select or enter the following values, then select **Review + create**. -- :::image type="content" source="media/quick-create-template/create-dashboard-using-template-portal.png" alt-text="Screenshot of the dashboard template deployment screen in the Azure portal."::: -- Unless it's specified, use the default values to create the dashboard. -- - **Subscription**: select the Azure subscription where the dashboard will be located. - - **Resource group**: select **SimpleWinVmResourceGroup**. - - **Location**: If not automatically selected, choose **East US**. - - **Virtual Machine Name**: enter **myVM1**. - - **Virtual Machine Resource Group**: enter **SimpleWinVmResourceGroup**. --1. Select **Create**. You'll see a notification confirming when the dashboard has been deployed successfully. --## Review deployed resources ---## Clean up resources --If you want to remove the VM and associated dashboard, delete the resource group that contains them. --1. In the Azure portal, search for **SimpleWinVmResourceGroup**, then select it in the search results. --1. On the **SimpleWinVmResourceGroup** page, select **Delete resource group**, enter the resource group name to confirm, then select **Delete**. --> [!CAUTION] -> Deleting a resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted. --## Next steps --For more information about dashboards in the Azure portal, see: --> [!div class="nextstepaction"] -> [Create and share dashboards in the Azure portal](azure-portal-dashboards.md) |
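The ARM template quickstart above deploys through the portal and notes that Azure PowerShell, Azure CLI, or the REST API also work. A hedged Azure CLI sketch follows; it assumes the template exposes the same `virtualMachineName` and `virtualMachineResourceGroup` parameters that the portal form collects, and that the `myVM1` VM from the prerequisites already exists in `SimpleWinVmResourceGroup`.

```azurecli
# Deploy the dashboard template to the resource group created in the prerequisites
az deployment group create \
  --resource-group SimpleWinVmResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json" \
  --parameters virtualMachineName=myVM1 virtualMachineResourceGroup=SimpleWinVmResourceGroup
```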
azure-portal | Quickstart Portal Dashboard Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md | - Title: Create an Azure portal dashboard with Azure CLI -description: "Quickstart: Learn how to create a dashboard in the Azure portal using the Azure CLI. A dashboard is a focused and organized view of your cloud resources." -- Previously updated : 03/27/2023---# Quickstart: Create an Azure portal dashboard with Azure CLI --A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to use Azure CLI to create a dashboard. The example dashboard shows the performance of a virtual machine (VM), along with some static information and links. --In addition to the prerequisites below, you'll need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ---- If you have multiple Azure subscriptions, choose the appropriate subscription in which to bill the resources.-Select a subscription by using the [az account set](/cli/azure/account#az-account-set) command: -- ```azurecli - az account set --subscription 00000000-0000-0000-0000-000000000000 - ``` --- Create an [Azure resource group](../azure-resource-manager/management/overview.md#resource-groups) by using the [az group create](/cli/azure/group#az-group-create) command (or use an existing resource group):-- ```azurecli - az group create --name myResourceGroup --location centralus - ``` --## Create a virtual machine --Create a virtual machine by using the [az vm create](/cli/azure/vm#az-vm-create) command: --```azurecli -az vm create --resource-group myResourceGroup --name myVM1 --image win2016datacenter \ - --admin-username azureuser --admin-password 1StrongPassword$ -``` --> [!NOTE] -> This is a new username and password (not the account you use to sign in to Azure). The password must be complex. For more information, see [username requirements](/azure/virtual-machines/windows/faq#what-are-the-username-requirements-when-creating-a-vm-) -and [password requirements](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-). --The deployment starts and typically takes a few minutes to complete. --## Download the dashboard template --Since Azure dashboards are resources, they can be represented as JSON. For more information, see [The structure of Azure dashboards](./azure-portal-dashboards-structure.md). --Download the file [portal-dashboard-template-testvm.json](https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json). --Then, customize the downloaded template file by changing the following to your values: --- `<subscriptionID>`: Your subscription-- `<rgName>`: Resource group, for example `myResourceGroup`-- `<vmName>`: Virtual machine name, for example `myVM1`-- `<dashboardTitle>`: Dashboard title, for example `Simple VM Dashboard`-- `<location>`: Your Azure region, for example `centralus`--For more information, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards). --## Deploy the dashboard template --You can now deploy the template from within Azure CLI. --1. 
Run the [az portal dashboard create](/cli/azure/portal/dashboard#az-portal-dashboard-create) command to deploy the template: -- ```azurecli - az portal dashboard create --resource-group myResourceGroup --name 'Simple VM Dashboard' \ - --input-path portal-dashboard-template-testvm.json --location centralus - ``` --1. Check that the dashboard was created successfully by running the [az portal dashboard show](/cli/azure/portal/dashboard#az-portal-dashboard-show) command: -- ```azurecli - az portal dashboard show --resource-group myResourceGroup --name 'Simple VM Dashboard' - ``` --To see all the dashboards for the current subscription, use [az portal dashboard list](/cli/azure/portal/dashboard#az-portal-dashboard-list): --```azurecli -az portal dashboard list -``` --You can also see all the dashboards for a specific resource group: --```azurecli -az portal dashboard list --resource-group myResourceGroup -``` --To update a dashboard, use the [az portal dashboard update](/cli/azure/portal/dashboard#az-portal-dashboard-update) command: --```azurecli -az portal dashboard update --resource-group myResourceGroup --name 'Simple VM Dashboard' \ - --input-path portal-dashboard-template-testvm.json --location centralus -``` --## Review deployed resources ---## Clean up resources --To remove the virtual machine and associated dashboard that you created, delete the resource group that contains them. --> [!CAUTION] -> Deleting the resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted. --```azurecli -az group delete --name myResourceGroup -``` --To remove only the dashboard, use the [az portal dashboard delete](/cli/azure/portal/dashboard#az-portal-dashboard-delete) command: --```azurecli -az portal dashboard delete --resource-group myResourceGroup --name "Simple VM Dashboard" -``` --## Next steps --For more information about Azure CLI commands for dashboards, see: --> [!div class="nextstepaction"] -> [Azure CLI: az portal dashboard](/cli/azure/portal/dashboard). |
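The CLI quickstart above says to change the template placeholders to your own values but doesn't show a scripted way to do it. One possible approach, assuming a bash shell with GNU `sed` and the same example values used in the article (`myResourceGroup`, `myVM1`, `Simple VM Dashboard`, `centralus`):

```azurecli
# Download the sample dashboard template referenced in the quickstart
curl -o portal-dashboard-template-testvm.json https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json

# Look up your subscription ID, then replace the placeholders in place (GNU sed syntax)
subscriptionId=$(az account show --query id --output tsv)
sed -i \
  -e "s/<subscriptionID>/$subscriptionId/g" \
  -e "s/<rgName>/myResourceGroup/g" \
  -e "s/<vmName>/myVM1/g" \
  -e "s/<dashboardTitle>/Simple VM Dashboard/g" \
  -e "s/<location>/centralus/g" \
  portal-dashboard-template-testvm.json
```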
azure-portal | Quickstart Portal Dashboard Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-powershell.md | - Title: Create an Azure portal dashboard with PowerShell -description: Learn how to create a dashboard in the Azure portal using Azure PowerShell. -- Previously updated : 03/27/2023---# Quickstart: Create an Azure portal dashboard with PowerShell --A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to use the [Az.Portal PowerShell module](/powershell/module/az.portal) to create a dashboard. The example dashboard shows the performance of a virtual machine (VM) that you create, along with some static information and links. --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).---## Choose a specific Azure subscription --If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription using the -[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet. --```azurepowershell-interactive -Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 -``` --## Define variables --You'll be using several pieces of information repeatedly. Create variables to store the information. --```azurepowershell-interactive -# Name of resource group used throughout this article -$resourceGroupName = 'myResourceGroup' --# Azure region -$location = 'centralus' --# Dashboard Title -$dashboardTitle = 'Simple VM Dashboard' --# Dashboard Name -$dashboardName = $dashboardTitle -replace '\s' --# Your Azure Subscription ID -$subscriptionID = (Get-AzContext).Subscription.Id --# Name of test VM -$vmName = 'myVM1' -``` --## Create a resource group --Create an [Azure resource group](../azure-resource-manager/management/overview.md) using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) -cmdlet. A resource group is a logical container in which Azure resources are deployed and managed as a group. --The following example creates a resource group based on the name in the `$resourceGroupName` -variable in the region specified in the `$location` variable. --```azurepowershell-interactive -New-AzResourceGroup -Name $resourceGroupName -Location $location -``` --## Create a virtual machine --The dashboard you create in the next part of this quickstart requires an existing VM. Create a VM by following these steps. --Store login credentials for the VM in a variable. The password must be complex. This is a new user name and password; it's not, for example, the account you use to sign in to Azure. 
For more information, see [username requirements](/azure/virtual-machines/windows/faq#what-are-the-username-requirements-when-creating-a-vm-) -and [password requirements](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-). --```azurepowershell-interactive -$Cred = Get-Credential -``` --Create the VM. --```azurepowershell-interactive -$AzVmParams = @{ - ResourceGroupName = $resourceGroupName - Name = $vmName - Location = $location - Credential = $Cred -} -New-AzVm @AzVmParams -``` --The VM deployment now starts and typically takes a few minutes to complete. After deployment completes, move on to the next section. --## Download the dashboard template --Since Azure dashboards are resources, they can be represented as JSON. The following code downloads a JSON representation of a sample dashboard. For more information, see [The structure of Azure Dashboards](./azure-portal-dashboards-structure.md). --```azurepowershell-interactive -$myPortalDashboardTemplateUrl = 'https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json' --$myPortalDashboardTemplatePath = "$HOME\portal-dashboard-template-testvm.json" --Invoke-WebRequest -Uri $myPortalDashboardTemplateUrl -OutFile $myPortalDashboardTemplatePath -UseBasicParsing -``` --## Customize the template --Customize the downloaded template by running the following code. --```azurepowershell -$Content = Get-Content -Path $myPortalDashboardTemplatePath -Raw -$Content = $Content -replace '<subscriptionID>', $subscriptionID -$Content = $Content -replace '<rgName>', $resourceGroupName -$Content = $Content -replace '<vmName>', $vmName -$Content = $Content -replace '<dashboardTitle>', $dashboardTitle -$Content = $Content -replace '<location>', $location -$Content | Out-File -FilePath $myPortalDashboardTemplatePath -Force -``` --For more information about the dashboard template structure, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards). --## Deploy the dashboard template --You can use the `New-AzPortalDashboard` cmdlet that's part of the Az.Portal module to deploy the template directly from PowerShell. --```azurepowershell -$DashboardParams = @{ - DashboardPath = $myPortalDashboardTemplatePath - ResourceGroupName = $resourceGroupName - DashboardName = $dashboardName -} -New-AzPortalDashboard @DashboardParams -``` --## Review the deployed resources --Check that the dashboard was created successfully. --```azurepowershell -Get-AzPortalDashboard -Name $dashboardName -ResourceGroupName $resourceGroupName -``` ---## Clean up resources --To remove the VM and associated dashboard, delete the resource group that contains them. --> [!CAUTION] -> Deleting the resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted. --```azurepowershell-interactive -Remove-AzResourceGroup -Name $resourceGroupName -Remove-Item -Path "$HOME\portal-dashboard-template-testvm.json" -``` --## Next steps --For more information about the cmdlets contained in the Az.Portal PowerShell module, see: --> [!div class="nextstepaction"] -> [Microsoft Azure PowerShell: Portal Dashboard cmdlets](/powershell/module/Az.Portal/#portal) |
azure-portal | Recover Shared Deleted Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/recover-shared-deleted-dashboard.md | - Title: Recover a deleted dashboard in the Azure portal -description: If you delete a published dashboard in the Azure portal, you can recover the dashboard. Previously updated : 09/05/2023----# Recover a deleted dashboard in the Azure portal --If you're in the global Azure cloud, and you delete a _published_ (shared) dashboard in the Azure portal, you can recover that dashboard within seven days of the deletion. --> [!IMPORTANT] -> If you're in an Azure Government cloud, or if the dashboard isn't published, you can't recover a deleted dashboard. --Follow these steps to recover a published dashboard: --1. From the Azure portal menu, select **Resource groups**, then select the resource group where you published the dashboard. (The default resource group is named **dashboards**.) --1. Under **Activity log**, expand the **Delete Dashboard** operation. Select the **Change history** tab, then select **\<deleted resource\>**. -- ![Screenshot of change history tab](media/recover-shared-deleted-dashboard/change-history-tab.png) --1. Select and copy the contents of the left pane, then save them to a text file with a _.json_ file extension. The portal can use this JSON file to re-create the dashboard. -- ![Screenshot of change history diff](media/recover-shared-deleted-dashboard/change-history-diff.png) --1. From the Azure portal menu, select **Dashboards**, then select **Upload**. -- :::image type="content" source="media/recover-shared-deleted-dashboard/dashboard-upload.png" alt-text="Screenshot of the Upload option in the Azure portal."::: --1. Select the JSON file you saved. The portal re-creates the dashboard with the same name and elements as the deleted dashboard. --1. Select **Share** to publish the dashboard and re-establish the appropriate access control. -- :::image type="content" source="media/recover-shared-deleted-dashboard/dashboard-share.png" alt-text="Screenshot of the Share option for dashboards in the Azure portal."::: |
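The recovery steps above locate the delete event by browsing the resource group's Activity log in the portal. As a rough, hedged alternative, the Activity log can also be queried from Azure CLI; the operation name value below is an assumption about how dashboard deletions are recorded, and `dashboards` is the default resource group mentioned in step 1.

```azurecli
# Look for dashboard delete events in the last seven days (the recovery window described above)
az monitor activity-log list \
  --resource-group dashboards \
  --offset 7d \
  --query "[?operationName.value=='Microsoft.Portal/dashboards/delete'].{time:eventTimestamp, resource:resourceId}" \
  --output table
```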
azure-portal | Set Preferences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md | - Title: Manage Azure portal settings and preferences -description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 07/18/2024----# Manage Azure portal settings and preferences --You can change the default settings of the Azure portal to meet your own preferences. --To view and manage your portal settings, select the **Settings** menu icon in the global controls, which are located in the page header at the top right of the screen. ---Within **Portal settings**, you'll see different sections. This article describes the available options for each section. --## Directories + subscriptions --**Directories + subscriptions** lets you manage directories (Azure tenants) and set subscription filters. --### Switch and manage directories --In the **Directories** section, you'll see your **Current directory** (the directory, or Azure tenant, that you're currently signed in to). --The **Startup directory** shows the default directory when you sign in to the Azure portal (or **Last visited** if you've chosen that option). To choose a different startup directory, select **change** to open [Appearance + startup views](#appearance--startup-views), where you can change your selection. --To see a full list of directories to which you have access, select **All Directories**. --To mark a directory as a favorite, select its star icon. Those directories will be listed in the **Favorites** section. --To switch to a different directory, find the directory that you want to work in, then select the **Switch** button in its row. ---### Subscription filters --You can choose the subscriptions that are filtered by default when you sign in to the Azure portal. This can be helpful if you have a primary list of subscriptions you work with but use others occasionally. --> [!IMPORTANT] -> After you apply a subscription filter, you'll only see subscriptions that match that filter, across all portal experiences. You won't be able to work with other subscriptions that are excluded from the selected filter. Any new subscriptions that are created after the filter was applied may not be shown if the filter criteria don't match. To see them, you must update the filter criteria to include other subscriptions in the portal, or select **Advanced filters** and use the **Default** filter to always show all subscriptions. -> -> Certain features, such as **Management groups** or **Security Center**, may show subscriptions that don't match your filter criteria. However, you won't be able to perform operations on those subscriptions (such as moving a subscription between management groups) unless you adjust your filters to include the subscriptions that you want to work with. --To use customized filters, select **Advanced filters**. You'll be prompted to confirm before continuing. ---After you continue, **Advanced filters** appears in the left navigation menu of **Portal settings**. You can create and manage multiple subscription filters here. Your currently selected subscriptions are saved as an imported filter that you can use again. You'll see this filter selected in **Directories + subscriptions**. --If you want to stop using advanced filters, select the toggle again to restore the default subscription view. 
Any custom filters you've created are saved and will be available to use if you enable **Advanced filters** in the future. ---## Advanced filters --After enabling **Advanced filters**, you can create, modify, or delete subscription filters by selecting **Modify advanced filters**. --The **Default** filter shows all subscriptions to which you have access. This filter is used if there are no other filters, or when the active filter fails to include any subscriptions. --You may also see a filter named **Imported-filter**, which includes all subscriptions that had been selected previously. --To change the filter that is currently in use, select **Activate** next to that filter. ---### Create a filter --To create a new filter, select **Create a filter**. You can create up to ten filters. --Each filter must have a unique name that is between 8 and 50 characters long and contains only letters, numbers, and hyphens. --After you've named your filter, enter at least one condition. In the **Filter type** field, select **Management group**, **Subscription ID**, **Subscription name**, or **Subscription state**. Then select an operator and the value to filter on. ---When you're finished adding conditions, select **Create**. Your filter will then appear in the list in **Active filters**. --### Modify or delete a filter --You can modify or rename an existing filter by selecting the pencil icon in that filter's row. Make your changes, and then select **Apply**. --> [!NOTE] -> If you modify a filter that is currently active, and the changes result in 0 subscriptions, the **Default** filter will become active instead. You can't activate a filter which doesn't include any subscriptions. --To delete a filter, select the trash can icon in that filter's row. You can't delete the **Default** filter or a filter that is currently active. --## Appearance + startup views --The **Appearance + startup views** pane has two sections. The **Appearance** section lets you choose menu behavior, your color theme, and whether to use a high-contrast theme. -The **Startup views** section lets you set options for what you see when you first sign in to the Azure portal. ---### Portal menu behavior --The **Menu behavior** section lets you choose how the [Azure portal menu](azure-portal-overview.md#portal-menu) appears. --- **Flyout**: The menu is hidden until you need it. You can select the menu icon in the upper left hand corner to open or close the menu.-- **Docked**: The menu is always visible. You can collapse the menu to provide more working space.--### Service menu behavior --The **Service menu behavior** section lets you choose how items in [service menus](azure-portal-overview.md#service-menu) are displayed. --- **Collapsed**: Groups of commands in service menus will appear collapsed. You can still manually select any top-level item to display the commands within that menu group.-- **Expanded**: Groups of commands in service menus will appear expanded. You can still manually select any top-level item to collapse that menu group.--### Choose a theme or enable high contrast --The theme that you choose affects the background and font colors that appear in the Azure portal. In the **Theme** section, you can select from one of four preset color themes. Select each thumbnail to find the theme that best suits you. --Alternatively, you can choose a theme from the **High contrast theme** section. These themes can make the Azure portal easier to read, especially if you have a visual impairment. 
Selecting either the white or black high-contrast theme will override any other theme selections. --### Choose a startup page --Choose one of the following options for **Startup page**. This setting determines which page you see when you first sign in to the Azure portal. --- **Home**: Displays the home page, with shortcuts to popular Azure services, a list of resources you've used most recently, and useful links to tools, documentation, and more.-- **Dashboard**: Displays your most recently used dashboard. Dashboards can be customized to create a workspace designed just for you. For more information, see [Create and share dashboards in the Azure portal](azure-portal-dashboards.md).---### Manage startup directory options --Choose one of the following options to control which directory (Azure tenant) to work in when you first sign in to the Azure portal. --- **Last visited**: When you sign in to the Azure portal, you'll start in the same directory from your previous visit.-- **Select a directory**: Choose this option to select a specific directory. You'll start in that directory every time you sign in to the Azure portal, even if you had been working in a different directory last time.--## Language + region --Here, you can choose the language used in the Azure portal. You can also select a regional format to determine the format for dates, time, and currency. ---> [!NOTE] -> These language and regional settings affect only the Azure portal. Documentation links that open in a new tab or window use your browser's settings to determine the language to display. --### Language --Use the drop-down list to select from the list of available languages. This setting controls the language you see for text throughout the Azure portal. Azure portal supports the following 18 languages in addition to English: Chinese (Simplified), Chinese (Traditional), Czech, Dutch, French, German, Hungarian, Indonesian, Italian, Japanese, Korean, Polish, Portuguese (Brazil), Portuguese (Portugal), Russian, Spanish, Swedish, and Turkish. --### Regional format --Select an option to control the way dates, time, numbers, and currency are shown in the Azure portal. --The options shown in the **Regional format** drop-down list correspond to the **Language** options. For example, if you select **English** as your language, and then select **English (United States)** as the regional format, currency is shown in U.S. dollars. If you select **English** as your language and then select **English (Europe)** as the regional format, currency is shown in euros. If you prefer, you can select a regional format that is different from your language selection. --After making the desired changes to your language and regional format settings, select **Apply**. --## My information --**My information** lets you provide information specific to your Azure experience. --### Email setting --The email address you provide here is used when we need to contact you for updates on Azure services, billing, support, or security issues. You can change this address at any time. --You can also indicate whether you'd like to receive additional emails about Microsoft Azure and other Microsoft products and services. If you select the checkbox to receive these emails, you'll be prompted to select the country/region in which you'll receive these emails. Note that certain countries/regions may not be available. 
You only need to specify a country/region if you want to receive these additional emails; selecting a country/region isn't required in order to receive emails about your Azure account at the address you provide in this section. --### Portal personalization --In this section, you can optionally share information about how you plan to use Azure. This information helps us provide tips, tools, and recommendations that are relevant to the tasks and services that you're interested in. --To provide this information, select one or more items from the list. You can change your selections at any time. --### Export, restore, and delete user settings --Near the top of **My information**, you'll see options to export, restore, or delete settings. ---#### Export user settings --Information about your custom settings is stored in Azure. You can export the following user data: --- Private dashboards in the Azure portal-- User settings like favorite subscriptions or directories-- Themes and other custom portal settings--To export your portal settings, select **Export settings** from the top of the **My information** pane. This creates a JSON file that contains your user settings data. --Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the JSON file. However, you can use this file to review the settings you selected. It can be useful to have an exported backup of your selections if you choose to delete your settings and private dashboards. --#### Restore default settings --If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the **My information** pane. You'll be prompted to confirm this action. If you do so, any changes you've made to your Azure portal settings are lost. This option doesn't affect dashboard customizations. --#### Delete user settings and dashboards --Information about your custom settings is stored in Azure. You can delete the following user data: --- Private dashboards in the Azure portal-- User settings, such as favorite subscriptions or directories-- Themes and other custom portal settings--It's a good idea to export and review your settings before you delete them, as described in the previous section. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming. ---To delete your portal settings, select **Delete all settings and private dashboards** from the top of **My information**. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost. --## Signing out + notifications --This pane lets you manage pop-up notifications and session timeouts. ---### Signing out --The inactivity timeout setting helps to protect resources from unauthorized access if you forget to secure your workstation. After you've been idle for a while, you're automatically signed out of your Azure portal session. As an individual, you can change the timeout setting for yourself. If you're an admin, you can set it at the directory level for all your users in the directory. --### Change your individual timeout setting (user) --In the drop-down menu next to **Sign me out when inactive**, choose the duration after which your Azure portal session is signed out if you're idle. --Select **Apply** to save your changes. After that, if you're inactive during the portal session, Azure portal will sign out after the duration you set. 
--If your admin has enabled an inactivity timeout policy, you can still choose your own timeout duration, but it must be shorter than the directory-level setting. To do so, select **Override the directory inactivity timeout policy**, then enter a time interval for the **Override value**. ---### Change the directory timeout setting (admin) --Users with the [Global Administrator role](../active-directory/roles/permissions-reference.md#global-administrator) can enforce the maximum idle time before a session is signed out. This inactivity timeout setting applies to all users in the Azure tenant. Once it's set, all new sessions will comply with the new timeout settings. The change won't apply to signed-in users until their next sessions. --Global Administrators can't specify different settings for individual users in the tenant. However, each user has the option to set a shorter timeout interval for themselves. Users can't change their individual timeout setting to a longer interval than the current option set by a Global Administrator. --To enforce an idle timeout setting for all users of the Azure portal, sign in with a Global Administrator account, then select **Enable directory level idle timeout** to turn on the setting. Next, enter the **Hours** and **Minutes** for the maximum time that a user can be inactive before their session is automatically signed out. After you select **Apply**, this setting will apply to all users in the directory. ---To confirm that the inactivity timeout policy is set correctly, select **Notifications** from the global page header and verify that a success notification is listed. --To change a previously selected timeout, any Global Administrator can follow these steps again to apply a new timeout interval. If a Global Administrator unchecks the box for **Enable directory level idle timeout**, the previous setting will remain in place by default for all users; however, each user can change their individual setting to whatever they prefer. --### Enable or disable pop-up notifications --Notifications are system messages related to your current session. They provide information such as showing your current credit balance, confirming your last action, or letting you know when resources you created become available. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen. --To enable or disable pop-up notifications, select or clear **Show pop-up notifications**. --To read all notifications received during your current session, select the **Notifications** icon from the global header. ---To view notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](/azure/azure-monitor/essentials/activity-log-insights#view-the-activity-log). --### Enable or disable teaching bubbles --Teaching bubbles may appear in the portal when new features are released. These bubbles contain information to help you understand how new features work. --To enable or disable teaching bubbles in the portal, select or clear **Show teaching bubbles**. --## Next steps --- Learn about [keyboard shortcuts in the Azure portal](azure-portal-keyboard-shortcuts.md).-- [View supported browsers and devices](azure-portal-supported-browsers-devices.md) for the Azure portal.-- Learn how to [add, remove, and rearrange favorite services](azure-portal-add-remove-sort-favorites.md).-- Learn how to [create and share custom dashboards](azure-portal-dashboards.md). |
azure-portal | How To Create Azure Support Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md | - Title: How to create an Azure support request -description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. -- Previously updated : 02/26/2024---# Create an Azure support request --Azure enables you to create and manage support requests, also known as support tickets. You can create and manage requests in the [Azure portal](https://portal.azure.com), which is covered in this article. You can also create and manage requests programmatically, using the [Azure support ticket REST API](/rest/api/support), or by using [Azure CLI](/cli/azure/azure-cli-support-request). --> [!NOTE] -> The Azure portal URL is specific to the Azure cloud where your organization is deployed. -> ->- Azure portal for commercial use is: [https://portal.azure.com](https://portal.azure.com) ->- Azure portal for the United States government is: [https://portal.azure.us](https://portal.azure.us) --Azure provides unlimited support for subscription management, which includes billing, [quota adjustments](../../quotas/quotas-overview.md), and account transfers. For technical support, you need a support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans). --## Getting started --You can open support requests in the Azure portal from the Azure portal menu, the global header, or the resource menu for a service. Before you can file a support request, you must have appropriate permissions. --### Azure role-based access control --You must have the appropriate access to a subscription in order to create a support request for it. This means you must have the [Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role, or a custom role with [Microsoft.Support/*](../../role-based-access-control/resource-provider-operations.md#microsoftsupport), at the subscription level. --To create a support request without a subscription, for example a Microsoft Entra scenario, you must be an [Admin](../../active-directory/roles/permissions-reference.md). --> [!IMPORTANT] -> If a support request requires investigation into multiple subscriptions, you must have the required access for each subscription involved ([Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), [Reader](../../role-based-access-control/built-in-roles.md#reader), [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor), or a custom role with the [Microsoft.Support/supportTickets/read](../../role-based-access-control/resource-provider-operations.md#microsoftsupport) permission). --If a support request requires confirmation or release of account-specific information, changes to account information, or operations such as subscription ownership transfer or cancelation, you must be an [account billing administrator](/azure/cost-management-billing/manage/add-change-subscription-administrator#determine-account-billing-administrator) for the subscription. 
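To satisfy the access requirements described above, a subscription Owner can grant another user one of the listed roles at subscription scope. A minimal Azure CLI sketch; the user principal name and subscription ID are placeholders, not values from the article.

```azurecli
# Grant a user the Support Request Contributor role on a subscription (placeholder values)
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Support Request Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```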
--### Open a support request from the global header --To start a support request from anywhere in the Azure portal: --1. Select the **?** in the global header, then enter a few words to describe your issue. -- :::image type="content" source="media/how-to-create-azure-support-request/support-menu-issue.png" alt-text="Screenshot of the Help menu from the global header in the Azure portal."::: --1. Follow the prompts to share more details about your issue, including the specific resource, if applicable. We'll look for solutions that might help you resolve the issue. -- If none of the solutions resolve the problem you're having, select **Create a support request**. -- :::image type="content" source="media/how-to-create-azure-support-request/header-create-support-request.png" alt-text="Screenshot of the Help menu with Create a support request link."::: --### Open a support request from a resource menu --To start a support request in the context of the resource you're currently working with: --1. From the resource menu, in the **Help** section, select **Support + Troubleshooting**. -- :::image type="content" source="media/how-to-create-azure-support-request/resource-context-support.png" alt-text="Screenshot of the New Support Request option in the resource pane."::: --1. Follow the prompts to share more details about your issue. Some options may be preselected for you, based on the resource you were viewing when you selected **Support + Troubleshooting**. We'll look for solutions that might help you resolve the issue. -- If none of the solutions resolve the problem you're having, select **Create a support request**. --## Create a support request --When you create a new support request, you'll need to provide some information to help us understand the problem. This information is gathered in a few separate sections. --### Problem description --The first step of the support request process is to select an issue type. You'll be prompted for more information, which can vary depending on what type of issue you selected. If you select **Technical**, specify the service that your issue relates to. Depending on the service, you might see options for **Problem type** and **Problem subtype**. Be sure to select the service (and problem type/subtype if applicable) that is most related to your issue. Selecting an unrelated service may result in delays in addressing your support request. --> [!IMPORTANT] -> In most cases, you'll need to specify a subscription. Be sure to choose the subscription where you are experiencing the problem. The support engineer assigned to your case will only be able to access resources in the subscription you specify. The access requirement serves as a point of confirmation that the support engineer is sharing information to the right audience, which is a key factor for ensuring the security and privacy of customer data. For details on how Azure treats customer data, see [Data Privacy in the Trusted Cloud](https://azure.microsoft.com/overview/trusted-cloud/privacy/). -> -> If the issue applies to multiple subscriptions, you can mention additional subscriptions in your description, or by [sending a message](how-to-manage-azure-support-request.md#send-a-message) later. However, the support engineer will only be able to work on [subscriptions to which you have access](#azure-role-based-access-control). If you don't have the required access for a subscription, we won't be able to work on it as part of your request. ---After you provide all of the requested information, select **Next**. 
--### Recommended solution --Based on the information you provided, we provide some recommended solutions that might be able to fix the problem. In some cases, we may even run a quick diagnostic check. These solutions are written by Azure engineers to solve most common problems. --If you're still unable to resolve the issue, continue creating your support request by selecting **Return to support request**, then selecting **Next**. --### Additional details --Next, we collect more details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer. --1. Complete the **Problem details** so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can optionally upload one file (or a compressed file such as .zip that contains multiple files), such as a log file or [browser trace](../capture-browser-trace.md). For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines). --1. In the **Advanced diagnostic information** section, select **Yes** or **No**. Selecting **Yes** allows Azure support to gather [advanced diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. If you prefer not to share this information, select **No**. For details about the types of files we might collect, see [Advanced diagnostic information logs](#advanced-diagnostic-information-logs). -- In some cases, you may see additional options. For example, for certain types of Virtual Machine problem types, you can choose whether to [allow access to a virtual machine's memory](#memory-dump-collection). --1. In the **Support method** section, select the **Support plan**, the **Severity** level, depending on the business impact. The [maximum available severity level and time to respond](https://azure.microsoft.com/support/plans/response/) depends on your [support plan](https://azure.microsoft.com/support/plans) and the country/region in which you're located, including the timing of business hours in that country/region. -- > [!TIP] - > To add a support plan that requires an **Access ID** and **Contract ID**, select **Help + Support** > **Support plans** > **Link support benefits**. When a limited support plan expires or has no support incidents remaining, it won't be available to select. --1. Provide your preferred contact method, your availability, and your preferred support language. Confirm that your country/region setting is accurate, as this setting affects the business hours in which a support engineer can work on your request. --1. Complete the **Contact info** section so that we know how to reach you. --Select **Next** after you finish entering this information. --### Review + create --Before you create your request, review all of the details that you'll send to support. You can select **Previous** to return to any tab if you want to make changes. When you're satisfied that the support request is complete, select **Create**. --A support engineer will contact you using the method you indicated. For information about initial response times, see [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/). 
--### Advanced diagnostic information logs --When you allow collection of [advanced diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/), Microsoft support can collect information that can help solve your problem more quickly. Files commonly collected for different services or environments include: --- [Microsoft Azure PaaS VM logs](/troubleshoot/azure/virtual-machines/sdp352ef8720-e3ee-4a12-a37e-cc3b0870f359-windows-vm)-- [Microsoft Azure IaaS VM logs](https://github.com/azure/azure-diskinspect-service/blob/master/docs/manifest_by_file.md)-- [Microsoft Azure Service Fabric logs](/troubleshoot/azure/general/fabric-logs)-- [StorSimple support packages and device logs](https://support.microsoft.com/topic/storsimple-support-packages-and-device-logs-cb0a1c7e-6125-a5a7-f212-51439781f646)-- [SQL Server on Azure Virtual Machines logs](/troubleshoot/azure/general/sql-vm-logs)-- [Microsoft Entra logs](/troubleshoot/azure/active-directory/support-data-collection-diagnostic-logs)-- [Azure Stack Edge support package and device logs](/troubleshoot/azure/general/azure-stack-edge-support-package-device-logs)-- [Azure Synapse Analytics logs](/troubleshoot/azure/general/synapse-analytics-apache-spark-pools-diagnostic-logs)--Depending on your issue or environment type, we may collect other files in addition to the ones listed here. For more information, see [Data we use to deliver Azure support](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/). --### Memory dump collection --When you create a support case for certain Virtual Machine (VM) problem types, you choose whether to allow us to access your virtual machine's memory. If you do so, we may collect a memory dump to help diagnose the problem. --A complete memory dump is the largest kernel-mode dump file. This file includes all of the physical memory that is used by Windows. A complete memory dump does not, by default, include physical memory that is used by the platform firmware. --The dump is copied from the compute node (Azure host) to another server for debugging within the same datacenter. Customer data is protected, since the data doesn't leave Azure's secure boundary. --The dump file is created by generating a Hyper-V save state of the VM. During this process, the VM will be paused for up to 10 minutes, after which time the VM is resumed. The VM isn't restarted as part of this process. --## Next steps --To learn more about self-help support options in Azure, watch this video: --> [!VIDEO https://www.youtube.com/embed/gNhzR5FE9DY] --Follow these links to learn more: --- [How to manage an Azure support request](how-to-manage-azure-support-request.md)-- [Azure support ticket REST API](/rest/api/support)-- Get help from your peers in [Microsoft Q&A](/answers/products/azure)-- Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)-- [Azure Quotas overview](../../quotas/quotas-overview.md) |
azure-portal | How To Manage Azure Support Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-manage-azure-support-request.md | - Title: Manage an Azure support request -description: Learn about viewing support requests and how to send messages, upload files, and manage options. -tags: billing - Previously updated : 03/08/2024---# Manage an Azure support request --After you [create an Azure support request](how-to-create-azure-support-request.md), you can manage it in the [Azure portal](https://portal.azure.com). --> [!TIP] -> You can create and manage requests programmatically by using the [Azure support ticket REST API](/rest/api/support) or [Azure CLI](/cli/azure/azure-cli-support-request). Additionally, you can view open requests, reply to your support engineer, or edit the severity of your ticket in the [Azure mobile app](https://azure.microsoft.com/get-started/azure-portal/mobile-app/). --To manage a support request, you must have the [Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role at the subscription level. To manage a support request that was created without a subscription, you must be an [Admin](../../active-directory/roles/permissions-reference.md). --## View support requests --View the details and status of support requests by going to **Help + support** > **All support requests** in the Azure portal. ---You can search, filter, and sort support requests. By default, you might only see recent open requests. Change the filter options to select a longer period of time or to include support requests that were closed. --To view details about a support request, including its severity and any messages associated with the request, select it from the list. --## Send a message --1. From **All support requests**, select the support request. --1. In the **Support Request**, select **New message**. --1. Enter your message and select **Submit**. --## Change the severity level --> [!NOTE] -> The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans). --1. From **All support requests**, select the support request. --1. In the **Support Request**, select **Change severity**. --1. The Azure portal shows one of two screens, depending on whether your request is already assigned to a support engineer: -- - If your request isn't assigned, you see a screen like the following. Select a new severity level, then select **Change**. -- :::image type="content" source="media/how-to-manage-azure-support-request/unassigned-can-change-severity.png" alt-text="Select a new severity level"::: -- - If your request is assigned, you see a screen like the following. Select **OK**, then create a [new message](#send-a-message) to request a change in severity level. -- :::image type="content" source="media/how-to-manage-azure-support-request/assigned-cant-change-severity.png" alt-text="Can't select a new severity level"::: --## Allow collection of advanced diagnostic information --When you create a support request, you can select **Yes** or **No** in the **Advanced diagnostic information** section. 
This option determines whether Azure support can gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) such as [log files](how-to-create-azure-support-request.md#advanced-diagnostic-information-logs) from your Azure resources that can potentially help resolve your issue. Azure support can only access advanced diagnostic information if your case was created through the Azure portal and you granted permission to allow it. --To change your **Advanced diagnostic information** selection after the request is created: --1. From **All support requests**, select the support request. --1. In the **Support Request**, select **Advanced diagnostic information** near the top of the screen. --1. Select **Yes** or **No**, then select **Submit**. -- :::image type="content" source="media/how-to-manage-azure-support-request/grant-permission-manage.png" alt-text="Grant permissions for diagnostic information"::: --## Upload files --You can use the file upload option to upload a diagnostic file, such as a [browser trace](../capture-browser-trace.md) or any other files that you think are relevant to a support request. --1. From **All support requests**, select the support request. --1. In the **Support Request**, select the **Upload file** box, then browse to find your file and select **Upload**. --### File upload guidelines --Follow these guidelines when you use the file upload option: --- To protect your privacy, don't include personal information in your upload.-- The file name must be no longer than 110 characters.-- You can't upload more than one file. To include multiple files, package them together in a compressed format such as .zip.-- Files can't be larger than 4 MB.-- All files must have a valid file name extension, such as `.docx` or `.xlsx`. Most file name extensions are supported, but you can't upload files with these extensions: `.bat, .cmd, .exe, .ps1, .js, .vbs, .com, .lnk, .reg, .bin, .cpl, .inf, .ins, .isu, .job, .jse, .msi, .msp, .paf, .pif, .rgs, .scr, .sct, .vbe, .vb, .ws, .wsf, .wsh`--## Close a support request --To close a support request, select the **Close request** option near the top of the screen. When prompted to confirm, select **Close**. You'll receive a confirmation email when your request is closed. --## Reopen a closed request --To reopen a closed support request, select **Reopen request** near the top of the screen. When prompted to confirm, select **Reopen request**. Your support request will then be active again. --> [!NOTE] -> Closed support requests can generally be viewed and reopened for a period of 13 months. After that time, they may be removed, making them unavailable to view or reopen. --## Cancel a support plan --To cancel a support plan, see [Cancel a support plan](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-a-subscription-in-the-azure-portal). --## Get help with a support request --If you need assistance managing a support request, [create another support request](how-to-create-azure-support-request.md) to get help. For the **Issue type**, select **Technical**, then select **All Services**. For **Service type**, select **Portal** and for **Problem type** select **Issue with Support Ticket Experience**. --## Next steps --- Review the process to [create an Azure support request](how-to-create-azure-support-request.md).-- Learn about the [Azure support ticket REST API](/rest/api/support). |
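As the tip in this article notes, support requests can also be created and managed programmatically with the REST API or Azure CLI. The following is a minimal, illustrative sketch only, assuming the `support` CLI extension is installed; the ticket name and severity value are placeholders, and exact command and parameter names can vary across extension versions.

```azurecli
# Install the support extension (one time), if it isn't already present.
az extension add --name support

# List the support tickets visible in the current subscription.
az support tickets list --output table

# Show one ticket and change its severity (the ticket name is a placeholder).
az support tickets show --ticket-name "2403150010001234"
az support tickets update --ticket-name "2403150010001234" --severity "moderate"
```

The same operations are available through the [Azure support ticket REST API](/rest/api/support) if you prefer direct HTTP calls.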
azure-resource-manager | Private Module Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md | If you would rather learn about parameters through step-by-step guidance, see [S ## Configure private registry -A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md). Use the following steps to configure your registry for modules. +A Bicep registry is hosted on [Azure Container Registry (ACR)](/azure/container-registry/container-registry-intro). Use the following steps to configure your registry for modules. -1. If you already have a container registry, you can use it. If you need to create a container registry, see [Quickstart: Create a container registry by using a Bicep file](../../container-registry/container-registry-get-started-bicep.md). +1. If you already have a container registry, you can use it. If you need to create a container registry, see [Quickstart: Create a container registry by using a Bicep file](/azure/container-registry/container-registry-get-started-bicep). - You can use any of the available registry SKUs for the module registry. Registry [geo-replication](../../container-registry/container-registry-geo-replication.md) provides users with a local presence or as a hot-backup. + You can use any of the available registry SKUs for the module registry. Registry [geo-replication](/azure/container-registry/container-registry-geo-replication) provides users with a local presence or serves as a hot backup. 1. Get the login server name. You need this name when linking to the registry from your Bicep files. The format of the login server name is: `<registry-name>.azurecr.io`. A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-r -1. To publish modules to a registry, you must have permission to **push** an image. To deploy a module from a registry, you must have permission to **pull** the image. For more information about the roles that grant adequate access, see [Azure Container Registry roles and permissions](../../container-registry/container-registry-roles.md). +1. To publish modules to a registry, you must have permission to **push** an image. To deploy a module from a registry, you must have permission to **pull** the image. For more information about the roles that grant adequate access, see [Azure Container Registry roles and permissions](/azure/container-registry/container-registry-roles). 1. Depending on the type of account you use to deploy the module, you may need to customize which credentials are used. These credentials are needed to get the modules from the registry. By default, credentials are obtained from Azure CLI or Azure PowerShell. You can customize the precedence for getting the credentials in the **bicepconfig.json** file. For more information, see [Credentials for restoring modules](bicep-config-modules.md#configure-profiles-and-credentials). > [!IMPORTANT]-> The private container registry is only available to users with the required access. However, it's accessed through the public internet. For more security, you can require access through a private endpoint. See [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md). +> The private container registry is only available to users with the required access. However, it's accessed through the public internet. 
For more security, you can require access through a private endpoint. See [Connect privately to an Azure container registry using Azure Private Link](/azure/container-registry/container-registry-private-link). > -> The private container registry must have the policy `azureADAuthenticationAsArmPolicy` set to `enabled`. If `azureADAuthenticationAsArmPolicy` is set to `disabled`, you'll get a 401 (Unauthorized) error message when publishing modules. See [Azure Container Registry introduces the Conditional Access policy](../../container-registry/container-registry-configure-conditional-access.md). +> The private container registry must have the policy `azureADAuthenticationAsArmPolicy` set to `enabled`. If `azureADAuthenticationAsArmPolicy` is set to `disabled`, you'll get a 401 (Unauthorized) error message when publishing modules. See [Azure Container Registry introduces the Conditional Access policy](/azure/container-registry/container-registry-configure-conditional-access). ## Publish files to registry |
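As a sketch of the publish step that the preceding section introduces, the following Azure CLI command shows one way to push a local Bicep file to a private registry. The registry name, module path, and file name are hypothetical placeholders; it assumes you have **push** permission on the registry and a recent Bicep CLI.

```azurecli
# Publish a local Bicep file as a module to the private registry.
# The target uses the br: scheme, the registry login server, a repository path, and a tag.
az bicep publish \
  --file ./storage.bicep \
  --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
```

Consumers with **pull** permission can then reference the module from their Bicep files by using the same `br:` path.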
azure-resource-manager | Quickstart Private Module Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md | If you don't have an Azure subscription, [create a free account](https://azure.m To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008** or later. To use with [Azure CLI](/cli/azure/install-azure-cli), you must also have Azure CLI version **2.31.0** or later; to use with [Azure PowerShell](/powershell/azure/install-azure-powershell), you must also have Azure PowerShell version **7.0.0** or later. -A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md). To create one, see [Quickstart: Create a container registry by using a Bicep file](../../container-registry/container-registry-get-started-bicep.md). +A Bicep registry is hosted on [Azure Container Registry (ACR)](/azure/container-registry/container-registry-intro). To create one, see [Quickstart: Create a container registry by using a Bicep file](/azure/container-registry/container-registry-get-started-bicep). To set up your environment for Bicep development, see [Install Bicep tools](install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep), or [Visual Studio](https://visualstudio.microsoft.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep). |
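A short sketch of the prerequisite checks described above, using Azure CLI; the resource group and registry names are placeholders, and any available registry SKU works for hosting modules.

```azurecli
# Confirm the installed Bicep CLI meets the minimum version (0.4.1008 or later).
az bicep version

# Create a resource group and a container registry to host Bicep modules (names are placeholders).
az group create --name example-rg --location eastus
az acr create --resource-group example-rg --name examplebicepregistry --sku Basic
```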
azure-resource-manager | Scenarios Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-monitoring.md | Smart detection alerts warn you of potential performance problems and failure an In Bicep, you can create portal dashboards by using the resource type [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards?tabs=bicep). -For more information about creating dashboards with code, see [Programmatically create an Azure Dashboard](../../azure-portal/azure-portal-dashboards-create-programmatically.md). +For more information about creating dashboards with code, see [Programmatically create an Azure Dashboard](/azure/azure-portal/azure-portal-dashboards-create-programmatically). ## Autoscale rules |
azure-resource-manager | Azure Services Resource Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md | The resource providers for container services are: | | - | | Microsoft.App | [Azure Container Apps](../../container-apps/index.yml) | | Microsoft.ContainerInstance | [Container Instances](/azure/container-instances/) |-| Microsoft.ContainerRegistry | [Container Registry](../../container-registry/index.yml) | +| Microsoft.ContainerRegistry | [Container Registry](/azure/container-registry/) | | Microsoft.ContainerService | [Azure Kubernetes Service (AKS)](/azure/aks/) | | Microsoft.RedHatOpenShift | [Azure Red Hat OpenShift](/azure/virtual-machines/linux/openshift-get-started) | The resource providers for hybrid services are: | | - | | Microsoft.AzureArcData | [Azure Arc-enabled data services](/azure/azure-arc/data/overview) | | Microsoft.AzureStackHCI | [Azure Stack HCI](/azure-stack/hci/overview) |-| Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | -| Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | -| Microsoft.KubernetesConfiguration | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | -| Microsoft.Edge | [Azure Arc site manager](../../azure-arc/site-manager/index.yml) | +| Microsoft.HybridCompute | [Azure Arc-enabled servers](/azure/azure-arc/servers/) | +| Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/) | +| Microsoft.KubernetesConfiguration | [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/) | +| Microsoft.Edge | [Azure Arc site manager](/azure/azure-arc/site-manager/) | ## Identity resource providers The resource providers for management services are: | Microsoft.DynamicsLcs | [Lifecycle Services](https://lcs.dynamics.com/Logon/Index) | | Microsoft.Features - [registered by default](#registration) | [Azure Resource Manager](../index.yml) | | Microsoft.GuestConfiguration | [Azure Policy](../../governance/policy/index.yml) |-| Microsoft.ManagedServices | [Azure Lighthouse](../../lighthouse/index.yml) | +| Microsoft.ManagedServices | [Azure Lighthouse](/azure/lighthouse/) | | Microsoft.Management | [Management Groups](../../governance/management-groups/index.yml) | | Microsoft.PolicyInsights | [Azure Policy](../../governance/policy/index.yml) |-| Microsoft.Portal - [registered by default](#registration) | [Azure portal](../../azure-portal/index.yml) | +| Microsoft.Portal - [registered by default](#registration) | [Azure portal](/azure/azure-portal/) | | Microsoft.RecoveryServices | [Azure Site Recovery](../../site-recovery/index.yml) | | Microsoft.ResourceGraph - [registered by default](#registration) | [Azure Resource Graph](../../governance/resource-graph/index.yml) | | Microsoft.ResourceHealth | [Azure Service Health](/azure/service-health/) | |
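Not every resource provider listed in these tables is registered by default in a subscription. As an illustrative sketch (using `Microsoft.ContainerRegistry` only as an example namespace), you can check and register a provider with Azure CLI:

```azurecli
# Check whether the resource provider is registered in the current subscription.
az provider show --namespace Microsoft.ContainerRegistry --query registrationState --output tsv

# Register it if the state is NotRegistered; registration can take several minutes.
az provider register --namespace Microsoft.ContainerRegistry
```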
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | If you use classic deployment model instead of the Azure Resource Manager deploy ## Container Registry limits -The following table details the features and limits of the Basic, Standard, and Premium [service tiers](../../container-registry/container-registry-skus.md). +The following table details the features and limits of the Basic, Standard, and Premium [service tiers](/azure/container-registry/container-registry-skus). ## Content Delivery Network limits |
azure-resource-manager | Delete Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md | To restore deleted resources, see: * [Recover deleted Azure AI services resources](/azure/ai-services/manage-resources) * [Microsoft Entra - Recover from deletions](../../active-directory/architecture/recover-from-deletions.md) -You can also [open an Azure support case](../../azure-portal/supportability/how-to-create-azure-support-request.md). Provide as much detail as you can about the deleted resources, including their resource IDs, types, and resource names. Request that the support engineer check if the resources can be restored. +You can also [open an Azure support case](/azure/azure-portal/supportability/how-to-create-azure-support-request). Provide as much detail as you can about the deleted resources, including their resource IDs, types, and resource names. Request that the support engineer check if the resources can be restored. > [!NOTE] > Recovery of deleted resources is not possible under all circumstances. A support engineer will investigate your scenario and advise you whether it's possible. |
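When you open the support case described above, you need the resource IDs, types, and names of the deleted resources. One way to find them, sketched here with Azure CLI and assuming the deletions are still within the activity log retention period, is to query the activity log for delete operations; the resource group name is a placeholder.

```azurecli
# List delete operations recorded for a resource group over the last 30 days.
az monitor activity-log list \
  --resource-group example-rg \
  --offset 30d \
  --query "[?contains(operationName.value, 'delete')].{time:eventTimestamp, operation:operationName.value, resource:resourceId}" \
  --output table
```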
azure-resource-manager | Manage Resources Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md | When you open a resource, the portal presents default graphs and tables for moni :::image type="content" source="./media/manage-resources-portal/manage-azure-resources-portal-monitor-resource.png" alt-text="Screenshot of the Azure portal showing monitoring graphs for a virtual machine."::: -You can select the pin icon on the upper right corner of the graphs to pin the graph to the dashboard. To learn about working with dashboards, see [Creating and sharing dashboards in the Azure portal](../../azure-portal/azure-portal-dashboards.md). +You can select the pin icon on the upper right corner of the graphs to pin the graph to the dashboard. To learn about working with dashboards, see [Creating and sharing dashboards in the Azure portal](/azure/azure-portal/azure-portal-dashboards). ## Manage access to resources |
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | Moving a resource only moves it to a new resource group or subscription. It does When you move a resource, you change its resource ID. The standard format for a resource ID is `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`. When you move a resource to a new resource group or subscription, you change one or more values in that path. -If you use the resource ID anywhere, you'll need to change that value. For example, if you have a [custom dashboard](../../azure-portal/quickstart-portal-dashboard-azure-cli.md) in the portal that references a resource ID, you'll need to update that value. Look for any scripts or templates that need to be updated for the new resource ID. +If you use the resource ID anywhere, you'll need to change that value. For example, if you have a [custom dashboard](/azure/azure-portal/quickstart-portal-dashboard-azure-cli) in the portal that references a resource ID, you'll need to update that value. Look for any scripts or templates that need to be updated for the new resource ID. ## Checklist before moving resources There are some important steps to do before moving a resource. By verifying these conditions, you can avoid errors. -1. The source and destination subscriptions must be active. If you have trouble enabling an account that has been disabled, [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Select **Subscription Management** for the issue type. +1. The source and destination subscriptions must be active. If you have trouble enabling an account that has been disabled, [create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Subscription Management** for the issue type. 1. The source and destination subscriptions must exist within the same [Microsoft Entra tenant](../../active-directory/develop/quickstart-create-new-tenant.md). To check that both subscriptions have the same tenant ID, use Azure PowerShell or Azure CLI. When you get an error message that indicates a resource can't be moved because i If the source or target resource group contains a virtual network, the states of all dependent resources for the virtual network are checked during the move. The check includes those resources directly and indirectly dependent on the virtual network. If any of those resources are in a failed state, the move is blocked. For example, if a virtual machine that uses the virtual network has failed, the move is blocked. The move is blocked even when the virtual machine isn't one of the resources being moved and isn't in one of the resource groups for the move. -When you receive this error, you have two options. Either move your resources to a resource group that doesn't have a virtual network, or [contact support](../../azure-portal/supportability/how-to-create-azure-support-request.md). +When you receive this error, you have two options. Either move your resources to a resource group that doesn't have a virtual network, or [contact support](/azure/azure-portal/supportability/how-to-create-azure-support-request). **Question: Can I move a resource group to a different subscription?** |
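A brief Azure CLI sketch of the tenant check called out in the checklist above, followed by a resource move; the subscription names, resource group names, and resource ID are placeholders.

```azurecli
# Confirm that both subscriptions belong to the same Microsoft Entra tenant.
az account show --subscription "SourceSubscription" --query tenantId --output tsv
az account show --subscription "TargetSubscription" --query tenantId --output tsv

# Move a resource to a different resource group, optionally in another subscription.
az resource move \
  --destination-group target-rg \
  --destination-subscription-id 00000000-0000-0000-0000-000000000000 \
  --ids "/subscriptions/<source-sub-id>/resourceGroups/source-rg/providers/Microsoft.Storage/storageAccounts/examplestorage"
```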
azure-resource-manager | Move Resources Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resources-overview.md | If you actually want to upgrade your Azure subscription (such as switching from - To upgrade a free trial, see [Upgrade your Free Trial or Microsoft Imagine Azure subscription to pay-as-you-go](../../cost-management-billing/manage/upgrade-azure-subscription.md). - To change a pay-as-you-go account, see [Change your Azure pay-as-you-go subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md). -If you can't convert the subscription, [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Select **Subscription Management** for the issue type. +If you can't convert the subscription, [create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Subscription Management** for the issue type. ## Move resources across regions |
azure-resource-manager | Preview Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/preview-features.md | After a preview feature is registered in your subscription, you'll see one of tw - For a preview feature that doesn't require approval, the state is **Registered**. - If a preview feature requires approval, the registration state is **Pending**. You must request approval from the Azure service offering the preview feature. Usually, you request access through a support ticket.- - To request approval, submit an [Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). + - To request approval, submit an [Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). - After the registration is approved, the preview feature's state changes to **Registered**. Some services require other methods, such as email, to get approval for pending requests. Check announcements about the preview feature for information about how to get access. |
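As an illustrative sketch of the registration flow described above, the following Azure CLI commands register a preview feature and check whether its state is **Registered** or **Pending**; the provider namespace and feature name are hypothetical placeholders.

```azurecli
# Register a preview feature (namespace and feature name are placeholders).
az feature register --namespace Microsoft.Compute --name ExamplePreviewFeature

# Check whether the registration state is Registered or still Pending.
az feature show --namespace Microsoft.Compute --name ExamplePreviewFeature --query properties.state --output tsv

# Once the feature shows as Registered, propagate the change by re-registering the provider.
az provider register --namespace Microsoft.Compute
```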
azure-resource-manager | Create Visual Studio Deployment Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/create-visual-studio-deployment-project.md | You aren't limited to only the resources that are available through the Visual S :::image type="content" source="./media/create-visual-studio-deployment-project/Ops-DemoSiteGroup-dashboard.png" alt-text="Screenshot of the customized operational dashboard in the Azure portal."::: -You can manage access to the dashboard by using Azure role-based access control (Azure RBAC). You can also customize the dashboard's appearance after it's deployed. However, if you redeploy the resource group, the dashboard is reset to its default state in your template. For more information about creating dashboards, see [Programmatically create Azure Dashboards](../../azure-portal/azure-portal-dashboards-create-programmatically.md). +You can manage access to the dashboard by using Azure role-based access control (Azure RBAC). You can also customize the dashboard's appearance after it's deployed. However, if you redeploy the resource group, the dashboard is reset to its default state in your template. For more information about creating dashboards, see [Programmatically create Azure Dashboards](/azure/azure-portal/azure-portal-dashboards-create-programmatically). ## Clean up resources |
azure-resource-manager | Deployment History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-history.md | For help with resolving particular deployment errors, see [Troubleshoot common A ## Correlation ID and support -Each deployment has a correlation ID, which is used to track related events. If you [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md), support may ask you for the correlation ID. Support uses the correlation ID to identify the operations for the failed deployment. +Each deployment has a correlation ID, which is used to track related events. If you [create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request), support may ask you for the correlation ID. Support uses the correlation ID to identify the operations for the failed deployment. The examples in this article show how to retrieve the correlation ID. |
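As a quick, hedged illustration of retrieving that correlation ID with Azure CLI (the deployment and resource group names are placeholders):

```azurecli
# Get the correlation ID of a resource group deployment to share with Azure support.
az deployment group show \
  --resource-group example-rg \
  --name exampleDeployment \
  --query properties.correlationId \
  --output tsv
```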
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | There are some important best practices to follow for optimal performance of NFS >[!IMPORTANT] > If you've changed the Azure NetApp Files volumes performance tier after creating the volume and datastore, see [Service level change for Azure NetApp files datastore](#service-level-change-for-azure-netapp-files-datastore) to ensure that volume/datastore metadata is in sync to avoid unexpected behavior in the portal or the API due to metadata mismatch. -- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type determines volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 8, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).+- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type determines volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 8, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). - Ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones) using [the availability zone volume placement](../azure-netapp-files/manage-availability-zone-volume-placement.md) in the same subscription. Information regarding your AVS private cloud's availability zone can be viewed from the overview pane within the AVS private cloud. For performance benchmarks that Azure NetApp Files datastores deliver for VMs on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md). Now that you attached a datastore on Azure NetApp Files-based NFS volume to your - **How many datastores are we supporting with Azure VMware Solution?** - The default maximum is 8 but it can be increased to 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). + The default maximum is 8 but it can be increased to 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). 
- **What latencies and bandwidth can be expected from the datastores backed by Azure NetApp Files?** |
azure-vmware | Azure Vmware Solution Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md | description: This article provides details about the known issues of Azure VMwar Previously updated : 7/30/2024 Last updated : 9/18/2024 # Known issues: Azure VMware Solution Refer to the table to find details about resolution dates or possible workaround | Zerto DR isn't currently supported with the AV64 SKU. The AV64 SKU uses ESXi host secure boot and Zerto DR hasn't implemented a signed VIB for the ESXi install. | 2024 | Continue using the AV36, AV36P, and AV52 SKUs for Zerto DR. | N/A | | [VMSA-2024-0013 (CVE-2024-37085)](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24505) VMware ESXi Active Directory Integration Authentication Bypass | July 2024 | Azure VMware Solution does not provide Active Directory integration and isn't vulnerable to this attack. | N/A | | AV36P SKU new private cloud deploys with vSphere 7, not vSphere 8. | September 2024 | The AV36P SKU is waiting for a Hotfix to be deployed, which will resolve this issue. | N/A |+| [VMSA-2024-0019](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24968) Vulnerability in the DCERPC Protocol and Local Privilege Escalations | September 2024 | Microsoft, working with Broadcom, adjudicated the risk of CVE-2024-38812 at an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) and CVE-2024-38813 with an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H/MAV:A/MAC:H/MPR:L/MUI:R). Adjustments from the base scores were possible due to the network isolation of the Azure VMware Solution vCenter Server DCERPC protocol access (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the Azure VMware Solution vCenter Server. A plan is being put in place to address these vulnerabilities at a future date TBD. | N/A | In this article, you learned about the current known issues with the Azure VMware Solution. |
azure-vmware | Fix Deployment Failures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/fix-deployment-failures.md | If your private cloud prevalidations check failed (before deployment), a correla ## Create your support request -For general information about creating a support request, see [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +For general information about creating a support request, see [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). To create a support request for an Azure VMware Solution deployment or provisioning failure: |
azure-web-pubsub | Howto Develop Eventhandler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-eventhandler.md | The Web PubSub service delivers client events to the configured upstream webhook ## Event handler settings -A client always connects to a hub, and you could configure multiple event handler settings for the hub. The order of the event handler settings matters and the former one has the higher priority. When a client connects and an event is triggered, Web PubSub goes through the configured event handlers in the priority order and the first matching one wins. When configuring the event handler, the below properties should be set. +A client always connects to a hub, and you could configure multiple event handler settings for the hub. The order of the event handler settings matters and the former one has the higher priority. When a client connects and an event is triggered, Web PubSub goes through the configured event handlers in the priority order and the first matching one wins. Set the following properties when you configure the event handler: |Property name | Description | |--|--| A client always connects to a hub, and you could configure multiple event handle ### Events -The events include user events and system events. System events are predefined events that are triggered during the lifetime of a client, and user events are the events triggered when the client sends data, the user event name can be customized using client protocols, [here contains the detailed explanation](concept-service-internals.md#client-protocol). +The events include user events and system events. System events are predefined events that are triggered during the lifetime of a client. User events are triggered when the client sends data; the user event name can be customized using client protocols. See [the detailed explanation](concept-service-internals.md#client-protocol). Event type | Supported values | |--|--|-System events | `connect`, `connected` and `disconnected` | +System events | `connect`, `connected`, and `disconnected` | User events | `message`, or custom event name following client protocols | ### URL template -URL template supports several parameters that can be evaluated during runtime. With this feature, it is easy to route different hubs or events into different upstream servers with a single setting. KeyVault reference syntax is also support so that data could be stored in Azure Key Vault securely. +The URL template supports several parameters that can be evaluated during runtime. With this feature, it's easy to route different hubs or events into different upstream servers with a single setting. KeyVault reference syntax is also supported so that data can be stored in Azure Key Vault securely. -Note URL domain name should not contain parameter syntax, for example, `http://{hub}.com` is not a valid URL template. +Note that the URL domain name shouldn't contain parameter syntax; for example, `http://{hub}.com` isn't a valid URL template. | Supported parameters | Syntax | Description | Samples | |--|--|--|--| | Hub parameter | `{hub}` | The value is the hub that the client connects to. | When a client connects to `client/hubs/chat`, a URL template `http://host.com/api/{hub}` evaluates to `http://host.com/api/chat` because for this client, hub is `chat`. |-| Event parameter | `{event}` | The value of the triggered event. 
`event` values are listed [here](#events).The event value for abuse protection requests is `validate` as explained [here](#upstream-and-validation). | If there is a URL template `http://host.com/api/{hub}/{event}` configured for event `connect`, When a client connects to `client/hubs/chat`, Web PubSub initiates a POST request to the evaluated URL `http://host.com/api/chat/connect` when the client is connecting, since for this client event, hub is `chat` and the event triggering this event handler setting is `connect`. | -| KeyVault reference parameter | `{@Microsoft.KeyVault(SecretUri=<secretUri>)}` | The **SecretUri** should be the full data-plane URI of a secret in the vault, optionally including a version, e.g., `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931`. When using KeyVault reference, you also need to configure the authentication between your Web PubSub service and your KeyVault service, check [here](howto-use-managed-identity.md#use-a-managed-identity-for-a-key-vault-reference) for detailed steps. | `@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)` | +| Event parameter | `{event}` | The value of the triggered event. `event` values are listed [here](#events). The event value for abuse protection requests is `validate` as explained [here](#upstream-and-validation). | If there's a URL template `http://host.com/api/{hub}/{event}` configured for event `connect`, When a client connects to `client/hubs/chat`, Web PubSub initiates a POST request to the evaluated URL `http://host.com/api/chat/connect` when the client is connecting, since for this client event, hub is `chat` and the event triggering this event handler setting is `connect`. | +| KeyVault reference parameter | `{@Microsoft.KeyVault(SecretUri=<secretUri>)}` | The **SecretUri** should be the full data-plane URI of a secret in the vault, optionally including a version, for example, `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931`. When using KeyVault reference, you also need to configure the authentication between your Web PubSub service and your KeyVault service, check [here](howto-use-managed-identity.md#use-a-managed-identity-for-a-key-vault-reference) for detailed steps. | `@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)` | ### Authentication between service and webhook You can use any of these methods to authenticate between the service and webhook ## Upstream and Validation -When setting up the event handler webhook through Azure portal or CLI, the service follows the [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. Every registered upstream webhook URL is validated by this mechanism. The `WebHook-Request-Origin` request header is set to the service domain name `xxx.webpubsub.azure.com`, and it expects the response to have a header `WebHook-Allowed-Origin` to contain this domain name or `*`. +When setting up the event handler webhook through Azure portal or CLI, the service follows the [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. This mechanism validates every registered upstream webhook URL. 
The `WebHook-Request-Origin` request header is set to the service domain name `xxx.webpubsub.azure.com`, and it expects the response to have a header `WebHook-Allowed-Origin` to contain this domain name or `*`. When doing the validation, the `{event}` parameter is resolved to `validate`. For example, when trying to set the URL to `http://host.com/api/{event}`, the service tries to **OPTIONS** a request to `http://host.com/api/validate`. And only when the response is valid, the configuration can be set successfully. |
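To make the event handler settings above concrete, here's a hedged Azure CLI sketch that assumes the `webpubsub` extension is installed; the service, resource group, hub, and upstream host names are placeholders, and the parameter shorthand can vary between extension versions.

```azurecli
# Configure an event handler for the "chat" hub. The URL template uses the {hub} and {event}
# parameters, matches all user events, and subscribes to the connect and connected system events.
az webpubsub hub create \
  --resource-group example-rg \
  --name example-webpubsub \
  --hub-name chat \
  --event-handler url-template="https://upstream.example.com/api/{hub}/{event}" user-event-pattern="*" system-event="connect" system-event="connected"
```

With this setting, a `connect` event from the `chat` hub is delivered to `https://upstream.example.com/api/chat/connect`, and the same upstream must answer the abuse protection **OPTIONS** request as described above.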
azure-web-pubsub | Howto Websocket Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-websocket-connect.md | -Clients connect to the Azure Web PubSub service by using the standard [WebSocket](https://tools.ietf.org/html/rfc6455) protocol. You can use languages that have WebSocket client support to write a client for the service. In this article, you'll see several WebSocket client samples in different languages. +Clients connect to the Azure Web PubSub service by using the standard [WebSocket](https://tools.ietf.org/html/rfc6455) protocol. You can use languages that have WebSocket client support to write a client for the service. In this article, you see several WebSocket client samples in different languages. ## Authorization |
backup | Azure Kubernetes Service Cluster Backup Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md | To enable backup for an AKS cluster, see the following prerequisites: . - Azure Backup for AKS supports AKS clusters using either a system-assigned managed identity or a user-assigned managed identity for backup operations. Although clusters using a service principal aren't supported, you can update an existing AKS cluster to use a [system-assigned managed identity](/azure/aks/use-managed-identity#update-an-existing-aks-cluster-to-use-a-system-assigned-managed-identity) or a [user-assigned managed identity](/azure/aks/use-managed-identity#update-an-existing-cluster-to-use-a-user-assigned-managed-identity). -- The Backup Extension during installation fetches Container Images stored in Microsoft Container Registry (MCR). If you enable a firewall on the AKS cluster, the extension installation process might fail due to access issues on the Registry. Learn [how to allow MCR access from the firewall](../container-registry/container-registry-firewall-access-rules.md#configure-client-firewall-rules-for-mcr).+- The Backup Extension during installation fetches Container Images stored in Microsoft Container Registry (MCR). If you enable a firewall on the AKS cluster, the extension installation process might fail due to access issues on the Registry. Learn [how to allow MCR access from the firewall](/azure/container-registry/container-registry-firewall-access-rules#configure-client-firewall-rules-for-mcr). - In case you have the cluster in a Private Virtual Network and Firewall, apply the following FQDN/application rules: `*.microsoft.com`, `*.azure.com`, `*.core.windows.net`, `*.azmk8s.io`, `*.digicert.com`, `*.digicert.cn`, `*.geotrust.com`, `*.msocsp.com`. Learn [how to apply FQDN rules](../firewall/dns-settings.md). |
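Because clusters that still use a service principal must be switched to a managed identity before backup can be enabled (as the prerequisites above note), here's a minimal sketch; the resource group and cluster names are placeholders.

```azurecli
# Update an existing AKS cluster to use a system-assigned managed identity.
az aks update --resource-group example-rg --name example-aks-cluster --enable-managed-identity
```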
backup | Back Up Azure Stack Hyperconverged Infrastructure Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md | MABS can back up Azure Stack HCI virtual machines in the following scenarios: -- **Arc VMs**: [Arc VMs](../azure-arc/servers/overview.md) add fabric management capabilities in addition to [Arc-enabled servers](../azure-arc/servers/overview.md). These allow *IT admins* to create, modify, delete, and assign permissions and roles to *app owners*, thereby enabling *self-service VM management*. Recovery of Arc VMs is supported in a limited capacity in Azure Stack HCI, version 23H2.+- **Arc VMs**: [Arc VMs](/azure/azure-arc/servers/overview) add fabric management capabilities in addition to [Arc-enabled servers](/azure/azure-arc/servers/overview). These allow *IT admins* to create, modify, delete, and assign permissions and roles to *app owners*, thereby enabling *self-service VM management*. Recovery of Arc VMs is supported in a limited capacity in Azure Stack HCI, version 23H2. The following table lists the various levels of backup and restore capabilities for Azure Arc VMs: |
backup | Backup Azure Delete Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-delete-vault.md | If you try to delete the vault without removing the dependencies, you'll encount - Recovery Services vault cannot be deleted as there are backup items in soft deleted state in the vault. The soft deleted items are permanently deleted after 14 days of delete operation. Please try vault deletion after the backup items are permanently deleted and there is no item in soft deleted state left in the vault. For more information, see [Soft delete for Azure Backup](./backup-azure-security-feature-cloud.md). +++> [!NOTE] +> Before deleting a Backup protection policy from a vault, you must ensure that +> - the policy doesn't have any associated Backup items. +> - each associated item is associated with some other policy. ++ ## Delete a Recovery Services vault > [!VIDEO https://www.youtube.com/embed/xg_TnyhK34o] |
backup | Backup Azure Mars Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md | This section explains the process to troubleshoot errors that you might encounte ### Backup jobs completed with warning -- When the MARS agent iterates over files and folders during backup, it might encounter various conditions that can cause the backup to be marked as completed with warnings. During these conditions, a job shows as completed with warnings. That's fine, but it means that at least one file wasn't able to be backed up. So the job skipped that file, but backed up all other files in question on the data source.+- When the MARS agent iterates over files and folders during backup, it might encounter various conditions that can cause the backup to be marked as completed with warnings. During these conditions, a job shows as completed with warnings. That's fine, but it means that at least one file wasn't able to be backed up. So the job skipped that file, but backed up all other files in question on the data source. Also, not having the latest agent installed on the VM can cause a warning state. ![Backup job completed with warnings](./media/backup-azure-mars-troubleshoot/backup-completed-with-warning.png) |
backup | Backup Center Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-overview.md | In this article, you'll learn about: Some of the key benefits of Backup center include: -- **Single pane of glass to manage backups**: Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](../lighthouse/overview.md) tenants.+- **Single pane of glass to manage backups**: Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](/azure/lighthouse/overview) tenants. - **Datasource-centric management**: Backup center provides views and filters that are centered on the datasources that you're backing up (for example, VMs and databases). This allows a resource owner or a backup admin to monitor and operate backups of items without needing to focus on which vault an item is backed up to. A key feature of this design is the ability to filter views by datasource-specific properties, such as datasource subscription, datasource resource group, and datasource tags. For example, if your organization follows a practice of assigning different tags to VMs belonging to different departments, you can use Backup center to filter backup information based on the tags of the underlying VMs being backed up without needing to focus on the tag of the vault. - **Connected experiences**: Backup center provides native integrations to existing Azure services that enable management at scale. For example, Backup center uses the [Azure Policy](../governance/policy/overview.md) experience to help you govern your backups. It also leverages [Azure workbooks](/azure/azure-monitor/visualize/workbooks-overview) and [Azure Monitor Logs](/azure/azure-monitor/logs/data-platform-logs) to help you view detailed reports on backups. So, you don't need to learn any new principles to use the varied features that the Backup center offers. You can also [discover community resources from the Backup center](#access-community-resources-on-community-hub). - **At-scale monitoring capabilities**: Backup center now provides at-scale monitoring capabilities that help you to view replicated items and jobs across all vaults and manage them across subscriptions, resource groups, and regions from a single view for Azure Site Recovery. |
backup | Backup Sql Server Azure Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-azure-troubleshoot.md | Title: Troubleshoot SQL Server database backup description: Troubleshooting information for backing up SQL Server databases running on Azure VMs with Azure Backup. Previously updated : 01/04/2024 Last updated : 09/19/2024 AzureBackup workload extension operation failed. | The VM is shut down, or the V | Error message | Possible causes | Recommended actions | ||||-The VM is not able to contact Azure Backup service due to internet connectivity issues. | The VM needs outbound connectivity to Azure Backup Service, Azure Storage, or Microsoft Entra services.| <li> If you use NSG to restrict connectivity, then you should use the *AzureBackup* service tag to allow outbound access to Azure Backup Service, and similarly for the Microsoft Entra ID (*AzureActiveDirectory*) and Azure Storage(*Storage*) services. Follow these [steps](./backup-sql-server-database-azure-vms.md#nsg-tags) to grant access. <li> Ensure DNS is resolving Azure endpoints. <li> Check if the VM is behind a load balancer blocking internet access. By assigning public IP to the VMs, discovery will work. <li> Verify there's no firewall/antivirus/proxy that are blocking calls to the above three target services. +| The VM is not able to contact Azure Backup service due to internet connectivity issues. | **Cause 1**: The VM needs outbound connectivity to Azure Backup Service, Azure Storage, or Microsoft Entra services. <br><br> **Cause 2**: A Group Policy Object (GPO) policy restricts the required cipher suites for TLS communication. | **Recommendation for cause 1**: <li> If you use NSG to restrict connectivity, then you should use the *AzureBackup* service tag to allow outbound access to Azure Backup Service, and similarly for the Microsoft Entra ID (*AzureActiveDirectory*) and Azure Storage (*Storage*) services. Follow these [steps](./backup-sql-server-database-azure-vms.md#nsg-tags) to grant access. <li> Ensure DNS is resolving Azure endpoints. <li> Check if the VM is behind a load balancer blocking internet access. By assigning public IP to the VMs, discovery will work. <li> Verify there's no firewall/antivirus/proxy that are blocking calls to the above three target services. <br><br> **Recommendation for cause 2**: Remove the VM from the GPO or disable/remove the GPO policy as a workaround. Alternatively, modify the GPO in such a way that it allows the required cipher suites. | ### UserErrorOperationNotAllowedDatabaseMirroringEnabled |
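To make the NSG service-tag recommendation in the table above concrete, here's a hedged sketch of an outbound rule for the *AzureBackup* tag; the resource group, NSG name, and priority are placeholders, and similar rules are needed for the *AzureActiveDirectory* and *Storage* tags.

```azurecli
# Allow outbound HTTPS traffic from the VM's subnet to the Azure Backup service tag.
az network nsg rule create \
  --resource-group example-rg \
  --nsg-name example-nsg \
  --name AllowAzureBackupOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureBackup \
  --destination-port-ranges 443
```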
backup | Configure Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md | Azure Backup provides a reporting solution that uses [Azure Monitor logs](/azure - For DPM workloads, Backup reports are supported for DPM Version 5.1.363.0 and above and Agent Version 2.0.9127.0 and above. - For MABS workloads, Backup reports are supported for MABS Version 13.0.415.0 and above and Agent Version 2.0.9170.0 and above. - Backup reports can be viewed across all backup items, vaults, subscriptions, and regions as long as their data is being sent to a Log Analytics workspace that the user has access to. To view reports for a set of vaults, you only need to have reader access to the Log Analytics workspace to which the vaults are sending their data. You don't need to have access to the individual vaults.-- If you're an [Azure Lighthouse](../lighthouse/index.yml) user with delegated access to your customers' subscriptions, you can use these reports with Azure Lighthouse to view reports across all your tenants.+- If you're an [Azure Lighthouse](/azure/lighthouse/) user with delegated access to your customers' subscriptions, you can use these reports with Azure Lighthouse to view reports across all your tenants. - Currently, data can be viewed in Backup Reports across a maximum of 100 Log Analytics Workspaces (across tenants). >[!Note] >Depending on the complexity of queries and the volume of data processed, it's possible that you might see errors when selecting a large number of workspaces that are less than 100, in some cases. We recommend that you limit the number of workspaces being queried at a time. Select the pin button at the top of each widget to pin the widget to your Azure ## Cross-tenant reports -If you use [Azure Lighthouse](../lighthouse/index.yml) with delegated access to subscriptions across multiple tenant environments, you can use the default subscription filter. Select the filter button in the upper-right corner of the Azure portal to choose all the subscriptions for which you want to see data. Doing so lets you select Log Analytics workspaces across your tenants to view multi-tenanted reports. +If you use [Azure Lighthouse](/azure/lighthouse/) with delegated access to subscriptions across multiple tenant environments, you can use the default subscription filter. Select the filter button in the upper-right corner of the Azure portal to choose all the subscriptions for which you want to see data. Doing so lets you select Log Analytics workspaces across your tenants to view multi-tenanted reports. ## Conventions used in Backup reports |
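Backup reports depend on each vault sending its diagnostics data to a Log Analytics workspace. The following Azure CLI sketch shows one way to wire that up; the vault and workspace resource IDs are placeholders, and the `AzureBackupReport` category used here is an assumption to verify against the categories available on your vault.

```azurecli
# Send Recovery Services vault backup reporting data to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name BackupReportsToWorkspace \
  --resource "/subscriptions/<sub-id>/resourceGroups/example-rg/providers/Microsoft.RecoveryServices/vaults/example-vault" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/example-rg/providers/Microsoft.OperationalInsights/workspaces/example-workspace" \
  --logs '[{"category":"AzureBackupReport","enabled":true}]'
```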
backup | Monitor Azure Backup With Backup Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/monitor-azure-backup-with-backup-explorer.md | You can select the "pin" icon at the top of each table or chart to pin it to you ## Cross-tenant views -If you're an Azure Lighthouse user with delegated access to subscriptions across multiple tenant environments, you can use the default subscription filter. You display the subscriptions that you want to see data for by selecting the "filter" icon at the top right of the Azure portal. When you use this feature, Backup Explorer aggregates information about all the vaults across your selected subscriptions. To learn more, see [What is Azure Lighthouse?](../lighthouse/overview.md). +If you're an Azure Lighthouse user with delegated access to subscriptions across multiple tenant environments, you can use the default subscription filter. You display the subscriptions that you want to see data for by selecting the "filter" icon at the top right of the Azure portal. When you use this feature, Backup Explorer aggregates information about all the vaults across your selected subscriptions. To learn more, see [What is Azure Lighthouse?](/azure/lighthouse/overview). ## Next steps |
backup | Restore Sql Database Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md | Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 09/11/2024 Last updated : 09/19/2024 Before you restore a database, note the following: - We strongly recommend that you restore the "master" database using the [Restore as files](#restore-as-files) option and then restore [using T-SQL commands](/sql/relational-databases/backup-restore/restore-the-master-database-transact-sql). - For all system databases (model, msdb), stop the SQL Server Agent service before you trigger the restore. - Close any applications that might try to take a connection to any of these databases.+- For the **master database**, the **Alternate Location** option for restore isn't supported. We recommend that you restore the **master database** using the **Restore as files** option, and then restore using the `T-SQL` commands. +- For `msdb` and `model`, the **Alternate Location** option for restore is supported only when the **Restored database name** is different from the **target database** name. If you want to restore with the same name as the **target database**, we recommend that you restore using the **Restore as files** option, and then restore using the `T-SQL` commands. ## Restore a database |
backup | Sql Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md | Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 09/11/2024 Last updated : 09/19/2024 You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on **Supported operating systems** | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 (all versions), Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported. **Supported SQL Server versions** | SQL Server 2022 Express, SQL Server 2022, SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. **Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM-**Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Support for standalone instances is always on [availability groups](backup-sql-server-on-availability-groups.md). +**Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Support for standalone instances is always on [availability groups](backup-sql-server-on-availability-groups.md). <br><br> Note that the SQL databases that are part of a AlwaysOn AG and are synced from SQL Managed Instance aren't supported. **Cross Region Restore** | Supported. [Learn more](restore-sql-database-azure-vm.md#cross-region-restore). **Cross Subscription Restore** | Supported via the Azure portal and Azure CLI. [Learn more](restore-sql-database-azure-vm.md#cross-subscription-restore). |
bastion | Bastion Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md | No, Bastion connectivity to Azure Virtual Desktop isn't supported. ### <a name="udr"></a>How do I handle deployment failures? -Review any error messages and [raise a support request in the Azure portal](../azure-portal/supportability/how-to-create-azure-support-request.md) as needed. Deployment failures can result from [Azure subscription limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Specifically, customers might encounter a limit on the number of public IP addresses allowed per subscription that causes the Azure Bastion deployment to fail. +Review any error messages and [raise a support request in the Azure portal](/azure/azure-portal/supportability/how-to-create-azure-support-request) as needed. Deployment failures can result from [Azure subscription limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Specifically, customers might encounter a limit on the number of public IP addresses allowed per subscription that causes the Azure Bastion deployment to fail. ### <a name="move-virtual-network"></a>Does Bastion support moving a VNet to another resource group? |
batch | Batch Docker Container Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md | More considerations for using a custom Linux image: To enable a Batch pool to run container workloads, you must specify [ContainerConfiguration](/dotnet/api/microsoft.azure.batch.containerconfiguration) settings in the pool's [VirtualMachineConfiguration](/dotnet/api/microsoft.azure.batch.virtualmachineconfiguration) object. This article provides links to the Batch .NET API reference. Corresponding settings are in the [Batch Python](/python/api/overview/azure/batch) API. -You can create a container-enabled pool with or without prefetched container images, as shown in the following examples. The pull (or prefetch) process lets you preload container images from either Docker Hub or another container registry on the Internet. For best performance, use an [Azure container registry](../container-registry/container-registry-intro.md) in the same region as the Batch account. +You can create a container-enabled pool with or without prefetched container images, as shown in the following examples. The pull (or prefetch) process lets you preload container images from either Docker Hub or another container registry on the Internet. For best performance, use an [Azure container registry](/azure/container-registry/container-registry-intro) in the same region as the Batch account. The advantage of prefetching container images is that when tasks first start running, they don't have to wait for the container image to download. The container configuration pulls container images to the VMs when the pool is created. Tasks that run on the pool can then reference the list of container images and container run options. The advantage of prefetching container images is that when tasks first start run > Docker Hub limits the number of image pulls. Ensure that your workload doesn't > [exceed published rate limits](https://docs.docker.com/docker-hub/download-rate-limit/) for Docker > Hub-based images. It's recommended to use-> [Azure Container Registry](../container-registry/container-registry-intro.md) directly or leverage -> [Artifact cache in ACR](../container-registry/container-registry-artifact-cache.md). +> [Azure Container Registry](/azure/container-registry/container-registry-intro) directly or leverage +> [Artifact cache in ACR](/azure/container-registry/container-registry-artifact-cache). ### Pool without prefetched container images |
batch | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md | IP ranges between the Docker network bridge and your virtual network. Docker Hub limits the number of image pulls. Ensure that your workload doesn't [exceed published rate limits](https://docs.docker.com/docker-hub/download-rate-limit/) for Docker Hub-based images. It's recommended to use-[Azure Container Registry](../container-registry/container-registry-intro.md) directly or leverage -[Artifact cache in ACR](../container-registry/container-registry-artifact-cache.md). +[Azure Container Registry](/azure/container-registry/container-registry-intro) directly or leverage +[Artifact cache in ACR](/azure/container-registry/container-registry-artifact-cache). ### Azure region dependency |
cloud-services-extended-support | In Place Migration Common Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-common-errors.md | Common migration errors and mitigation steps. | UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that isn't an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | Deployment {0} in HostedService {1} belongs to Virtual Network {2}. Migrate Virtual Network {2} to migrate this HostedService {1}. | Refer to [Virtual Network migration](in-place-migration-technical-details.md#virtual-network-migration). | -| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota raises. | To request a quota increase, follow the appropriate channels: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) | +| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota raises. | To request a quota increase, follow the appropriate channels: <br>[Quota increase for networking resources](/azure/azure-portal/supportability/networking-quota-requests) <br>[Quota increase for compute resources](/azure/azure-portal/supportability/per-vm-quota-requests) | |XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration couldn't be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Abort the migration and upgrade the deployment before retrying migration. Validation Message: The service name 'service-name'in the service definition file doesn't match the name 'service-name-in-config-file' in the service configuration file|match the service names in both .csdef and .cscfg file| |NetworkingInternalOperationError when deploying Cloud Service (extended support) resource| The issue may occur if the Service name is same as role name. The recommended remediation is to use different names for service and roles| |
communication-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md | You can find more general guidance on how to set up your service architecture to 8. Add **Additional details** as needed, then click **Next**. 9. At **Review + create** check the information, make changes as needed, then click **Create**. -You can follow the documentation for [creating request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). +You can follow the documentation for [creating request to Azure Support](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Acquiring phone numbers Before acquiring a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements. Otherwise, you can't purchase a phone number. The following limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/). Rate Limits for SMS: |Send Message|Alphanumeric Sender ID |Per resource|60|600|600| ### Action to take-If you have requirements that exceed the rate-limits, submit [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to enable higher throughput. +If you have requirements that exceed the rate-limits, submit [a request to Azure Support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to enable higher throughput. For more information on the SMS SDK and service, see the [SMS SDK overview](./sms/sdk-features.md) page or the [SMS FAQ](./sms/sms-faq.md) page. If you have strict compliance needs, we recommend that you delete chat threads u | Default number of outbound* concurrent calls | per Number | 2 | > [!NOTE] -> \* No limits on inbound concurrent calls. You can also [submit a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to increase the outbound concurrent calls limit, which is reviewed by our vetting team. +> \* No limits on inbound concurrent calls. You can also [submit a request to Azure Support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to increase the outbound concurrent calls limit, which is reviewed by our vetting team. ### Call maximum limitations The following timeouts apply to the Communication Services Calling SDKs: ### Action to take -For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md). You can also [submit a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) to increase some of the limits, pending review by our vetting team. +For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md). You can also [submit a request to Azure Support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to increase some of the limits, pending review by our vetting team. ## Job Router When sending or receiving a high volume of requests, you might receive a ```ThrottleLimitExceededException``` error. This error indicates you're hitting the service limitations, and your requests fail until the token of bucket to handle requests is replenished after a certain time. |
communication-services | Sms Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md | Rate Limits for SMS: |Send Message|Short Code |Per Number|60|6000*|6000| |Send Message|Alphanumeric Sender ID |Per resource|60|600*|600| -*If your company has requirements that exceed the rate-limits, submit [a request to Azure Support](../../../azure-portal/supportability/how-to-create-azure-support-request.md) to enable higher throughput. +*If your company has requirements that exceed the rate-limits, submit [a request to Azure Support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to enable higher throughput. ## Carrier Fees ### What are the carrier fees for SMS? |
communication-services | Call Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-diagnostics.md | You can view detailed call logs for each participant within a call. Call informa ## Copilot in Azure for Call Diagnostics -Artificial Intelligence can help app developers across every step of the development lifecycle: designing, building, and operating. Developers with [Microsoft Copilot in Azure (preview)](../../../copilot/overview.md) can use Copilot in Azure within Call Diagnostics to understand and resolve a variety of calling issues. For example, developers can ask Copilot in Azure questions, such as: +Artificial Intelligence can help app developers across every step of the development lifecycle: designing, building, and operating. Developers with [Microsoft Copilot in Azure (preview)](/azure/copilot/overview) can use Copilot in Azure within Call Diagnostics to understand and resolve a variety of calling issues. For example, developers can ask Copilot in Azure questions, such as: - How do I run network diagnostics in Azure Communication Services VoIP calls? - How can I optimize my calls for poor network conditions? quality](https://learn.microsoft.com/azure/communication-services/concepts/voice - **How do I use Copilot in Azure (preview) in Call Diagnostics?** - - Your organization needs to manage access to [Microsoft Copilot in Azure (preview)](../../../copilot/overview.md). Once your organization has access to Copilot in Azure (preview), the Call Diagnostics interface will include the option to 'Diagnose with Copilot' in the Search, Overview, and Issues tabs. + - Your organization needs to manage access to [Microsoft Copilot in Azure (preview)](/azure/copilot/overview). Once your organization has access to Copilot in Azure (preview), the Call Diagnostics interface will include the option to 'Diagnose with Copilot' in the Search, Overview, and Issues tabs. - Leverage Copilot in Azure for Call Diagnostics to improve call quality by detailing problems faced during Azure Communication Services calls. Giving Copilot in Azure detailed information from Call Diagnostics will help it enhance analysis, identify issues, and identify fixes. Be aware that Copilot in Azure currently lacks programmatic access to your call details. <!-- 1. If Teams participants join a call, how will they display in Call |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | Once you start development, check out the [known issues page](../known-issues.md - **Media Stats** - The Calling SDK provides comprehensive insights into [the metrics](media-quality-sdk.md) of your VoIP and video calls. With this information, developers have a clearer understanding of call quality and can make informed decisions to further enhance their communication experience. - **Video Constraints** - The Calling SDK provides APIs that gain the ability to regulate [video quality among other parameters](../../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls by adjusting parameters such as resolution and frame rate supporting different call situations for different levels of video quality - **User Facing Diagnostics (UFD)** - The Calling SDK provides [events](user-facing-diagnostics.md) that are designed to provide insights into underlying issues that could affect call quality. Developers can subscribe to triggers such as weak network signals or muted microphones, ensuring that they're always aware of any factors impacting the calls.+- **Custom context** - The Calling SDK provides APIs supporting calling with one user-to-user and up to five custom headers. The headers are received within the incoming call. ## Detailed capabilities The following list presents the set of features that are currently available in | | Noise suppression | ✔️ | ✔️ | ✔️ | ✔️ | | | Automatic gain control (AGC) | ❌ | ✔️ | ✔️ | ✔️ | | Notifications <sup>4</sup> | [Push notifications](../../how-tos/calling-sdk/push-notifications.md) | ✔️ | ✔️ | ✔️ | ✔️ |+| Custom context | Place a call with user-to-user or custom headers | ✔️ | ❌ | ❌ | ❌ | <sup>1</sup> The capability to Mute Others is currently in public preview. |
communication-services | Troubleshoot Web Voip Quality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/troubleshoot-web-voip-quality.md | Sometimes users have multiple browsers tabs with instances of Azure Communicatio You can check the log insights from the Azure portal for calling to determine the exact issue during the call. For more information, see [Query call logs](../analytics/query-call-logs.md). -If you tried all the previous steps and still face quality issues, [Create an Azure support request](../../../azure-portal/supportability/how-to-create-azure-support-request.md). If necessary, Microsoft can run a network check for your tenant to ensure call quality. +If you tried all the previous steps and still face quality issues, [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). If necessary, Microsoft can run a network check for your tenant to ensure call quality. ## End of call survey |
communication-services | Call Setup Takes Too Long | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/call-setup-issues/call-setup-takes-too-long.md | You can also check the Network tab of the Developer tools to see the size of req If the issue is due to the long duration of the signaling request, you should be able to see some requests taking very long time from the network trace. If you need to file a support ticket, we may request the browser HAR file.-To learn how to collect a HAR file, see [Capture a browser trace for troubleshooting](../../../../../azure-portal/capture-browser-trace.md). +To learn how to collect a HAR file, see [Capture a browser trace for troubleshooting](/azure/azure-portal/capture-browser-trace). |
communications-gateway | Request Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md | If you notice problems with Azure Communications Gateway or you need Microsoft t When you raise a request, we'll investigate. If we think the problem is caused by traffic from Zoom servers, we might ask you to raise a separate support request with Zoom. -This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). [!INCLUDE [communications-gateway-lab-ticket-sla](includes/communications-gateway-lab-ticket-sla.md)] If you're still unable to resolve the issue, continue creating your support requ ## Enter additional details -In this section, we collect more details about the problem or the change and how to contact you. Providing thorough and detailed information in this step helps us route your support request to the right engineer. For more information, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +In this section, we collect more details about the problem or the change and how to contact you. Providing thorough and detailed information in this step helps us route your support request to the right engineer. For more information, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Review and create your support request Before creating your request, review the details and diagnostic files that you'r ## Next steps > [!div class="nextstepaction"]-> [Learn how to manage an Azure support request](../azure-portal/supportability/how-to-manage-azure-support-request.md). +> [Learn how to manage an Azure support request](/azure/azure-portal/supportability/how-to-manage-azure-support-request). |
confidential-computing | Confidential Enclave Nodes Aks Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md | This quickstart requires: - A minimum of eight DCsv2/DCSv3/DCdsv3 cores available in your subscription. - By default, there is no pre-assigned quota for Intel SGX VM sizes for your Azure subscriptions. You should follow [these instructions](../azure-portal/supportability/per-vm-quota-requests.md) to request for VM core quota for your subscriptions. + By default, there is no pre-assigned quota for Intel SGX VM sizes for your Azure subscriptions. You should follow [these instructions](/azure/azure-portal/supportability/per-vm-quota-requests) to request for VM core quota for your subscriptions. ## Create an AKS cluster with enclave-aware confidential computing nodes and Intel SGX add-on You're now ready to deploy a test application. Create a file named *hello-world-enclave.yaml* and paste in the following YAML manifest. You can find this sample application code in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). This deployment assumes that you've deployed the *confcom* add-on. > [!NOTE]-> The following example pulls a public container image from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the image in a private Azure container registry. [Learn more about working with public images.](../container-registry/buffer-gate-public-content.md) +> The following example pulls a public container image from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the image in a private Azure container registry. [Learn more about working with public images.](/azure/container-registry/buffer-gate-public-content) ```yaml apiVersion: batch/v1 |
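The note in the row above recommends importing public content into a private Azure container registry before referencing it from the deployment manifest. A minimal Azure CLI sketch of that import, using hypothetical registry and image names rather than the sample's actual image:

```azurecli
# Import a public Docker Hub image into a private Azure Container Registry
# so the deployment pulls from ACR instead of Docker Hub.
# "myregistry" and the image references are hypothetical placeholders.
az acr import \
  --name myregistry \
  --source docker.io/library/hello-world:latest \
  --image samples/hello-world:latest
```

The manifest would then reference `myregistry.azurecr.io/samples/hello-world:latest` instead of the Docker Hub image.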
confidential-computing | Confidential Vm Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md | Confidential VMs support the following VM sizes: - NVIDIA H100 Tensor Core GPU powered NCCadsH100v5-series ### OS support+OS images for confidential VMs must meet specific security requirements. These qualified images are designed to support an optional confidential OS disk encryption and ensure isolation from the underlying cloud infrastructure. Meeting these requirements helps protect sensitive data and maintain system integrity. + Confidential VMs support the following OS options: | Linux | Windows Client | Windows Server | Confidential VMs support the following OS options: | **Ubuntu** | **Windows 11**| **Windows Server Datacenter** | | 20.04 LTS (AMD SEV-SNP Only) | 21H2, 21H2 Pro, 21H2 Enterprise, 21H2 Enterprise N, 21H2 Enterprise Multi-session | 2019 Server Core | | 22.04 LTS | 22H2, 22H2 Pro, 22H2 Enterprise, 22H2 Enterprise N, 22H2 Enterprise Multi-session | 2019 Datacenter |-| | 23H2, 23H2 Pro, 23H2 Enterprise, 23H2 Enterprise N, 23H2 Enterprise Multi-session | 2022 Server Core| +| 24.04 LTS | 23H2, 23H2 Pro, 23H2 Enterprise, 23H2 Enterprise N, 23H2 Enterprise Multi-session | 2022 Server Core| | **RHEL** | **Windows 10** | 2022 Azure Edition| | 9.4 (AMD SEV-SNP Only) | 22H2, 22H2 Pro, 22H2 Enterprise, 22H2 Enterprise N, 22H2 Enterprise Multi-session | 2022 Azure Edition Core|-| [9.3 <span class="pill purple">Preview (Intel TDX Only)](https://aka.ms/tdx-rhel-93-preview)</span>| | 2022 Datacenter | +| | | 2022 Datacenter | | **SUSE (Tech Preview)** | | | | [15 SP5 <span class="pill purple">(Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)</span>| | | | [15 SP5 for SAP <span class="pill purple">(Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)</span> | | | |
confidential-computing | Gpu Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/gpu-options.md | To deploy a confidential GPU VM instance, consider a [pay-as-you-go subscription You might need to increase the cores quota in your Azure subscription from the default value. Default limits vary depending on your subscription category. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the confidential VM sizes. -To request a quota increase, [open an online customer support request](../azure-portal/supportability/per-vm-quota-requests.md). +To request a quota increase, [open an online customer support request](/azure/azure-portal/supportability/per-vm-quota-requests). If you have large-scale capacity needs, contact Azure Support. Azure quotas are credit limits, not capacity guarantees. You only incur charges for cores that you use. |
confidential-computing | How To Fortanix Confidential Computing Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-fortanix-confidential-computing-manager.md | Fortanix is a third-party software vendor with products and services built on to 4. In the CCM node agent form, fill all the required fields. Paste the join token that you copied in Step 2 in **Join Token**. Select **Review + submit** to confirm. - For more information on how to enroll a CCM compute node, see [Enroll Compute Node](https://support.fortanix.com/hc/en-us/articles/360043085652-User-s-Guide-Compute-Nodes). + For more information on how to enroll a CCM compute node, see [Enroll Compute Node](https://support.fortanix.com/docs/users-guide-compute-nodes). :::image type="content" source="media/how-to-fortanix-confidential-computing-manager/enroll-compute-node.png" alt-text="Screenshot that shows enrolling the compute node."::: |
confidential-computing | Virtual Machine Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-options.md | To deploy a confidential VM instance, consider a [pay-as-you-go subscription](/a You might need to increase the cores quota in your Azure subscription from the default value. Default limits vary depending on your subscription category. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the confidential VM sizes. -To request a quota increase, [open an online customer support request](../azure-portal/supportability/per-vm-quota-requests.md). +To request a quota increase, [open an online customer support request](/azure/azure-portal/supportability/per-vm-quota-requests). If you have large-scale capacity needs, contact Azure Support. Azure quotas are credit limits, not capacity guarantees. You only incur charges for cores that you use. Confidential VMs run on specialized hardware, so you can only [resize confidenti It's not possible to resize a non-confidential VM to a confidential VM. -### Guest OS support --OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from underlying cloud infrastructure. These images include: --- Ubuntu 20.04 LTS (AMD SEV-SNP supported only)-- Ubuntu 22.04 LTS-- Red Hat Enterprise Linux 9.3 (AMD SEV-SNP supported only)-- Windows Server 2019 Datacenter - x64 Gen 2 (AMD SEV-SNP supported only)-- Windows Server 2019 Datacenter Server Core - x64 Gen 2 (AMD SEV-SNP supported only)-- Windows Server 2022 Datacenter - x64 Gen 2-- Windows Server 2022 Datacenter: Azure Edition Core - x64 Gen 2-- Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2-- Windows Server 2022 Datacenter Server Core - x64 Gen 2-- Windows 11 Enterprise N, version 22H2 -x64 Gen 2-- Windows 11 Pro, version 22H2 ZH-CN -x64 Gen 2-- Windows 11 Pro, version 22H2 -x64 Gen 2-- Windows 11 Pro N, version 22H2 -x64 Gen 2-- Windows 11 Enterprise, version 22H2 -x64 Gen 2-- Windows 11 Enterprise multi-session, version 22H2 -x64 Gen 2--As we work to onboard more OS images with confidential OS disk encryption, there are various images available in early preview that can be tested. You can sign up below: --- [Red Hat Enterprise Linux 9.3 (Support for Intel TDX)](https://aka.ms/tdx-rhel-93-preview)-- [SUSE Enterprise Linux 15 SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)-- [SUSE Enterprise Linux 15 SAP SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)--For more information about supported and unsupported VM scenarios, see [support for generation 2 VMs on Azure](/azure/virtual-machines/generation-2). - ### High availability and disaster recovery You're responsible for creating high availability and disaster recovery solutions for your confidential VMs. Planning for these scenarios helps minimize and avoid prolonged downtime. |
confidential-computing | Virtual Machine Solutions Sgx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-sgx.md | Find the pricing for **DCsv2**, **DCsv3**, and **DCdsv3** VMs on the [Azure VMs ### Cores quota -You might need to increase the cores quota in your Azure subscription from the default value. Your [subscription might also limit the number of cores](#azure-subscription) that you can deploy in certain VM size families, including **DCsv2-Series**. You can [request a quota increase](../azure-portal/supportability/per-vm-quota-requests.md) at no charge. Default limits might be different based on your subscription category. +You might need to increase the cores quota in your Azure subscription from the default value. Your [subscription might also limit the number of cores](#azure-subscription) that you can deploy in certain VM size families, including **DCsv2-Series**. You can [request a quota increase](/azure/azure-portal/supportability/per-vm-quota-requests) at no charge. Default limits might be different based on your subscription category. If you have large-scale capacity needs, contact Azure Support. Azure quotas are credit limits, not capacity guarantees. Whatever your quota, you're only charged for cores that you use. |
container-apps | Aspire Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/aspire-dashboard.md | zone_pivot_groups: azure-azd-cli-portal # Read real time app data with .NET Aspire Dashboard in Azure Container Apps (preview) -The [.NET Aspire Dashboard](/dotnet/aspire/fundamentals/dashboard/overview) provides information on how your app is running both on the environment and individual app level, which can help you detect anomalies in real time and debug errors. The dashboard shows data for all Container Apps that is part of your project, regardless of language or runtime. +The [.NET Aspire Dashboard](/dotnet/aspire/fundamentals/dashboard/overview) displays live data about how applications and other resources are running within an environment. The following image is a screenshot of a trace visualization generated by the .NET Aspire Dashboard. :::image type="content" source="media/aspire-dashboard/aspire-dashboard-trace.png" alt-text="Screenshot of a .NET Aspire Dashboard trace window."::: +The information displayed on the dashboard comes from two sources: ++- OpenTelemetry (OTel), an open-source library for tracking **traces**, **metrics**, and **logs** for your applications. [This documentation](/dotnet/aspire/fundamentals/telemetry) provides more information on how the Aspire dashboard integrates with OTel. ++ - **Traces** track the lifecycle of requests - how a request is received and processed as it moves between different parts of the application. This information is useful for identifying bottlenecks and other issues. + - **Metrics** are real-time measurements of the general health and performance of the infrastructure - for example, how many CPU resources are consumed and how many transactions that the application handles per second. This information is useful for understanding the responsiveness of your app or identifying early warning signs of performance issues. + - **Logs** record all events and errors that take place during the running of the application. This information is useful for finding when a problem occurred and correlated events. ++- The Kubernetes API provides information about the underlying Kubernetes pods on which your application is running on and their logs. ++The dashboard is secured against unauthorized access and modification. To use the dashboard, a user must have 'Write' permissions or higher - in other words, they must be a Contributor or Owner on the environment. + ## Enable the dashboard ::: zone pivot="portal" |
container-apps | Azure Arc Enable Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md | -With [Azure Arc-enabled Kubernetes clusters](../azure-arc/kubernetes/overview.md), you can create a [Container Apps enabled custom location](azure-arc-create-container-app.md) in your on-premises or cloud Kubernetes cluster to deploy your Azure Container Apps applications as you would any other region. +With [Azure Arc-enabled Kubernetes clusters](/azure/azure-arc/kubernetes/overview), you can create a [Container Apps enabled custom location](azure-arc-create-container-app.md) in your on-premises or cloud Kubernetes cluster to deploy your Azure Container Apps applications as you would any other region. This tutorial will show you how to enable Azure Container Apps on your Arc-enabled Kubernetes cluster. In this tutorial you will: This tutorial will show you how to enable Azure Container Apps on your Arc-enabl - An Azure account with an active subscription. - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). - Install the [Azure CLI](/cli/azure/install-azure-cli).-- Access to a public or private container registry, such as the [Azure Container Registry](../container-registry/index.yml).+- Access to a public or private container registry, such as the [Azure Container Registry](/azure/container-registry/). ## Setup $LOCATION="eastus" ## Create a connected cluster -The following steps help you get started understanding the service, but for production deployments, they should be viewed as illustrative, not prescriptive. See [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) for general instructions on creating an Azure Arc-enabled Kubernetes cluster. +The following steps help you get started understanding the service, but for production deployments, they should be viewed as illustrative, not prescriptive. See [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster) for general instructions on creating an Azure Arc-enabled Kubernetes cluster. 1. Create a cluster in Azure Kubernetes Service. To learn more about these pods and their role in the system, see [Azure Arc over ## Create a custom location -The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure location that you assign to the Azure Container Apps connected environment. +The [custom location](/azure/azure-arc/kubernetes/custom-locations) is an Azure location that you assign to the Azure Container Apps connected environment. 1. Set the following environment variables to the desired name of the custom location and for the ID of the Azure Arc-connected cluster. The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure l > [!NOTE]- > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](../azure-arc/kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster). This is required if logged into the CLI using a Service Principal or if you are logged in with a Microsoft Entra user with restricted permissions on the cluster resource. + > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](/azure/azure-arc/kubernetes/custom-locations#enable-custom-locations-on-your-cluster). This is required if logged into the CLI using a Service Principal or if you are logged in with a Microsoft Entra user with restricted permissions on the cluster resource. > 1. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, rerun the command after a minute. |
container-apps | Azure Arc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md | Learn to set up your Kubernetes cluster for Container Apps, via [Set up an Azure As you configure your cluster, you carry out these actions: -- **The connected cluster**, which is an Azure projection of your Kubernetes infrastructure. For more information, see [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md).+- **The connected cluster**, which is an Azure projection of your Kubernetes infrastructure. For more information, see [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview). -- **A cluster extension**, which is a subresource of the connected cluster resource. The Container Apps extension [installs the required resources into your connected cluster](#resources-created-by-the-container-apps-extension). For more information about cluster extensions, see [Cluster extensions on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-extensions.md).+- **A cluster extension**, which is a subresource of the connected cluster resource. The Container Apps extension [installs the required resources into your connected cluster](#resources-created-by-the-container-apps-extension). For more information about cluster extensions, see [Cluster extensions on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-extensions). -- **A custom location**, which bundles together a group of extensions and maps them to a namespace for created resources. For more information, see [Custom locations on top of Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-custom-locations.md).+- **A custom location**, which bundles together a group of extensions and maps them to a namespace for created resources. For more information, see [Custom locations on top of Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-custom-locations). - **A Container Apps connected environment**, which enables configuration common across apps but not related to cluster operations. Conceptually, it's deployed into the custom location resource, and app developers create apps into this environment. |
container-apps | Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-pipelines.md | steps: ``` > [!IMPORTANT]-> If you're building a container image in a separate step, make sure you use a unique tag such as the build ID instead of a stable tag like `latest`. For more information, see [Image tag best practices](../container-registry/container-registry-image-tag-version.md). +> If you're building a container image in a separate step, make sure you use a unique tag such as the build ID instead of a stable tag like `latest`. For more information, see [Image tag best practices](/azure/container-registry/container-registry-image-tag-version). ### Authenticate with Azure Container Registry |
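The tagging guidance in the row above can be illustrated with a single Azure CLI step inside the pipeline. This is a rough sketch, assuming the pipeline agent is already signed in to Azure; the registry and image names are hypothetical, and `BUILD_BUILDID` is the environment variable Azure Pipelines sets for the run's build ID:

```azurecli
# Build and push with a unique tag per pipeline run instead of a stable tag like "latest".
# BUILD_BUILDID is populated by Azure Pipelines; registry and image names are placeholders.
az acr build \
  --registry myregistry \
  --image my-container-app:$BUILD_BUILDID \
  .
```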
container-apps | Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md | The following example shows how to configure Azure Container Registry credential ``` > [!NOTE]-> Docker Hub [limits](https://docs.docker.com/docker-hub/download-rate-limit/) the number of Docker image downloads. When the limit is reached, containers in your app will fail to start. Use a registry with sufficient limits, such as [Azure Container Registry](../container-registry/container-registry-intro.md) to avoid this problem. +> Docker Hub [limits](https://docs.docker.com/docker-hub/download-rate-limit/) the number of Docker image downloads. When the limit is reached, containers in your app will fail to start. Use a registry with sufficient limits, such as [Azure Container Registry](/azure/container-registry/container-registry-intro) to avoid this problem. ### Managed identity with Azure Container Registry |
container-apps | Custom Domains Managed Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-managed-certificates.md | Azure Container Apps provides a free managed certificate for your custom domain. The requirements are: -- Your container app has HTTP ingress enabled and is publicly accessible.+- Enable HTTP ingress and ensure your container app is publicly accessible. -- For apex domains, you must have an A record pointing to your Container Apps environment's IP address.+- Must have an A record for apex domains that points to your Container Apps environment's IP address. -- For subdomains, you must have a CNAME record mapped directly to the container app's automatically generated domain name. Mapping to an intermediate CNAME value blocks certificate issuance and renewal. Examples of CNAME values are traffic managers, Cloudflare, and similar services.+- Establish a CNAME record for subdomains that maps directly to the container app's automatically generated domain name. Mapping to an intermediate CNAME value blocks certificate issuance and renewal. Examples of CNAME values are traffic managers, Cloudflare, and similar services. > [!NOTE] > To ensure the certificate issuance and subsequent renewals proceed successfully, all requirements must be met at all times when the managed certificate is assigned. The requirements are: 1. Navigate to your container app in the [Azure portal](https://portal.azure.com) -1. Verify that your app has HTTP ingress enabled by selecting **Ingress** in the *Settings* section. If ingress isn't enabled, enable it with these steps: +1. Verify that your app has HTTP ingress enabled by selecting **Ingress** in the *Settings* section. If ingress isn't enabled, enable it with these steps: 1. Set *HTTP Ingress* to **Enabled**. 1. Select the desired *Ingress traffic* setting. The requirements are: | Apex domain | A record | An apex domain is a domain at the root level of your domain. For example, if your DNS zone is `contoso.com`, then `contoso.com` is the apex domain. | | Subdomain | CNAME | A subdomain is a domain that is part of another domain. For example, if your DNS zone is `contoso.com`, then `www.contoso.com` is an example of a subdomain that can be configured in the zone. | -1. Using the DNS provider that is hosting your domain, create DNS records based on the *Hostname record type* you selected using the values shown in the *Domain validation* section. The records point the domain to your container app and verify that you are the owner. +1. Using the DNS provider that is hosting your domain, create DNS records based on the *Hostname record type* you selected using the values shown in the *Domain validation* section. The records point the domain to your container app and verify that you're the owner. - If you selected *A record*, create the following DNS records: The requirements are: 1. Once validation succeeds, select **Add**. - It may take several minutes to issue the certificate and add the domain to your container app. + It might take several minutes to issue the certificate and add the domain to your container app. 1. Once the operation is complete, you see your domain name in the list of custom domains with a status of *Secured*. Navigate to your domain to verify that it's accessible. Container Apps supports apex domains and subdomains. Each domain type requires a - If you're configuring an *A record*, replace `<VALIDATION_METHOD>` with `HTTP`. - If you're configuring a *CNAME*, replace `<VALIDATION_METHOD>` with `CNAME`. - It may take several minutes to issue the certificate and add the domain to your container app. + It might take several minutes to issue the certificate and add the domain to your container app. 1. Once the operation is complete, navigate to your domain to verify that it's accessible. |
container-apps | Get Started Existing Container Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md | This article demonstrates how to deploy an existing container to Azure Container - An Azure account with an active subscription. - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). - Install the [Azure CLI](/cli/azure/install-azure-cli).-- Access to a public or private container registry, such as the [Azure Container Registry](../container-registry/index.yml).+- Access to a public or private container registry, such as the [Azure Container Registry](/azure/container-registry/). [!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)] |
container-apps | Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions.md | steps: ``` > [!IMPORTANT]-> If you're building a container image in a separate step, make sure you use a unique tag such as the commit SHA instead of a stable tag like `latest`. For more information, see [Image tag best practices](../container-registry/container-registry-image-tag-version.md). +> If you're building a container image in a separate step, make sure you use a unique tag such as the commit SHA instead of a stable tag like `latest`. For more information, see [Image tag best practices](/azure/container-registry/container-registry-image-tag-version). ### Authenticate with Azure Container Registry |
container-apps | Java Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-get-started.md | The following image is a screenshot of how your application looks once deployed | Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).| | Container Apps CLI extension | Use version `0.3.47` or higher. Use the `az extension add --name containerapp --upgrade --allow-preview` command to install the latest version. | | Java | Install the [Java Development Kit](/java/openjdk/install). Use version 17 or later. |-| Maven | Install the [Maven](https://maven.apache.org/download.cgi).| +| Apache Maven | Download and install [Apache Maven](https://maven.apache.org/download.cgi).| ## Prepare the project cd spring-petclinic Clean the Maven build area, compile the project's code, and create a JAR file, all while skipping any tests. ```bash-mvn clean package -DskipTests +mvn clean verify ``` After you execute the build command, a file named *petclinic.jar* is generated in the */target* folder. cd spring-framework-petclinic Clean the Maven build area, compile the project's code, and create a WAR file, all while skipping any tests. ```bash-mvn clean package -DskipTests +mvn clean verify ``` After you execute the build command, a file named *petclinic.war* is generated in the */target* folder. |
container-apps | Key Vault Certificates Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/key-vault-certificates-manage.md | You can set up Azure Key Vault to centrally manage your container app's TLS/SSL An Azure Key Vault resource is required to store your certificate. See [Import a certificate in Azure Key Vault](/azure/key-vault/certificates/tutorial-import-certificate?tabs=azure-portal) or [Configure certificate auto-rotation in Key Vault](/azure/key-vault/certificates/tutorial-rotate-certificates) to create a Key Vault and add a certificate. +## Exceptions ++While the majority of certificate types are supported, there are a few exceptions to keep in mind. +- ECDSA p384 and p521 certificates aren't supported. +- Due to how App Service certificates are saved in Key Vault, they can't be imported using the Azure portal and require the Azure CLI. + ## Enable managed identity for Container Apps environment Azure Container Apps uses an environment level managed identity to access your Key Vault and import your certificate. To enable system-assigned managed identity, follow these steps: |
container-apps | Managed Identity Image Pull | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md | The method to configure a system-assigned managed identity in the Azure portal i - An Azure account with an active subscription. - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).- A private Azure Container Registry containing an image you want to pull. See [Create a private Azure Container Registry](../container-registry/container-registry-get-started-portal.md#create-a-container-registry).+ A private Azure Container Registry containing an image you want to pull. See [Create a private Azure Container Registry](/azure/container-registry/container-registry-get-started-portal#create-a-container-registry). ### Create a container app This article describes how to configure your container app to use managed identi | Azure account | An Azure account with an active subscription. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/). | | Azure CLI | If using Azure CLI, [install the Azure CLI](/cli/azure/install-azure-cli) on your local machine. | | Azure PowerShell | If using PowerShell, [install the Azure PowerShell](/powershell/azure/install-azure-powershell) on your local machine. Ensure that the latest version of the Az.App module is installed by running the command `Install-Module -Name Az.App`. |-|Azure Container Registry | A private Azure Container Registry containing an image you want to pull. [Quickstart: Create a private container registry using the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) or [Quickstart: Create a private container registry using Azure PowerShell](../container-registry/container-registry-get-started-powershell.md)| +|Azure Container Registry | A private Azure Container Registry containing an image you want to pull. [Quickstart: Create a private container registry using the Azure CLI](/azure/container-registry/container-registry-get-started-azure-cli) or [Quickstart: Create a private container registry using Azure PowerShell](/azure/container-registry/container-registry-get-started-powershell)| [!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)] This article describes how to use a Bicep template to configure your container a - If you don't have one, you can [create one for free](https://azure.microsoft.com/free/). - If using Azure CLI, [install the Azure CLI](/cli/azure/install-azure-cli) on your local machine. - If using PowerShell, [install the Azure PowerShell](/powershell/azure/install-azure-powershell) on your local machine. Ensure that the latest version of the Az.App module is installed by running the command `Install-Module -Name Az.App`.- A private Azure Container Registry containing an image you want to pull. To create a container registry and push an image to it, see [Quickstart: Create a private container registry using the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) or [Quickstart: Create a private container registry using Azure PowerShell](../container-registry/container-registry-get-started-powershell.md)+- A private Azure Container Registry containing an image you want to pull. To create a container registry and push an image to it, see [Quickstart: Create a private container registry using the Azure CLI](/azure/container-registry/container-registry-get-started-azure-cli) or [Quickstart: Create a private container registry using Azure PowerShell](/azure/container-registry/container-registry-get-started-powershell) [!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)] |
container-apps | Storage Mounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md | A container app has access to different types of storage. A single app can take | Storage type | Description | Persistence | Usage example | |--|--|--|-| [Container-scoped storage](#container-scoped-storage) | Ephemeral storage available to a running container | Data is available until container shuts down | Writing a local app cache. | -| [Replica-scoped storage](#replica-scoped-storage) | Ephemeral storage for sharing files between containers in the same replica | Data is available until replica shuts down | The main app container writing log files that are processed by a sidecar container. | +| [Container-scoped storage](#container-scoped-storage) | Ephemeral storage available to a running container | Data is available until container shuts down | Writing a local app cache. | +| [Replica-scoped storage](#replica-scoped-storage) | Ephemeral storage for sharing files between containers in the same replica | Data is available until replica shuts down | The main app container writing log files that a sidecar container processes. | | [Azure Files](#azure-files) | Permanent storage | Data is persisted to Azure Files | Writing files to a file share to make data accessible by other systems. | ## Ephemeral storage -A container app can read and write temporary data to ephemeral storage. Ephemeral storage can be scoped to a container or a replica. The total amount of container-scoped and replica-scoped storage available to each replica depends on the total amount of vCPUs allocated to the replica. +A container app can read and write temporary data to ephemeral storage. Ephemeral storage can be scoped to a container or a replica. The total amount of container-scoped and replica-scoped storage available to each replica depends on the total number of vCPUs allocated to the replica. | vCPUs | Total ephemeral storage | |--|--| Azure Files storage has the following characteristics: * All containers that mount the share can access files written by any other container or method. * More than one Azure Files volume can be mounted in a single container. -Azure Files supports both SMB and NFS protocols. You can mount an Azure Files share using either protocol. The file share you define in the environment must be configured with the same protocol used by the file share in the storage account. +Azure Files supports both SMB (Server Message Block) and NFS (Network File System) protocols. You can mount an Azure Files share using either protocol. The file share you define in the environment must be configured with the same protocol used by the file share in the storage account. > [!NOTE] > Support for mounting NFS shares in Azure Container Apps is in preview. Azure Files supports both SMB and NFS protocols. You can mount an Azure Files sh To enable Azure Files storage in your container, you need to set up your environment and container app as follows: * Create a storage definition in the Container Apps environment.-* If you are using NFS, your environment must be configured with a custom VNet and the storage account must be configured to allow access from the VNet. For more information, see [NFS file shares in Azure Files +* If you're using NFS, your environment must be configured with a custom VNet and the storage account must be configured to allow access from the VNet. For more information, see [NFS file shares in Azure Files ](../storage/files/files-nfs-protocol.md). 
* If your environment is configured with a custom VNet, you must allow ports 445 and 2049 in the network security group (NSG) associated with the subnet. * Define a volume of type `AzureFile` (SMB) or `NfsAzureFile` (NFS) in a revision. |
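As an illustration of the storage definition step described above, here is a hedged Azure CLI sketch that registers an SMB Azure Files share in a Container Apps environment; the environment, resource group, storage account, share, and mount names are placeholders, and the account key is assumed to be available in an environment variable.

```azurecli
# Register an Azure Files share as a storage definition in the Container Apps environment
# (all resource names are placeholders)
az containerapp env storage set \
  --name my-environment \
  --resource-group my-rg \
  --storage-name mystoragemount \
  --azure-file-account-name mystorageaccount \
  --azure-file-account-key $STORAGE_ACCOUNT_KEY \
  --azure-file-share-name myfileshare \
  --access-mode ReadWrite
```

A volume of type `AzureFile` (or `NfsAzureFile` for NFS) that references this storage definition is then declared in the app's revision and mounted into the container, as the article describes.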
container-apps | Tutorial Code To Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-code-to-cloud.md | $acr = New-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $ACRName ## Build your application -With [ACR tasks](../container-registry/container-registry-tasks-overview.md), you can build and push the docker image for the album API without installing Docker locally. +With [ACR tasks](/azure/container-registry/container-registry-tasks-overview), you can build and push the docker image for the album API without installing Docker locally. ### Build the container with ACR |
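As a minimal sketch of the ACR Tasks build step mentioned above, assuming a Dockerfile in the current directory and a registry name held in a placeholder `$ACR_NAME` variable:

```azurecli
# Build and push the image with an ACR quick task (no local Docker installation required);
# the registry and image names are placeholders, and '.' is the local build context
az acr build --registry $ACR_NAME --image album-api:v1 .
```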
container-apps | Tutorial Java Quarkus Connect Managed Identity Postgresql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md | cd quarkus-quickstarts/hibernate-orm-panache-quickstart --location $LOCATION ``` -1. Create a container app with your app image by running the following command. Replace the placeholders with your values. To find the container registry admin account details, see [Authenticate with an Azure container registry](../container-registry/container-registry-authentication.md) +1. Create a container app with your app image by running the following command. Replace the placeholders with your values. To find the container registry admin account details, see [Authenticate with an Azure container registry](/azure/container-registry/container-registry-authentication) ```azurecli-interactive CONTAINER_IMAGE_NAME=quarkus-postgres-passwordless-app:v1 |
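If you need the registry admin account details referenced in that step, one hedged way to look them up is shown below; the registry name variable is a placeholder, and the admin user is assumed to be enabled on the registry.

```azurecli
# Retrieve the registry's admin user name and first password
# (requires the admin user to be enabled; $REGISTRY_NAME is a placeholder)
az acr credential show \
  --name $REGISTRY_NAME \
  --query "{username:username, password:passwords[0].value}" \
  --output table
```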
container-registry | Allow Access Trusted Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/allow-access-trusted-services.md | - Title: Access network-restricted registry using trusted Azure service -description: Enable a trusted Azure service instance to securely access a network-restricted container registry to pull or push images ---- Previously updated : 10/31/2023---# Allow trusted services to securely access a network-restricted container registry --Azure Container Registry can allow select trusted Azure services to access a registry that's configured with network access rules. When trusted services are allowed, a trusted service instance can securely bypass the registry's network rules and perform operations such as pull or push images. This article explains how to enable and use trusted services with a network-restricted Azure container registry. --Use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.18 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --## Limitations --* Certain registry access scenarios with trusted services require a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Except where noted that a user-assigned managed identity is supported, only a system-assigned identity may be used. -* Allowing trusted services doesn't apply to a container registry configured with a [service endpoint](container-registry-vnet.md). The feature only affects registries that are restricted with a [private endpoint](container-registry-private-link.md) or that have [public IP access rules](container-registry-access-selected-networks.md) applied. --## About trusted services --Azure Container Registry has a layered security model, supporting multiple network configurations that restrict access to a registry, including: --* [Private endpoint with Azure Private Link](container-registry-private-link.md). When configured, a registry's private endpoint is accessible only to resources within the virtual network, using private IP addresses. -* [Registry firewall rules](container-registry-access-selected-networks.md), which allow access to the registry's public endpoint only from specific public IP addresses or address ranges. You can also configure the firewall to block all access to the public endpoint when using private endpoints. --When deployed in a virtual network or configured with firewall rules, a registry denies access to users or services from outside those sources. --Several multi-tenant Azure services operate from networks that can't be included in these registry network settings, preventing them from performing operations such as pull or push images to the registry. By designating certain service instances as "trusted", a registry owner can allow select Azure resources to securely bypass the registry's network settings to perform registry operations. --### Trusted services --Instances of the following services can access a network-restricted container registry if the registry's **allow trusted services** setting is enabled (the default). More services will be added over time. 
--Where indicated, access by the trusted service requires additional configuration of a managed identity in a service instance, assignment of an [RBAC role](container-registry-roles.md), and authentication with the registry. For example steps, see [Trusted services workflow](#trusted-services-workflow), later in this article. --|Trusted service |Supported usage scenarios | Configure managed identity with RBAC role | -|||| -| Azure Container Instances | [Deploy to Azure Container Instances from Azure Container Registry using a managed identity](/azure/container-instances/using-azure-container-registry-mi) | Yes, either system-assigned or user-assigned identity | -| Microsoft Defender for Cloud | Vulnerability scanning by [Microsoft Defender for container registries](scan-images-defender.md) | No | -|ACR Tasks | [Access the parent registry or a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) | Yes | -|Machine Learning | [Deploy](/azure/machine-learning/how-to-deploy-custom-container) or [train](/azure/machine-learning/how-to-train-with-custom-image) a model in a Machine Learning workspace using a custom Docker container image | Yes | -|Azure Container Registry | [Import images](container-registry-import-images.md) to or from a network-restricted Azure container registry | No | --> [!NOTE] -> Currently, enabling the allow trusted services setting doesn't apply to App Service. --## Allow trusted services - CLI --By default, the allow trusted services setting is enabled in a new Azure container registry. Disable or enable the setting by running the [az acr update](/cli/azure/acr#az-acr-update) command. --To disable: --```azurecli -az acr update --name myregistry --allow-trusted-services false -``` --To enable the setting in an existing registry or a registry where it's already disabled: --```azurecli -az acr update --name myregistry --allow-trusted-services true -``` --## Allow trusted services - portal --By default, the allow trusted services setting is enabled in a new Azure container registry. --To disable or re-enable the setting in the portal: --1. In the portal, navigate to your container registry. -1. Under **Settings**, select **Networking**. -1. In **Allow public network access**, select **Selected networks** or **Disabled**. -1. Do one of the following: - * To disable access by trusted services, under **Firewall exception**, uncheck **Allow trusted Microsoft services to access this container registry**. - * To allow trusted services, under **Firewall exception**, check **Allow trusted Microsoft services to access this container registry**. -1. Select **Save**. --## Trusted services workflow --Here's a typical workflow to enable an instance of a trusted service to access a network-restricted container registry. This workflow is needed when a service instance's managed identity is used to bypass the registry's network rules. --1. Enable a managed identity in an instance of one of the [trusted services](#trusted-services) for Azure Container Registry. -1. Assign the identity an [Azure role](container-registry-roles.md) to your registry. For example, assign the ACRPull role to pull container images. -1. In the network-restricted registry, configure the setting to allow access by trusted services. -1. Use the identity's credentials to authenticate with the network-restricted registry. -1. Pull images from the registry, or perform other operations allowed by the role. 
--### Example: ACR Tasks --The following example demonstrates using ACR Tasks as a trusted service. See [Cross-registry authentication in an ACR task using an Azure-managed identity](container-registry-tasks-cross-registry-authentication.md) for task details. --1. Create or update an Azure container registry. -[Create](container-registry-tasks-cross-registry-authentication.md#option-2-create-task-with-system-assigned-identity) an ACR task. - * Enable a system-assigned managed identity when creating the task. - * Disable default auth mode (`--auth-mode None`) of the task. -1. Assign the task identity [an Azure role to access the registry](container-registry-tasks-authentication-managed-identity.md#3-grant-the-identity-permissions-to-access-other-azure-resources). For example, assign the AcrPush role, which has permissions to pull and push images. -1. [Add managed identity credentials for the registry](container-registry-tasks-authentication-managed-identity.md#4-optional-add-credentials-to-the-task) to the task. -1. To confirm that the task bypasses network restrictions, [disable public access](container-registry-access-selected-networks.md#disable-public-network-access) in the registry. -1. Run the task. If the registry and task are configured properly, the task runs successfully, because the registry allows access. --To test disabling access by trusted services: --1. Disable the setting to allow access by trusted services. -1. Run the task again. In this case, the task run fails, because the registry no longer allows access by the task. --## Next steps --* To restrict access to a registry using a private endpoint in a virtual network, see [Configure Azure Private Link for an Azure container registry](container-registry-private-link.md). -* To set up registry firewall rules, see [Configure public IP network rules](container-registry-access-selected-networks.md). |
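As a hedged sketch of the role-assignment step in the trusted services workflow above, assuming `$PRINCIPAL_ID` holds the managed identity's principal ID and `myregistry` is a placeholder registry name:

```azurecli
# Look up the registry's resource ID (registry name is a placeholder)
REGISTRY_ID=$(az acr show --name myregistry --query id --output tsv)

# Grant the service instance's managed identity pull rights on the registry
az role assignment create --assignee $PRINCIPAL_ID --role AcrPull --scope $REGISTRY_ID

# Confirm the registry still allows access by trusted services (enabled by default)
az acr update --name myregistry --allow-trusted-services true
```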
container-registry | Anonymous Pull Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/anonymous-pull-access.md | - Title: Enable anonymous pull access -description: Optionally enable anonymous pull access to make content in your Azure container registry publicly available ---- Previously updated : 10/31/2023-#customer intent: As a user, I want to learn how to enable anonymous pull access in Azure container registry so that I can make my registry content publicly available. ---# Make your container registry content publicly available --Setting up an Azure container registry for anonymous (unauthenticated) pull access is an optional feature that allows any user with internet access the ability to pull any content from the registry. --Anonymous pull access is a preview feature, available in the Standard and Premium [service tiers](container-registry-skus.md). To configure anonymous pull access, update a registry using the Azure CLI (version 2.21.0 or later). To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --## About anonymous pull access --By default, access to pull or push content from an Azure container registry is only available to [authenticated](container-registry-authentication.md) users. Enabling anonymous (unauthenticated) pull access makes all registry content publicly available for read (pull) actions. Anonymous pull access can be used in scenarios that do not require user authentication such as distributing public container images. --- Enable anonymous pull access by updating the properties of an existing registry.-- After enabling anonymous pull access, you may disable that access at any time.-- Only data-plane operations are available to unauthenticated clients.-- The registry may throttle a high rate of unauthenticated requests.-- If you previously authenticated to the registry, make sure you clear the credentials before attempting an anonymous pull operation.--> [!WARNING] -> Anonymous pull access currently applies to all repositories in the registry. If you manage repository access using [repository-scoped tokens](container-registry-repository-scoped-permissions.md), all users may pull from those repositories in a registry enabled for anonymous pull. We recommend deleting tokens when anonymous pull access is enabled. ---## Configure anonymous pull access --Users can enable, disable and query the status of anonymous pull access using the Azure CLI. The following examples demonstrate how to enable, disable, and query the status of anonymous pull access. --### Enable anonymous pull access --Update a registry using the [az acr update](/cli/azure/acr#az-acr-update) command and pass the `--anonymous-pull-enabled` parameter. By default, anonymous pull is disabled in the registry. - -```azurecli -az acr update --name myregistry --anonymous-pull-enabled -``` --> [!IMPORTANT] -> If you previously authenticated to the registry with Docker credentials, run `docker logout` to ensure that you clear the existing credentials before attempting anonymous pull operations. Otherwise, you might see an error message similar to "pull access denied". -> Remember to always specify the fully qualified registry name (all lowercase) when using `docker login` and tagging images for pushing to your registry. In the examples provided, the fully qualified name is `myregistry.azurecr.io`. 
--If you've previously authenticated to the registry with Docker credentials, run the following command to clear any existing credentials. - - ```azurecli - docker logout myregistry.azurecr.io - ``` --Clearing cached credentials helps ensure that anonymous pull operations succeed. If stale credentials remain, you might see an error message similar to "pull access denied." ---### Disable anonymous pull access --Disable anonymous pull access by setting `--anonymous-pull-enabled` to `false`. --```azurecli -az acr update --name myregistry --anonymous-pull-enabled false -``` --### Query the status of anonymous pull access --Users can query the status of "anonymous-pull" using the [az acr show command][az-acr-show] with the `--query` parameter. Here's an example: --```azurecli-interactive -az acr show -n <registry_name> --query anonymousPullEnabled -``` --The command returns a boolean value indicating whether anonymous pull access is enabled (`true`) or disabled (`false`), giving you a quick way to verify the setting on your registry. --## Next steps --* Learn about using [repository-scoped tokens](container-registry-repository-scoped-permissions.md). -* Learn about options to [authenticate](container-registry-authentication.md) to an Azure container registry. ---[az-acr-show]: /cli/azure/acr#az-acr-show |
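To verify the anonymous pull configuration above end to end, a minimal sketch (registry and repository names are placeholders) is to clear cached credentials and then pull without authenticating:

```azurecli
# Clear any cached registry credentials, then pull anonymously
# (registry and repository names are placeholders)
docker logout myregistry.azurecr.io
docker pull myregistry.azurecr.io/hello-world:latest
```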
container-registry | Authenticate Aks Cross Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-aks-cross-tenant.md | - Title: Authenticate from AKS cluster to Azure container registry in different AD tenant -description: Configure an AKS cluster's service principal with permissions to access your Azure container registry in a different AD tenant ---- Previously updated : 10/31/2023---# Pull images from a container registry to an AKS cluster in a different Microsoft Entra tenant --In some cases, you might have your Azure AKS cluster in one Microsoft Entra tenant and your Azure container registry in a different tenant. This article walks through the steps to enable cross-tenant authentication using the AKS service principal credential to pull from the container registry. --> [!NOTE] -> You can't attach the registry and authenticate using an AKS managed identity when the cluster and the container registry are in different tenants. --## Scenario overview --Assumptions for this example: --* The AKS cluster is in **Tenant A** and the Azure container registry is in **Tenant B**. -* The AKS cluster is configured with service principal authentication in **Tenant A**. Learn more about how to create and use a [service principal for your AKS cluster](/azure/aks/kubernetes-service-principal). --You need at least the Contributor role in the AKS cluster's subscription and the Owner role in the container registry's subscription. --You use the following steps to: --* Create a new multitenant app (service principal) in **Tenant A**. -* Provision the app in **Tenant B**. -* Configure the service principal to pull from the registry in **Tenant B** -* Update the AKS cluster in **Tenant A** to authenticate using the new service principal --## Step-by-step instructions --<a name='step-1-create-multitenant-azure-ad-application'></a> --### Step 1: Create multitenant Microsoft Entra application --1. Sign in to the [Azure portal](https://portal.azure.com/) in **Tenant A**. -1. Search for and select **Microsoft Entra ID**. -1. Under **Manage**, select **App registrations > + New registration**. -1. In **Supported account types**, select **Accounts in any organizational directory**. -1. In **Redirect URI**, enter *https://www.microsoft.com*. -1. Select **Register**. -1. On the **Overview** page, take note of the **Application (client) ID**. It will be used in Step 2 and Step 4. -- :::image type="content" source="media/authenticate-kubernetes-cross-tenant/service-principal-overview.png" alt-text="Service principal application ID"::: -1. In **Certificates & secrets**, under **Client secrets**, select **+ New client secret**. -1. Enter a **Description** such as *Password* and select **Add**. -1. In **Client secrets**, take note of the value of the client secret. You use it to update the AKS cluster's service principal in Step 4. -- :::image type="content" source="media/authenticate-kubernetes-cross-tenant/configure-client-secret.png" alt-text="Configure client secret"::: --### Step 2: Provision the service principal in the ACR tenant --1. Open the following link using an admin account in **Tenant B**. Where indicated, insert the **ID of Tenant B** and the **application ID** (client ID) of the multitenant app. -- ```console - https://login.microsoftonline.com/<Tenant B ID>/oauth2/authorize?client_id=<Multitenant application ID>&response_type=code&redirect_uri=<redirect url> - ``` --1. Select **Consent on behalf of your organization** and then **Accept**. 
- - :::image type="content" source="media/authenticate-kubernetes-cross-tenant/multitenant-app-consent.png" alt-text="Grant tenant access to application"::: - -### Step 3: Grant service principal permission to pull from registry --In **Tenant B**, assign the AcrPull role to the service principal, scoped to the target container registry. You can use the [Azure portal](../role-based-access-control/role-assignments-portal.yml) or other tools to assign the role. For example steps using the Azure CLI, see [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md#use-an-existing-service-principal). ---<a name='step-4-update-aks-with-the-azure-ad-application-secret'></a> --### Step 4: Update AKS with the Microsoft Entra application secret --Use the multitenant application (client) ID and client secret collected in Step 1 to [update the AKS service principal credential](/azure/aks/update-credentials#update-aks-cluster-with-service-principal-credentials). --Updating the service principal can take several minutes. --## Next steps --* Learn more [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md) -* Learn more about image pull secrets in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) -- Learn about [Application and service principal objects in Microsoft Entra ID](../active-directory/develop/app-objects-and-service-principals.md)-- Learn more about [scenarios to authenticate with Azure Container Registry](authenticate-kubernetes-options.md) from a Kubernetes cluster |
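As a hedged example of Step 4 above, the AKS cluster's service principal credential can be reset with the multitenant application's client ID and secret; the resource group, cluster name, and credential variables below are placeholders.

```azurecli
# Update the AKS cluster in Tenant A with the multitenant app's credentials
# (resource group, cluster name, and credential values are placeholders)
az aks update-credentials \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --reset-service-principal \
  --service-principal $MULTITENANT_APP_ID \
  --client-secret $CLIENT_SECRET
```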
container-registry | Authenticate Kubernetes Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-kubernetes-options.md | - Title: Scenarios to authenticate with Azure Container Registry from Kubernetes -description: Overview of options and scenarios to authenticate to an Azure container registry from a Kubernetes cluster to pull container images ---- Previously updated : 10/31/2023---# Scenarios to authenticate with Azure Container Registry from Kubernetes ---You can use an Azure container registry as a source of container images for Kubernetes, including clusters you manage, managed clusters hosted in [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) or other clouds, and "local" Kubernetes configurations such as [minikube](https://minikube.sigs.k8s.io/) and [kind](https://kind.sigs.k8s.io/). --To pull images to your Kubernetes cluster from an Azure container registry, an authentication and authorization mechanism needs to be established. Depending on your cluster environment, choose one of the following methods: --## Scenarios --| Kubernetes cluster |Authentication method | Description | Example | -||||-| -| AKS cluster |AKS managed identity | Enable the AKS kubelet [managed identity](/azure/aks/use-managed-identity) to pull images from an attached Azure container registry.<br/><br/> Registry and cluster must be in same Active Directory tenant but can be in the same or a different Azure subscription. | [Authenticate with Azure Container Registry from Azure Kubernetes Service](/azure/aks/cluster-container-registry-integration?toc=/azure/container-registry/toc.json&bc=/azure/container-registry/breadcrumb/toc.json)| -| AKS cluster | AKS service principal | Enable the [AKS service principal](/azure/aks/kubernetes-service-principal) with permissions to a target Azure container registry.<br/><br/>Registry and cluster can be in the same or a different Azure subscription or Microsoft Entra tenant. | [Pull images from an Azure container registry to an AKS cluster in a different AD tenant](authenticate-aks-cross-tenant.md) -| Kubernetes cluster other than AKS |Pod [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | Use general Kubernetes mechanism to manage registry credentials for pod deployments.<br/><br/>Configure AD service principal, repository-scoped token, or other supported [registry credentials](container-registry-authentication.md). | [Pull images from an Azure container registry to a Kubernetes cluster using a pull secret](container-registry-auth-kubernetes.md) | ----## Next steps --* Learn more about how to [authenticate with an Azure container registry](container-registry-authentication.md) |
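Whichever scenario you choose, one quick way to validate registry access from an AKS cluster is `az aks check-acr`; the cluster, resource group, and registry names in this sketch are placeholders.

```azurecli
# Validate that the AKS cluster's nodes can authenticate to and pull from the registry
# (cluster, resource group, and registry names are placeholders)
az aks check-acr \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --acr myregistry.azurecr.io
```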
container-registry | Buffer Gate Public Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/buffer-gate-public-content.md | - Title: Manage public content in private container registry -description: Practices and workflows in Azure Container Registry to manage dependencies on public images from Docker Hub and other public content ---- Previously updated : 10/31/2023---# Manage public content with Azure Container Registry --This article is an overview of practices and workflows to use a local registry such as an [Azure container registry](container-registry-intro.md) to maintain copies of public content, such as container images in Docker Hub. ---## Risks with public content --Your environment may have dependencies on public content such as public container images, [Helm charts](https://helm.sh/), [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) policies, or other artifacts. For example, you might run [nginx](https://hub.docker.com/_/nginx) for service routing or `docker build FROM alpine` by pulling images directly from Docker Hub or another public registry. --Without proper controls, having dependencies on public registry content can introduce risks to your image development and deployment workflows. To mitigate the risks, keep local copies of public content when possible. For details, see the [Open Container Initiative blog](https://opencontainers.org/posts/blog/2020-10-30-consuming-public-content/). --## Authenticate with Docker Hub --As a first step, if you currently pull public images from Docker Hub as part of a build or deployment workflow, we recommend that you [authenticate using a Docker Hub account](https://docs.docker.com/docker-hub/download-rate-limit/#how-do-i-authenticate-pull-requests) instead of making an anonymous pull request. --When making frequent anonymous pull requests you might see Docker errors similar to `ERROR: toomanyrequests: Too Many Requests.` or `You have reached your pull rate limit.` Authenticate to Docker Hub to prevent these errors. --> [!NOTE] -> Effective November 2, 2020, [download rate limits](https://docs.docker.com/docker-hub/download-rate-limit) apply to anonymous and authenticated requests to Docker Hub from Docker Free Plan accounts and are enforced by IP address and Docker ID, respectively. -> -> When estimating your number of pull requests, take into account that when using cloud provider services or working behind a corporate NAT, multiple users will be presented to Docker Hub in aggregate as a subset of IP addresses. Adding Docker paid account authentication to requests made to Docker Hub will avoid potential service disruptions due to rate-limit throttling. -> -> For details, see [Docker pricing and subscriptions](https://www.docker.com/pricing) and the [Docker Terms of Service](https://www.docker.com/legal/docker-terms-service). --### Docker Hub access token --Docker Hub supports [personal access tokens](https://docs.docker.com/docker-hub/access-tokens/) as alternatives to a Docker password when authenticating to Docker Hub. Tokens are recommended for automated services that pull images from Docker Hub. You can generate multiple tokens for different users or services, and revoke tokens when no longer needed. --To authenticate with `docker login` using a token, omit the password on the command line. When prompted for a password, enter the token instead. 
If you enabled two-factor authentication for your Docker Hub account, you must use a personal access token when logging in from the Docker CLI. --### Authenticate from Azure services --Several Azure services including App Service and Azure Container Instances support pulling images from public registries such as Docker Hub for container deployments. If you need to deploy an image from Docker Hub, we recommend that you configure settings to authenticate using a Docker Hub account. Examples: --**App Service** --* **Image source**: Docker Hub -* **Repository access**: Private -* **Login**: \<Docker Hub username> -* **Password**: \<Docker Hub token> --For details, see [Docker Hub authenticated pulls on App Service](https://azure.github.io/AppService/2020/10/15/Docker-Hub-authenticated-pulls-on-App-Service.html). --**Azure Container Instances** --* **Image source**: Docker Hub or other registry -* **Image type**: Private -* **Image registry login server**: docker.io -* **Image registry user name**: \<Docker Hub username> -* **Image registry password**: \<Docker Hub token> -* **Image**: docker.io/\<repo name\>:\<tag> ---## Configure Artifact Cache to consume public content --The best practice for consuming public content is to combine registry authentication and the Artifact Cache feature. You can use Artifact Cache to cache your container artifacts into your Azure Container Registry even in private networks. Using Artifact Cache not only protects you from registry rate limits, but dramatically increases pull reliability when combined with Geo-replicated ACR to pull artifacts from whichever region is closest to your Azure resource. In addition, you can also use all the security features ACR has to offer, including private networks, firewall configuration, Service Principals, and more. For complete information on using public content with ACR Artifact Cache, check out the [Artifact Cache](container-registry-artifact-cache.md) tutorial. ---## Import images to an Azure container registry - -To begin managing copies of public images, you can create an Azure container registry if you don't already have one. Create a registry using the [Azure CLI](container-registry-get-started-azure-cli.md), [Azure portal](container-registry-get-started-portal.md), [Azure PowerShell](container-registry-get-started-powershell.md), or other tools. --# [Azure CLI](#tab/azure-cli) --As a recommended one-time step, [import](container-registry-import-images.md) base images and other public content to your Azure container registry. The [az acr import](/cli/azure/acr#az-acr-import) command in the Azure CLI supports image import from public registries such as Docker Hub and Microsoft Container Registry and from other private container registries. --`az acr import` doesn't require a local Docker installation. You can run it with a local installation of the Azure CLI or directly in Azure Cloud Shell. It supports images of any OS type, multi-architecture images, or OCI artifacts such as Helm charts. --Depending on your organization's needs, you can import to a dedicated registry or a repository in a shared registry. --```azurecli-interactive -az acr import \ - --name myregistry \ - --source docker.io/library/hello-world:latest \ - --image hello-world:latest \ - --username <Docker Hub username> \ - --password <Docker Hub token> -``` --# [Azure PowerShell](#tab/azure-powershell) --As a recommended one-time step, [import](container-registry-import-images.md) base images and other public content to your Azure container registry. 
The [Import-AzContainerRegistryImage](/powershell/module/az.containerregistry/import-azcontainerregistryimage) command in the Azure PowerShell supports image import from public registries such as Docker Hub and Microsoft Container Registry and from other private container registries. --`Import-AzContainerRegistryImage` doesn't require a local Docker installation. You can run it with a local installation of the Azure PowerShell or directly in Azure Cloud Shell. It supports images of any OS type, multi-architecture images, or OCI artifacts such as Helm charts. --Depending on your organization's needs, you can import to a dedicated registry or a repository in a shared registry. --```azurepowershell-interactive -$Params = @{ - SourceImage = 'library/busybox:latest' - ResourceGroupName = $resourceGroupName - RegistryName = $RegistryName - SourceRegistryUri = 'docker.io' - TargetTag = 'busybox:latest' -} -Import-AzContainerRegistryImage @Params -``` --Credentials are required if the source registry is not available publicly or the admin user is disabled. ----## Update image references --Developers of application images should ensure that their code references local content under their control. --* Update image references to use the private registry. For example, update a `FROM baseimage:v1` statement in a Dockerfile to `FROM myregistry.azurecr.io/mybaseimage:v1` -* Configure credentials or an authentication mechanism to use the private registry. The exact mechanism depends on the tools you use to access the registry and how you manage user access. - * If you use a Kubernetes cluster or Azure Kubernetes Service to access the registry, see the [authentication scenarios](authenticate-kubernetes-options.md). - * Learn more about [options to authenticate](container-registry-authentication.md) with an Azure container registry. --## Automate application image updates --Expanding on image import, set up an [Azure Container Registry task](container-registry-tasks-overview.md) to automate application image builds when base images are updated. An automated build task can track both [base image updates](container-registry-tasks-base-images.md) and [source code updates](container-registry-tasks-overview.md#trigger-a-task-on-a-source-code-update). --For a detailed example, see [How to consume and maintain public content with Azure Container Registry Tasks](tasks-consume-public-content.md). --> [!NOTE] -> A single preconfigured task can automatically rebuild every application image that references a dependent base image. - -## Next steps -* Learn more about [ACR Tasks](container-registry-tasks-overview.md) to build, run, push, and patch container images in Azure. -* See [How to consume and maintain public content with Azure Container Registry Tasks](tasks-consume-public-content.md) for an automated gating workflow to update base images to your environment. -* See the [ACR Tasks tutorials](container-registry-tutorial-quick-task.md) for more examples to automate image builds and updates. |
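As a minimal sketch of the authenticated Docker Hub pull guidance above, assuming the access token and user name are supplied through placeholder environment variables:

```azurecli
# Authenticate to Docker Hub with a personal access token instead of a password
# ($DOCKER_HUB_USER and $DOCKER_HUB_TOKEN are placeholders)
echo $DOCKER_HUB_TOKEN | docker login --username $DOCKER_HUB_USER --password-stdin
```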
container-registry | Connected Registry Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/connected-registry-glossary.md | - Title: "Glossary for connected registry with Azure Arc" -description: "Learn the terms and definitions for the connected registry extension with Azure Arc for a seamless extension deployment." ---- Previously updated : 06/18/2024-#customer intent: As a customer, I want to understand the terms and definitions for the connected registry extension with Azure Arc for a successful deployment. ----# Glossary for Connected registry with Azure Arc --This glossary provides terms and definitions for the connected registry extension with Azure Arc for a seamless extension deployment. --## Glossary of terms --### Auto-upgrade-version --- **Definition:** Automatically upgrade the version of the extension instance.-- **Accepted Values:** `true`, `false`-- **Default Value:** `false`-- **Note:** [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) manages the upgrade process and automatic rollback.--### Bring Your Own Certificate (BYOC) --- **Definition:** Allows customers to use their own certificate management service.-- **Accepted Values:** Kubernetes Secret or Public Certificate + Private Key pair-- **Note:** Customer must specify.--### Cert-manager.enabled --- **Definition:** Enables cert-manager service for use with the connected registry, handling the TLS certificate management lifecycle.-- **Accepted Values:** `true`, `false`-- **Default Value:** `true`-- **Note:** Customers can either use the provided cert-manager service at deployment or use theirs (must already be installed).--### Cert-manager.install --- **Definition:** Installs the cert-manager tool as part of the extension deployment.-- **Accepted Values:** `true`, `false`-- **Default Value:** `true`-- **Note:** Must be set to `false` if a customer is using their own cert-manager service.--### Child Registry --- **Description:** A registry that synchronizes with its parent (top-level) registry. The modes of the parent and child registries must match to ensure compatibility.--### Client Token --- **Definition:** Manages client access to a connected registry, allowing for actions on one or more repositories.-- **Accepted Values:** Token name-- **Note:** After creating a token, configure the connected registry to accept it using the `az acr connected-registry update` command.--### Cloud Registry --- **Description:** The ACR registry from which the connected registry syncs artifacts.--### Cluster-name --- **Definition:** The name of the Arc cluster for which the extension is deployed.-- **Accepted Values:** Alphanumerical value--### Cluster-type --- **Definition:** Specifies the type of Arc cluster for the extension deployment.-- **Accepted Values:** `connectedCluster`-- **Default Value:** `connectedCluster`--### Single configuration value (--config) --- **Definition:** The configuration parameters and values for deploying the connected registry extension on the Arc Kubernetes cluster.-- **Accepted Values:** Alphanumerical value--### Connection String --- **Value Type:** Alphanumerical-- **Customer Action:** Must generate and specify-- **Description:** The connection string contains authorization details necessary for the connected registry to securely connect and sync data with the cloud registry using Shared Key authorization. 
It includes the connected registry name, sync token name, sync token password, parent gateway endpoint, and parent endpoint protocol.--### Connected Registry --- **Description:** The on-premises or remote registry replica that facilitates local access to containerized workloads synchronized from the ACR registry.--### Data-endpoint-enabled --- **Definition:** Enables a [dedicated data endpoint](/azure/container-registry/container-registry-dedicated-data-endpoints) for client firewall configuration.-- **Accepted Values:** `true`, `false`-- **Default Value:** `false`-- **Note:** Must be enabled for a successful creation of a connected registry.--### Extension-type --- **Definition:** Specifies the extension provider unique name for the extension deployment.-- **Accepted Values:** `Microsoft.ContainerRegistry.ConnectedRegistry`-- **Default Value:** `Microsoft.ContainerRegistry.ConnectedRegistry`--### Kubernetes Secret --- **Definition:** A Kubernetes managed secret for securely accessing data across pods within a cluster.-- **Accepted Values:** Secret name-- **Note:** Customer must specify.--### Message TTL (Time To Live) --- **Value Type:** Numerical-- **Default Value/Behavior:** Every two days-- **Description:** Message TTL defines the duration sync messages are retained in the cloud. This value isn't applicable when the sync schedule is continuous.--### Modes --- **Accepted Values:** `ReadOnly` and `ReadWrite`-- **Default Value/Behavior:** `ReadOnly`-- **Description:** Defines the operational permissions for client access to the connected registry. In `ReadOnly` mode, clients can only pull (read) artifacts, which is also suitable for nested scenarios. In `ReadWrite` mode, clients can pull (read) and push (write) artifacts, which is ideal for local development environments.--### Parent Registry --- **Description:** The primary registry that synchronizes with its child connected registries. A single parent registry can have multiple child registries connected to it. In a nested scenario, there can be multiple layers of registries within the hierarchy.--### Protected Settings File (--config-protected-file) --- **Definition:** The file containing the connection string for deploying the connected registry extension on the Kubernetes cluster. This file would also include the Kubernetes Secret or Public Cert + Private Key values pair for BYOC scenarios.-- **Accepted Values:** Alphanumerical value-- **Note:** Customer must specify.--### Public Certificate + Private Key --- **Value Type:** Alphanumerical base64-encoded-- **Customer Action:** Must specify-- **Description:** The public key certificate comprises a pair of keys: a public key available to anyone for identity verification of the certificate holder, and a private key, a unique secret key.--### Pvc.storageClassName --- **Definition:** Specifies the storage class in use on the cluster.-- **Accepted Values:** `standard`, `azurefile`--### Pvc.storageRequest --- **Definition:** Specifies the storage size that the connected registry claims in the cluster.-- **Accepted Values:** Alphanumerical value (for example, ΓÇ£500GiΓÇ¥)-- **Default Value:** `500Gi`--### Service.ClusterIP --- **Definition:** The IP address within the Kubernetes service cluster IP range.-- **Accepted Values:** IPv4 or IPv6 format-- **Note:** Customer must specify. 
An incorrect IP not within the range will result in a failed extension deployment.--### Sync Token --- **Definition:** A token used by each connected registry to authenticate with its immediate parent for content synchronization and updates.-- **Accepted Values:** Token name-- **Action:** Customer action required.--### Synchronization Schedule --- **Value Type:** Numerical-- **Default Value/Behavior:** Every minute-- **Description:** The synchronization schedule, set using a cron expression, determines the cadence for when the registry syncs with its parent.--### Synchronization Window --- **Value Type:** Alphanumerical-- **Default Value/Behavior:** Hourly-- **Description:** The synchronization window specifies the sync duration. This parameter is disregarded if the sync schedule is continuous.--### TrustDistribution.enabled --- **Definition:** Trust distribution refers to the process of securely distributing trust between the connected registry and all client nodes within a Kubernetes cluster. When enabled, all nodes are configured with trust distribution.-- **Accepted Values:** `true`, `false`-- **Note:** Customer must choose `true` or `false`.--### TrustDistribution.useNodeSelector --- **Definition:** By default, the trust distribution daemonsets, which are responsible for configuring the container runtime environment (containerd), will run on all nodes in the cluster. However, with this setting enabled, trust distribution is limited to only those nodes that have been specifically labeled with `containerd-configured-by: connected-registry`.-- **Accepted Values:** `true`, `false`-- **Label:** `containerd-configured-by=connected-registry`-- **Command to specify nodes for trust distribution:** `kubectl label node/[node name] containerd-configured-by=connected-registry`---### Registry Hierarchy --- **Description:** The structure of connected registries, where each connected registry is linked to a parent registry. The top parent in this hierarchy is the ACR registry. |
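To show how several of these glossary settings fit together, here is a hedged sketch of deploying the connected registry extension with `az k8s-extension create`; the cluster, resource group, and extension names, the cluster IP, and the protected settings file are placeholders, and the exact configuration keys and accepted values should be confirmed against the glossary entries above and the extension's deployment guide.

```azurecli
# Hedged sketch: deploy the connected registry extension on an Arc-enabled cluster,
# passing glossary settings as --config values (all names and values are placeholders);
# protected-settings-extension.json is assumed to hold the connection string
az k8s-extension create \
  --cluster-name myArcCluster \
  --cluster-type connectedClusters \
  --resource-group myResourceGroup \
  --name myconnectedregistry \
  --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
  --config service.clusterIP=192.100.100.1 \
  --config pvc.storageClassName=standard \
  --config pvc.storageRequest=500Gi \
  --config trustDistribution.enabled=false \
  --config cert-manager.enabled=true \
  --config-protected-file protected-settings-extension.json
```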
container-registry | Container Registry Access Selected Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-access-selected-networks.md | - Title: Configure public registry access -description: Configure IP rules to enable access to an Azure container registry from selected public IP addresses or address ranges. ---- Previously updated : 10/31/2023---# Configure public IP network rules --An Azure container registry by default accepts connections over the internet from hosts on any network. This article shows how to configure your container registry to allow access from only specific public IP addresses or address ranges. Equivalent steps using the Azure CLI and Azure portal are provided. --IP network rules are configured on the public registry endpoint. IP network rules do not apply to private endpoints configured with [Private Link](container-registry-private-link.md) --Configuring IP access rules is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md). --Each registry supports a maximum of 100 IP access rules. ---## Access from selected public network - CLI --### Change default network access to registry --To limit access to a selected public network, first change the default action to deny access. Substitute the name of your registry in the following [az acr update][az-acr-update] command: --```azurecli -az acr update --name myContainerRegistry --default-action Deny -``` --### Add network rule to registry --Use the [az acr network-rule add][az-acr-network-rule-add] command to add a network rule to your registry that allows access from a public IP address or range. For example, substitute the container registry's name and the public IP address of a VM in a virtual network. --```azurecli -az acr network-rule add \ - --name mycontainerregistry \ - --ip-address <public-IP-address> -``` --> [!NOTE] -> After adding a rule, it takes a few minutes for the rule to take effect. --## Access from selected public network - portal --1. In the portal, navigate to your container registry. -1. Under **Settings**, select **Networking**. -1. On the **Public access** tab, select to allow public access from **Selected networks**. -1. Under **Firewall**, enter a public IP address, such as the public IP address of a VM in a virtual network. Or, enter an address range in CIDR notation that contains the VM's IP address. -1. Select **Save**. --![Configure firewall rule for container registry][acr-access-selected-networks] --> [!NOTE] -> After adding a rule, it takes a few minutes for the rule to take effect. --> [!TIP] -> Optionally, enable registry access from a local client computer or IP address range. To allow this access, you need the computer's public IPv4 address. You can find this address by searching "what is my IP address" in an internet browser. The current client IPv4 address also appears automatically when you configure firewall settings on the **Networking** page in the portal. --## Disable public network access --Optionally, disable the public endpoint on the registry. Disabling the public endpoint overrides all firewall configurations. For example, you might want to disable public access to a registry secured in a virtual network using [Private Link](container-registry-private-link.md). 
--> [!NOTE] -> If the registry is set up in a virtual network with a [service endpoint](container-registry-vnet.md), disabling access to the registry's public endpoint also disables access to the registry within the virtual network. --### Disable public access - CLI --To disable public access using the Azure CLI, run [az acr update][az-acr-update] and set `--public-network-enabled` to `false`. The `public-network-enabled` argument requires Azure CLI 2.6.0 or later. --```azurecli -az acr update --name myContainerRegistry --public-network-enabled false -``` --### Disable public access - portal --1. In the portal, navigate to your container registry and select **Settings > Networking**. -1. On the **Public access** tab, in **Allow public network access**, select **Disabled**. Then select **Save**. --![Disable public access][acr-access-disabled] ---## Restore public network access --To re-enable the public endpoint, update the networking settings to allow public access. Enabling the public endpoint overrides all firewall configurations. --### Restore public access - CLI --Run [az acr update][az-acr-update] and set `--public-network-enabled` to `true`. --> [!NOTE] -> The `public-network-enabled` argument requires Azure CLI 2.6.0 or later. --```azurecli -az acr update --name myContainerRegistry --public-network-enabled true -``` --### Restore public access - portal --1. In the portal, navigate to your container registry and select **Settings > Networking**. -1. On the **Public access** tab, in **Allow public network access**, select **All networks**. Then select **Save**. --![Public access from all networks][acr-access-all-networks] --## Troubleshoot --### Access behind HTTPS proxy --If a public network rule is set, or public access to the registry is denied, attempts to login to the registry from a disallowed public network will fail. Client access from behind an HTTPS proxy will also fail if an access rule for the proxy is not set. You will see an error message similar to `Error response from daemon: login attempt failed with status: 403 Forbidden` or `Looks like you don't have access to registry`. --These errors can also occur if you use an HTTPS proxy that is allowed by a network access rule, but the proxy isn't properly configured in the client environment. Check that both your Docker client and the Docker daemon are configured for proxy behavior. For details, see [HTTP/HTTPS proxy](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy) in the Docker documentation. --### Access from Azure Pipelines --If you use Azure Pipelines with an Azure container registry that limits access to specific IP addresses, the pipeline may be unable to access the registry, because the outbound IP address from the pipeline is not fixed. By default, the pipeline runs jobs using a Microsoft-hosted [agent](/azure/devops/pipelines/agents/agents) on a virtual machine pool with a changing set of IP addresses. --One workaround is to change the agent used to run the pipeline from Microsoft-hosted to self-hosted. With a self-hosted agent running on a [Windows](/azure/devops/pipelines/agents/v2-windows) or [Linux](/azure/devops/pipelines/agents/v2-linux) machine that you manage, you control the outbound IP address of the pipeline, and you can add this address in a registry IP access rule. --### Access from AKS --If you use Azure Kubernetes Service (AKS) with an Azure container registry that limits access to specific IP addresses, you can't configure a fixed AKS IP address by default. 
The egress IP address from the AKS cluster is randomly assigned. --To allow the AKS cluster to access the registry, you have these options: --* If you use the Azure Basic Load Balancer, set up a [static IP address](/azure/aks/egress) for the AKS cluster. -* If you use the Azure Standard Load Balancer, see guidance to [control egress traffic](/azure/aks/limit-egress-traffic) from the cluster. --## Next steps --* To restrict access to a registry using a private endpoint in a virtual network, see [Configure Azure Private Link for an Azure container registry](container-registry-private-link.md). -* If you need to set up registry access rules from behind a client firewall, see [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md). -* For more troubleshooting guidance, see [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md). --[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-network-rule-add]: /cli/azure/acr/network-rule/#az_acr_network_rule_add -[az-acr-network-rule-remove]: /cli/azure/acr/network-rule/#az_acr_network_rule_remove -[az-acr-network-rule-list]: /cli/azure/acr/network-rule/#az_acr_network_rule_list -[az-acr-run]: /cli/azure/acr#az_acr_run -[az-acr-update]: /cli/azure/acr#az_acr_update -[quickstart-portal]: container-registry-get-started-portal.md -[quickstart-cli]: container-registry-get-started-azure-cli.md --[acr-access-selected-networks]: ./media/container-registry-access-selected-networks/acr-access-selected-networks.png -[acr-access-disabled]: ./media/container-registry-access-selected-networks/acr-access-disabled.png -[acr-access-all-networks]: ./media/container-registry-access-selected-networks/acr-access-all-networks.png |
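For completeness, existing IP rules can be reviewed and removed with the network-rule commands this article references; the registry name and IP address below are placeholders.

```azurecli
# List the IP network rules currently configured on the registry, then remove one
# (registry name and address are placeholders)
az acr network-rule list --name mycontainerregistry --output table
az acr network-rule remove --name mycontainerregistry --ip-address 203.0.113.10
```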
container-registry | Container Registry Api Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-api-deprecation.md | - Title: Removed and deprecated features for Azure Container Registry -description: This article lists the features that are deprecated or removed from support for Azure Container Registry. - Previously updated : 10/31/2023-----# API Deprecations in Azure Container Registry --This article describes how to use the information about APIs that are removed from support for Azure Container Registry (ACR). It provides early notice about future changes that might affect the APIs of ACR available to customers in preview or GA states. --This information helps you identify the deprecated API versions. The information is subject to change with future releases, and might not include each deprecated feature or product. --## How to use this information --When an API is first listed as deprecated, support for using it with ACR is on schedule to be removed in a future update. This information is provided to help you plan for alternatives and a replacement version for using that API. When a version of the API is removed, this article is updated to indicate that specific version. --Unless noted otherwise, a feature, product, SKU, SDK, utility, or tool that supports the deprecated API typically continues to be fully supported, available, and usable. --When support is removed for an API version, you can move to a later API version, as long as that version remains in support. --For CLI users, we recommend using the latest version of the [Azure CLI][Azure Cloud Shell] when invoking the SDK implementation. Run `az --version` to find the version. --To avoid errors due to using a deprecated API, we recommend moving to a newer version of the ACR API. You can find a list of [supported versions here](/azure/templates/microsoft.containerregistry/allversions). --You may be consuming this API via one or more SDKs. Use a newer API version by updating to a newer version of the SDK. You can find a [list of SDKs and their latest versions here](https://azure.github.io/azure-sdk/releases/latest/index.html?search=containerregistry). --## Removed and Deprecated APIs --The following API versions are ready for retirement/deprecation. In some cases, they're no longer in the product. --| API version | Deprecation first announcement | Plan to end support | -| -- | -- | -- | -| 2016-06-27-preview | July 17, 2023 | October 16, 2023 | -| 2017-06-01-preview | July 17, 2023 | October 16, 2023 | -| 2018-02-01-preview | July 17, 2023 | October 16, 2023 | -| 2017-03-01-GA | September 2023 | September 2026 | --## See also --For more information, see the following articles: -->* [Supported API versions](/azure/templates/microsoft.containerregistry/allversions) ->* [SDKs and their latest versions](https://azure.github.io/azure-sdk/releases/latest/index.html?search=containerregistry) --<!-- LINKS - External --> -[Azure Cloud Shell]: /azure/cloud-shell/quickstart |
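One hedged way to check which API versions the resource provider currently exposes is to query it directly; the JMESPath expression below is illustrative and filters for the `registries` resource type.

```azurecli
# List the API versions the Microsoft.ContainerRegistry provider currently supports
# for the 'registries' resource type (query shape is illustrative)
az provider show --namespace Microsoft.ContainerRegistry \
  --query "resourceTypes[?resourceType=='registries'].apiVersions | [0]" \
  --output table
```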
container-registry | Container Registry Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-cache.md | - Title: "Artifact cache in Azure Container Registry" -description: "Artifact cache is a feature that allows you to cache container images in Azure Container Registry, improving performance and efficiency." -----zone_pivot_groups: container-registry-zones Previously updated : 02/26/2024-ai-usage: ai-assisted -#customer intent: As a developer, I want Artifact cache capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time. ---# Artifact cache in Azure Container Registry --Artifact cache feature allows users to cache container images in a private container registry. Artifact cache is available in *Basic*, *Standard*, and *Premium* [service tiers](container-registry-skus.md). --Artifact cache enhances container image management by providing a caching solution for both public and private repositories. --Artifact cache offers faster and more *reliable pull operations* through Azure Container Registry (ACR), utilizing features like Geo-Replication and Availability Zone support for higher availability and speed image pulls. --Artifact cache allows cached registries to be accessible over *private networks* for users to align with firewall configurations and compliance standards seamlessly. --Artifact cache addresses the challenge of pull limits imposed by public registries. We recommend users authenticate their cache rules with their upstream source credentials. Then pull images from the local ACR, to help mitigate rate limits. --## Terminology --- Cache Rule - A Cache Rule is a rule you can create to pull artifacts from a supported repository into your cache.- - A cache rule contains four parts: - - - Rule Name - The name of your cache rule. For example, `Hello-World-Cache`. -- - Source - The name of the Source Registry. -- - Repository Path - The source path of the repository to find and retrieve artifacts you want to cache. For example, `docker.io/library/hello-world`. -- - New ACR Repository Namespace - The name of the new repository path to store artifacts. For example, `hello-world`. The Repository can't already exist inside the ACR instance. --- Credentials- - Credentials are a set of username and password for the source registry. You require Credentials to authenticate with a public or private repository. Credentials contain four parts -- - Credentials - The name of your credentials. -- - Source registry Login Server - The login server of your source registry. -- - Source Authentication - The key vault locations to store credentials. - - - Username and Password secrets- The secrets containing the username and password. --## Limitations --- Cache will only occur after at least one image pull is complete on the available container image. For every new image available, a new image pull must be complete. Artifact cache doesn't automatically pull new tags of images when a new tag is available. It is on the roadmap but not supported in this release. --- Artifact cache only supports 1,000 cache rules.--## Upstream support --Artifact cache currently supports the following upstream registries: -->[!WARNING] -> Customers must generate [credential set](container-registry-artifact-cache.md#create-new-credentials) to source content from Docker hub. --| Upstream Registries | Support | Availability | -|-|-|--| -| Docker Hub | Supports authenticated pulls only. 
| Azure CLI, Azure portal | -| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | -| AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal | -| GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | -| Quay | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | -| registry.k8s.io | Supports both authenticated and unauthenticated pulls. | Azure CLI | -| Google Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI | --## Wildcards --Wildcard use asterisks (*) to match multiple paths within the container image registry. Artifact cache currently supports the following wildcards: --> [!NOTE] -> The cache rules map from Target Repository => Source Repository. --### Registry Level Wildcard --The registry level wildcard allows you to cache all repositories from an upstream registry. ---| Cache Rule | Mapping | Example | -| - | - | -- | -| contoso.azurecr.io/* => mcr.microsoft.com/* | Mapping for all images under ACR to MCR. | contoso.azurecr.io/myapp/image1 => mcr.microsoft.com/myapp/image1<br>contoso.azurecr.io/myapp/image2 => mcr.microsoft.com/myapp/image2 | --### Repository Level Wildcard --The repository level wildcard allows you to cache all repositories from an upstream registry mapping to the repository prefix. --| Cache Rule | Mapping | Example | -| | - | -- | -| contoso.azurecr.io/dotnet/* => mcr.microsoft.com/dotnet/* | Mapping specific repositories under ACR to corresponding repositories in MCR. | contoso.azurecr.io/dotnet/sdk => mcr.microsoft.com/dotnet/sdk<br>contoso.azurecr.io/dotnet/runtime => mcr.microsoft.com/dotnet/runtime | -| contoso.azurecr.io/library/dotnet/* => mcr.microsoft.com/dotnet/* <br>contoso.azurecr.io/library/python/* => docker.io/library/python/* | Mapping specific repositories under ACR to repositories from different upstream registries. | contoso.azurecr.io/library/dotnet/app1 => mcr.microsoft.com/dotnet/app1<br>contoso.azurecr.io/library/python/app3 => docker.io/library/python/app3 | --### Limitations for Wildcard based cache rules --Wildcard cache rules use asterisks (*) to match multiple paths within the container image registry. These rules can't overlap with other wildcard cache rules. In other words, if you have a wildcard cache rule for a certain registry path, you cannot add another wildcard rule that overlaps with it. --Here are some examples of overlapping rules: --**Example 1**: --Existing cache rule: `contoso.azurecr.io/* => mcr.microsoft.com/*`<br> -New cache being added: `contoso.azurecr.io/library/* => docker.io/library/*`<br> --The addition of the new cache rule is blocked because the target repository path `contoso.azurecr.io/library/*` overlaps with the existing wildcard rule `contoso.azurecr.io/*`. --**Example 2:** --Existing cache rule: `contoso.azurecr.io/library/*` => `mcr.microsoft.com/library/*`<br> -New cache being added: `contoso.azurecr.io/library/dotnet/*` => `docker.io/library/dotnet/*`<br> --The addition of the new cache rule is blocked because the target repository path `contoso.azurecr.io/library/dotnet/*` overlaps with the existing wildcard rule `contoso.azurecr.io/library/*`. --### Limitations for Static/fixed cache rules --Static or fixed cache rules are more specific and do not use wildcards. They can overlap with wildcard-based cache rules. 
If a cache rule specifies a fixed repository path, then it allows overlapping with a wildcard-based cache rule. --**Example 1**: --Existing cache rule: `contoso.azurecr.io/*` => `mcr.microsoft.com/*`<br> -New cache being added: `contoso.azurecr.io/library/dotnet` => `docker.io/library/dotnet`<br> --The addition of the new cache rule is allowed because `contoso.azurecr.io/library/dotnet` is a static path and can overlap with the wildcard cache rule `contoso.azurecr.io/*`. --<!-- markdownlint-disable MD044 --> -<!-- markdownlint-enable MD044 --> --## Enable Artifact cache - Azure CLI --You can enable Artifact cache in your Azure Container Registry with or without authentication using Azure CLI by following the steps. --### Prerequisites --* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` for finding the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI]. -* You have an existing Key Vault to store the credentials. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] -* You can set and retrieve secrets from your Key Vault. Learn more about [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret] ---### Configure and create a Cache rule without the Credentials. --1. Run [az acr Cache create][az-acr-cache-create] command to create a Cache rule. -- - For example, to create a Cache rule without the credentials for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr Cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu- - ``` --2. Run [az acr Cache show][az-acr-cache-show] command to show a Cache rule. -- - For example, to show a Cache rule for a given `MyRegistry` Azure Container Registry. - - ```azurecli-interactive - az acr Cache show -r MyRegistry -n MyRule - ``` --### Create the credentials --Before configuring the Credentials, you have to create and store secrets in the Azure KeyVault and retrieve the secrets from the Key Vault. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] And to [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret]. --1. Run [az acr credential set create][az-acr-credential-set-create] command to create the credentials. -- - For example, To create the credentials for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr credential-set create - -r MyRegistry \ - -n MyDockerHubCredSet \ - -l docker.io \ - -u https://MyKeyvault.vault.azure.net/secrets/usernamesecret \ - -p https://MyKeyvault.vault.azure.net/secrets/passwordsecret - ``` --2. Run [az acr credential set update][az-acr-credential-set-update] to update the username or password KV secret ID on a credential set. -- - For example, to update the username or password KV secret ID on the credentials for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr credential-set update -r MyRegistry -n MyDockerHubCredSet -p https://MyKeyvault.vault.azure.net/secrets/newsecretname - ``` --3. Run [az acr credential-set show][az-acr-credential-set-show] to show the credentials. -- - For example, to show a credential set in a given `MyRegistry` Azure Container Registry. 
-- ```azurecli-interactive - az acr credential-set show -r MyRegistry -n MyDockerHubCredSet - ``` --### Configure and create a cache rule with the credentials --1. Run [az acr cache create][az-acr-cache-create] command to create a cache rule. -- - For example, to create a cache rule with the credentials for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu -c MyDockerHubCredSet - ``` --2. Run [az acr cache update][az-acr-cache-update] command to update the credentials on a cache rule. -- - For example, to update the credentials on a cache rule for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr cache update -r MyRegistry -n MyRule -c NewCredSet - ``` -- - For example, to remove the credentials from an existing cache rule for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr cache update -r MyRegistry -n MyRule --remove-cred-set - ``` --3. Run [az acr cache show][az-acr-cache-show] command to show a cache rule. -- - For example, to show a cache rule for a given `MyRegistry` Azure Container Registry. - - ```azurecli-interactive - az acr cache show -r MyRegistry -n MyRule - ``` --#### Assign permissions to Key Vault --1. Get the principal ID of system identity in use to access Key Vault. -- ```azurecli-interactive - PRINCIPAL_ID=$(az acr credential-set show - -n MyDockerHubCredSet \ - -r MyRegistry \ - --query 'identity.principalId' \ - -o tsv) - ``` --2. Run the [az keyvault set-policy][az-keyvault-set-policy] command to assign access to the Key Vault, before pulling the image. -- - For example, to assign permissions for the credentials access the KeyVault secret -- ```azurecli-interactive - az keyvault set-policy --name MyKeyVault \ - --object-id $PRINCIPAL_ID \ - --secret-permissions get - ``` --### Pull your image --1. Pull the image from your cache using the Docker command by the registry login server name, repository name, and its desired tag. -- - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`. -- ```azurecli-interactive - docker pull myregistry.azurecr.io/hello-world:latest - ``` --### Clean up the resources --1. Run [az acr cache list][az-acr-cache-list] command to list the cache rules in the Azure Container Registry. -- - For example, to list the cache rules for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr cache list -r MyRegistry - ``` --2. Run [az acr cache delete][az-acr-cache-delete] command to delete a cache rule. -- - For example, to delete a cache rule for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr cache delete -r MyRegistry -n MyRule - ``` --3. Run[az acr credential set list][az-acr-credential-set-list] to list the credential in an Azure Container Registry. -- - For example, to list the credentials for a given `MyRegistry` Azure Container Registry. -- ```azurecli-interactive - az acr credential-set list -r MyRegistry - ``` --4. Run [az acr credential-set delete][az-acr-credential-set-delete] to delete the credentials. -- - For example, to delete the credentials for a given `MyRegistry` Azure Container Registry. 
-- ```azurecli-interactive - az acr credential-set delete -r MyRegistry -n MyDockerHubCredSet - ``` ---<!-- markdownlint-disable MD044 --> -<!-- markdownlint-enable MD044 --> --## Enable Artifact cache - Azure portal --You can enable Artifact cache in your Azure Container Registry with or without authentication using Azure portal by following the steps. --### Prerequisites --* Sign in to the [Azure portal](https://ms.portal.azure.com/) -* You have an existing Key Vault to store credentials. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] -* You have the existing Key vaults without the Role based access(RBAC) controls. --### Configure Artifact cache without credentials --Follow the steps to create cache rule in the [Azure portal](https://portal.azure.com). --1. Navigate to your Azure Container Registry. --2. In the side **Menu**, under the **Services**, select **Cache**. --- :::image type="content" source="./media/container-registry-artifact-cache/cache-preview-01.png" alt-text="Screenshot for Registry cache in Azure portal."::: ---3. Select **Create Rule**. --- :::image type="content" source="./media/container-registry-artifact-cache/cache-blade-02.png" alt-text="Screenshot for Create Rule in Azure portal."::: ---4. A window for **New cache rule** appears. --- :::image type="content" source="./media/container-registry-artifact-cache/new-cache-rule-03.png" alt-text="Screenshot for new Cache Rule in Azure portal."::: ---5. Enter the **Rule name**. --6. Select **Source** Registry from the dropdown menu. --7. Enter the **Repository Path** to the artifacts you want to cache. --8. You can skip **Authentication**, if you aren't accessing a private repository or performing an authenticated pull. --9. Under the **Destination**, Enter the name of the **New ACR Repository Namespace** to store cached artifacts. --- :::image type="content" source="./media/container-registry-artifact-cache/save-cache-rule-04.png" alt-text="Screenshot to save Cache Rule in Azure portal."::: ---10. Select on **Save**. --11. Pull the image from your cache using the Docker command by the registry login server name, repository name, and its desired tag. -- - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`. -- ```azurecli-interactive - docker pull myregistry.azurecr.io/hello-world:latest - ``` --### Configure Artifact cache with authentication --Follow the steps to create cache rule in the [Azure portal](https://portal.azure.com). --1. Navigate to your Azure Container Registry. --2. In the side **Menu**, under the **Services**, select **Cache**. --- :::image type="content" source="./media/container-registry-artifact-cache/cache-preview-01.png" alt-text="Screenshot for Registry cache in Azure portal."::: ---3. Select **Create Rule**. --- :::image type="content" source="./media/container-registry-artifact-cache/cache-blade-02.png" alt-text="Screenshot for Create Rule in Azure portal."::: ---4. A window for **New cache rule** appears. --- :::image type="content" source="./media/container-registry-artifact-cache/new-cache-rule-auth-03.png" alt-text="Screenshot for new Cache Rule with auth in Azure portal."::: ---5. Enter the **Rule name**. --6. Select **Source** Registry from the dropdown menu. --7. Enter the **Repository Path** to the artifacts you want to cache. --8. For adding authentication to the repository, check the **Authentication** box. --9. 
Choose **Create new credentials** to create a new set of credentials to store the username and password for your source registry. Learn how to [create new credentials](tutorial-enable-artifact-cache-auth.md#create-new-credentials). --10. If you have the credentials ready, **Select credentials** from the drop-down menu. --11. Under the **Destination**, Enter the name of the **New ACR Repository Namespace** to store cached artifacts. --- :::image type="content" source="./media/container-registry-artifact-cache/save-cache-rule-04.png" alt-text="Screenshot to save Cache Rule in Azure portal."::: ---12. Select on **Save**. --13. Pull the image from your cache using the Docker command by the registry login server name, repository name, and its desired tag. -- - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`. -- ```azurecli-interactive - docker pull myregistry.azurecr.io/hello-world:latest - ``` --### Create new credentials --Before configuring the Credentials, you require to create and store secrets in the Azure KeyVault and retrieve the secrets from the Key Vault. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] And to [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret]. --1. Navigate to **Credentials** > **Create credentials**. --- :::image type="content" source="./media/container-registry-artifact-cache/add-credential-set-05.png" alt-text="Screenshot for adding credentials in Azure portal."::: --- :::image type="content" source="./media/container-registry-artifact-cache/create-credential-set-06.png" alt-text="Screenshot for create new credentials in Azure portal."::: ---1. Enter **Name** for the new credentials for your source registry. --1. Select a **Source Authentication**. Artifact cache currently supports **Select from Key Vault** and **Enter secret URI's**. --1. For the **Select from Key Vault** option, Learn more about [creating credentials using key vault][create-and-store-keyvault-credentials]. --1. Select on **Create**. ---## Next steps --* Advance to the [next article](troubleshoot-artifact-cache.md) to walk through the troubleshoot guide for Registry Cache. --<!-- LINKS - External --> -[create-and-store-keyvault-credentials]: /azure/key-vault/secrets/quick-create-cli#add-a-secret-to-key-vault -[set-and-retrieve-a-secret]: /azure/key-vault/secrets/quick-create-cli#retrieve-a-secret-from-key-vault -[az-keyvault-set-policy]: /azure/key-vault/general/assign-access-policy#assign-an-access-policy -[Install Azure CLI]: /cli/azure/install-azure-cli -[Azure Cloud Shell]: /azure/cloud-shell/quickstart -[az-acr-cache-create]:/cli/azure/acr/cache#az-acr-cache-create -[az-acr-cache-show]:/cli/azure/acr/cache#az-acr-cache-show -[az-acr-cache-list]:/cli/azure/acr/cache#az-acr-cache-list -[az-acr-cache-delete]:/cli/azure/acr/cache#az-acr-cache-delete -[az-acr-cache-update]:/cli/azure/acr/cache#az-acr-cache-update -[az-acr-credential-set-create]:/cli/azure/acr/credential-set#az-acr-credential-set-create -[az-acr-credential-set-update]:/cli/azure/acr/credential-set#az-acr-credential-set-update -[az-acr-credential-set-show]: /cli/azure/acr/credential-set#az-acr-credential-set-show -[az-acr-credential-set-list]: /cli/azure/acr/credential-set#az-acr-credential-set-list -[az-acr-credential-set-delete]: /cli/azure/acr/credential-set#az-acr-credential-set-delete |
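Both the CLI and portal flows above assume the upstream registry's username and password already exist as Key Vault secrets that the credential set can reference. As a minimal sketch (the vault and secret names are illustrative and must match the secret URIs you pass to `az acr credential-set create`), you could seed them as follows:

```azurecli-interactive
# Store the upstream registry credentials as Key Vault secrets
az keyvault secret set --vault-name MyKeyvault --name usernamesecret --value <upstream-username>
az keyvault secret set --vault-name MyKeyvault --name passwordsecret --value <upstream-password>
```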
container-registry | Container Registry Artifact Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-streaming.md | - Title: "Artifact streaming in Azure Container Registry (Preview)" -description: "Artifact streaming is a feature in Azure Container Registry to enhance managing, scaling, and deploying artifacts through containerized platforms." ----zone_pivot_groups: container-registry-zones - Previously updated : 02/26/2024-ai-usage: ai-assisted -#customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time. ---# Artifact streaming in Azure Container Registry (Preview) --Artifact streaming is a feature in Azure Container Registry that allows you to store container images within a single registry and manage and stream them to Azure Kubernetes Service (AKS) clusters in multiple regions. This feature is designed to accelerate containerized workloads for Azure customers using AKS. With artifact streaming, you can easily scale workloads without having to wait for slow pull times on your nodes. --## Use cases --Here are a few scenarios for using artifact streaming: --**Deploying containerized applications to multiple regions**: With artifact streaming, you can store container images within a single registry and manage and stream container images to AKS clusters in multiple regions. Artifact streaming lets you deploy container applications to multiple regions without additional time and resource overhead. --**Reducing image pull latency**: Artifact streaming can reduce time to pod readiness by over 15%, depending on the size of the image, and it works best for images < 30 GB. This feature reduces image pull latency and enables fast container startup, which is beneficial for software developers and system architects. --**Effective scaling of containerized applications**: Artifact streaming provides the opportunity to design, build, and deploy containerized applications at a high scale. --## Artifact streaming aspects --Here are some brief aspects of artifact streaming: --* Customers with new and existing registries can start artifact streaming for specific repositories or tags. --* When artifact streaming is started, both the original artifact and the streaming artifact are stored in the ACR. --* Customers retain access to the original and the streaming artifact even after turning off artifact streaming for repositories or artifacts. --* If artifact streaming and Soft Delete are both enabled and a repository or artifact is deleted, both the original and the artifact streaming versions are deleted. However, only the original version is available on the soft delete portal. --## Availability and pricing information --Artifact streaming is only available in the **Premium** [service tier](container-registry-skus.md) (also known as SKU). Artifact streaming can increase the overall registry storage consumption, and customers incur additional storage charges as outlined in our [pricing](https://azure.microsoft.com/pricing/details/container-registry/) if the consumption exceeds the included 500 GiB Premium SKU threshold. --## Preview limitations --Artifact streaming is currently in preview. The following limitations apply: --* Only images with Linux AMD64 architecture are supported in the preview release. -* The preview release doesn't support Windows-based container images and ARM64 images. 
-* The preview release partially supports multi-architecture images; only the AMD64 architecture is supported. -* When creating an Ubuntu-based node pool in AKS, choose Ubuntu version 20.04 or higher. -* For Kubernetes, use Kubernetes version 1.26 or higher. -* Only Premium SKU registries support generating streaming artifacts in the preview release. Non-Premium SKU registries don't offer this functionality during the preview. -* Registries using CMK (customer-managed keys) are NOT supported in the preview release. -* Kubernetes regcred is currently NOT supported. --## Prerequisites --* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.54.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI]. --* Sign in to the [Azure portal](https://ms.portal.azure.com/). ---## Start artifact streaming --Start artifact streaming with a series of Azure CLI commands or through the Azure portal by pushing, importing, and generating streaming artifacts for container images in an Azure Container Registry (ACR). These instructions outline the process for creating a *Premium* [SKU](container-registry-skus.md) ACR, importing an image, generating a streaming artifact, and managing the artifact streaming operation. Make sure to replace the placeholders with your actual values where necessary. --<!-- markdownlint-disable MD044 --> -<!-- markdownlint-enable MD044 --> --### Push/Import the image and generate the streaming artifact - Azure CLI --Artifact streaming is available in the **Premium** container registry service tier. To start artifact streaming, update a registry using the Azure CLI (version 2.54.0 or above). To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --Start artifact streaming by following these general steps: -->[!NOTE] -> If you already have a premium container registry, you can skip this step. If the registry uses the Basic or Standard SKU, the following commands will fail. -> The code is written in Azure CLI and can be executed in an interactive mode. -> Please note that the placeholders should be replaced with actual values before executing the commands. --1. Create a new Azure Container Registry (ACR) using the premium SKU: -- For example, run the [az group create][az-group-create] command to create an Azure Resource Group with name `my-streaming-test` in the West US region and then run the [az acr create][az-acr-create] command to create a premium Azure Container Registry with name `mystreamingtest` in that resource group. -- ```azurecli-interactive - az group create -n my-streaming-test -l westus - az acr create -n mystreamingtest -g my-streaming-test -l westus --sku premium - ``` --2. Push or import an image to the registry: -- For example, run the [az configure] command to configure the default ACR and the [az acr import][az-acr-import] command to import a Jupyter Notebook image from Docker Hub into the `mystreamingtest` ACR. -- ```azurecli-interactive - az configure --defaults acr="mystreamingtest" - az acr import --source docker.io/jupyter/all-spark-notebook:latest -t jupyter/all-spark-notebook:latest - ``` --3. Create a streaming artifact from the image. This step initiates the creation of a streaming artifact from the specified image. 
- - For example, run the [az acr artifact-streaming create][az-acr-artifact-streaming-create] commands to create a streaming artifact from the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR. -- ```azurecli-interactive - az acr artifact-streaming create --image jupyter/all-spark-notebook:latest - ``` -->[!NOTE] -> An operation ID is generated during the process for future reference to verify the status of the operation. --4. Verify the generated artifact streaming in the Azure CLI. -- For example, run the [az acr manifest list-referrers][az-acr-manifest-list-referrers] command to list the streaming artifacts for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR. - - ```azurecli-interactive - az acr manifest list-referrers -n jupyter/all-spark-notebook:latest - ``` --5. Cancel the artifact streaming creation (if needed) -- Cancel the streaming artifact creation if the conversion isn't finished yet. It stops the operation. - - For example, run the [az acr artifact-streaming operation cancel][az-acr-artifact-streaming-operation-cancel] command to cancel the conversion operation for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR. -- ```azurecli-interactive - az acr artifact-streaming operation cancel --repository jupyter/all-spark-notebook --id c015067a-7463-4a5a-9168-3b17dbe42ca3 - ``` --6. Start autoconversion on the repository -- Start autoconversion in the repository for newly pushed or imported images. When started, new images pushed into that repository trigger the generation of streaming artifacts. -- >[!NOTE] - > Auto-conversion does not apply to existing images. Existing images can be manually converted. - - For example, run the [az acr artifact-streaming update][az-acr-artifact-streaming-update] command to start autoconversion for the `jupyter/all-spark-notebook` repository in the `mystreamingtest` ACR. -- ```azurecli-interactive - az acr artifact-streaming update --repository jupyter/all-spark-notebook --enable-streaming true - ``` --7. Verify the streaming conversion progress, after pushing a new image `jupyter/all-spark-notebook:newtag` to the above repository. -- For example, run the [az acr artifact-streaming operation show][az-acr-artifact-streaming-operation-show] command to check the status of the conversion operation for the `jupyter/all-spark-notebook:newtag` image in the `mystreamingtest` ACR. -- ```azurecli-interactive - az acr artifact-streaming operation show --image jupyter/all-spark-notebook:newtag - ``` - -8. Once you have verified conversion status, you can now connect to AKS. Refer to [AKS documentation](https://aka.ms/artifactstreaming). --9. Turn-off the streaming artifact from the repository. -- For example, run the [az acr artifact-streaming update][az-acr-artifact-streaming-update] command to delete the streaming artifact for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR. -- ```azurecli-interactive - az acr artifact-streaming update --repository jupyter/all-spark-notebook --enable-streaming false - ``` ---->[!NOTE] -> Artifact streaming can work across regions, regardless of whether geo-replication is started or not. -> Artifact streaming can work through a private endpoint and attach to it. --<!-- markdownlint-disable MD044 --> -<!-- markdownlint-enable MD044 --> --### Push/Import the image and generate the streaming artifact - Azure portal --Artifact streaming is available in the *premium* [SKU](container-registry-skus.md) Azure Container Registry. 
To start artifact streaming, update a registry using the Azure portal. --Follow the steps to create artifact streaming in the [Azure portal](https://portal.azure.com). --1. Navigate to your Azure Container Registry. --2. In the side **Menu**, under the **Services**, select **Repositories**. --3. Select the latest imported image. --4. Convert the image and create artifact streaming in Azure portal. -- > [!div class="mx-imgBorder"] - > [![A screenshot of Azure portal with the create streaming artifact button highlighted.](./media/container-registry-artifact-streaming/01-create-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/01-create-artifact-streaming-expanded.png#lightbox) ---5. Check the streaming artifact generated from the image in Referrers tab. - - > [!div class="mx-imgBorder"] - > [![A screenshot of Azure portal with the streaming artifact highlighted.](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-inline.png)](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-expanded.png#lightbox) --6. You can also delete the artifact streaming from the repository. -- > [!div class="mx-imgBorder"] - > [![A screenshot of Azure portal with the delete artifact streaming button highlighted.](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-expanded.png#lightbox) --7. You can also enable autoconversion by accessing the repository on portal. Active means autoconversion is enabled on the repository. Inactive means autoconversion is disabled on the repository. - - > [!div class="mx-imgBorder"] - > [![A screenshot of Azure portal with the start artifact streaming button highlighted.](./media/container-registry-artifact-streaming/03-start-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/03-start-artifact-streaming-expanded.png#lightbox) --> [!NOTE] -> The state of artifact streaming in a repository (inactive or active) determines whether newly pushed compatible images will be automatically converted. By default, all repositories are in an inactive state for artifact streaming. This means that when new compatible images are pushed to the repository, artifact streaming will not be triggered, and the images will not be automatically converted. If you want to start automatic conversion of newly pushed images, you need to set the repository's artifact streaming to the active state. Once the repository is in the active state, any new compatible container images that are pushed to the repository will trigger artifact streaming. This will start the automatic conversion of those images. 
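If you prefer to verify a repository's autoconversion state from the command line instead of the portal, recent Azure CLI versions include a `show` subcommand in the `az acr artifact-streaming` group. This is a sketch only and assumes the default registry was configured earlier with `az configure`; check `az acr artifact-streaming show --help` for the exact parameters.

```azurecli-interactive
# With the default registry set earlier (az configure --defaults acr="mystreamingtest"),
# show whether artifact streaming autoconversion is enabled on the repository
az acr artifact-streaming show --repository jupyter/all-spark-notebook
```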
-----## Next steps --> [!div class="nextstepaction"] -> [Troubleshoot Artifact streaming](troubleshoot-artifact-streaming.md) --<!-- LINKS - External --> -[Install Azure CLI]: /cli/azure/install-azure-cli -[Azure Cloud Shell]: /azure/cloud-shell/quickstart -[az-group-create]: /cli/azure/group#az-group-create -[az-acr-import]: /cli/azure/acr#az-acr-import -[az-acr-artifact-streaming-create]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-create -[az-acr-manifest-list-referrers]: /cli/azure/acr/manifest#az-acr-manifest-list-referrers -[az-acr-create]: /cli/azure/acr#az-acr-create -[az-acr-artifact-streaming-operation-cancel]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-cancel -[az-acr-artifact-streaming-operation-show]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-show -[az-acr-artifact-streaming-update]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-update |
container-registry | Container Registry Auth Aci | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-aci.md | - Title: Access from Container Instances -description: Learn how to provide access to images in your private container registry from Azure Container Instances by using a Microsoft Entra service principal. ----- Previously updated : 10/31/2023---# Authenticate with Azure Container Registry from Azure Container Instances --You can use a Microsoft Entra service principal to provide access to your private container registries in Azure Container Registry. --In this article, you learn to create and configure a Microsoft Entra service principal with *pull* permissions to your registry. Then, you start a container in Azure Container Instances (ACI) that pulls its image from your private registry, using the service principal for authentication. --## When to use a service principal --You should use a service principal for authentication from ACI in **headless scenarios**, such as in applications or services that create container instances in an automated or otherwise unattended manner. --For example, if you have an automated script that runs nightly and creates a [task-based container instance](/azure/container-instances/container-instances-restart-policy) to process some data, it can use a service principal with pull-only permissions to authenticate to the registry. You can then rotate the service principal's credentials or revoke its access completely without affecting other services and applications. --Service principals should also be used when the registry [admin user](container-registry-authentication.md#admin-account) is disabled. ---## Authenticate using the service principal --To launch a container in Azure Container Instances using a service principal, specify its ID for `--registry-username`, and its password for `--registry-password`. --```azurecli-interactive -az container create \ - --resource-group myResourceGroup \ - --name mycontainer \ - --image mycontainerregistry.azurecr.io/myimage:v1 \ - --registry-login-server mycontainerregistry.azurecr.io \ - --registry-username <service-principal-ID> \ - --registry-password <service-principal-password> -``` -->[!Note] -> We recommend running the commands in the most recent version of the Azure Cloud Shell. Set `export MSYS_NO_PATHCONV=1` for running on-perm bash environment. --## Sample scripts --You can find the preceding sample scripts for Azure CLI on GitHub, as well versions for Azure PowerShell: --* [Azure CLI][acr-scripts-cli] -* [Azure PowerShell][acr-scripts-psh] --## Next steps --The following articles contain additional details on working with service principals and ACR: --* [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md) -* [Authenticate with Azure Container Registry from Azure Kubernetes Service (AKS)](/azure/aks/cluster-container-registry-integration) --<!-- IMAGES --> --<!-- LINKS - External --> -[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry/create-registry/create-registry-service-principal-assign-role.sh -[acr-scripts-psh]: https://github.com/Azure/azure-docs-powershell-samples/tree/master/container-registry --<!-- LINKS - Internal --> |
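The linked sample scripts create the service principal for you. If you'd rather do it inline, a minimal sketch (the principal name is illustrative) that creates a pull-only service principal scoped to the registry looks like this; pass the returned `appId` as `--registry-username` and the returned `password` as `--registry-password` in the `az container create` command shown earlier.

```azurecli-interactive
# Get the registry resource ID, then create a service principal limited to pulling from it
ACR_REGISTRY_ID=$(az acr show --name mycontainerregistry --query id --output tsv)

az ad sp create-for-rbac \
  --name acr-aci-pull \
  --scopes $ACR_REGISTRY_ID \
  --role acrpull \
  --query "{appId: appId, password: password}"
```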
container-registry | Container Registry Auth Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-kubernetes.md | - Title: Authenticate with an Azure container registry using a Kubernetes pull secret -description: Learn how to provide a Kubernetes cluster with access to images in your Azure container registry by creating a pull secret using a service principal ----- Previously updated : 10/31/2023---# Pull images from an Azure container registry to a Kubernetes cluster using a pull secret --You can use an Azure container registry as a source of container images with any Kubernetes cluster, including "local" Kubernetes clusters such as [minikube](https://minikube.sigs.k8s.io/) and [kind](https://kind.sigs.k8s.io/). This article shows how to create a Kubernetes pull secret using credentials for an Azure container registry. Then, use the secret to pull images from an Azure container registry in a pod deployment. --This example creates a pull secret using Microsoft Entra [service principal credentials](container-registry-auth-service-principal.md). You can also configure a pull secret using other Azure container registry credentials, such as a [repository-scoped access token](container-registry-repository-scoped-permissions.md). --> [!NOTE] -> While pull secrets are commonly used, they bring additional management overhead. If you're using [Azure Kubernetes Service](/azure/aks/intro-kubernetes), we recommend [other options](authenticate-kubernetes-options.md) such as using the cluster's managed identity or service principal to securely pull the image without an additional `imagePullSecrets` setting on each pod. --## Prerequisites --This article assumes you already created a private Azure container registry. You also need to have a Kubernetes cluster running and accessible via the `kubectl` command-line tool. ---If you don't save or remember the service principal password, you can reset it with the [az ad sp credential reset][az-ad-sp-credential-reset] command: --```azurecli -az ad sp credential reset --name http://<service-principal-name> --query password --output tsv -``` --This command returns a new, valid password for your service principal. --## Create an image pull secret --Kubernetes uses an *image pull secret* to store information needed to authenticate to your registry. To create the pull secret for an Azure container registry, you provide the service principal ID, password, and the registry URL. 
--Create an image pull secret with the following `kubectl` command: --```console -kubectl create secret docker-registry <secret-name> \ - --namespace <namespace> \ - --docker-server=<container-registry-name>.azurecr.io \ - --docker-username=<service-principal-ID> \ - --docker-password=<service-principal-password> -``` --where: --| Value | Description | -| : | : | -| `secret-name` | Name of the image pull secret, for example, *acr-secret* | -| `namespace` | Kubernetes namespace to put the secret into <br/> Only needed if you want to place the secret in a namespace other than the default namespace | -| `container-registry-name` | Name of your Azure container registry, for example, *myregistry*<br/><br/>The `--docker-server` is the fully qualified name of the registry login server | -| `service-principal-ID` | ID of the service principal that will be used by Kubernetes to access your registry | -| `service-principal-password` | Service principal password | --## Use the image pull secret --Once you've created the image pull secret, you can use it to create Kubernetes pods and deployments. Provide the name of the secret under `imagePullSecrets` in the deployment file. For example: --```yaml -apiVersion: v1 -kind: Pod -metadata: - name: my-awesome-app-pod - namespace: awesomeapps -spec: - containers: - - name: main-app-container - image: myregistry.azurecr.io/my-awesome-app:v1 - imagePullPolicy: IfNotPresent - imagePullSecrets: - - name: acr-secret -``` --In the preceding example, `my-awesome-app:v1` is the name of the image to pull from the Azure container registry, and `acr-secret` is the name of the pull secret you created to access the registry. When you deploy the pod, Kubernetes automatically pulls the image from your registry, if it is not already present on the cluster. --## Next steps --* For more about working with service principals and Azure Container Registry, see [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md) -* Learn more about image pull secrets in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) --<!-- IMAGES --> --<!-- LINKS - External --> -[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry/create-registry/create-registry-service-principal-assign-role.sh -[acr-scripts-psh]: https://github.com/Azure/azure-docs-powershell-samples/tree/master/container-registry --<!-- LINKS - Internal --> -[az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az_ad_sp_credential_reset |
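The article mentions that a pull secret can also be built from a repository-scoped access token rather than a service principal. The following is a rough sketch under that assumption; the token name is illustrative, and `_repositories_pull` is the built-in pull-only scope map (verify its availability in your registry).

```azurecli-interactive
# Create a repository-scoped token with pull-only permissions and capture its generated password
TOKEN_PWD=$(az acr token create \
  --name k8s-pull-token \
  --registry myregistry \
  --scope-map _repositories_pull \
  --query "credentials.passwords[0].value" --output tsv)

# Use the token name and password in the Kubernetes pull secret
kubectl create secret docker-registry acr-secret \
  --docker-server=myregistry.azurecr.io \
  --docker-username=k8s-pull-token \
  --docker-password=$TOKEN_PWD
```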
container-registry | Container Registry Auth Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md | - Title: Authenticate with service principal -description: Provide access to images in your private container registry by using a Microsoft Entra service principal. ----- Previously updated : 10/31/2023---# Azure Container Registry authentication with service principals --You can use a Microsoft Entra service principal to provide push, pull, or other access to your container registry. By using a service principal, you can provide access to "headless" services and applications. --## What is a service principal? --Microsoft Entra ID [*service principals*](../active-directory/develop/app-objects-and-service-principals.md) provide access to Azure resources within your subscription. You can think of a service principal as a user identity for a service, where "service" is any application, service, or platform that needs to access the resources. You can configure a service principal with access rights scoped only to those resources you specify. Then, configure your application or service to use the service principal's credentials to access those resources. --In the context of Azure Container Registry, you can create a Microsoft Entra service principal with pull, push and pull, or other permissions to your private registry in Azure. For a complete list, see [Azure Container Registry roles and permissions](container-registry-roles.md). --## Why use a service principal? --By using a Microsoft Entra service principal, you can provide scoped access to your private container registry. Create different service principals for each of your applications or services, each with tailored access rights to your registry. And, because you can avoid sharing credentials between services and applications, you can rotate credentials or revoke access for only the service principal (and thus the application) you choose. --For example, configure your web application to use a service principal that provides it with image `pull` access only, while your build system uses a service principal that provides it with both `push` and `pull` access. If development of your application changes hands, you can rotate its service principal credentials without affecting the build system. --## When to use a service principal --You should use a service principal to provide registry access in **headless scenarios**. That is, an application, service, or script that must push or pull container images in an automated or otherwise unattended manner. For example: --* *Pull*: Deploy containers from a registry to orchestration systems including Kubernetes, DC/OS, and Docker Swarm. You can also pull from container registries to related Azure services such as [App Service](../app-service/index.yml), [Batch](../batch/index.yml), [Service Fabric](/azure/service-fabric/), and others. -- > [!TIP] - > A service principal is recommended in several [Kubernetes scenarios](authenticate-kubernetes-options.md) to pull images from an Azure container registry. With Azure Kubernetes Service (AKS), you can also use an automated mechanism to authenticate with a target registry by enabling the cluster's [managed identity](/azure/aks/cluster-container-registry-integration). - * *Push*: Build container images and push them to a registry using continuous integration and deployment solutions like Azure Pipelines or Jenkins. 
--For individual access to a registry, such as when you manually pull a container image to your development workstation, we recommend using your own [Microsoft Entra identity](container-registry-authentication.md#individual-login-with-azure-ad) instead for registry access (for example, with [az acr login][az-acr-login]). ---### Sample scripts --You can find the preceding sample scripts for Azure CLI on GitHub, as well as versions for Azure PowerShell: --* [Azure CLI][acr-scripts-cli] -* [Azure PowerShell][acr-scripts-psh] --## Authenticate with the service principal --Once you have a service principal that you've granted access to your container registry, you can configure its credentials for access to "headless" services and applications, or enter them using the `docker login` command. Use the following values: --* **Username** - service principal's **application (client) ID** -* **Password** - service principal's **password (client secret)** --The **Username** value has the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. --> [!TIP] -> You can regenerate the password (client secret) of a service principal by running the [az ad sp credential reset](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command. -> --### Use credentials with Azure services --You can use service principal credentials from any Azure service that authenticates with an Azure container registry. Use service principal credentials in place of the registry's admin credentials for a variety of scenarios. --### Use with docker login --You can run `docker login` using a service principal. In the following example, the service principal application ID is passed in the environment variable `$SP_APP_ID`, and the password in the variable `$SP_PASSWD`. For recommended practices to manage Docker credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference. --```bash -# Log in to Docker with service principal credentials -docker login myregistry.azurecr.io --username $SP_APP_ID --password $SP_PASSWD -``` --Once logged in, Docker caches the credentials. --### Use with certificate --If you've added a certificate to your service principal, you can sign in to the Azure CLI with certificate-based authentication, and then use the [az acr login][az-acr-login] command to access a registry. Using a certificate as a secret instead of a password provides additional security when you use the CLI. --A self-signed certificate can be created when you [create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli). Or, add one or more certificates to an existing service principal. For example, if you use one of the scripts in this article to create or update a service principal with rights to pull or push images from a registry, add a certificate using the [az ad sp credential reset][az-ad-sp-credential-reset] command. --To use the service principal with certificate to [sign in to the Azure CLI](/cli/azure/authenticate-azure-cli#sign-in-with-a-service-principal), the certificate must be in PEM format and include the private key. If your certificate isn't in the required format, use a tool such as `openssl` to convert it. When you run [az login][az-login] to sign into the CLI using the service principal, also provide the service principal's application ID and the Active Directory tenant ID. 
The following example shows these values as environment variables: --```azurecli -az login --service-principal --username $SP_APP_ID --tenant $SP_TENANT_ID --password /path/to/cert/pem/file -``` --Then, run [az acr login][az-acr-login] to authenticate with the registry: --```azurecli -az acr login --name myregistry -``` --The CLI uses the token created when you ran `az login` to authenticate your session with the registry. --## Create service principal for cross-tenant scenarios --A service principal can also be used in Azure scenarios that require pulling images from a container registry in one Microsoft Entra ID (tenant) to a service or app in another. For example, an organization might run an app in Tenant A that needs to pull an image from a shared container registry in Tenant B. --To create a service principal that can authenticate with a container registry in a cross-tenant scenario: --* Create a [multitenant app](../active-directory/develop/single-and-multi-tenant-apps.md) (service principal) in Tenant A -* Provision the app in Tenant B -* Grant the service principal permissions to pull from the registry in Tenant B -* Update the service or app in Tenant A to authenticate using the new service principal --For example steps, see [Pull images from a container registry to an AKS cluster in a different AD tenant](authenticate-aks-cross-tenant.md). --## Service principal renewal --The service principal is created with one-year validity. You have options to extend the validity further than one year, or can provide expiry date of your choice using the [`az ad sp credential reset`](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command. --## Next steps --* See the [authentication overview](container-registry-authentication.md) for other scenarios to authenticate with an Azure container registry. --* For an example of using an Azure key vault to store and retrieve service principal credentials for a container registry, see the tutorial to [build and deploy a container image using ACR Tasks](container-registry-tutorial-quick-task.md). --<!-- LINKS - External --> -[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry/create-registry/create-registry-service-principal-assign-role.sh -[acr-scripts-psh]: https://github.com/Azure/azure-docs-powershell-samples/tree/master/container-registry --<!-- LINKS - Internal --> -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-login]: /cli/azure/reference-index#az_login -[az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az_ad_sp_credential_reset |
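For the renewal scenario described in the article, a minimal sketch of rotating or extending the service principal's client secret with the Azure CLI (the two-year lifetime is just an example) is:

```azurecli-interactive
# Issue a new client secret for the service principal, valid for two years
az ad sp credential reset --id $SP_APP_ID --years 2 --query password --output tsv
```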
container-registry | Container Registry Authentication Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md | - Title: Authenticate with managed identity -description: Provide access to images in your private container registry by using a user-assigned or system-assigned managed Azure identity. ----- Previously updated : 10/31/2023---# Use an Azure managed identity to authenticate to an Azure container registry --Use a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) to authenticate to an Azure container registry from another Azure resource, without needing to provide or manage registry credentials. For example, set up a user-assigned or system-assigned managed identity on a Linux VM to access container images from your container registry, as easily as you use a public registry. Or, set up an Azure Kubernetes Service cluster to use its [managed identity](/azure/aks/cluster-container-registry-integration) to pull container images from Azure Container Registry for pod deployments. --For this article, you learn more about managed identities and how to: --> [!div class="checklist"] -> * Enable a user-assigned or system-assigned identity on an Azure VM -> * Grant the identity access to an Azure container registry -> * Use the managed identity to access the registry and pull a container image --### [Azure CLI](#tab/azure-cli) --To create the Azure resources, this article requires that you run the Azure CLI version 2.0.55 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. --### [Azure PowerShell](#tab/azure-powershell) --To create the Azure resources, this article requires that you run the Azure PowerShell module version 7.5.0 or later. Run `Get-Module Az -ListAvailable` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module][azure-powershell-install]. ----To set up a container registry and push a container image to it, you must also have Docker installed locally. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system. --## Why use a managed identity? --If you're not familiar with the managed identities for Azure resources feature, see this [overview](../active-directory/managed-identities-azure-resources/overview.md). --After you set up selected Azure resources with a managed identity, give the identity the access you want to another resource, just like any security principal. For example, assign a managed identity a role with pull, push and pull, or other permissions to a private registry in Azure. (For a complete list of registry roles, see [Azure Container Registry roles and permissions](container-registry-roles.md).) You can give an identity access to one or more resources. --Then, use the identity to authenticate to any [service that supports Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), without any credentials in your code. Choose how to authenticate using the managed identity, depending on your scenario. To use the identity to access an Azure container registry from a virtual machine, you authenticate with Azure Resource Manager. 
--## Create a container registry --### [Azure CLI](#tab/azure-cli) --If you don't already have an Azure container registry, create a registry and push a sample container image to it. For steps, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md). --This article assumes you have the `aci-helloworld:v1` container image stored in your registry. The examples use a registry name of *myContainerRegistry*. Replace with your own registry and image names in later steps. --### [Azure PowerShell](#tab/azure-powershell) --If you don't already have an Azure container registry, create a registry and push a sample container image to it. For steps, see [Quickstart: Create a private container registry using Azure PowerShell](container-registry-get-started-powershell.md). --This article assumes you have the `aci-helloworld:v1` container image stored in your registry. The examples use a registry name of *myContainerRegistry*. Replace with your own registry and image names in later steps. ----## Create a Docker-enabled VM --### [Azure CLI](#tab/azure-cli) --Create a Docker-enabled Ubuntu virtual machine. You also need to install the [Azure CLI][azure-cli-install] on the virtual machine. If you already have an Azure virtual machine, skip this step to create the virtual machine. --Deploy a default Ubuntu Azure virtual machine with [az vm create][az-vm-create]. The following example creates a VM named *myDockerVM* in an existing resource group named *myResourceGroup*: --```azurecli-interactive -az vm create \ - --resource-group myResourceGroup \ - --name myDockerVM \ - --image Ubuntu2204 \ - --admin-username azureuser \ - --generate-ssh-keys -``` --It takes a few minutes for the VM to be created. When the command completes, take note of the `publicIpAddress` displayed by the Azure CLI. Use this address to make SSH connections to the VM. --### [Azure PowerShell](#tab/azure-powershell) --Create a Docker-enabled Ubuntu virtual machine. You also need to install the [Azure PowerShell][azure-powershell-install] on the virtual machine. If you already have an Azure virtual machine, skip this step to create the virtual machine. --Deploy a default Ubuntu Azure virtual machine with [New-AzVM][new-azvm]. The following example creates a VM named *myDockerVM* in an existing resource group named *myResourceGroup*. You will be prompted for a user name that will be used when you connect to the VM. Specify *azureuser* as the user name. You will also be asked for a password, which you can leave blank. Password login for the VM is disabled when using an SSH key. --```azurepowershell-interactive -$vmParams = @{ - ResourceGroupName = 'MyResourceGroup' - Name = 'myDockerVM' - Image = 'UbuntuLTS' - PublicIpAddressName = 'myPublicIP' - GenerateSshKey = $true - SshKeyName = 'mySSHKey' -} -New-AzVM @vmParams -``` --It takes a few minutes for the VM to be created. When the command completes, run the following command to get the public IP address. Use this address to make SSH connections to the VM. --```azurepowershell-interactive -Get-AzPublicIpAddress -Name myPublicIP -ResourceGroupName myResourceGroup | Select-Object -ExpandProperty IpAddress -``` ----### Install Docker on the VM --After the VM is running, make an SSH connection to the VM. Replace *publicIpAddress* with the public IP address of your VM. 
--```bash -ssh azureuser@publicIpAddress -``` --Run the following command to install Docker on the VM: --```bash -sudo apt update -sudo apt install docker.io -y -``` --After installation, run the following command to verify that Docker is running properly on the VM: --```bash -sudo docker run -it mcr.microsoft.com/hello-world -``` --```output -Hello from Docker! -This message shows that your installation appears to be working correctly. -[...] -``` -### [Azure CLI](#tab/azure-cli) --### Install the Azure CLI --Follow the steps in [Install Azure CLI with apt](/cli/azure/install-azure-cli-apt) to install the Azure CLI on your Ubuntu virtual machine. For this article, ensure that you install version 2.0.55 or later. --### [Azure PowerShell](#tab/azure-powershell) --### Install the Azure PowerShell --Follow the steps in [Installing PowerShell on Ubuntu][powershell-install] and [Install the Azure Az PowerShell module][azure-powershell-install] to install PowerShell and Azure PowerShell on your Ubuntu virtual machine. For this article, ensure that you install Azure PowerShell version 7.5.0 or later. ----Exit the SSH session. --## Example 1: Access with a user-assigned identity --### Create an identity --### [Azure CLI](#tab/azure-cli) --Create an identity in your subscription using the [az identity create][az-identity-create] command. You can use the same resource group you used previously to create the container registry or virtual machine, or a different one. --```azurecli-interactive -az identity create --resource-group myResourceGroup --name myACRId -``` --To configure the identity in the following steps, use the [az identity show][az-identity-show] command to store the identity's resource ID and service principal ID in variables. --```azurecli-interactive -# Get resource ID of the user-assigned identity -userID=$(az identity show --resource-group myResourceGroup --name myACRId --query id --output tsv) --# Get service principal ID of the user-assigned identity -spID=$(az identity show --resource-group myResourceGroup --name myACRId --query principalId --output tsv) -``` --Because you need the identity's ID in a later step when you sign in to the CLI from your virtual machine, show the value: --```bash -echo $userID -``` --The ID is of the form: --```output -/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId -``` --### [Azure PowerShell](#tab/azure-powershell) --Create an identity in your subscription using the [New-AzUserAssignedIdentity][new-azuserassignedidentity] cmdlet. You can use the same resource group you used previously to create the container registry or virtual machine, or a different one. --```azurepowershell-interactive -New-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Location eastus -Name myACRId -``` --To configure the identity in the following steps, use the [Get-AzUserAssignedIdentity][get-azuserassignedidentity] cmdlet to store the identity's resource ID and service principal ID in variables. 
--```azurepowershell-interactive -# Get resource ID of the user-assigned identity -$userID = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).Id --# Get service principal ID of the user-assigned identity -$spID = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).PrincipalId -``` --Because you need the identity's ID in a later step when you sign in to the Azure PowerShell from your virtual machine, show the value: --```azurepowershell-interactive -$userID -``` --The ID is of the form: --```output -/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId -``` ----### Configure the VM with the identity --### [Azure CLI](#tab/azure-cli) --The following [az vm identity assign][az-vm-identity-assign] command configures your Docker VM with the user-assigned identity: --```azurecli-interactive -az vm identity assign --resource-group myResourceGroup --name myDockerVM --identities $userID -``` --### [Azure PowerShell](#tab/azure-powershell) --The following [Update-AzVM][update-azvm] command configures your Docker VM with the user-assigned identity: --```azurepowershell-interactive -$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM -Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType UserAssigned -IdentityID $userID -``` ----### Grant identity access to the container registry --### [Azure CLI](#tab/azure-cli) --Now configure the identity to access your container registry. First use the [az acr show][az-acr-show] command to get the resource ID of the registry: --```azurecli-interactive -resourceID=$(az acr show --resource-group myResourceGroup --name myContainerRegistry --query id --output tsv) -``` --Use the [az role assignment create][az-role-assignment-create] command to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role. --```azurecli-interactive -az role assignment create --assignee $spID --scope $resourceID --role acrpull -``` --### [Azure PowerShell](#tab/azure-powershell) --Now configure the identity to access your container registry. First use the [Get-AzContainerRegistry][get-azcontainerregistry] command to get the resource ID of the registry: --```azurepowershell-interactive -$resourceID = (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myContainerRegistry).Id -``` --Use the [New-AzRoleAssignment][new-azroleassignment] cmdlet to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role. --```azurepowershell-interactive -New-AzRoleAssignment -ObjectId $spID -Scope $resourceID -RoleDefinitionName AcrPull -``` ----### Use the identity to access the registry --### [Azure CLI](#tab/azure-cli) --SSH into the Docker virtual machine that's configured with the identity. Run the following Azure CLI commands, using the Azure CLI installed on the VM. --First, authenticate to the Azure CLI with [az login][az-login], using the identity you configured on the VM. For `<userID>`, substitute the ID of the identity you retrieved in a previous step. --```azurecli-interactive -az login --identity --username <userID> -``` --Then, authenticate to the registry with [az acr login][az-acr-login]. 
When you use this command, the CLI uses the Active Directory token created when you ran `az login` to seamlessly authenticate your session with the container registry. (Depending on your VM's setup, you might need to run this command and docker commands with `sudo`.) --```azurecli-interactive -az acr login --name myContainerRegistry -``` --You should see a `Login succeeded` message. You can then run `docker` commands without providing credentials. For example, run [docker pull][docker-pull] to pull the `aci-helloworld:v1` image, specifying the login server name of your registry. The login server name consists of your container registry name (all lowercase) followed by `.azurecr.io` - for example, `mycontainerregistry.azurecr.io`. --``` -docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1 -``` --### [Azure PowerShell](#tab/azure-powershell) --SSH into the Docker virtual machine that's configured with the identity. Run the following Azure PowerShell commands, using the Azure PowerShell installed on the VM. --First, authenticate to the Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the identity you configured on the VM. For `-AccountId` specify a client ID of the identity. --```azurepowershell-interactive -$clientId = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).ClientId -Connect-AzAccount -Identity -AccountId $clientId -``` --Then, authenticate to the registry with [Connect-AzContainerRegistry][connect-azcontainerregistry]. When you use this command, the Azure PowerShell uses the Active Directory token created when you ran `Connect-AzAccount` to seamlessly authenticate your session with the container registry. (Depending on your VM's setup, you might need to run this command and docker commands with `sudo`.) --```azurepowershell-interactive -sudo pwsh -command Connect-AzContainerRegistry -Name myContainerRegistry -``` --You should see a `Login succeeded` message. You can then run `docker` commands without providing credentials. For example, run [docker pull][docker-pull] to pull the `aci-helloworld:v1` image, specifying the login server name of your registry. The login server name consists of your container registry name (all lowercase) followed by `.azurecr.io` - for example, `mycontainerregistry.azurecr.io`. --```bash -docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1 -``` ----## Example 2: Access with a system-assigned identity --### Configure the VM with a system-managed identity --### [Azure CLI](#tab/azure-cli) --The following [az vm identity assign][az-vm-identity-assign] command configures your Docker VM with a system-assigned identity: --```azurecli-interactive -az vm identity assign --resource-group myResourceGroup --name myDockerVM -``` --Use the [az vm show][az-vm-show] command to set a variable to the value of `principalId` (the service principal ID) of the VM's identity, to use in later steps. 
--```azurecli-interactive -spID=$(az vm show --resource-group myResourceGroup --name myDockerVM --query identity.principalId --out tsv) -``` --### [Azure PowerShell](#tab/azure-powershell) --The following [Update-AzVM][update-azvm] command configures your Docker VM with a system-assigned identity: --```azurepowershell-interactive -$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM -Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType SystemAssigned -``` --Use the [Get-AzVM][get-azvm] command to set a variable to the value of `principalId` (the service principal ID) of the VM's identity, to use in later steps. --```azurepowershell-interactive -$spID = (Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM).Identity.PrincipalId -``` ----### Grant identity access to the container registry --### [Azure CLI](#tab/azure-cli) --Now configure the identity to access your container registry. First use the [az acr show][az-acr-show] command to get the resource ID of the registry: --```azurecli-interactive -resourceID=$(az acr show --resource-group myResourceGroup --name myContainerRegistry --query id --output tsv) -``` --Use the [az role assignment create][az-role-assignment-create] command to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role. --```azurecli-interactive -az role assignment create --assignee $spID --scope $resourceID --role acrpull -``` --### [Azure PowerShell](#tab/azure-powershell) --Now configure the identity to access your container registry. First use the [Get-AzContainerRegistry][get-azcontainerregistry] command to get the resource ID of the registry: --```azurepowershell-interactive -$resourceID = (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myContainerRegistry).Id -``` --Use the [New-AzRoleAssignment][new-azroleassignment] cmdlet to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role. --```azurepowershell-interactive -New-AzRoleAssignment -ObjectId $spID -Scope $resourceID -RoleDefinitionName AcrPull -``` ----### Use the identity to access the registry --### [Azure CLI](#tab/azure-cli) --SSH into the Docker virtual machine that's configured with the identity. Run the following Azure CLI commands, using the Azure CLI installed on the VM. --First, authenticate the Azure CLI with [az login][az-login], using the system-assigned identity on the VM. --```azurecli-interactive -az login --identity -``` --Then, authenticate to the registry with [az acr login][az-acr-login]. When you use this command, the CLI uses the Active Directory token created when you ran `az login` to seamlessly authenticate your session with the container registry. (Depending on your VM's setup, you might need to run this command and docker commands with `sudo`.) --```azurecli-interactive -az acr login --name myContainerRegistry -``` --You should see a `Login succeeded` message. You can then run `docker` commands without providing credentials. For example, run [docker pull][docker-pull] to pull the `aci-helloworld:v1` image, specifying the login server name of your registry. The login server name consists of your container registry name (all lowercase) followed by `.azurecr.io` - for example, `mycontainerregistry.azurecr.io`.
--```bash -docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1 -``` -### [Azure PowerShell](#tab/azure-powershell) --SSH into the Docker virtual machine that's configured with the identity. Run the following Azure PowerShell commands, using the Azure PowerShell installed on the VM. --First, authenticate the Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the system-assigned identity on the VM. --```azurepowershell-interactive -Connect-AzAccount -Identity -``` --Then, authenticate to the registry with [Connect-AzContainerRegistry][connect-azcontainerregistry]. When you use this command, the PowerShell uses the Active Directory token created when you ran `Connect-AzAccount` to seamlessly authenticate your session with the container registry. (Depending on your VM's setup, you might need to run this command and docker commands with `sudo`.) --```azurepowershell-interactive -sudo pwsh -command Connect-AzContainerRegistry -Name myContainerRegistry -``` --You should see a `Login succeeded` message. You can then run `docker` commands without providing credentials. For example, run [docker pull][docker-pull] to pull the `aci-helloworld:v1` image, specifying the login server name of your registry. The login server name consists of your container registry name (all lowercase) followed by `.azurecr.io` - for example, `mycontainerregistry.azurecr.io`. --```bash -docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1 -``` ----## Next steps --In this article, you learned about using managed identities with Azure Container Registry and how to: --> [!div class="checklist"] -> * Enable a user-assigned or system-assigned identity in an Azure VM -> * Grant the identity access to an Azure container registry -> * Use the managed identity to access the registry and pull a container image --* Learn more about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/index.yml). -* Learn how to use a [system-assigned](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/use_system-assigned_managed_identities.md) or [user-assigned](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/use_user-assigned_managed_identities.md) managed identity with App Service and Azure Container Registry. -* Learn how to [deploy a container image from Azure Container Registry using a managed identity](/azure/container-instances/using-azure-container-registry-mi). 
--<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-login]: https://docs.docker.com/engine/reference/commandline/login/ -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-pull]: https://docs.docker.com/engine/reference/commandline/pull/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ --<!-- LINKS - Internal --> -[az-login]: /cli/azure/reference-index#az_login -[connect-azaccount]: /powershell/module/az.accounts/connect-azaccount -[az-acr-login]: /cli/azure/acr#az_acr_login -[connect-azcontainerregistry]: /powershell/module/az.containerregistry/connect-azcontainerregistry -[az-acr-show]: /cli/azure/acr#az_acr_show -[get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry -[az-vm-create]: /cli/azure/vm#az_vm_create -[new-azvm]: /powershell/module/az.compute/new-azvm -[az-vm-show]: /cli/azure/vm#az_vm_show -[get-azvm]: /powershell/module/az.compute/get-azvm -[az-identity-create]: /cli/azure/identity#az_identity_create -[new-azuserassignedidentity]: /powershell/module/az.managedserviceidentity/new-azuserassignedidentity -[az-vm-identity-assign]: /cli/azure/vm/identity#az_vm_identity_assign -[update-azvm]: /powershell/module/az.compute/update-azvm -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create -[new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment -[az-identity-show]: /cli/azure/identity#az_identity_show -[get-azuserassignedidentity]: /powershell/module/az.managedserviceidentity/get-azuserassignedidentity -[azure-cli-install]: /cli/azure/install-azure-cli -[azure-powershell-install]: /powershell/azure/install-az-ps -[powershell-install]: /powershell/scripting/install/install-ubuntu |
container-registry | Container Registry Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md | - Title: Registry authentication options -description: Authentication options for a private Azure container registry, including signing in with a Microsoft Entra identity, using service principals, and using optional admin credentials. ---- Previously updated : 10/31/2023---# Authenticate with an Azure container registry --There are several ways to authenticate with an Azure container registry, each of which is applicable to one or more registry usage scenarios. --Recommended ways include: --* Authenticate to a registry directly via [individual login](#individual-login-with-azure-ad) -* Applications and container orchestrators can perform unattended, or "headless," authentication by using a Microsoft Entra [service principal](#service-principal) --If you use a container registry with Azure Kubernetes Service (AKS) or another Kubernetes cluster, see [Scenarios to authenticate with Azure Container Registry from Kubernetes](authenticate-kubernetes-options.md). --## Authentication options --The following table lists available authentication methods and typical scenarios. See linked content for details. --| Method | How to authenticate | Scenarios  | Azure role-based access control (Azure RBAC)  | Limitations  | -||-||-|--| -| [Individual AD identity](#individual-login-with-azure-ad)  | `az acr login` in Azure CLI<br/><br/> `Connect-AzContainerRegistry` in Azure PowerShell  | Interactive push/pull by developers, testers  | Yes  | AD token must be renewed every 3 hours  | -| [AD service principal](#service-principal)  | `docker login`<br/><br/>`az acr login` in Azure CLI<br/><br/> `Connect-AzContainerRegistry` in Azure PowerShell<br/><br/> Registry login settings in APIs or tooling<br/><br/> [Kubernetes pull secret](container-registry-auth-kubernetes.md)    | Unattended push from CI/CD pipeline<br/><br/> Unattended pull to Azure or external services  | Yes  | SP password default expiry is 1 year  | -| [Managed identity for Azure resources](container-registry-authentication-managed-identity.md)  | `docker login`<br/><br/> `az acr login` in Azure CLI<br/><br/> `Connect-AzContainerRegistry` in Azure PowerShell | Unattended push from Azure CI/CD pipeline<br/><br/> Unattended pull to Azure services<br/><br/> | Yes  | Use only from select Azure services that [support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources) | -| [AKS cluster managed identity](/azure/aks/cluster-container-registry-integration?toc=/azure/container-registry/toc.json&bc=/azure/container-registry/breadcrumb/toc.json)  | Attach registry when AKS cluster created or updated  | Unattended pull to AKS cluster in the same or a different subscription | No, pull access only  | Only available with AKS cluster<br/><br/>Can't be used for cross-tenant authentication  | -| [AKS cluster service principal](authenticate-aks-cross-tenant.md)  | Enable when AKS cluster created or updated  | Unattended pull to AKS cluster from registry in another AD tenant  | No, pull access only  | Only available with AKS cluster  | -| [Admin user](#admin-account)  | `docker login`  | Interactive push/pull by individual developer or tester<br/><br/>Portal deployment of image from registry to Azure App Service or Azure Container Instances 
| No, always pull and push access  | Single account per registry, not recommended for multiple users  | -| [Repository-scoped access token](container-registry-repository-scoped-permissions.md)  | `docker login`<br/><br/>`az acr login` in Azure CLI<br/><br/> `Connect-AzContainerRegistry` in Azure PowerShell<br/><br/> [Kubernetes pull secret](container-registry-auth-kubernetes.md)  | Interactive push/pull to repository by individual developer or tester<br/><br/> Unattended pull from repository by individual system or external device  | Yes  | Not currently integrated with AD identity  | --<a name='individual-login-with-azure-ad'></a> --## Individual login with Microsoft Entra ID --### [Azure CLI](#tab/azure-cli) --When working with your registry directly, such as pulling images to and pushing images from a development workstation to a registry you created, authenticate by using your individual Azure identity. Sign in to the [Azure CLI](/cli/azure/install-azure-cli) with [az login](/cli/azure/reference-index#az-login), and then run the [az acr login](/cli/azure/acr#az-acr-login) command: --```azurecli -az login -az acr login --name <acrName> -``` --When you log in with `az acr login`, the CLI uses the token created when you executed `az login` to seamlessly authenticate your session with your registry. To complete the authentication flow, the Docker CLI and Docker daemon must be installed and running in your environment. `az acr login` uses the Docker client to set a Microsoft Entra token in the `docker.config` file. Once you've logged in this way, your credentials are cached, and subsequent `docker` commands in your session do not require a username or password. --> [!TIP] -> Also use `az acr login` to authenticate an individual identity when you want to push or pull artifacts other than Docker images to your registry, such as [OCI artifacts](container-registry-manage-artifact.md). --For registry access, the token used by `az acr login` is valid for **3 hours**, so we recommend that you always log in to the registry before running a `docker` command. If your token expires, you can refresh it by using the `az acr login` command again to reauthenticate. --Using `az acr login` with Azure identities provides [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.yml). For some scenarios, you may want to log in to a registry with your own individual identity in Microsoft Entra ID, or configure other Azure users with specific [Azure roles and permissions](container-registry-roles.md). For cross-service scenarios or to handle the needs of a workgroup or a development workflow where you don't want to manage individual access, you can also log in with a [managed identity for Azure resources](container-registry-authentication-managed-identity.md). --### az acr login with --expose-token --In some cases, you need to authenticate with `az acr login` when the Docker daemon isn't running in your environment. For example, you might need to run `az acr login` in a script in Azure Cloud Shell, which provides the Docker CLI but doesn't run the Docker daemon. --For this scenario, run `az acr login` first with the `--expose-token` parameter. This option exposes an access token instead of logging in through the Docker CLI. 
--```azurecli -az acr login --name <acrName> --expose-token -``` --Output displays the access token, abbreviated here: --```console -{ - "accessToken": "eyJhbGciOiJSUzI1NiIs[...]24V7wA", - "loginServer": "myregistry.azurecr.io" -} -``` -For registry authentication, we recommend that you store the token credential in a safe location and follow recommended practices to manage [docker login](https://docs.docker.com/engine/reference/commandline/login/) credentials. For example, store the token value in an environment variable: --```azurecli -TOKEN=$(az acr login --name <acrName> --expose-token --output tsv --query accessToken) -``` --Then, run `docker login`, passing `00000000-0000-0000-0000-000000000000` as the username and using the access token as password: --```console -docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN -``` -Likewise, you can use the token returned by `az acr login` with the `helm registry login` command to authenticate with the registry: --```console -echo $TOKEN | helm registry login myregistry.azurecr.io \ - --username 00000000-0000-0000-0000-000000000000 \ - --password-stdin -``` --### [Azure PowerShell](#tab/azure-powershell) --When working with your registry directly, such as pulling images to and pushing images from a development workstation to a registry you created, authenticate by using your individual Azure identity. Sign in to [Azure PowerShell](/powershell/azure/install-az-ps) with [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount), and then run the [Connect-AzContainerRegistry](/powershell/module/az.containerregistry/connect-azcontainerregistry) cmdlet: --```azurepowershell -Connect-AzAccount -Connect-AzContainerRegistry -Name <acrName> -``` --When you log in with `Connect-AzContainerRegistry`, PowerShell uses the token created when you executed `Connect-AzAccount` to seamlessly authenticate your session with your registry. To complete the authentication flow, the Docker CLI and Docker daemon must be installed and running in your environment. `Connect-AzContainerRegistry` uses the Docker client to set a Microsoft Entra token in the `docker.config` file. Once you've logged in this way, your credentials are cached, and subsequent `docker` commands in your session do not require a username or password. --> [!TIP] -> Also use `Connect-AzContainerRegistry` to authenticate an individual identity when you want to push or pull artifacts other than Docker images to your registry, such as [OCI artifacts](container-registry-manage-artifact.md). --For registry access, the token used by `Connect-AzContainerRegistry` is valid for **3 hours**, so we recommend that you always log in to the registry before running a `docker` command. If your token expires, you can refresh it by using the `Connect-AzContainerRegistry` command again to reauthenticate. --Using `Connect-AzContainerRegistry` with Azure identities provides [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.yml). For some scenarios, you may want to log in to a registry with your own individual identity in Microsoft Entra ID, or configure other Azure users with specific [Azure roles and permissions](container-registry-roles.md). For cross-service scenarios or to handle the needs of a workgroup or a development workflow where you don't want to manage individual access, you can also log in with a [managed identity for Azure resources](container-registry-authentication-managed-identity.md).
----## Service principal --If you assign a [service principal](../active-directory/develop/app-objects-and-service-principals.md) to your registry, your application or service can use it for headless authentication. Service principals allow [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.yml) to a registry, and you can assign multiple service principals to a registry. Multiple service principals allow you to define different access for different applications. --An ACR authentication token is created when you sign in to the registry and is refreshed by subsequent operations. The token's time to live is 3 hours. --The available roles for a container registry include: --* **AcrPull**: pull --* **AcrPush**: pull and push --* **Owner**: pull, push, and assign roles to other users --For a complete list of roles, see [Azure Container Registry roles and permissions](container-registry-roles.md). --For CLI scripts to create a service principal for authenticating with an Azure container registry, and more guidance, see [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md). --## Admin account --Each container registry includes an admin user account, which is disabled by default. You can enable the admin user and manage its credentials in the Azure portal, or by using the Azure CLI, Azure PowerShell, or other Azure tools. The admin account has full permissions to the registry. --The admin account is currently required for some scenarios to deploy an image from a container registry to certain Azure services. For example, the admin account is needed when you use the Azure portal to deploy a container image from a registry directly to [Azure Container Instances](/azure/container-instances/container-instances-using-azure-container-registry#deploy-with-azure-portal) or [Azure Web Apps for Containers](container-registry-tutorial-deploy-app.md). --> [!IMPORTANT] -> The admin account is designed for a single user to access the registry, mainly for testing purposes. We do not recommend sharing the admin account credentials among multiple users. All users authenticating with the admin account appear as a single user with push and pull access to the registry. Changing or disabling this account disables registry access for all users who use its credentials. We recommend individual identities for users, and service principals for headless scenarios. -> --The admin account is provided with two passwords, both of which can be regenerated. New passwords created for admin accounts are available immediately. Regenerated admin account passwords take 60 seconds to replicate and become available. Two passwords allow you to maintain a connection to the registry by using one password while you regenerate the other. If the admin account is enabled, you can pass the username and either password to the `docker login` command when prompted for basic authentication to the registry. For example: --``` -docker login myregistry.azurecr.io -``` --For recommended practices to manage login credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference.
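If the admin user is enabled, you can also sign in without an interactive prompt. The following minimal sketch (assuming the Azure CLI, the placeholder registry *myregistry*, and that the admin user is already enabled; the admin user name defaults to the registry name) reads the first admin password with `az acr credential show` and passes it to `docker login` on standard input: --```azurecli -# Read the first admin password (assumes the admin user is already enabled) -ADMIN_PWD=$(az acr credential show --name myregistry --query "passwords[0].value" --output tsv) --# Sign in with the admin user name (defaults to the registry name), passing the password on stdin -echo $ADMIN_PWD | docker login myregistry.azurecr.io --username myregistry --password-stdin -```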
--### [Azure CLI](#tab/azure-cli) --To enable the admin user for an existing registry, you can use the `--admin-enabled` parameter of the [az acr update](/cli/azure/acr#az-acr-update) command in the Azure CLI: --```azurecli -az acr update -n <acrName> --admin-enabled true -``` --### [Azure PowerShell](#tab/azure-powershell) --To enable the admin user for an existing registry, you can use the `EnableAdminUser` parameter of the [Update-AzContainerRegistry](/powershell/module/az.containerregistry/update-azcontainerregistry) command in Azure PowerShell: --```azurepowershell -Update-AzContainerRegistry -Name <acrName> -ResourceGroupName myResourceGroup -EnableAdminUser -``` ----You can enable the admin user in the Azure portal by navigating to your registry, selecting **Access keys** under **SETTINGS**, then **Enable** under **Admin user**. --![Enable admin user UI in the Azure portal][auth-portal-01] --## Log in with an alternative container tool instead of Docker -In some scenarios, you need to use alternative container tools like `podman` instead of the common container tool `docker`. For example, [Docker is no longer available in RHEL 8 and 9][docker-deprecated-redhat-8-9], so you have to switch your container tool. --The default container tool is set to `docker` for `az acr login` commands. If you don't set the default container tool and the `docker` command is missing in your environment, the following error appears: -```bash -az acr login --name <acrName> -2024-03-29 07:30:10.014426 An error occurred: DOCKER_COMMAND_ERROR -Please verify if Docker client is installed and running. -``` --To change the default container tool that the `az acr login` command uses, you can set the environment variable `DOCKER_COMMAND`. For example: -```azurecli -DOCKER_COMMAND=podman \ -az acr login --name <acrName> -``` --> [!NOTE] -> You need Azure CLI version 2.59.0 or later installed and configured to use this feature. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. --## Next steps --* [Push your first image using the Azure CLI](container-registry-get-started-azure-cli.md) --* [Push your first image using Azure PowerShell](container-registry-get-started-powershell.md) --<!-- IMAGES --> -[auth-portal-01]: ./media/container-registry-authentication/auth-portal-01.png --<!-- EXTERNAL LINKS --> -[docker-deprecated-redhat-8-9]: https://access.redhat.com/solutions/3696691 --<!-- INTERNAL LINKS --> -[install-azure-cli]: /cli/azure/install-azure-cli |
container-registry | Container Registry Auto Purge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md | - Title: Purge tags and manifests -description: Use a purge command to delete multiple tags and manifests from an Azure container registry based on age and a tag filter, and optionally schedule purge operations. ----- Previously updated : 10/31/2023---# Automatically purge images from an Azure container registry --When you use an Azure container registry as part of a development workflow, the registry can quickly fill up with images or other artifacts that aren't needed after a short period. You might want to delete all tags that are older than a certain duration or match a specified name filter. To delete multiple artifacts quickly, this article introduces the `acr purge` command you can run as an on-demand or [scheduled](container-registry-tasks-scheduled.md) ACR Task. --The `acr purge` command is currently distributed in a public container image (`mcr.microsoft.com/acr/acr-cli:0.5`), built from source code in the [acr-cli](https://github.com/Azure/acr-cli) repo in GitHub. `acr purge` is currently in preview. --You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the ACR task examples in this article. If you'd like to use it locally, version 2.0.76 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. --> [!WARNING] -> Use the `acr purge` command with caution--deleted image data is UNRECOVERABLE. If you have systems that pull images by manifest digest (as opposed to image name), you should not purge untagged images. Deleting untagged images will prevent those systems from pulling the images from your registry. Instead of pulling by manifest, consider adopting a *unique tagging* scheme, a [recommended best practice](container-registry-image-tag-version.md). --If you want to delete single image tags or manifests using Azure CLI commands, see [Delete container images in Azure Container Registry](container-registry-delete.md). --## Use the purge command --The `acr purge` container command deletes images by tag in a repository that match a name filter and that are older than a specified duration. By default, only tag references are deleted, not the underlying [manifests](container-registry-concepts.md#manifest) and layer data. The command has an option to also delete manifests. --> [!NOTE] -> `acr purge` does not delete an image tag or repository where the `write-enabled` attribute is set to `false`. For information, see [Lock a container image in an Azure container registry](container-registry-image-lock.md). --`acr purge` is designed to run as a container command in an [ACR Task](container-registry-tasks-overview.md), so that it authenticates automatically with the registry where the task runs and performs actions there. The task examples in this article use the `acr purge` command [alias](container-registry-tasks-reference-yaml.md#aliases) in place of a fully qualified container image command. --> [!IMPORTANT] -- The standard command to execute the `acr purge` is `az acr run --registry <YOUR_REGISTRY> --cmd 'acr purge --optional parameter' `.-- We recommend running the complete `acr purge` command to use the ACR Purge. 
For example, run `acr purge --help` as `az acr run --registry <YOUR_REGISTRY> --cmd 'acr purge --help'`.--At a minimum, specify the following when you run `acr purge`: --* `--filter` - A repository name *regular expression* and a tag name *regular expression* to filter images in the registry. Examples: `--filter "hello-world:.*"` matches all tags in the `hello-world` repository, `--filter "hello-world:^1.*"` matches tags beginning with `1` in the `hello-world` repository, and `--filter ".*/cache:.*"` matches all tags in the repositories ending in `/cache`. You can also pass multiple `--filter` parameters. -* `--ago` - A Go-style [duration string](https://go.dev/pkg/time/) to indicate a duration beyond which images are deleted. The duration consists of a sequence of one or more decimal numbers, each with a unit suffix. Valid time units include "d" for days, "h" for hours, and "m" for minutes. For example, `--ago 2d3h6m` selects all filtered images last modified more than two days, 3 hours, and 6 minutes ago, and `--ago 1.5h` selects images last modified more than 1.5 hours ago. --`acr purge` supports several optional parameters. The first two below are used in the examples in this article: --* `--untagged` - Specifies that all manifests that don't have associated tags (*untagged manifests*) are deleted. This parameter also deletes untagged manifests in addition to tags that are already being deleted. To purge a manifest, first remove all of its associated tags; only then can you purge the untagged manifest by using `--untagged`. -* `--dry-run` - Specifies that no data is deleted, but the output is the same as if the command is run without this flag. This parameter is useful for testing a purge command to make sure it does not inadvertently delete data you intend to preserve. -* `--keep` - Specifies that the latest x number of to-be-deleted tags are retained. The latest tags are determined by the last modified time of the tag. -* `--concurrency` - Specifies a number of purge tasks to process concurrently. A default value is used if this parameter is not provided. --> [!NOTE] -> The `--untagged` parameter isn't affected by the `--ago` filter. -For additional parameters, run `acr purge --help`. --`acr purge` supports other features of ACR Tasks commands including [run variables](container-registry-tasks-reference-yaml.md#run-variables) and [task run logs](container-registry-tasks-logs.md) that are streamed and also saved for later retrieval. --### Run in an on-demand task --The following example uses the [az acr run][az-acr-run] command to run the `acr purge` command on-demand. This example deletes all image tags and manifests in the `hello-world` repository in *myregistry* that were modified more than 1 day ago and all untagged manifests. The container command is passed using an environment variable. The task runs without a source context. --```azurecli -# Environment variable for container command line -PURGE_CMD="acr purge --filter 'hello-world:.*' \ - --untagged --ago 1d" --az acr run \ - --cmd "$PURGE_CMD" \ - --registry myregistry \ - -``` --### Run in a scheduled task --The following example uses the [az acr task create][az-acr-task-create] command to create a daily [scheduled ACR task](container-registry-tasks-scheduled.md). The task purges tags modified more than 7 days ago in the `hello-world` repository. The container command is passed using an environment variable. The task runs without a source context.
--```azurecli -# Environment variable for container command line -PURGE_CMD="acr purge --filter 'hello-world:.*' \ - --ago 7d" --az acr task create --name purgeTask \ - --cmd "$PURGE_CMD" \ - --schedule "0 0 * * *" \ - --registry myregistry \ - --context -``` --Run the [az acr task show][az-acr-task-show] command to see that the timer trigger is configured. --### Purge large numbers of tags and manifests --Purging a large number of tags and manifests could take several minutes or longer. To purge thousands of tags and manifests, the command might need to run longer than the default timeout time of 600 seconds for an on-demand task, or 3600 seconds for a scheduled task. If the timeout time is exceeded, only a subset of tags and manifests are deleted. To ensure that a large-scale purge is complete, pass the `--timeout` parameter to increase the value. --For example, the following on-demand task sets a timeout time of 3600 seconds (1 hour): --```azurecli -# Environment variable for container command line -PURGE_CMD="acr purge --filter 'hello-world:.*' \ - --ago 1d --untagged" --az acr run \ - --cmd "$PURGE_CMD" \ - --registry myregistry \ - --timeout 3600 \ - -``` --## Example: Scheduled purge of multiple repositories in a registry --This example walks through using `acr purge` to periodically clean up multiple repositories in a registry. For example, you might have a development pipeline that pushes images to the `samples/devimage1` and `samples/devimage2` repositories. You periodically import development images into a production repository for your deployments, so you no longer need the development images. On a weekly basis, you purge the `samples/devimage1` and `samples/devimage2` repositories, in preparation for the coming week's work. --### Preview the purge --Before deleting data, we recommend running an on-demand purge task using the `--dry-run` parameter. This option allows you to see the tags and manifests that the command will purge, without removing any data. --In the following example, the filter in each repository selects all tags. The `--ago 0d` parameter matches images of all ages in the repositories that match the filters. Modify the selection criteria as needed for your scenario. The `--untagged` parameter indicates to delete manifests in addition to tags. The container command is passed to the [az acr run][az-acr-run] command using an environment variable. --```azurecli -# Environment variable for container command line -PURGE_CMD="acr purge \ - --filter 'samples/devimage1:.*' --filter 'samples/devimage2:.*' \ - --ago 0d --untagged --dry-run" --az acr run \ - --cmd "$PURGE_CMD" \ - --registry myregistry \ - -``` --Review the command output to see the tags and manifests that match the selection parameters. Because the command is run with `--dry-run`, no data is deleted. --Sample output: --```console -[...] 
-Deleting tags for repository: samples/devimage1 -myregistry.azurecr.io/samples/devimage1:232889b -myregistry.azurecr.io/samples/devimage1:a21776a -Deleting manifests for repository: samples/devimage1 -myregistry.azurecr.io/samples/devimage1@sha256:81b6f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e788b -myregistry.azurecr.io/samples/devimage1@sha256:3ded859790e68bd02791a972ab0bae727231dc8746f233a7949e40f8ea90c8b3 -Deleting tags for repository: samples/devimage2 -myregistry.azurecr.io/samples/devimage2:5e788ba -myregistry.azurecr.io/samples/devimage2:f336b7c -Deleting manifests for repository: samples/devimage2 -myregistry.azurecr.io/samples/devimage2@sha256:8d2527cde610e1715ad095cb12bc7ed169b60c495e5428eefdf336b7cb7c0371 -myregistry.azurecr.io/samples/devimage2@sha256:ca86b078f89607bc03ded859790e68bd02791a972ab0bae727231dc8746f233a --Number of deleted tags: 4 -Number of deleted manifests: 4 -[...] -``` --### Schedule the purge --After you've verified the dry run, create a scheduled task to automate the purge. The following example schedules a weekly task on Sunday at 1:00 UTC to run the previous purge command: --```azurecli -# Environment variable for container command line -PURGE_CMD="acr purge \ - --filter 'samples/devimage1:.*' --filter 'samples/devimage2:.*' \ - --ago 0d --untagged" --az acr task create --name weeklyPurgeTask \ - --cmd "$PURGE_CMD" \ - --schedule "0 1 * * Sun" \ - --registry myregistry \ - --context -``` --Run the [az acr task show][az-acr-task-show] command to see that the timer trigger is configured. --## Next steps --Learn about other options to [delete image data](container-registry-delete.md) in Azure Container Registry. --For more information about image storage, see [Container image storage in Azure Container Registry](container-registry-storage.md). --<!-- LINKS - External --> --[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ --<!-- LINKS - Internal --> -[azure-cli-install]: /cli/azure/install-azure-cli -[az-acr-run]: /cli/azure/acr#az-acr-run -[az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create -[az-acr-task-show]: /cli/azure/acr/task#az-acr-task-show |
container-registry | Container Registry Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-azure-policy.md | - Title: Compliance using Azure Policy -description: Assign built-in policy definitions in Azure Policy to audit compliance of your Azure container registries ---- Previously updated : 10/31/2023---# Audit compliance of Azure container registries using Azure Policy --[Azure Policy](../governance/policy/overview.md) is a service in Azure that you use to create, assign, and manage policy definitions. These policy definitions enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements. --This article introduces built-in policy definitions for Azure Container Registry. Use these definitions to audit new and existing registries for compliance. --There is no charge for using Azure Policy. --## Built-in policy definitions --The following built-in policy definitions are specific to Azure Container Registry: ---## Create policy assignments --* Create policy assignments using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs. -* Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). Container registry policy assignments apply to existing and new container registries within the scope. -* Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time. --> [!NOTE] -> After you create or update a policy assignment, it takes some time for the assignment to evaluate resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). --## Review policy compliance --Access compliance information generated by your policy assignments using the Azure portal, Azure command-line tools, or the Azure Policy SDKs. For details, see [Get compliance data of Azure resources](../governance/policy/how-to/get-compliance-data.md). --When a resource is non-compliant, there are many possible reasons. To determine the reason or to find the change responsible, see [Determine non-compliance](../governance/policy/how-to/determine-non-compliance.md). --### Policy compliance in the portal: --1. Select **All services**, and search for **Policy**. -1. Select **Compliance**. -1. Use the filters to limit compliance states or to search for policies. -- ![Policy compliance in portal](./media/container-registry-azure-policy/azure-policy-compliance.png) - -1. Select a policy to review aggregate compliance details and events. If desired, then select a specific registry for resource compliance. --### Policy compliance in the Azure CLI --You can also use the Azure CLI to get compliance data. 
For example, use the [az policy assignment list](/cli/azure/policy/assignment#az-policy-assignment-list) command in the CLI to get the policy IDs of the Azure Container Registry policies that are applied: --```azurecli -az policy assignment list --query "[?contains(displayName,'Container Registries')].{name:displayName, ID:id}" --output table -``` --Sample output: --``` -Name ID -- ---Container Registries should not allow unrestricted network access /subscriptions/<subscriptionID>/providers/Microsoft.Authorization/policyAssignments/b4faf132dc344b84ba68a441 -Container Registries should be encrypted with a Customer-Managed Key (CMK) /subscriptions/<subscriptionID>/providers/Microsoft.Authorization/policyAssignments/cce1ed4f38a147ad994ab60a -``` --Then run [az policy state list](/cli/azure/policy/state#az-policy-state-list) to return the JSON-formatted compliance state for all resources under a specific policy ID: --```azurecli -az policy state list \ - --resource <policyID> -``` --Or run [az policy state list](/cli/azure/policy/state#az-policy-state-list) to return the JSON-formatted compliance state of a specific registry resource, such as *myregistry*: --```azurecli -az policy state list \ - --resource myregistry \ - --namespace Microsoft.ContainerRegistry \ - --resource-type registries \ - --resource-group myresourcegroup -``` --## Next steps --* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md). --* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md). --* Learn more about [governance capabilities](../governance/index.yml) in Azure. |
container-registry | Container Registry Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-best-practices.md | - Title: Registry best practices -description: Learn how to use your Azure container registry effectively by following these best practices. ---- Previously updated : 10/31/2023---# Best practices for Azure Container Registry --By following these best practices, you can help maximize the performance and cost-effective use of your private registry in Azure to store and deploy container images and other artifacts. --For background on registry concepts, see [About registries, repositories, and images](container-registry-concepts.md). See also [Recommendations for tagging and versioning container images](container-registry-image-tag-version.md) for strategies to tag and version images in your registry. --## Network-close deployment --Create your container registry in the same Azure region in which you deploy containers. Placing your registry in a region that is network-close to your container hosts can help lower both latency and cost. --Network-close deployment is one of the primary reasons for using a private container registry. Docker images have an efficient [layering construct](https://docs.docker.com/get-started/docker-concepts/building-images/understanding-image-layers/) that allows for incremental deployments. However, new nodes need to pull all layers required for a given image. This initial `docker pull` can quickly add up to multiple gigabytes. Having a private registry close to your deployment minimizes the network latency. -Additionally, all public clouds, Azure included, implement network egress fees. Pulling images from one datacenter to another adds network egress fees, in addition to the latency. --## Geo-replicate multi-region deployments --Use Azure Container Registry's [geo-replication](container-registry-geo-replication.md) feature if you're deploying containers to multiple regions. Whether you're serving global customers from local data centers or your development team is in different locations, you can simplify registry management and minimize latency by geo-replicating your registry. You can also configure regional [webhooks](container-registry-webhook.md) to notify you of events in specific replicas such as when images are pushed. --Geo-replication is available with [Premium](container-registry-skus.md) registries. To learn how to use geo-replication, see the three-part tutorial, [Geo-replication in Azure Container Registry](container-registry-tutorial-prepare-registry.md). --## Maximize pull performance --In addition to placing images close to your deployments, characteristics of your images themselves can impact pull performance. --* **Image size** - Minimize the sizes of your images by removing unnecessary [layers](container-registry-concepts.md#manifest) or reducing the size of layers. One way to reduce image size is to use the [multi-stage Docker build](https://docs.docker.com/develop/develop-images/multistage-build/) approach to include only the necessary runtime components. -- Also check whether your image can include a lighter base OS image. And if you use a deployment environment such as Azure Container Instances that caches certain base images, check whether you can swap an image layer for one of the cached images. -* **Number of layers** - Balance the number of layers used. If you have too few, you don't benefit from layer reuse and caching on the host.
Too many, and your deployment environment spends more time pulling and decompressing. Five to 10 layers is optimal. --Also choose a [service tier](container-registry-skus.md) of Azure Container Registry that meets your performance needs. The Premium tier provides the greatest bandwidth and highest rate of concurrent read and write operations when you have high-volume deployments. --## Repository namespaces --By using repository namespaces, you can allow sharing a single registry across multiple groups within your organization. Registries can be shared across deployments and teams. Azure Container Registry supports nested namespaces, enabling group isolation. However, the registry manages all repositories independently, not as a hierarchy. --For example, consider the following container image tags. Images that are used corporate-wide, like `aspnetcore`, are placed in the root namespace, while container images owned by the Products and Marketing groups each use their own namespaces. --- *contoso.azurecr.io/aspnetcore:2.0*-- *contoso.azurecr.io/products/widget/web:1*-- *contoso.azurecr.io/products/bettermousetrap/refundapi:12.3*-- *contoso.azurecr.io/marketing/2017-fall/concertpromotions/campaign:218.42*--## Dedicated resource group --Because container registries are resources that are used across multiple container hosts, a registry should reside in its own resource group. --Although you might experiment with a specific host type, such as [Azure Container Instances](/azure/container-instances/container-instances-overview), you'll likely want to delete the container instance when you're done. However, you might also want to keep the collection of images you pushed to Azure Container Registry. By placing your registry in its own resource group, you minimize the risk of accidentally deleting the collection of images in the registry when you delete the container instance resource group. --## Authentication and authorization --When authenticating with an Azure container registry, there are two primary scenarios: individual authentication, and service (or "headless") authentication. The following table provides a brief overview of these scenarios, and the recommended method of authentication for each. --| Type | Example scenario | Recommended method | -|||| -| Individual identity | A developer pulling images to or pushing images from their development machine. | [az acr login](/cli/azure/acr#az-acr-login) | -| Headless/service identity | Build and deployment pipelines where the user isn't directly involved. | [Service principal](container-registry-authentication.md#service-principal) | --For in-depth information about these and other Azure Container Registry authentication scenarios, see [Authenticate with an Azure container registry](container-registry-authentication.md). --Azure Container Registry supports security practices in your organization to distribute duties and privileges to different identities. Using [role-based access control](container-registry-roles.md), assign appropriate permissions to different users, service principals, or other identities that perform different registry operations. For example, assign push permissions to a service principal used in a build pipeline and assign pull permissions to a different identity used for deployment. Create [tokens](container-registry-repository-scoped-permissions.md) for fine-grained, time-limited access to specific repositories. 
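To illustrate the preceding role assignments, the following minimal sketch (assuming the Azure CLI, the placeholder registry *myContainerRegistry*, and hypothetical IDs for a build service principal and a deployment identity) grants AcrPush to the build identity and AcrPull to the deployment identity: --```azurecli -# Get the resource ID of the registry (placeholder registry name) -registryId=$(az acr show --name myContainerRegistry --query id --output tsv) --# Build pipeline service principal: push and pull -az role assignment create --assignee <buildServicePrincipalId> --scope $registryId --role AcrPush --# Deployment identity: pull only -az role assignment create --assignee <deploymentIdentityId> --scope $registryId --role AcrPull -```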
--## Manage registry size --The storage constraints of each [container registry service tier][container-registry-skus] are intended to align with a typical scenario: **Basic** for getting started, **Standard** for most production applications, and **Premium** for hyper-scale performance and [geo-replication][container-registry-geo-replication]. Throughout the life of your registry, you should manage its size by periodically deleting unused content. --Use the Azure CLI command [az acr show-usage][az-acr-show-usage] to display the current consumption of storage and other resources in your registry: --```azurecli -az acr show-usage --resource-group myResourceGroup --name myregistry --output table -``` --Sample output: --``` -NAME LIMIT CURRENT VALUE UNIT -Size 536870912000 215629144 Bytes -Webhooks 500 1 Count -Geo-replications -1 3 Count -IPRules 100 1 Count -VNetRules 100 0 Count -PrivateEndpointConnections 10 0 Count -``` --You can also find the current storage usage in the **Overview** of your registry in the Azure portal: --![Registry usage information in the Azure portal][registry-overview-quotas] --> [!NOTE] -> In a [geo-replicated](container-registry-geo-replication.md) registry, storage usage is shown for the home region. Multiply by the number of replications for total registry storage consumed. --### Delete image data --Azure Container Registry supports several methods for deleting image data from your container registry. You can delete images by tag or manifest digest, or delete a whole repository. --For details on deleting image data from your registry, including untagged (sometimes called "dangling" or "orphaned") images, see [Delete container images in Azure Container Registry](container-registry-delete.md). You can also set a [retention policy](container-registry-retention-policy.md) for untagged manifests. --## Next steps --Azure Container Registry is available in several tiers (also called SKUs) that provide different capabilities. For details on the available service tiers, see [Azure Container Registry service tiers](container-registry-skus.md). --For recommendations to improve the security posture of your container registries, see [Azure Security Baseline for Azure Container Registry](security-baseline.md). --<!-- IMAGES --> -[delete-repository-portal]: ./media/container-registry-best-practices/delete-repository-portal.png -[registry-overview-quotas]: ./media/container-registry-best-practices/registry-overview-quotas.png --<!-- LINKS - Internal --> -[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete -[az-acr-show-usage]: /cli/azure/acr#az_acr_show_usage -[azure-cli]: /cli/azure -[container-registry-geo-replication]: container-registry-geo-replication.md -[container-registry-skus]: container-registry-skus.md |
container-registry | Container Registry Check Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-check-health.md | - Title: Check registry health -description: Learn how to run a quick diagnostic command to identify common problems when using an Azure container registry, including local Docker configuration and connectivity to the registry ----- Previously updated : 10/31/2023--# Check the health of an Azure container registry --When using an Azure container registry, you might occasionally encounter problems. For example, you might not be able to pull a container image because of an issue with Docker in your local environment. Or, a network issue might prevent you from connecting to the registry. --As a first diagnostic step, run the [az acr check-health][az-acr-check-health] command to get information about the health of the environment and, optionally, access to a target registry. This command is available in Azure CLI version 2.0.67 or later. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --For additional registry troubleshooting guidance, see: -* [Troubleshoot registry login](container-registry-troubleshoot-login.md) -* [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md) -* [Troubleshoot registry performance](container-registry-troubleshoot-performance.md) --## Run az acr check-health --The following examples show different ways to run the `az acr check-health` command. --> [!NOTE] -> If you run the command in Azure Cloud Shell, the local environment is not checked. However, you can still check access to a target registry. --### Check the environment only --To check the local Docker daemon, CLI version, and Helm client configuration, run the command without additional parameters: --```azurecli -az acr check-health -``` --### Check the environment and a target registry --To check access to a registry as well as perform local environment checks, pass the name of a target registry. For example: --```azurecli -az acr check-health --name myregistry -``` --### Check registry access in a virtual network --To verify DNS settings to route to a private endpoint, pass the virtual network's name or resource ID. The resource ID is required when the virtual network is in a different subscription or resource group than the registry. --```azurecli -az acr check-health --name myregistry --vnet myvnet -``` --## Error reporting --The command logs information to the standard output. If a problem is detected, it provides an error code and description. For more information about the codes and possible solutions, see the [error reference](container-registry-health-error-reference.md). --By default, the command stops whenever it finds an error. You can also run the command so that it provides output for all health checks, even if errors are found.
Add the `--ignore-errors` parameter, as shown in the following examples: --```azurecli -# Check environment only -az acr check-health --ignore-errors --# Check environment and target registry; skip confirmation to pull image -az acr check-health --name myregistry --ignore-errors --yes -``` --Sample output: --```azurecli -az acr check-health --name myregistry --ignore-errors --yes -``` --```output -Docker daemon status: available -Docker version: Docker version 18.09.2, build 6247962 -Docker pull of 'mcr.microsoft.com/mcr/hello-world:latest' : OK -ACR CLI version: 2.2.9 -Helm version: -Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"} -DNS lookup to myregistry.azurecr.io at IP 40.xxx.xxx.162 : OK -Challenge endpoint https://myregistry.azurecr.io/v2/ : OK -Fetch refresh token for registry 'myregistry.azurecr.io' : OK -Fetch access token for registry 'myregistry.azurecr.io' : OK -``` --## Check if registry is configured with quarantine --Once you enable quarantine for a container registry, every image you publish to that registry is quarantined. Any attempt to access or pull a quarantined image fails with an error. For more information, see [pull the quarantined image](https://github.com/Azure/acr/tree/main/docs/preview/quarantine#pull-the-quarantined-image). --## Next steps --For details about error codes returned by the [az acr check-health][az-acr-check-health] command, see the [Health check error reference](container-registry-health-error-reference.md). --See the [FAQ](container-registry-faq.yml) for frequently asked questions and other known issues about Azure Container Registry. ------<!-- LINKS - internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-check-health]: /cli/azure/acr#az_acr_check_health |
container-registry | Container Registry Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-concepts.md | - Title: About registries, repositories, images, and artifacts -description: Introduction to key concepts of Azure container registries, repositories, container images, and other artifacts. ---- Previously updated : 10/31/2023---# About registries, repositories, and artifacts --This article introduces the key concepts of container registries, repositories, and container images and related artifacts. ---## Registry --A container *registry* is a service that stores and distributes container images and related artifacts. Docker Hub is an example of a public container registry that serves as a general catalog of Docker container images. Azure Container Registry provides users with direct control of their container content, with integrated authentication, [geo-replication](container-registry-geo-replication.md) supporting global distribution and reliability for network-close deployments, [virtual network configuration with Private Link](container-registry-private-link.md), [tag locking](container-registry-image-lock.md), and many other enhanced features. --In addition to Docker-compatible container images, Azure Container Registry supports a range of [content artifacts](container-registry-image-formats.md) including Helm charts and Open Container Initiative (OCI) image formats. --## Repository --A *repository* is a collection of container images or other artifacts in a registry that have the same name, but different tags. For example, the following three images are in the `acr-helloworld` repository: --- *acr-helloworld:latest*-- *acr-helloworld:v1*-- *acr-helloworld:v2*--Repository names can also include [namespaces](container-registry-best-practices.md#repository-namespaces). Namespaces allow you to identify related repositories and artifact ownership in your organization by using forward slash-delimited names. However, the registry manages all repositories independently, not as a hierarchy. For example: --- *marketing/campaign10-18/web:v2*-- *marketing/campaign10-18/api:v3*-- *marketing/campaign10-18/email-sender:v2*-- *product-returns/web-submission:20180604*-- *product-returns/legacy-integrator:20180715*--Repository names can only include lowercase alphanumeric characters, periods, dashes, underscores, and forward slashes. --## Artifact --A container image or other artifact within a registry is associated with one or more tags, has one or more layers, and is identified by a manifest. Understanding how these components relate to each other can help you manage your registry effectively. --### Tag --The *tag* for an image or other artifact specifies its version. A single artifact within a repository can be assigned one or many tags, and may also be "untagged." That is, you can delete all tags from an image, while the image's data (its layers) remain in the registry. --The repository (or repository and namespace) plus a tag defines an image's name. You can push and pull an image by specifying its name in the push or pull operation. The tag `latest` is used by default if you don't provide one in your Docker commands. --How you tag container images is guided by your scenarios to develop or deploy them. For example, stable tags are recommended for maintaining your base images, and unique tags for deploying images. 
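For example, a single build can carry both kinds of tags. The following sketch assumes a local image named `acr-helloworld` and the example registry used elsewhere in this article; the date-based unique tag is only illustrative:

```bash
# Apply a stable tag (tracked by consumers of the base image) and a
# unique, build-specific tag (used for deployments) to the same image.
docker tag acr-helloworld myregistry.azurecr.io/acr-helloworld:stable
docker tag acr-helloworld myregistry.azurecr.io/acr-helloworld:20231031.1

# Push both tags; they reference the same manifest and layers.
docker push myregistry.azurecr.io/acr-helloworld:stable
docker push myregistry.azurecr.io/acr-helloworld:20231031.1
```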
For more information, see [Recommendations for tagging and versioning container images](container-registry-image-tag-version.md). --For tag naming rules, see the [Docker documentation](https://docs.docker.com/engine/reference/commandline/tag/). --### Layer --Container images and artifacts are made up of one or more *layers*. Different artifact types define layers differently. For example, in a Docker container image, each layer corresponds to a line in the Dockerfile that defines the image: ---Artifacts in a registry share common layers, increasing storage efficiency. For example, several images in different repositories might have a common ASP.NET Core base layer, but only one copy of that layer is stored in the registry. Layer sharing also optimizes layer distribution to nodes, with multiple artifacts sharing common layers. If an image already on a node includes the ASP.NET Core layer as its base, the subsequent pull of a different image referencing the same layer doesn't transfer the layer to the node. Instead, it references the layer already existing on the node. --To provide secure isolation and protection from potential layer manipulation, layers are not shared across registries. --### Manifest --Each container image or artifact pushed to a container registry is associated with a *manifest*. The manifest, generated by the registry when the content is pushed, uniquely identifies the artifacts and specifies the layers. --A basic manifest for a Linux `hello-world` image looks similar to the following: -- ```json - { - "schemaVersion": 2, - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "config": { - "mediaType": "application/vnd.docker.container.image.v1+json", - "size": 1510, - "digest": "sha256:fbf289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" - }, - "layers": [ - { - "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", - "size": 977, - "digest": "sha256:2c930d010525941c1d56ec53b97bd057a67ae1865eebf042686d2a2d18271ced" - } - ] - } - ``` --You can list the manifests for a repository with the Azure CLI command [az acr manifest list-metadata][az-acr-manifest-list-metadata]: --```azurecli -az acr manifest list-metadata --name <repositoryName> --registry <acrName> -``` --For example, list the manifests for the "acr-helloworld" repository: --```azurecli -az acr manifest list-metadata --name acr-helloworld --registry myregistry -``` --```output -[ - { - "digest": "sha256:0a2e01852872580b2c2fea9380ff8d7b637d3928783c55beb3f21a6e58d5d108", - "tags": [ - "latest", - "v3" - ], - "timestamp": "2018-07-12T15:52:00.2075864Z" - }, - { - "digest": "sha256:3168a21b98836dda7eb7a846b3d735286e09a32b0aa2401773da518e7eba3b57", - "tags": [ - "v2" - ], - "timestamp": "2018-07-12T15:50:53.5372468Z" - }, - { - "digest": "sha256:7ca0e0ae50c95155dbb0e380f37d7471e98d2232ed9e31eece9f9fb9078f2728", - "tags": [ - "v1" - ], - "timestamp": "2018-07-11T21:38:35.9170967Z" - } -] -``` --### Manifest digest --Manifests are identified by a unique SHA-256 hash, or *manifest digest*. Each image or artifact--whether tagged or not--is identified by its digest. The digest value is unique even if the artifact's layer data is identical to that of another artifact. This mechanism is what allows you to repeatedly push identically tagged images to a registry. For example, you can repeatedly push `myimage:latest` to your registry without error because each image is identified by its unique digest. 
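If you need to know which digest a given tag currently resolves to, you can filter the manifest metadata shown earlier. This is a sketch that reuses the example repository from this article; the `v3` tag is illustrative:

```azurecli
# Print the digest that the "v3" tag currently points to in acr-helloworld.
az acr manifest list-metadata --name acr-helloworld --registry myregistry \
    --query "[?tags && contains(tags, 'v3')].digest" --output tsv
```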
--You can pull an artifact from a registry by specifying its digest in the pull operation. Some systems may be configured to pull by digest because it guarantees the image version being pulled, even if an identically tagged image is pushed later to the registry. --> [!IMPORTANT] -> If you repeatedly push modified artifacts with identical tags, you might create "orphans"--artifacts that are untagged, but still consume space in your registry. Untagged images are not shown in the Azure CLI or in the Azure portal when you list or view images by tag. However, their layers still exist and consume space in your registry. Deleting an untagged image frees registry space when the manifest is the only one, or the last one, pointing to a particular layer. For information about freeing space used by untagged images, see [Delete container images in Azure Container Registry](container-registry-delete.md). --## Addressing an artifact --To address a registry artifact for push and pull operations with Docker or other client tools, combine the fully qualified registry name, repository name (including namespace path if applicable), and an artifact tag or manifest digest. See previous sections for explanations of these terms. -- **Address by tag**: `[loginServerUrl]/[repository][:tag]` - - **Address by digest**: `[loginServerUrl]/[repository@sha256][:digest]` --When using Docker or other client tools to pull or push artifacts to an Azure container registry, use the registry's fully qualified URL, also called the *login server* name. In the Azure cloud, the fully qualified URL of an Azure container registry is in the format `myregistry.azurecr.io` (all lowercase). --> [!NOTE] -> * You can't specify a port number in the registry login server URL, such as `myregistry.azurecr.io:443`. -> * The tag `latest` is used by default if you don't provide a tag in your command. -- -### Push by tag --Examples: -- `docker push myregistry.azurecr.io/samples/myimage:20210106` -- `docker push myregistry.azurecr.io/marketing/email-sender` --### Pull by tag --Example: -- `docker pull myregistry.azurecr.io/marketing/campaign10-18/email-sender:v2` --### Pull by manifest digest ---Example: -- `docker pull myregistry.azurecr.io/acr-helloworld@sha256:0a2e01852872580b2c2fea9380ff8d7b637d3928783c55beb3f21a6e58d5d108` ----## Next steps --Learn more about [registry storage](container-registry-storage.md) and [supported content formats](container-registry-image-formats.md) in Azure Container Registry. --Learn how to [push and pull images](container-registry-get-started-docker-cli.md) from Azure Container Registry. --<!-- LINKS - Internal --> -[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata |
container-registry | Container Registry Configure Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-configure-conditional-access.md | - Title: Configure conditional access to your Azure Container Registry. -description: Learn how to configure conditional access to your registry by using Azure CLI and Azure portal. ---- Previously updated : 11/02/2023---# Conditional Access policy for Azure Container Registry --Azure Container Registry (ACR) gives you the option to create and configure a *Conditional Access policy*. Conditional Access policies, which are typically associated with Azure Active Directory (Azure AD), are used to enforce strong authentication and access controls for various Azure services, including ACR. --The Conditional Access policy applies after the first-factor authentication to the Azure Container Registry is complete. The purpose of Conditional Access for ACR is user authentication only. The policy lets you choose the controls to apply, and then blocks or grants access based on the policy decisions. --The [Conditional Access policy](../active-directory/conditional-access/overview.md) is designed to enforce strong authentication. The policy helps you meet your organization's compliance requirements and keep data and user accounts safe. -->[!IMPORTANT] -> To configure Conditional Access policy for the registry, you must disable [`authentication-as-arm`](container-registry-disable-authentication-as-arm.md) for all the registries within the desired tenant. --Learn more about the [Conditional Access policy](../active-directory/conditional-access/overview.md) and the [conditions](../active-directory/conditional-access/overview.md#common-signals) to take into consideration when making [policy decisions](../active-directory/conditional-access/overview.md#common-decisions). --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Create and configure Conditional Access policy for Azure Container Registry. -> * Troubleshoot Conditional Access policy. --## Prerequisites --* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) version 2.40.0 or later. To find the version, run `az --version`. -* Sign in to the [Azure portal](https://portal.azure.com). --## Create and configure a Conditional Access policy - Azure portal --ACR supports Conditional Access policy for Active Directory users only. It currently doesn't support Conditional Access policy for service principals. To configure Conditional Access policy for the registry, you must disable `authentication-as-arm` for all the registries within the desired tenant. In this tutorial, we'll create a basic Conditional Access policy for the Azure Container Registry from the Azure portal. --Create a Conditional Access policy and assign your test group of users as follows: -- 1. Sign in to the [Azure portal](https://portal.azure.com) by using an account with *global administrator* permissions. -- 1. Search for and select **Microsoft Entra ID**. Then select **Security** from the menu on the left-hand side. -- 1. Select **Conditional Access**, select **+ New policy**, and then select **Create new policy**. - - :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select 'New policy' and then select 'Create new policy'." source="media/container-registry-enable-conditional-policy/01-create-conditional-access.png"::: -- 1. Enter a name for the policy, such as *demo*. -- 1. 
Under **Assignments**, select the current value under **Users or workload identities**. - - :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select the current value under 'Users or workload identities'." source="media/container-registry-enable-conditional-policy/02-conditional-access-users-and-groups.png"::: -- 1. Under **What does this policy apply to?**, verify and select **Users and groups**. -- 1. Under **Include**, choose **Select users and groups**, and then select **All users**. - - :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify users." source="media/container-registry-enable-conditional-policy/03-conditional-access-users-groups-select-users.png"::: -- 1. Under **Exclude**, choose **Select users and groups**, to exclude any choice of selection. -- 1. Under **Cloud apps or actions**, choose **Cloud apps**. -- 1. Under **Include**, choose **Select apps**. -- :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify cloud apps." source="media/container-registry-enable-conditional-policy/04-select-cloud-apps-select-apps.png"::: -- 1. Browse for and select apps to apply Conditional Access, in this case *Azure Container Registry*, then choose **Select**. -- :::image type="content" alt-text="A screenshot of the list of apps, with results filtered, and 'Azure Container Registry' selected." source="media/container-registry-enable-conditional-policy/05-select-azure-container-registry-app.png"::: -- 1. Under **Conditions** , configure control access level with options such as *User risk level*, *Sign-in risk level*, *Sign-in risk detections (Preview)*, *Device platforms*, *Locations*, *Client apps*, *Time (Preview)*, *Filter for devices*. -- 1. Under **Grant**, filter and choose from options to enforce grant access or block access, during a sign-in event to the Azure portal. In this case grant access with *Require multifactor authentication*, then choose **Select**. -- >[!TIP] - > To configure and grant multi-factor authentication, see [configure and conditions for multi-factor authentication.](../active-directory/authentication/tutorial-enable-azure-mfa.md#configure-the-conditions-for-multi-factor-authentication) -- 1. Under **Session**, filter and choose from options to enable any control on session level experience of the cloud apps. -- 1. After selecting and confirming, Under **Enable policy**, select **On**. -- 1. To apply and activate the policy, Select **Create**. -- :::image type="content" alt-text="A screenshot showing how to activate the Conditional Access policy." source="media/container-registry-enable-conditional-policy/06-enable-conditional-access-policy.png"::: -- We have now completed creating the Conditional Access policy for the Azure Container Registry. --## Troubleshoot Conditional Access policy --- For problems with Conditional Access sign-in, see [Troubleshoot Conditional Access sign-in](/entra/identity/conditional-access/troubleshoot-conditional-access).--- For problems with Conditional Access policy, see [Troubleshoot Conditional Access policy](/entra/identity/conditional-access/troubleshoot-conditional-access-what-if).--## Next steps --> [!div class="nextstepaction"] -> [Azure Policy definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md). 
-> [Common access concerns that Conditional Access policies can help with](../active-directory/conditional-access/concept-conditional-access-policy-common.md). -> [Conditional Access policy components](../active-directory/conditional-access/concept-conditional-access-policies.md). |
container-registry | Container Registry Content Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-content-trust.md | - Title: Manage signed images -description: Learn how to enable content trust for your Azure container registry, and push and pull signed images. Content trust implements Docker content trust and is a feature of the Premium service tier. --- Previously updated : 10/31/2023----# Content trust in Azure Container Registry --Azure Container Registry implements Docker's [content trust][docker-content-trust] model, enabling pushing and pulling of signed images. This article gets you started enabling content trust in your container registries. --> [!NOTE] -> Content trust is a feature of the [Premium service tier](container-registry-skus.md) of Azure Container Registry. --## Limitations -- Token with repository-scoped permissions does not currently support docker push and pull of signed images.--## How content trust works --Important to any distributed system designed with security in mind is verifying both the *source* and the *integrity* of data entering the system. Consumers of the data need to be able to verify both the publisher (source) of the data, as well as ensure it's not been modified after it was published (integrity). --As an image publisher, content trust allows you to **sign** the images you push to your registry. Consumers of your images (people or systems pulling images from your registry) can configure their clients to pull *only* signed images. When an image consumer pulls a signed image, their Docker client verifies the integrity of the image. In this model, consumers are assured that the signed images in your registry were indeed published by you, and that they've not been modified since being published. --> [!NOTE] -> Azure Container Registry (ACR) does not support `acr import` to import images signed with Docker Content Trust (DCT). By design, the signatures are not visible after the import, and the notary v2 stores these signatures as artifacts. --### Trusted images --Content trust works with the **tags** in a repository. Image repositories can contain images with both signed and unsigned tags. For example, you might sign only the `myimage:stable` and `myimage:latest` images, but not `myimage:dev`. --### Signing keys --Content trust is managed through the use of a set of cryptographic signing keys. These keys are associated with a specific repository in a registry. There are several types of signing keys that Docker clients and your registry use in managing trust for the tags in a repository. When you enable content trust and integrate it into your container publishing and consumption pipeline, you must manage these keys carefully. For more information, see [Key management](#key-management) later in this article and [Manage keys for content trust][docker-manage-keys] in the Docker documentation. --> [!TIP] -> This was a very high-level overview of Docker's content trust model. For an in-depth discussion of content trust, see [Content trust in Docker][docker-content-trust]. --## Enable registry content trust --Your first step is to enable content trust at the registry level. Once you enable content trust, clients (users or services) can push signed images to your registry. Enabling content trust on your registry does not restrict registry usage only to consumers with content trust enabled. Consumers without content trust enabled can continue to use your registry as normal. 
Consumers who have enabled content trust in their clients, however, will be able to see *only* signed images in your registry. --To enable content trust for your registry, first navigate to the registry in the Azure portal. Under **Policies**, select **Content Trust** > **Enabled** > **Save**. You can also use the [az acr config content-trust update][az-acr-config-content-trust-update] command in the Azure CLI. --![Screenshot shows enabling content trust for a registry in the Azure portal.][content-trust-01-portal] --## Enable client content trust --To work with trusted images, both image publishers and consumers need to enable content trust for their Docker clients. As a publisher, you can sign the images you push to a content trust-enabled registry. As a consumer, enabling content trust limits your view of a registry to signed images only. Content trust is disabled by default in Docker clients, but you can enable it per shell session or per command. --To enable content trust for a shell session, set the `DOCKER_CONTENT_TRUST` environment variable to **1**. For example, in the Bash shell: --```bash -# Enable content trust for shell session -export DOCKER_CONTENT_TRUST=1 -``` --If instead you'd like to enable or disable content trust for a single command, several Docker commands support the `--disable-content-trust` argument. To enable content trust for a single command: --```bash -# Enable content trust for single command -docker build --disable-content-trust=false -t myacr.azurecr.io/myimage:v1 . -``` --If you've enabled content trust for your shell session and want to disable it for a single command: --```bash -# Disable content trust for single command -docker build --disable-content-trust -t myacr.azurecr.io/myimage:v1 . -``` --## Grant image signing permissions --Only the users or systems you've granted permission can push trusted images to your registry. To grant trusted image push permission to a user (or a system using a service principal), grant their Microsoft Entra identities the `AcrImageSigner` role. This is in addition to the `AcrPush` (or equivalent) role required for pushing images to the registry. For details, see [Azure Container Registry roles and permissions](container-registry-roles.md). --> [!IMPORTANT] -> You can't grant trusted image push permission to the following administrative accounts: -> * the [admin account](container-registry-authentication.md#admin-account) of an Azure container registry -> * a user account in Microsoft Entra ID with the [classic system administrator role](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles). --> [!NOTE] -> Starting July 2021, the `AcrImageSigner` role includes both the `Microsoft.ContainerRegistry/registries/sign/write` action and the `Microsoft.ContainerRegistry/registries/trustedCollections/write` data action. --Details for granting the `AcrImageSigner` role in the Azure portal and the Azure CLI follow. --### Azure portal --1. Select **Access control (IAM)**. --1. Select **Add** > **Add role assignment** to open the Add role assignment page. --1. Assign the following role. In this example, the role is assigned to an individual user. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). 
- - | Setting | Value | - | | | - | Role | AcrImageSigner | - | Assign access to | User | - | Members | Alain | -- ![Add role assignment page in Azure portal.](~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png) --### Azure CLI --To grant signing permissions to a user with the Azure CLI, assign the `AcrImageSigner` role to the user, scoped to your registry. The format of the command is: --```azurecli -az role assignment create --scope <registry ID> --role AcrImageSigner --assignee <user name> -``` --For example, to grant a non-administrative user the role, you can run the following commands in an authenticated Azure CLI session. Modify the `REGISTRY` value to reflect the name of your Azure container registry. --```bash -# Grant signing permissions to authenticated Azure CLI user -REGISTRY=myregistry -REGISTRY_ID=$(az acr show --name $REGISTRY --query id --output tsv) -``` --```azurecli -az role assignment create --scope $REGISTRY_ID --role AcrImageSigner --assignee azureuser@contoso.com -``` --You can also grant a [service principal](container-registry-auth-service-principal.md) the rights to push trusted images to your registry. Using a service principal is useful for build systems and other unattended systems that need to push trusted images to your registry. The format is similar to granting a user permission, but specify a service principal ID for the `--assignee` value. --```azurecli -az role assignment create --scope $REGISTRY_ID --role AcrImageSigner --assignee <service principal ID> -``` --The `<service principal ID>` can be the service principal's **appId**, **objectId**, or one of its **servicePrincipalNames**. For more information about working with service principals and Azure Container Registry, see [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md). --> [!IMPORTANT] -> After any role changes, run `az acr login` to refresh the local identity token for the Azure CLI so that the new roles can take effect. For information about verifying roles for an identity, see [Add or remove Azure role assignments using Azure CLI](../role-based-access-control/role-assignments-cli.md) and [Troubleshoot Azure RBAC](../role-based-access-control/troubleshooting.md). --## Push a trusted image --To push a trusted image tag to your container registry, enable content trust and push the image with `docker push`. After push with a signed tag completes the first time, you're asked to create a passphrase for both a root signing key and a repository signing key. Both the root and repository keys are generated and stored locally on your machine. --```console -$ docker push myregistry.azurecr.io/myimage:v1 -[...] -The push refers to repository [myregistry.azurecr.io/myimage] -ee83fc5847cb: Pushed -v1: digest: sha256:aca41a608e5eb015f1ec6755f490f3be26b48010b178e78c00eac21ffbe246f1 size: 524 -Signing and pushing trust metadata -You are about to create a new root signing key passphrase. This passphrase -will be used to protect the most sensitive key in your signing system. Please -choose a long, complex passphrase and be careful to keep the password and the -key file itself secure and backed up. It is highly recommended that you use a -password manager to generate the passphrase and keep it safe. There will be no -way to recover this key. You can find the key in your config directory. 
-Enter passphrase for new root key with ID 4c6c56a: -Repeat passphrase for new root key with ID 4c6c56a: -Enter passphrase for new repository key with ID bcd6d98: -Repeat passphrase for new repository key with ID bcd6d98: -Finished initializing "myregistry.azurecr.io/myimage" -Successfully signed myregistry.azurecr.io/myimage:v1 -``` --After your first `docker push` with content trust enabled, the Docker client uses the same root key for subsequent pushes. On each subsequent push to the same repository, you're asked only for the repository key. Each time you push a trusted image to a new repository, you're asked to supply a passphrase for a new repository key. --## Pull a trusted image --To pull a trusted image, enable content trust and run the `docker pull` command as normal. To pull trusted images, the `AcrPull` role is enough for normal users. No additional roles like an `AcrImageSigner` role are required. Consumers with content trust enabled can pull only images with signed tags. Here's an example of pulling a signed tag: --```console -$ docker pull myregistry.azurecr.io/myimage:signed -Pull (1 of 1): myregistry.azurecr.io/myimage:signed@sha256:0800d17e37fb4f8194495b1a188f121e5b54efb52b5d93dc9e0ed97fce49564b -sha256:0800d17e37fb4f8194495b1a188f121e5b54efb52b5d93dc9e0ed97fce49564b: Pulling from myimage -8e3ba11ec2a2: Pull complete -Digest: sha256:0800d17e37fb4f8194495b1a188f121e5b54efb52b5d93dc9e0ed97fce49564b -Status: Downloaded newer image for myregistry.azurecr.io/myimage@sha256:0800d17e37fb4f8194495b1a188f121e5b54efb52b5d93dc9e0ed97fce49564b -Tagging myregistry.azurecr.io/myimage@sha256:0800d17e37fb4f8194495b1a188f121e5b54efb52b5d93dc9e0ed97fce49564b as myregistry.azurecr.io/myimage:signed -``` --If a client with content trust enabled tries to pull an unsigned tag, the operation fails with an error similar to the following: --```console -$ docker pull myregistry.azurecr.io/myimage:unsigned -Error: remote trust data does not exist -``` --### Behind the scenes --When you run `docker pull`, the Docker client uses the same library as in the [Notary CLI][docker-notary-cli] to request the tag-to-SHA-256 digest mapping for the tag you're pulling. After validating the signatures on the trust data, the client instructs Docker Engine to do a "pull by digest." During the pull, the Engine uses the SHA-256 checksum as a content address to request and validate the image manifest from the Azure container registry. --> [!NOTE] -> Azure Container Registry does not officially support the Notary CLI but is compatible with the Notary Server API, which is included with Docker Desktop. Currently Notary version **0.6.0** is recommended. --## Key management --As stated in the `docker push` output when you push your first trusted image, the root key is the most sensitive. Be sure to back up your root key and store it in a secure location. By default, the Docker client stores signing keys in the following directory: --```sh -~/.docker/trust/private -``` --Back up your root and repository keys by compressing them in an archive and storing it in a secure location. For example, in Bash: --```bash -umask 077; tar -zcvf docker_private_keys_backup.tar.gz ~/.docker/trust/private; umask 022 -``` --Along with the locally generated root and repository keys, several others are generated and stored by Azure Container Registry when you push a trusted image. 
For a detailed discussion of the various keys in Docker's content trust implementation, including additional management guidance, see [Manage keys for content trust][docker-manage-keys] in the Docker documentation. --### Lost root key --If you lose access to your root key, you lose access to the signed tags in any repository whose tags were signed with that key. Azure Container Registry cannot restore access to image tags signed with a lost root key. To remove all trust data (signatures) for your registry, first disable, then re-enable content trust for the registry. --> [!WARNING] -> Disabling and re-enabling content trust in your registry **deletes all trust data for all signed tags in every repository in your registry**. This action is irreversible--Azure Container Registry cannot recover deleted trust data. Disabling content trust does not delete the images themselves. --To disable content trust for your registry, navigate to the registry in the Azure portal. Under **Policies**, select **Content Trust** > **Disabled** > **Save**. You're warned of the loss of all signatures in the registry. Select **OK** to permanently delete all signatures in your registry. --![Disabling content trust for a registry in the Azure portal][content-trust-03-portal] --## Next steps --* See [Content trust in Docker][docker-content-trust] for additional information about content trust, including [docker trust](https://docs.docker.com/engine/reference/commandline/trust/) commands and [trust delegations](https://docs.docker.com/engine/security/trust/trust_delegation/). While several key points were touched on in this article, content trust is an extensive topic and is covered more in-depth in the Docker documentation. --* See the [Azure Pipelines](/azure/devops/pipelines/build/content-trust) documentation for an example of using content trust when you build and push a Docker image. --<!-- IMAGES> --> -[content-trust-01-portal]: ./media/container-registry-content-trust/content-trust-01-portal.png -[content-trust-02-portal]: ./media/container-registry-content-trust/content-trust-02-portal.png -[content-trust-03-portal]: ./media/container-registry-content-trust/content-trust-03-portal.png --<!-- LINKS - external --> -[docker-content-trust]: https://docs.docker.com/engine/security/trust/content_trust -[docker-manage-keys]: https://docs.docker.com/engine/security/trust/trust_key_mng/ -[docker-notary-cli]: https://docs.docker.com/notary/getting_started/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ -[docker-tag]: https://docs.docker.com/engine/reference/commandline/tag/ -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ --<!-- LINKS - internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-config-content-trust-update]: /cli/azure/acr/config/content-trust#az_acr_config_content_trust_update |
container-registry | Container Registry Dedicated Data Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-dedicated-data-endpoints.md | - Title: Mitigate data exfiltration with dedicated data endpoints -description: Azure Container Registry offers dedicated data endpoints to mitigate data-exfiltration concerns. ---- Previously updated : 10/31/2023---# Azure Container Registry mitigating data exfiltration with dedicated data endpoints --Azure Container Registry introduces dedicated data endpoints. The feature enables tightly scoped client firewall rules for specific registries, minimizing data-exfiltration concerns. --The dedicated data endpoints feature is available in the **Premium** service tier. For pricing information, see [container registry pricing](https://azure.microsoft.com/pricing/details/container-registry/). --Pulling content from a registry involves two endpoints: --*Registry endpoint*, often referred to as the login URL, used for authentication and content discovery. A command like `docker pull contoso.azurecr.io/hello-world` makes a REST request, which authenticates and negotiates the layers that represent the requested artifact. -*Data endpoints* serve blobs representing content layers. -----## Registry managed storage accounts --Azure Container Registry is a multi-tenant service. The registry service manages the data endpoint storage accounts. The benefits of the managed storage accounts include load balancing, contentious content splitting, multiple copies for higher concurrent content delivery, and multi-region support with [geo-replication](container-registry-geo-replication.md). --## Azure Private Link virtual network support --[Azure Private Link virtual network support](container-registry-private-link.md) enables private endpoints for the managed registry service from Azure virtual networks. In this case, both the registry and data endpoints are accessible from within the virtual network, using private IPs. --Once the managed registry service and storage accounts are both secured for access from within the virtual network, the public endpoints are removed. -----Unfortunately, a virtual network connection isn't always an option. --> [!IMPORTANT] -> [Azure Private Link](container-registry-private-link.md) is the most secure way to control network access between clients and the registry, because network traffic is limited to the Azure virtual network, using private IPs. When Private Link isn't an option, dedicated data endpoints give you certainty about which resources are accessible from each client. --## Client firewall rules and data exfiltration risks --Client firewall rules limit access to specific resources. The firewall rules apply when connecting to a registry from on-premises hosts, IoT devices, and custom build agents. The rules also apply when Private Link isn't an option. -----As customers locked down their client firewall configurations, they realized they had to create a rule with a wildcard for all storage accounts, raising concerns about data exfiltration. A bad actor could deploy code capable of writing to their own storage account. -----To address these data-exfiltration concerns, Azure Container Registry makes dedicated data endpoints available. --## Dedicated data endpoints --Dedicated data endpoints help clients retrieve layers from the Azure Container Registry service through fully qualified domain names that represent the registry domain. 
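As an illustration only, using the Contoso example from this article and the regional pattern described next, a client firewall allow-list narrows from a broad storage wildcard to registry-specific names:

```
# Without dedicated data endpoints: data is served from shared storage endpoints.
contoso.azurecr.io
*.blob.core.windows.net

# With dedicated data endpoints: names are scoped to this registry and its regions.
contoso.azurecr.io
contoso.eastus.data.azurecr.io
contoso.westus.data.azurecr.io
```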
--As any registry may become geo-replicated, a regional pattern is used: `[registry].[region].data.azurecr.io`. --For the Contoso example, multiple regional data endpoints are added supporting the local region with a nearby replica. --With dedicated data endpoints, the bad actor is blocked from writing to other storage accounts. -----## Enabling dedicated data endpoints --> [!NOTE] -> Switching to dedicated data-endpoints will impact clients that have configured firewall access to the existing `*.blob.core.windows.net` endpoints, causing pull failures. To assure clients have consistent access, add the new data-endpoints to the client firewall rules. Once completed, existing registries can enable dedicated data-endpoints through the `az cli`. --To use the Azure CLI steps in this article, Azure CLI version 2.4.0 or later is required. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli) or run in [Azure Cloud Shell](../cloud-shell/quickstart.md). --* Run the [az acr update](/cli/azure/acr#az-acr-update) command to enable dedicated data endpoint. --```azurecli-interactive -az acr update --name contoso --data-endpoint-enabled -``` --* Run the [az acr show](/cli/azure/acr#az-acr-show-endpoints) command to view the data endpoints, including regional endpoints for geo-replicated registries. --```azurecli-interactive -az acr show-endpoints --name contoso -``` --Sample output: --```json -{ - "loginServer": "contoso.azurecr.io", - "dataEndpoints": [ - { - "region": "eastus", - "endpoint": "contoso.eastus.data.azurecr.io", - }, - { - "region": "westus", - "endpoint": "contoso.westus.data.azurecr.io", - } - ] -} - -``` --## Next steps --* Configure to access an Azure container registry from behind a [firewall rules.](container-registry-firewall-access-rules.md) -* Connect Azure Container Registry using [Azure Private Link](container-registry-private-link.md) |
container-registry | Container Registry Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md | - Title: Delete image resources -description: Details on how to effectively manage registry size by deleting container image data using Azure CLI commands. ----- Previously updated : 10/31/2023---# Delete container images in Azure Container Registry --To maintain the size of your Azure container registry, you should periodically delete stale image data. While some container images deployed into production may require longer-term storage, others can typically be deleted more quickly. For example, in an automated build and test scenario, your registry can quickly fill with images that might never be deployed, and can be purged shortly after completing the build and test pass. --Because you can delete image data in several different ways, it's important to understand how each delete operation affects storage usage. This article covers several methods for deleting image data: --* Delete a [repository](#delete-repository): Deletes all images and all unique layers within the repository. -* Delete by [tag](#delete-by-tag): Deletes an image, the tag, all unique layers referenced by the image, and all other tags associated with the image. -* Delete by [manifest digest](#delete-by-manifest-digest): Deletes an image, all unique layers referenced by the image, and all tags associated with the image. --For an introduction to these concepts, see [About registries, repositories, and images](container-registry-concepts.md). --> [!NOTE] -> After you delete image data, Azure Container Registry stops billing you immediately for the associated storage. However, the registry recovers the associated storage space using an asynchronous process. It takes some time before the registry cleans up layers and shows the updated storage usage. --## Delete repository --Deleting a repository deletes all of the images in the repository, including all tags, unique layers, and manifests. When you delete a repository, you recover the storage space used by the images that reference unique layers in that repository. --The following Azure CLI command deletes the "acr-helloworld" repository and all tags and manifests within the repository. If layers referenced by the deleted manifests are not referenced by any other images in the registry, their layer data is also deleted, recovering the storage space. --```azurecli - az acr repository delete --name myregistry --repository acr-helloworld -``` --## Delete by tag --You can delete individual images from a repository by specifying the repository name and tag in the delete operation. When you delete by tag, you recover the storage space used by any unique layers in the image (layers not shared by any other images in the registry). --To delete by tag, use [az acr repository delete][az-acr-repository-delete] and specify the image name in the `--image` parameter. All layers unique to the image, and any other tags associated with the image are deleted. --For example, deleting the "acr-helloworld:latest" image from registry "myregistry": --```azurecli -az acr repository delete --name myregistry --image acr-helloworld:latest -``` --```output -This operation will delete the manifest 'sha256:0a2e01852872580b2c2fea9380ff8d7b637d3928783c55beb3f21a6e58d5d108' and all the following images: 'acr-helloworld:latest', 'acr-helloworld:v3'. -Are you sure you want to continue? 
(y/n): -``` --> [!TIP] -> Deleting *by tag* shouldn't be confused with deleting a tag (untagging). You can delete a tag with the Azure CLI command [az acr repository untag][az-acr-repository-untag]. No space is freed when you untag an image because its [manifest](container-registry-concepts.md#manifest) and layer data remain in the registry. Only the tag reference itself is deleted. --## Delete by manifest digest --A [manifest digest](container-registry-concepts.md#manifest-digest) can be associated with one, none, or multiple tags. When you delete by digest, all tags referenced by the manifest are deleted, as is layer data for any layers unique to the image. Shared layer data is not deleted. --To delete by digest, first list the manifest digests for the repository containing the images you wish to delete. For example: --```azurecli -az acr manifest list-metadata --name acr-helloworld --registry myregistry -``` --```output -[ - { - "digest": "sha256:0a2e01852872580b2c2fea9380ff8d7b637d3928783c55beb3f21a6e58d5d108", - "tags": [ - "latest", - "v3" - ], - "timestamp": "2018-07-12T15:52:00.2075864Z" - }, - { - "digest": "sha256:3168a21b98836dda7eb7a846b3d735286e09a32b0aa2401773da518e7eba3b57", - "tags": [ - "v2" - ], - "timestamp": "2018-07-12T15:50:53.5372468Z" - } -] -``` --Next, specify the digest you wish to delete in the [az acr repository delete][az-acr-repository-delete] command. The format of the command is: --```azurecli -az acr repository delete --name <acrName> --image <repositoryName>@<digest> -``` --For example, to delete the last manifest listed in the preceding output (with the tag "v2"): --```azurecli -az acr repository delete --name myregistry --image acr-helloworld@sha256:3168a21b98836dda7eb7a846b3d735286e09a32b0aa2401773da518e7eba3b57 -``` --```output -This operation will delete the manifest 'sha256:3168a21b98836dda7eb7a846b3d735286e09a32b0aa2401773da518e7eba3b57' and all the following images: 'acr-helloworld:v2'. -Are you sure you want to continue? (y/n): -``` --The `acr-helloworld:v2` image is deleted from the registry, as is any layer data unique to that image. If a manifest is associated with multiple tags, all associated tags are also deleted. --## Delete digests by timestamp --To maintain the size of a repository or registry, you might need to periodically delete manifest digests older than a certain date. --The following Azure CLI command lists all manifest digests in a repository older than a specified timestamp, in ascending order. Replace `<acrName>` and `<repositoryName>` with values appropriate for your environment. The timestamp could be a full date-time expression or a date, as in this example. --```azurecli -az acr manifest list-metadata --name <repositoryName> --registry <acrName> \ - --orderby time_asc -o tsv --query "[?lastUpdateTime < '2019-04-05'].[digest, lastUpdateTime]" -``` --After identifying stale manifest digests, you can run the following Bash script to delete manifest digests older than a specified timestamp. It requires the Azure CLI and **xargs**. By default, the script performs no deletion. Change the `ENABLE_DELETE` value to `true` to enable image deletion. --> [!WARNING] -> Use the following sample script with caution--deleted image data is UNRECOVERABLE. If you have systems that pull images by manifest digest (as opposed to image name), you should not run these scripts. Deleting the manifest digests will prevent those systems from pulling the images from your registry. 
Instead of pulling by manifest, consider adopting a *unique tagging* scheme, a [recommended best practice](container-registry-image-tag-version.md). --```azurecli -#!/bin/bash --# WARNING! This script deletes data! -# Run only if you do not have systems -# that pull images via manifest digest. --# Change to 'true' to enable image delete -ENABLE_DELETE=false --# Modify for your environment -# TIMESTAMP can be a date-time string such as 2019-03-15T17:55:00. -REGISTRY=myregistry -REPOSITORY=myrepository -TIMESTAMP=2019-04-05 --# Delete all images older than specified timestamp. --if [ "$ENABLE_DELETE" = true ] -then - az acr manifest list-metadata --name $REPOSITORY --registry $REGISTRY \ - --orderby time_asc --query "[?lastUpdateTime < '$TIMESTAMP'].digest" -o tsv \ - | xargs -I% az acr repository delete --name $REGISTRY --image $REPOSITORY@% --yes -else - echo "No data deleted." - echo "Set ENABLE_DELETE=true to enable deletion of these images in $REPOSITORY:" - az acr manifest list-metadata --name $REPOSITORY --registry $REGISTRY \ - --orderby time_asc --query "[?lastUpdateTime < '$TIMESTAMP'].[digest, lastUpdateTime]" -o tsv -fi -``` --## Delete untagged images --As mentioned in the [Manifest digest](container-registry-concepts.md#manifest-digest) section, pushing a modified image using an existing tag **untags** the previously pushed image, resulting in an orphaned (or "dangling") image. The previously pushed image's manifest--and its layer data--remains in the registry. Consider the following sequence of events: --1. Push image *acr-helloworld* with tag **latest**: `docker push myregistry.azurecr.io/acr-helloworld:latest` -1. Check manifests for repository *acr-helloworld*: -- ```azurecli - az acr manifest list-metadata --name acr-helloworld --registry myregistry - - ``` - - ```output - [ - { - "digest": "sha256:d2bdc0c22d78cde155f53b4092111d7e13fe28ebf87a945f94b19c248000ceec", - "tags": [ - "latest" - ], - "timestamp": "2018-07-11T21:32:21.1400513Z" - } - ] - ``` --1. Modify *acr-helloworld* Dockerfile -1. Push image *acr-helloworld* with tag **latest**: `docker push myregistry.azurecr.io/acr-helloworld:latest` -1. Check manifests for repository *acr-helloworld*: -- ```azurecli - az acr manifest list-metadata --name acr-helloworld --registry myregistry - ``` - - ```output - [ - { - "architecture": "amd64", - "changeableAttributes": { - "deleteEnabled": true, - "listEnabled": true, - "quarantineDetails": "{\"state\":\"Scan Passed\",\"link\":\"https://aka.ms/test\",\"scanner\":\"Azure Security Monitoring-Qualys Scanner\",\"result\":{\"version\":\"2020-05-13T00:23:31.954Z\",\"summary\":[{\"severity\":\"High\",\"count\":2},{\"severity\":\"Medium\",\"count\":0},{\"severity\":\"Low\",\"count\":0}]}}", - "quarantineState": "Passed", - "readEnabled": true, - "writeEnabled": true - }, - "configMediaType": "application/vnd.docker.container.image.v1+json", - "createdTime": "2020-05-16T04:25:14.3112885Z", - "digest": "sha256:eef2ef471f9f9d01fd2ed81bd2492ddcbc0f281b0a6e4edb700fbf9025448388", - "imageSize": 22906605, - "lastUpdateTime": "2020-05-16T04:25:14.3112885Z", - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "os": "linux", - "timestamp": "2020-05-16T04:25:14.3112885Z" - } - ] - ``` --The tags array is removed from meta-data when an image is **untagged**. This manifest still exists within the registry, along with any unique layer data that it references. **To delete such orphaned images and their layer data, you must delete by manifest digest**. 
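As a sketch of how you might find these orphaned manifests before deleting them (reusing the example registry and repository from above), list the digests that currently have no tags:

```azurecli
# List manifest digests in the repository that have no tags (orphaned images).
# Review the output, then delete each one with:
#   az acr repository delete --name myregistry --image acr-helloworld@<digest>
az acr manifest list-metadata --name acr-helloworld --registry myregistry \
    --query "[?tags[0]==null].digest" --output tsv
```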
--## Automatically purge tags and manifests --Azure Container Registry provides the following automated methods to remove tags and manifests, and their associated unique layer data: --* Create an ACR task that runs the `acr purge` container command to delete all tags that are older than a certain duration or match a specified name filter. Optionally configure `acr purge` to delete untagged manifests. -- The `acr purge` container command is currently in preview. For more information, see [Automatically purge images from an Azure container registry](container-registry-auto-purge.md). --* Optionally set a [retention policy](container-registry-retention-policy.md) for each registry, to manage untagged manifests. When you enable a retention policy, image manifests in the registry that don't have any associated tags, and the underlying layer data, are automatically deleted after a set period. -- The retention policy is currently a preview feature of **Premium** container registries. The retention policy only applies to untagged manifests created after the policy takes effect. --## Next steps --For more information about image storage in Azure Container Registry, see [Container image storage in Azure Container Registry](container-registry-storage.md). --<!-- IMAGES --> -[manifest-digest]: ./media/container-registry-delete/01-manifest-digest.png --<!-- LINKS - External --> -[docker-manifest-inspect]: https://docs.docker.com/edge/engine/reference/commandline/manifest/#manifest-inspect --<!-- LINKS - Internal --> -[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete -[az-acr-repository-show-manifests]: /cli/azure/acr/repository#az_acr_repository_show_manifests -[az-acr-repository-untag]: /cli/azure/acr/repository#az_acr_repository_untag |
container-registry | Container Registry Disable Authentication As Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-disable-authentication-as-arm.md | - Title: Disable authentication as ARM template -description: "Disabling azureADAuthenticationAsArmPolicy will force the registry to use ACR audience token." ---- Previously updated : 10/31/2023---# Disable authentication as ARM template --Azure AD Tokens are used when registry users authenticate with ACR. By default, Azure Container Registry (ACR) accepts Azure AD Tokens with an audience scope set for Azure Resource Manager (ARM), a control plane management layer for managing Azure resources. --By disabling ARM Audience Tokens and enforcing ACR Audience Tokens, you can enhance the security of your container registries during the authentication process by narrowing the scope of accepted tokens. --With ACR Audience Token enforcement, only Azure AD Tokens with an audience scope specifically set for ACR will be accepted during the registry authentication and sign-in process. This means that the previously accepted ARM Audience Tokens will no longer be valid for registry authentication, thereby enhancing the security of your container registries. --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Disable authentication-as-arm in ACR - Azure CLI. -> * Disable authentication-as-arm in the ACR - Azure portal. --## Prerequisites --* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) version 2.40.0 or later. To find the version, run `az --version`. -* Sign in to the [Azure portal](https://portal.azure.com). --## Disable authentication-as-arm in ACR - Azure CLI --Disabling `azureADAuthenticationAsArmPolicy` will force the registry to use ACR audience token. You can use Azure CLI version 2.40.0 or later, run `az --version` to find the version. --1. Run the command to show the current configuration of the registry's policy for authentication using ARM tokens with the registry. If the status is `enabled`, then both ACRs and ARM audience tokens can be used for authentication. If the status is `disabled` it means only ACR's audience tokens can be used for authentication. -- ```azurecli-interactive - az acr config authentication-as-arm show -r <registry> - ``` --1. Run the command to update the status of the registry's policy. -- ```azurecli-interactive - az acr config authentication-as-arm update -r <registry> --status [enabled/disabled] - ``` --## Disable authentication-as-arm in the ACR - Azure portal --Disabling `authentication-as-arm` property by assigning a built-in policy will automatically disable the registry property for the current and the future registries. This automatic behavior is for registries created within the policy scope. The possible policy scopes include either Resource Group level scope or Subscription ID level scope within the tenant. --You can disable authentication-as-arm in the ACR, by following below steps: -- 1. Sign in to the [Azure portal](https://portal.azure.com). - - 1. Refer to the ACR's built-in policy definitions in the [azure-container-registry-built-in-policy definition's](policy-reference.md). - - 1. Assign a built-in policy to disable authentication-as-arm definition - Azure portal. --### Assign a built-in policy definition to disable ARM audience token authentication - Azure portal. - -You can enable registry's Conditional Access policy in the [Azure portal](https://portal.azure.com). 
--Azure Container Registry has two built-in policy definitions to disable authentication-as-arm, as below: --* `Container registries should have ARM audience token authentication disabled.` - This policy will report, block any non-compliant resources, and also sends a request to update non-compliant to compliant. -* `Configure container registries to disable ARM audience token authentication.` - This policy offers remediation and updates non-compliant to compliant resources. --- 1. Sign in to the [Azure portal](https://portal.azure.com). -- 1. Navigate to your **Azure Container Registry** > **Resource Group** > **Settings** > **Policies** . - - :::image type="content" source="media/container-registry-enable-conditional-policy/01-azure-policies.png" alt-text="Screenshot showing how to navigate Azure policies."::: -- 1. Navigate to **Azure Policy**, On the **Assignments**, select **Assign policy**. - - :::image type="content" source="media/container-registry-enable-conditional-policy/02-Assign-policy.png" alt-text="Screenshot showing how to assign a policy."::: -- 1. Under the **Assign policy** , use filters to search and find the **Scope**, **Policy definition**, **Assignment name**. -- :::image type="content" source="media/container-registry-enable-conditional-policy/03-Assign-policy-tab.png" alt-text="Screenshot of the assign policy tab."::: -- 1. Select **Scope** to filter and search for the **Subscription** and **ResourceGroup** and choose **Select**. - - - :::image type="content" source="media/container-registry-enable-conditional-policy/04-select-scope.png" alt-text="Screenshot of the Scope tab."::: --- 1. Select **Policy definition** to filter and search the built-in policy definitions for the Conditional Access policy. - - :::image type="content" source="media/container-registry-enable-conditional-policy/05-built-in-policy-definitions.png" alt-text="Screenshot of built-in-policy-definitions."::: --- 1. Use filters to select and confirm **Scope**, **Policy definition**, and **Assignment name**. -- 1. Use the filters to limit compliance states or to search for policies. -- 1. Confirm your settings and set policy enforcement as **enabled**. -- 1. Select **Review+Create**. -- :::image type="content" source="media/container-registry-enable-conditional-policy/06-enable-policy.png" alt-text="Screenshot to activate a Conditional Access policy."::: ---## Next steps --> [!div class="nextstepaction"] -> [Create and configure a Conditional Access policy](container-registry-configure-conditional-access.md) |
container-registry | Container Registry Event Grid Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-event-grid-quickstart.md | - Title: Quickstart - Send events to Event Grid -description: In this quickstart, you enable Event Grid events for your container registry, then send container image push and delete events to a sample application. ---- Previously updated : 10/31/2023--# Customer intent: As a container registry owner, I want to send events to Event Grid when container images are pushed to or deleted from my container registry so that downstream applications can react to those events. ---# Quickstart: Send events from private container registry to Event Grid --Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a publish-subscribe model. In this quickstart, you use the Azure CLI to create a container registry, subscribe to registry events, then deploy a sample web application to receive the events. Finally, you trigger container image `push` and `delete` events and view the event payload in the sample application. --After you complete the steps in this article, events sent from your container registry to Event Grid appear in the sample web app: --![Web browser rendering the sample web application with three received events][sample-app-01] ----- The Azure CLI commands in this article are formatted for the **Bash** shell. If you're using a different shell like PowerShell or Command Prompt, you may need to adjust line continuation characters or variable assignment lines accordingly. This article uses variables to minimize the amount of command editing required.--## Create a resource group --An Azure resource group is a logical container in which you deploy and manage your Azure resources. The following [az group create][az-group-create] command creates a resource group named *myResourceGroup* in the *eastus* region. If you want to use a different name for your resource group, set `RESOURCE_GROUP_NAME` to a different value. --```azurecli-interactive -RESOURCE_GROUP_NAME=myResourceGroup --az group create --name $RESOURCE_GROUP_NAME --location eastus -``` --## Create a container registry --Next, deploy a container registry into the resource group with the following commands. Before you run the [az acr create][az-acr-create] command, set `ACR_NAME` to a name for your registry. The name must be unique within Azure, and is restricted to 5-50 alphanumeric characters. --```azurecli-interactive -ACR_NAME=<acrName> --az acr create --resource-group $RESOURCE_GROUP_NAME --name $ACR_NAME --sku Basic -``` --Once the registry has been created, the Azure CLI returns output similar to the following: --```json -{ - "adminUserEnabled": false, - "creationDate": "2018-08-16T20:02:46.569509+00:00", - "id": "/subscriptions/<Subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myregistry", - "location": "eastus", - "loginServer": "myregistry.azurecr.io", - "name": "myregistry", - "provisioningState": "Succeeded", - "resourceGroup": "myResourceGroup", - "sku": { - "name": "Basic", - "tier": "Basic" - }, - "status": null, - "storageAccount": null, - "tags": {}, - "type": "Microsoft.ContainerRegistry/registries" -} --``` --## Create an event endpoint --In this section, you use a Resource Manager template located in a GitHub repository to deploy a prebuilt sample web application to Azure App Service. 
Later, you subscribe to your registry's Event Grid events and specify this app as the endpoint to which the events are sent. --To deploy the sample app, set `SITE_NAME` to a unique name for your web app, and execute the following commands. The site name must be unique within Azure because it forms part of the fully qualified domain name (FQDN) of the web app. In a later section, you navigate to the app's FQDN in a web browser to view your registry's events. --```azurecli-interactive -SITE_NAME=<your-site-name> --az deployment group create \ - --resource-group $RESOURCE_GROUP_NAME \ - --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \ - --parameters siteName=$SITE_NAME hostingPlanName=$SITE_NAME-plan -``` --Once the deployment has succeeded (it might take a few minutes), open a browser and navigate to your web app to make sure it's running: --`http://<your-site-name>.azurewebsites.net` --You should see the sample app rendered with no event messages displayed: --![Web browser showing sample web app with no events displayed][sample-app-02] ---## Subscribe to registry events --In Event Grid, you subscribe to a *topic* to tell it which events you want to track, and where to send them. The following [`az eventgrid event-subscription create`][az-eventgrid-event-subscription-create] command subscribes to the container registry you created, and specifies your web app's URL as the endpoint to which it should send events. The environment variables you populated in earlier sections are reused here, so no edits are required. --```azurecli-interactive -ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv) -APP_ENDPOINT=https://$SITE_NAME.azurewebsites.net/api/updates --az eventgrid event-subscription create \ - --name event-sub-acr \ - --source-resource-id $ACR_REGISTRY_ID \ - --endpoint $APP_ENDPOINT -``` --When the subscription is completed, you should see output similar to the following: --```json -{ - "destination": { - "endpointBaseUrl": "https://eventgridviewer.azurewebsites.net/api/updates", - "endpointType": "WebHook", - "endpointUrl": null - }, - "filter": { - "includedEventTypes": [ - "All" - ], - "isSubjectCaseSensitive": null, - "subjectBeginsWith": "", - "subjectEndsWith": "" - }, - "id": "/subscriptions/<Subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myregistry/providers/Microsoft.EventGrid/eventSubscriptions/event-sub-acr", - "labels": null, - "name": "event-sub-acr", - "provisioningState": "Succeeded", - "resourceGroup": "myResourceGroup", - "topic": "/subscriptions/<Subscription ID>/resourceGroups/myresourcegroup/providers/microsoft.containerregistry/registries/myregistry", - "type": "Microsoft.EventGrid/eventSubscriptions" -} -``` --## Trigger registry events --Now that the sample app is up and running and you've subscribed to your registry with Event Grid, you're ready to generate some events. In this section, you use ACR Tasks to build and push a container image to your registry. ACR Tasks is a feature of Azure Container Registry that allows you to build container images in the cloud, without needing the Docker Engine installed on your local machine. --### Build and push image --Execute the following Azure CLI command to build a container image from the contents of a GitHub repository. By default, ACR Tasks automatically pushes a successfully built image to your registry, which generates the `ImagePushed` event. 
---```azurecli-interactive -az acr build --registry $ACR_NAME --image myimage:v1 -f Dockerfile https://github.com/Azure-Samples/acr-build-helloworld-node.git#main -``` --You should see output similar to the following while ACR Tasks build and then pushes your image. The following sample output has been truncated for brevity. --```output -Sending build context to ACR... -Queued a build with build ID: aa2 -Waiting for build agent... -2018/08/16 22:19:38 Using acb_vol_27a2afa6-27dc-4ae4-9e52-6d6c8b7455b2 as the home volume -2018/08/16 22:19:38 Setting up Docker configuration... -2018/08/16 22:19:39 Successfully set up Docker configuration -2018/08/16 22:19:39 Logging in to registry: myregistry.azurecr.io -2018/08/16 22:19:55 Successfully logged in -Sending build context to Docker daemon 94.72kB -Step 1/5 : FROM node:9-alpine -... -``` --To verify that the built image is in your registry, execute the following command to view the tags in the `myimage` repository: --```azurecli-interactive -az acr repository show-tags --name $ACR_NAME --repository myimage -``` --The "v1" tag of the image you built should appear in the output, similar to the following: --```output -[ - "v1" -] -``` --### Delete the image --Now, generate an `ImageDeleted` event by deleting the image with the [az acr repository delete][az-acr-repository-delete] command: --```azurecli-interactive -az acr repository delete --name $ACR_NAME --image myimage:v1 -``` --You should see output similar to the following, asking for confirmation to delete the manifest and associated images: --```output -This operation will delete the manifest 'sha256:f15fa9d0a69081ba93eee308b0e475a54fac9c682196721e294b2bc20ab23a1b' and all the following images: 'myimage:v1'. -Are you sure you want to continue? (y/n): -``` --## View registry events --You've now pushed an image to your registry and then deleted it. Navigate to your Event Grid Viewer web app, and you should see both `ImageDeleted` and `ImagePushed` events. You might also see a subscription validation event generated by executing the command in the [Subscribe to registry events](#subscribe-to-registry-events) section. --The following screenshot shows the sample app with the three events, and the `ImageDeleted` event is expanded to show its details. --![Web browser showing the sample app with ImagePushed and ImageDeleted events][sample-app-03] --Congratulations! If you see the `ImagePushed` and `ImageDeleted` events, your registry is sending events to Event Grid, and Event Grid is forwarding those events to your web app endpoint. --## Clean up resources --Once you're done with the resources you created in this quickstart, you can delete them all with the following Azure CLI command. When you delete a resource group, all of the resources it contains are permanently deleted. --**WARNING**: This operation is irreversible. Be sure you no longer need any of the resources in the group before running the command. --```azurecli-interactive -az group delete --name $RESOURCE_GROUP_NAME -``` --## Event Grid event schema --You can find the Azure Container Registry event message schema reference in the Event Grid documentation: --[Azure Event Grid event schema for Container Registry](../event-grid/event-schema-container-registry.md) --## Next steps --In this quickstart, you deployed a container registry, built an image with ACR Tasks, deleted it, and have consumed your registry's events from Event Grid with a sample application. 
Next, move on to the ACR Tasks tutorial to learn more about building container images in the cloud, including automated builds on base image update: --> [!div class="nextstepaction"] -> [Build container images in the cloud with ACR Tasks](container-registry-tutorial-quick-task.md) --<!-- IMAGES --> -[sample-app-01]: ./media/container-registry-event-grid-quickstart/sample-app-01.png -[sample-app-02]: ./media/container-registry-event-grid-quickstart/sample-app-02-no-events.png -[sample-app-03]: ./media/container-registry-event-grid-quickstart/sample-app-03-with-events.png --<!-- LINKS - External --> -[azure-account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F -[sample-app]: https://github.com/dbarkol/azure-event-grid-viewer --<!-- LINKS - Internal --> -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete -[az-eventgrid-event-subscription-create]: /cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create -[az-group-create]: /cli/azure/group#az_group_create |
container-registry | Container Registry Firewall Access Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-firewall-access-rules.md | - Title: Firewall access rules -description: Configure rules to access an Azure container registry from behind a firewall, by allowing access to REST API and data endpoint domain names or service-specific IP address ranges. ---- Previously updated : 10/31/2023---# Configure rules to access an Azure container registry behind a firewall --This article explains how to configure rules on your firewall to allow access to an Azure container registry. For example, an Azure IoT Edge device behind a firewall or proxy server might need to access a container registry to pull a container image. Or, a locked-down server in an on-premises network might need access to push an image. --If instead you want to configure inbound network access to a container registry only within an Azure virtual network, see [Configure Azure Private Link for an Azure container registry](container-registry-private-link.md). --## About registry endpoints --To pull or push images or other artifacts to an Azure container registry, a client such as a Docker daemon needs to interact over HTTPS with two distinct endpoints. For clients that access a registry from behind a firewall, you need to configure access rules for both endpoints. Both endpoints are reached over port 443. --* **Registry REST API endpoint** - Authentication and registry management operations are handled through the registry's public REST API endpoint. This endpoint is the login server name of the registry. Example: `myregistry.azurecr.io` -- * **Registry REST API endpoint for certificates** - Azure container registry uses a wildcard SSL certificate for all subdomains. When connecting to the Azure container registry using SSL, the client must be able to download the certificate for the TLS handshake. In such cases, `azurecr.io` must also be accessible. --* **Storage (data) endpoint** - Azure [allocates blob storage](container-registry-storage.md) in Azure Storage accounts on behalf of each registry to manage the data for container images and other artifacts. When a client accesses image layers in an Azure container registry, it makes requests using a storage account endpoint provided by the registry. --If your registry is [geo-replicated](container-registry-geo-replication.md), a client might need to interact with the data endpoint in a specific region or in multiple replicated regions. --## Allow access to REST and data endpoints --* **REST endpoint** - Allow access to the fully qualified registry login server name, `<registry-name>.azurecr.io`, or an associated IP address range. -* **Storage (data) endpoint** - Allow access to all Azure blob storage accounts using the wildcard `*.blob.core.windows.net`, or an associated IP address range. -> [!NOTE] -> Azure Container Registry is introducing [dedicated data endpoints](#enable-dedicated-data-endpoints), allowing you to tightly scope client firewall rules for your registry storage. Optionally enable data endpoints in all regions where the registry is located or replicated, using the form `<registry-name>.<region>.data.azurecr.io`. -- ## About registry FQDNs --A registry has two FQDNs: the **login URL** and the **data endpoint**. --* Both the **login URL** and the **data endpoint** are accessible from within the virtual network, using private IPs, when you enable a private link.
-* A registry that doesn't use dedicated data endpoints serves data from endpoints of the form `*.blob.core.windows.net`, which doesn't provide the isolation required when configuring firewall rules. -* A registry with a private link enabled gets the dedicated data endpoint automatically. -* A dedicated data endpoint is created per region for a registry. -* The login URL remains the same regardless of whether the data endpoint is enabled or disabled. -## Allow access by IP address range --If your organization has policies to allow access only to specific IP addresses or address ranges, download [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). --To find the ACR REST endpoint IP ranges for which you need to allow access, search for **AzureContainerRegistry** in the JSON file. --> [!IMPORTANT] -> IP address ranges for Azure services can change, and updates are published weekly. Download the JSON file regularly, and make necessary updates in your access rules. If your scenario involves configuring network security group rules in an Azure virtual network or you use Azure Firewall, use the **AzureContainerRegistry** [service tag](#allow-access-by-service-tag) instead. -> --### REST IP addresses for all regions --```json -{ - "name": "AzureContainerRegistry", - "id": "AzureContainerRegistry", - "properties": { - "changeNumber": 10, - "region": "", - "platform": "Azure", - "systemService": "AzureContainerRegistry", - "addressPrefixes": [ - "13.66.140.72/29", - [...] -``` --### REST IP addresses for a specific region --Search for the specific region, such as **AzureContainerRegistry.AustraliaEast**. --```json -{ - "name": "AzureContainerRegistry.AustraliaEast", - "id": "AzureContainerRegistry.AustraliaEast", - "properties": { - "changeNumber": 1, - "region": "australiaeast", - "platform": "Azure", - "systemService": "AzureContainerRegistry", - "addressPrefixes": [ - "13.70.72.136/29", - [...] -``` --### Storage IP addresses for all regions --```json -{ - "name": "Storage", - "id": "Storage", - "properties": { - "changeNumber": 19, - "region": "", - "platform": "Azure", - "systemService": "AzureStorage", - "addressPrefixes": [ - "13.65.107.32/28", - [...] -``` --### Storage IP addresses for specific regions --Search for the specific region, such as **Storage.AustraliaCentral**. --```json -{ - "name": "Storage.AustraliaCentral", - "id": "Storage.AustraliaCentral", - "properties": { - "changeNumber": 1, - "region": "australiacentral", - "platform": "Azure", - "systemService": "AzureStorage", - "addressPrefixes": [ - "52.239.216.0/23" - [...] -``` --## Allow access by service tag --In an Azure virtual network, use network security rules to filter traffic from a resource such as a virtual machine to a container registry. To simplify the creation of the Azure network rules, use the **AzureContainerRegistry** [service tag](../virtual-network/network-security-groups-overview.md#service-tags). A service tag represents a group of IP address prefixes to access an Azure service globally or per Azure region. The tag is automatically updated when addresses change. --For example, create an outbound network security group rule with destination **AzureContainerRegistry** to allow traffic to an Azure container registry. To allow access to the service tag only in a specific region, specify the region in the following format: **AzureContainerRegistry**.[*region name*].
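As an illustration only (not guidance from this article), the following sketch creates such an outbound rule with the Azure CLI. The resource group, NSG name, rule name, and priority are placeholder assumptions; adjust them for your environment, and remember that clients also need access to the registry's storage or dedicated data endpoints to pull image layers.

```azurecli
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowAcrOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes AzureContainerRegistry \
  --destination-port-ranges 443
```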
--## Enable dedicated data endpoints --> [!WARNING] -> If you previously configured client firewall access to the existing `*.blob.core.windows.net` endpoints, switching to dedicated data endpoints will impact client connectivity, causing pull failures. To ensure clients have consistent access, add the new data endpoint rules to the client firewall rules. Once completed, enable dedicated data endpoints for your registries using the Azure CLI or other tools. --Dedicated data endpoints are an optional feature of the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md). --You can enable dedicated data endpoints using the Azure portal or the Azure CLI. The data endpoints follow a regional pattern, `<registry-name>.<region>.data.azurecr.io`. In a geo-replicated registry, enabling data endpoints enables endpoints in all replica regions. --### Portal --To enable data endpoints using the portal: --1. Navigate to your container registry. -1. Select **Networking** > **Public access**. -1. Select the **Enable dedicated data endpoint** checkbox. -1. Select **Save**. --The data endpoint or endpoints appear in the portal. ---### Azure CLI --To enable data endpoints using the Azure CLI, use Azure CLI version 2.4.0 or higher. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --The following [az acr update][az-acr-update] command enables dedicated data endpoints on a registry *myregistry*. --```azurecli -az acr update --name myregistry --data-endpoint-enabled -``` --To view the data endpoints, use the [az acr show-endpoints][az-acr-show-endpoints] command: --```azurecli -az acr show-endpoints --name myregistry -``` --Output for demonstration purposes shows two regional endpoints: --``` -{ - "loginServer": "myregistry.azurecr.io", - "dataEndpoints": [ - { - "region": "eastus", - "endpoint": "myregistry.eastus.data.azurecr.io" - }, - { - "region": "westus", - "endpoint": "myregistry.westus.data.azurecr.io" - } - ] -} -``` --After you set up dedicated data endpoints for your registry, you can enable client firewall access rules for the data endpoints. Enable data endpoint access rules for all required registry regions. --## Configure client firewall rules for MCR --If you need to access Microsoft Container Registry (MCR) from behind a firewall, see the guidance to configure [MCR client firewall rules](https://github.com/microsoft/containerregistry/blob/main/docs/client-firewall-rules.md). MCR is the primary registry for all Microsoft-published Docker images, such as Windows Server images. --## Next steps --* Learn about [Azure best practices for network security](../security/fundamentals/network-best-practices.md) --* Learn more about [security groups](../virtual-network/network-security-groups-overview.md) in an Azure virtual network --* Learn more about setting up [Private Link](container-registry-private-link.md) for a container registry --* Learn more about [dedicated data endpoints](https://azure.microsoft.com/blog/azure-container-registry-mitigating-data-exfiltration-with-dedicated-data-endpoints/) for Azure Container Registry ----<!-- IMAGES --> --<!-- LINKS - External --> --<!-- LINKS - Internal --> --[az-acr-update]: /cli/azure/acr#az_acr_update -[az-acr-show-endpoints]: /cli/azure/acr#az_acr_show_endpoints |
container-registry | Container Registry Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-geo-replication.md | - Title: Geo-replicate a registry -description: Get started creating and managing a geo-replicated Azure container registry, which enables the registry to serve multiple regions with multi-primary regional replicas. Geo-replication is a feature of the Premium service tier. ---- Previously updated : 10/31/2023---# Geo-replication in Azure Container Registry --Companies that want a local presence or a hot backup choose to run services from multiple Azure regions. As a best practice, placing a container registry in each region where images are run allows network-close operations, enabling fast, reliable image layer transfers. Geo-replication enables an Azure container registry to function as a single registry, serving multiple regions with multi-primary regional registries. --A geo-replicated registry provides the following benefits: --* Single registry, image, and tag names can be used across multiple regions -* Improve performance and reliability of regional deployments with network-close registry access -* Reduce data transfer costs by pulling image layers from a local, replicated registry in the same or nearby region as your container host -* Single management of a registry across multiple regions -* Registry resilience if a regional outage occurs --> [!NOTE] -> * If you need to maintain copies of container images in more than one Azure container registry, Azure Container Registry also supports [image import](container-registry-import-images.md). For example, in a DevOps workflow, you can import an image from a development registry to a production registry without needing to use Docker commands. -> * If you want to move a registry to a different Azure region, instead of geo-replicating the registry, see [Manually move a container registry to another region](manual-regional-move.md). --## Prerequisites --* The user requires the following permissions (at the registry level) to create/delete replications: -- | Permission | Description | - ||| - | Microsoft.ContainerRegistry/registries/replications/write | Create or update a replication | - | Microsoft.ContainerRegistry/registries/replications/delete | Delete a replication | --## Example use case -Contoso runs a public presence website located across the US, Canada, and Europe. To serve these markets with local and network-close content, Contoso runs [Azure Kubernetes Service (AKS)](/azure/aks/) clusters in West US, East US, Canada Central, and West Europe. The website application, deployed as a Docker image, utilizes the same code and image across all regions. Content local to that region is retrieved from a database, which is provisioned uniquely in each region. Each regional deployment has its unique configuration for resources like the local database. --The development team is located in Seattle, WA, and utilizes the West US data center. --![Pushing to multiple registries](media/container-registry-geo-replication/before-geo-replicate.png)<br />*Pushing to multiple registries* --Prior to using the geo-replication features, Contoso had a US-based registry in West US, with an additional registry in West Europe. To serve these different regions, the development team pushed images to two different registries.
--```bash -docker push contoso.azurecr.io/public/products/web:1.2 -docker push contosowesteu.azurecr.io/public/products/web:1.2 -``` -![Pulling from multiple registries](media/container-registry-geo-replication/before-geo-replicate-pull.png)<br />*Pulling from multiple registries* --Typical challenges of multiple registries include: --* All the East US, West US, and Canada Central clusters pull from the West US registry, incurring egress fees as each of these remote container hosts pull images from West US data centers. -* The development team must push images to West US and West Europe registries. -* The development team must configure and maintain each regional deployment with image names referencing the local registry. -* Registry access must be configured for each region. --## Benefits of geo-replication --![Pulling from a geo-replicated registry](media/container-registry-geo-replication/after-geo-replicate-pull.png) --The geo-replication feature of Azure Container Registry has the following benefits: --* Manage a single registry across all regions: `contoso.azurecr.io` -* Manage a single configuration of image deployments as all regions use the same image URL: `contoso.azurecr.io/public/products/web:1.2` -* Push to a single registry while ACR automatically manages the geo-replication. ACR only replicates unique layers, reducing data transfer across regions. -* Configure regional [webhooks](container-registry-webhook.md) to notify you of events in specific replicas. -* Provide a highly available registry that is resilient to regional outages. --Azure Container Registry also supports [availability zones](zone-redundancy.md) to create a resilient and high availability Azure container registry within an Azure region. The combination of availability zones for redundancy within a region, and geo-replication across multiple regions, enhances both the reliability and performance of a registry. --## Configure geo-replication --Configuring geo-replication is as easy as clicking regions on a map. You can also manage geo-replication using tools including the [az acr replication](/cli/azure/acr/replication) commands in the Azure CLI, or deploy a registry enabled for geo-replication with an [Azure Resource Manager template](https://azure.microsoft.com/resources/templates/container-registry-geo-replication/). --Geo-replication is a feature of [Premium registries](container-registry-skus.md). If your registry isn't yet Premium, you can change from Basic and Standard to Premium in the [Azure portal](https://portal.azure.com): --![Switching service tiers in the Azure portal](media/container-registry-skus/update-registry-sku.png) --To configure geo-replication for your Premium registry, sign in to the [Azure portal](https://portal.azure.com). 
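If you prefer the command line to the portal map, the following hedged sketch adds and verifies a replica with the [az acr replication](/cli/azure/acr/replication) commands mentioned earlier; the registry name and region are placeholders. The portal steps that follow accomplish the same task interactively.

```azurecli
# Illustrative only: add a replica in West Europe to a Premium registry named "myregistry".
az acr replication create --registry myregistry --location westeurope

# List the registry's replications to confirm the new replica.
az acr replication list --registry myregistry --output table
```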
--Navigate to your Azure Container Registry, and select **Replications**: --![Replications in the Azure portal container registry UI](media/container-registry-geo-replication/registry-services.png) --A map is displayed showing all current Azure regions: -- ![Region map in the Azure portal](media/container-registry-geo-replication/registry-geo-map.png) --* Blue hexagons represent current replicas -* Green hexagons represent possible replica regions -* Gray hexagons represent Azure regions not yet available for replication --To configure a replica, select a green hexagon, then select **Create**: -- ![Create replication UI in the Azure portal](media/container-registry-geo-replication/create-replication.png) --To configure additional replicas, select the green hexagons for other regions, then select **Create**. --ACR begins syncing images across the configured replicas. Once complete, the portal reflects *Ready*. The replica status in the portal doesn't automatically update. Use the refresh button to see the updated status. --## Considerations for using a geo-replicated registry --* Each region in a geo-replicated registry is independent once set up. Azure Container Registry SLAs apply to each geo-replicated region. -* For every push or pull image operation on a geo-replicated registry, Azure Traffic Manager in the background routes the request to the registry's closest replicated location to minimize network latency. -* After you push an image or tag update to the closest region, it takes some time for Azure Container Registry to replicate the manifests and layers to the remaining regions you opted into. Larger images take longer to replicate than smaller ones. Images and tags are synchronized across the replication regions with an eventual consistency model. -* To manage workflows that depend on push updates to a geo-replicated registry, we recommend that you configure [webhooks](container-registry-webhook.md) to respond to the push events. You can set up regional webhooks within a geo-replicated registry to track push events as they complete across the geo-replicated regions. -* To serve blobs representing content layers, Azure Container Registry uses data endpoints. You can enable [dedicated data endpoints](container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints) for your registry in each of your registry's geo-replicated regions. These endpoints allow configuration of tightly scoped firewall access rules. For troubleshooting purposes, you can optionally [disable routing to a replication](#temporarily-disable-routing-to-replication) while maintaining replicated data. -* If you configure a [private link](container-registry-private-link.md) for your registry using private endpoints in a virtual network, dedicated data endpoints in each of the geo-replicated regions are enabled by default. --## Considerations for high availability --* For high availability and resiliency, we recommend creating a registry in a region that supports enabling [zone redundancy](zone-redundancy.md). Enabling zone redundancy in each replica region is also recommended. -* If an outage occurs in the registry's home region (the region where it was created) or one of its replica regions, a geo-replicated registry remains available for data plane operations such as pushing or pulling container images.
-* If the registry's home region becomes unavailable, you may be unable to carry out registry management operations, including configuring network rules, enabling availability zones, and managing replicas. -* To plan for high availability of a geo-replicated registry encrypted with a [customer-managed key](tutorial-enable-customer-managed-keys.md) stored in an Azure key vault, review the guidance for key vault [failover and redundancy](/azure/key-vault/general/disaster-recovery-guidance). --## Delete a replica --After you've configured a replica for your registry, you can delete it at any time if it's no longer needed. Delete a replica using the Azure portal or other tools such as the [az acr replication delete](/cli/azure/acr/replication#az-acr-replication-delete) command in the Azure CLI. --To delete a replica in the Azure portal: --1. Navigate to your Azure Container Registry and select **Replications**. -1. Select the name of a replica and select **Delete**. Confirm that you want to delete the replica. --To use the Azure CLI to delete a replica of *myregistry* in the East US region: --```azurecli -az acr replication delete --name eastus --registry myregistry -``` --## Geo-replication pricing --Geo-replication is a feature of the [Premium service tier](container-registry-skus.md) of Azure Container Registry. When you replicate a registry to your desired regions, you incur Premium registry fees for each region. --In the preceding example, Contoso consolidated two registries down to one, adding replicas to East US, Canada Central, and West Europe. Contoso would pay four times Premium per month, with no additional configuration or management. Each region now pulls their images locally, improving performance and reliability without network egress fees from the West US to Canada and the East US. --## Troubleshoot push operations with geo-replicated registries - -A Docker client that pushes an image to a geo-replicated registry may not push all image layers and its manifest to a single replicated region. This may occur because Azure Traffic Manager routes registry requests to the network-closest replicated registry. If the registry has two *nearby* replication regions, image layers and the manifest could be distributed to the two sites, and the push operation fails when the manifest is validated. This problem occurs because of the way the DNS name of the registry is resolved on some Linux hosts. This issue doesn't occur on Windows, which provides a client-side DNS cache. - -If this problem occurs, one solution is to apply a client-side DNS cache such as `dnsmasq` on the Linux host. This helps ensure that the registry's name is resolved consistently. If you're using a Linux VM in Azure to push to a registry, see options in [DNS Name Resolution options for Linux virtual machines in Azure](/azure/virtual-machines/linux/azure-dns). --To optimize DNS resolution to the closest replica when pushing images, configure a geo-replicated registry in the same Azure regions as the source of the push operations, or the closest region when working outside of Azure. --### Temporarily disable routing to replication --To troubleshoot operations with a geo-replicated registry, you might want to temporarily disable Traffic Manager routing to one or more replications. Starting in Azure CLI version 2.8, you can configure a `--region-endpoint-enabled` option (preview) when you create or update a replicated region. 
When you set a replication's `--region-endpoint-enabled` option to `false`, Traffic Manager no longer routes docker push or pull requests to that region. By default, routing to all replications is enabled, and data synchronization across all replications takes place whether routing is enabled or disabled. --To disable routing to an existing replication, first run [az acr replication list][az-acr-replication-list] to list the replications in the registry. Then, run [az acr replication update][az-acr-replication-update] and set `--region-endpoint-enabled false` for a specific replication. For example, to configure the setting for the *westus* replication in *myregistry*: --```azurecli -# Show names of existing replications -az acr replication list --registry myregistry --output table --# Disable routing to replication -az acr replication update --name westus \ - --registry myregistry --resource-group MyResourceGroup \ - --region-endpoint-enabled false -``` --To restore routing to a replication: --```azurecli -az acr replication update --name westus \ - --registry myregistry --resource-group MyResourceGroup \ - --region-endpoint-enabled true -``` --## Creating a replication for a Private Endpoint-enabled registry --When you create a new replication for a primary registry that has Private Endpoint enabled, we recommend validating that the user identity has the permissions required to create private endpoints. Otherwise, the operation gets stuck in the provisioning state while creating the replication. --Follow these steps if the replication creation is stuck in the provisioning state: --- Manually delete the replication that got stuck in the provisioning state.- Add the `Microsoft.Network/privateEndpoints/privateLinkServiceProxies/write` permission for the user identity.- Recreate the registry replication request.--This permission check applies only to registries with Private Endpoint enabled. --## Next steps --Check out the three-part tutorial series, [Geo-replication in Azure Container Registry](container-registry-tutorial-prepare-registry.md). Walk through creating a geo-replicated registry, building a container, and then deploying it with a single `docker push` command to multiple regional Web Apps for Containers instances. --> [!div class="nextstepaction"] -> [Geo-replication in Azure Container Registry](container-registry-tutorial-prepare-registry.md) --[az-acr-replication-list]: /cli/azure/acr/replication#az_acr_replication_list -[az-acr-replication-update]: /cli/azure/acr/replication#az_acr_replication_update |
container-registry | Container Registry Get Started Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-azure-cli.md | - Title: Quickstart - Create registry - Azure CLI -description: Quickly learn to create a private Docker container registry with the Azure CLI. --- Previously updated : 10/31/2023----# Quickstart: Create a private container registry using the Azure CLI --Azure Container Registry is a private registry service for building, storing, and managing container images and related artifacts. In this quickstart, you create an Azure container registry instance with the Azure CLI. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry. --This quickstart requires that you are running the Azure CLI (version 2.0.55 or later recommended). Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --You must also have Docker installed locally. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system. --Because the Azure Cloud Shell doesn't include all required Docker components (the `dockerd` daemon), you can't use the Cloud Shell for this quickstart. --## Create a resource group --Create a resource group with the [az group create][az-group-create] command. An Azure resource group is a logical container into which Azure resources are deployed and managed. --The following example creates a resource group named *myResourceGroup* in the *eastus* location. --```azurecli -az group create --name myResourceGroup --location eastus -``` --## Create a container registry --In this quickstart you create a *Basic* registry, which is a cost-optimized option for developers learning about Azure Container Registry. For details on available service tiers, see [Container registry service tiers][container-registry-skus]. --Create an ACR instance using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 lowercase alphanumeric characters. In the following example, *mycontainerregistry* is used. Update this to a unique value. --```azurecli -az acr create --resource-group myResourceGroup \ - --name mycontainerregistry --sku Basic -``` --When the registry is created, the output is similar to the following: --```json -{ - "adminUserEnabled": false, - "creationDate": "2019-01-08T22:32:13.175925+00:00", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/mycontainerregistry", - "location": "eastus", - "loginServer": "mycontainerregistry.azurecr.io", - "name": "mycontainerregistry", - "provisioningState": "Succeeded", - "resourceGroup": "myResourceGroup", - "sku": { - "name": "Basic", - "tier": "Basic" - }, - "status": null, - "storageAccount": null, - "tags": {}, - "type": "Microsoft.ContainerRegistry/registries" -} -``` --Take note of `loginServer` in the output, which is the fully qualified registry name (all lowercase). Throughout the rest of this quickstart `<registry-name>` is a placeholder for the container registry name, and `<login-server>` is a placeholder for the registry's login server name. ---## Log in to registry --Before pushing and pulling container images, you must log in to the registry. To do so, use the [az acr login][az-acr-login] command. 
Specify only the registry resource name when logging in with the Azure CLI. Don't use the fully qualified login server name. --```azurecli -az acr login --name <registry-name> -``` --Example: --```azurecli -az acr login --name mycontainerregistry -``` --The command returns a `Login Succeeded` message once completed. ---## List container images --The following example lists the repositories in your registry: --```azurecli -az acr repository list --name <registry-name> --output table -``` --Output: --``` -Result ---hello-world -``` --The following example lists the tags on the **hello-world** repository. --```azurecli -az acr repository show-tags --name <registry-name> --repository hello-world --output table -``` --Output: --``` -Result -v1 -``` ---## Clean up resources --When no longer needed, you can use the [az group delete][az-group-delete] command to remove the resource group, the container registry, and the container images stored there. --```azurecli -az group delete --name myResourceGroup -``` --## Next steps --In this quickstart, you created an Azure Container Registry with the Azure CLI, pushed a container image to the registry, and pulled and ran the image from the registry. Continue to the Azure Container Registry tutorials for a deeper look at ACR. --> [!div class="nextstepaction"] -> [Azure Container Registry tutorials][container-registry-tutorial-prepare-registry] --> [!div class="nextstepaction"] -> [Azure Container Registry Tasks tutorials][container-registry-tutorial-quick-task] --<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ -[docker-pull]: https://docs.docker.com/engine/reference/commandline/pull/ -[docker-rmi]: https://docs.docker.com/engine/reference/commandline/rmi/ -[docker-run]: https://docs.docker.com/engine/reference/commandline/run/ -[docker-tag]: https://docs.docker.com/engine/reference/commandline/tag/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ --<!-- LINKS - internal --> -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-group-create]: /cli/azure/group#az_group_create -[az-group-delete]: /cli/azure/group#az_group_delete -[azure-cli]: /cli/azure/install-azure-cli -[container-registry-tutorial-quick-task]: container-registry-tutorial-quick-task.md -[container-registry-skus]: container-registry-skus.md -[container-registry-tutorial-prepare-registry]: container-registry-tutorial-prepare-registry.md |
container-registry | Container Registry Get Started Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-bicep.md | - Title: Quickstart - Create registry - Bicep -description: Learn how to create an Azure container registry by using a Bicep file. --- Previously updated : 10/31/2023---tags: azure-resource-manager, bicep ----# Quickstart: Create a container registry by using a Bicep file --This quickstart shows how to create an Azure Container Registry instance by using a Bicep file. ---## Prerequisites --If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. --## Review the Bicep file --Use Visual Studio Code or your favorite editor to create a file with the following content and name it **main.bicep**: --```bicep -@minLength(5) -@maxLength(50) -@description('Provide a globally unique name of your Azure Container Registry') -param acrName string = 'acr${uniqueString(resourceGroup().id)}' --@description('Provide a location for the registry.') -param location string = resourceGroup().location --@description('Provide a tier of your Azure Container Registry.') -param acrSku string = 'Basic' --resource acrResource 'Microsoft.ContainerRegistry/registries@2023-01-01-preview' = { - name: acrName - location: location - sku: { - name: acrSku - } - properties: { - adminUserEnabled: false - } -} --@description('Output the login server property for later use') -output loginServer string = acrResource.properties.loginServer --``` --The following resource is defined in the Bicep file: --* **[Microsoft.ContainerRegistry/registries](/azure/templates/microsoft.containerregistry/registries)**: create an Azure container registry --More Azure Container Registry template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Containerregistry&pageNumber=1&sort=Popular). --## Deploy the Bicep file --To deploy the file you've created, open PowerShell or Azure CLI. If you want to use the integrated Visual Studio Code terminal, select the `ctrl` + ```` ` ```` key combination. Change the current directory to where the Bicep file is located. --# [CLI](#tab/CLI) --```azurecli -az group create --name myContainerRegRG --location centralus --az deployment group create --resource-group myContainerRegRG --template-file main.bicep --parameters acrName={your-unique-name} -``` --# [PowerShell](#tab/PowerShell) --```azurepowershell -New-AzResourceGroup -Name myContainerRegRG -Location centralus --New-AzResourceGroupDeployment -ResourceGroupName myContainerRegRG -TemplateFile ./main.bicep -acrName "{your-unique-name}" -``` ----> [!NOTE] -> Replace **{your-unique-name}**, including the curly braces, with a unique container registry name. --When the deployment finishes, you should see a message indicating the deployment succeeded. --## Review deployed resources --Use the Azure portal or a tool such as the Azure CLI to review the properties of the container registry. --1. In the portal, search for **Container Registries**, and select the container registry you created. --1. On the **Overview** page, note the **Login server** of the registry. Use this URI when you use Docker to tag and push images to your registry. For information, see [Push your first image using the Docker CLI](container-registry-get-started-docker-cli.md). 
-- :::image type="content" source="media/container-registry-get-started-bicep/registry-overview.png" alt-text="Registry overview"::: --## Clean up resources --When you no longer need the resource, delete the resource group, and the registry. To do so, go to the Azure portal, select the resource group that contains the registry, and then select **Delete resource group**. ---## Next steps --In this quickstart, you created an Azure Container Registry with a Bicep file. Continue to the Azure Container Registry tutorials for a deeper look at ACR. --> [!div class="nextstepaction"] -> [Azure Container Registry tutorials](container-registry-tutorial-prepare-registry.md) --For a step-by-step tutorial that guides you through the process of creating a Bicep file, see: --> [!div class="nextstepaction"] -> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md) |
container-registry | Container Registry Get Started Docker Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-docker-cli.md | - Title: Push & pull container image -description: Push and pull Docker images to your private container registry in Azure using the Docker CLI - Previously updated : 10/31/2023-------# Push your first image to your Azure container registry using the Docker CLI --An Azure container registry stores and manages private container images and other artifacts, similar to the way [Docker Hub](https://hub.docker.com/) stores public Docker container images. You can use the [Docker command-line interface](https://docs.docker.com/engine/reference/commandline/cli/) (Docker CLI) for [login](https://docs.docker.com/engine/reference/commandline/login/), [push](https://docs.docker.com/engine/reference/commandline/push/), [pull](https://docs.docker.com/engine/reference/commandline/pull/), and other container image operations on your container registry. --In the following steps, you download a public [Nginx image](https://store.docker.com/images/nginx), tag it for your private Azure container registry, push it to your registry, and then pull it from the registry. --## Prerequisites --* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md), the [Azure CLI](container-registry-get-started-azure-cli.md), or [Azure PowerShell](container-registry-get-started-powershell.md). -* **Docker CLI** - You must also have Docker installed locally. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system. --## Log in to a registry --There are [several ways to authenticate](container-registry-authentication.md) to your private container registry. --### [Azure CLI](#tab/azure-cli) --The recommended method when working in a command line is with the Azure CLI command [az acr login](/cli/azure/acr#az-acr-login). For example, to access a registry named `myregistry`, sign in the Azure CLI and then authenticate to your registry: --```azurecli -az login -az acr login --name myregistry -``` --### [Azure PowerShell](#tab/azure-powershell) --The recommended method when working in PowerShell is with the Azure PowerShell cmdlet [Connect-AzContainerRegistry](/powershell/module/az.containerregistry/connect-azcontainerregistry). For example, to log in to a registry named *myregistry*, log into Azure and then authenticate to your registry: --```azurepowershell -Connect-AzAccount -Connect-AzContainerRegistry -Name myregistry -``` ----You can also log in with [docker login](https://docs.docker.com/engine/reference/commandline/login/). For example, you might have [assigned a service principal](container-registry-authentication.md#service-principal) to your registry for an automation scenario. When you run the following command, interactively provide the service principal appID (username) and password when prompted. For best practices to manage login credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference: --``` -docker login myregistry.azurecr.io -``` --Both commands return `Login Succeeded` once completed. -> [!NOTE] ->* You might want to use Visual Studio Code with Docker extension for a faster and more convenient login. 
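For scripted scenarios, you can avoid the interactive prompt by piping the service principal password to `docker login`. The following is a minimal sketch with placeholder values, not specific guidance from this article; keep real credentials in a secret store rather than in scripts.

```
# Illustrative only: non-interactive login with a service principal.
# SP_APP_ID and SP_PASSWORD are placeholder environment variables.
echo "$SP_PASSWORD" | docker login myregistry.azurecr.io \
  --username "$SP_APP_ID" \
  --password-stdin
```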
--> [!TIP] -> Always specify the fully qualified registry name (all lowercase) when you use `docker login` and when you tag images for pushing to your registry. In the examples in this article, the fully qualified name is *myregistry.azurecr.io*. --## Pull a public Nginx image --First, pull a public Nginx image to your local computer. This example pulls the [official Nginx image](https://hub.docker.com/_/nginx/). --``` -docker pull nginx -``` --## Run the container locally --Execute the following [docker run](https://docs.docker.com/engine/reference/run/) command to start a local instance of the Nginx container interactively (`-it`) on port 8080. The `--rm` argument specifies that the container should be removed when you stop it. --``` -docker run -it --rm -p 8080:80 nginx -``` --Browse to `http://localhost:8080` to view the default web page served by Nginx in the running container. You should see a page similar to the following: --![Nginx on local computer](./media/container-registry-get-started-docker-cli/nginx.png) --Because you started the container interactively with `-it`, you can see the Nginx server's output on the command line after navigating to it in your browser. --To stop and remove the container, press `Control`+`C`. --## Create an alias of the image --Use [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) to create an alias of the image with the fully qualified path to your registry. This example specifies the `samples` namespace to avoid clutter in the root of the registry. --``` -docker tag nginx myregistry.azurecr.io/samples/nginx -``` --For more information about tagging with namespaces, see the [Repository namespaces](container-registry-best-practices.md#repository-namespaces) section of [Best practices for Azure Container Registry](container-registry-best-practices.md). --## Push the image to your registry --Now that you've tagged the image with the fully qualified path to your private registry, you can push it to the registry with [docker push](https://docs.docker.com/engine/reference/commandline/push/): --``` -docker push myregistry.azurecr.io/samples/nginx -``` --## Pull the image from your registry --Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to pull the image from your registry: --``` -docker pull myregistry.azurecr.io/samples/nginx -``` --## Start the Nginx container --Use the [docker run](https://docs.docker.com/engine/reference/run/) command to run the image you've pulled from your registry: --``` -docker run -it --rm -p 8080:80 myregistry.azurecr.io/samples/nginx -``` --Browse to `http://localhost:8080` to view the running container. --To stop and remove the container, press `Control`+`C`. --## Remove the image (optional) --If you no longer need the Nginx image, you can delete it locally with the [docker rmi](https://docs.docker.com/engine/reference/commandline/rmi/) command. --``` -docker rmi myregistry.azurecr.io/samples/nginx -``` --### [Azure CLI](#tab/azure-cli) --To remove images from your Azure container registry, you can use the Azure CLI command [az acr repository delete](/cli/azure/acr/repository#az-acr-repository-delete). For example, the following command deletes the manifest referenced by the `samples/nginx:latest` tag, any unique layer data, and all other tags referencing the manifest. 
--```azurecli -az acr repository delete --name myregistry --image samples/nginx:latest -``` --### [Azure PowerShell](#tab/azure-powershell) --The [Az.ContainerRegistry](/powershell/module/az.containerregistry) Azure PowerShell module contains multiple commands for removing images from your container registry. [Remove-AzContainerRegistryRepository](/powershell/module/az.containerregistry/remove-azcontainerregistryrepository) removes all images in a particular namespace such as `samples/nginx`, while [Remove-AzContainerRegistryManifest](/powershell/module/az.containerregistry/remove-azcontainerregistrymanifest) removes a specific tag or manifest. --In the following example, you use the `Remove-AzContainerRegistryRepository` cmdlet to remove all images in the `samples/nginx` namespace. --```azurepowershell -Remove-AzContainerRegistryRepository -RegistryName myregistry -Name samples/nginx -``` --In the following example, you use the `Remove-AzContainerRegistryManifest` cmdlet to delete the manifest referenced by the `samples/nginx:latest` tag, any unique layer data, and all other tags referencing the manifest. --```azurepowershell -Remove-AzContainerRegistryManifest -RegistryName myregistry -RepositoryName samples/nginx -Tag latest -``` ---## Recommendations --For more information, see the [authentication options](../container-registry/container-registry-authentication.md) for Azure Container Registry. --## Next steps --Now that you know the basics, you're ready to start using your registry! For example, deploy container images from your registry to: --* [Azure Kubernetes Service (AKS)](/azure/aks/tutorial-kubernetes-prepare-app) -* [Azure Container Instances](/azure/container-instances/container-instances-tutorial-prepare-app) -* [Service Fabric](/azure/service-fabric/service-fabric-tutorial-create-container-images) --Optionally install the [Docker Extension for Visual Studio Code](https://code.visualstudio.com/docs/azure/docker) and the [Azure Account](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account) extension to work with your Azure container registries. Pull and push images to an Azure container registry, or run ACR Tasks, all within Visual Studio Code. ---<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ |
container-registry | Container Registry Get Started Geo Replication Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-geo-replication-template.md | - Title: Quickstart - Create geo-replicated registry - Azure Resource Manager template -description: Learn how to create a geo-replicated Azure container registry by using an Azure Resource Manager template. --- Previously updated : 10/31/2023---tags: azure-resource-manager ----# Quickstart: Create a geo-replicated container registry by using an ARM template --This quickstart shows how to create an Azure Container Registry instance by using an Azure Resource Manager template (ARM template). The template sets up a [geo-replicated](container-registry-geo-replication.md) registry, which automatically synchronizes registry content across more than one Azure region. Geo-replication enables network-close access to images from regional deployments, while providing a single management experience. It's a feature of the [Premium](container-registry-skus.md) registry service tier. ---The registry with replications does not support the ARM/Bicep template Complete mode deployments. --If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. ---## Prerequisites --If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. --## Review the template --The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/container-registry-geo-replication/). The template sets up a registry and an additional regional replica. ---The following resources are defined in the template: --* **[Microsoft.ContainerRegistry/registries](/azure/templates/microsoft.containerregistry/registries)**: create an Azure container registry -* **[Microsoft.ContainerRegistry/registries/replications](/azure/templates/microsoft.containerregistry/registries/replications)**: create a container registry replica --More Azure Container Registry template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Containerregistry&pageNumber=1&sort=Popular). --## Deploy the template -- 1. Select the following image to sign in to Azure and open a template. -- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.containerregistry%2Fcontainer-registry-geo-replication%2Fazuredeploy.json"::: -- 1. Select or enter the following values. -- * **Subscription**: select an Azure subscription. - * **Resource group**: select **Create new**, enter a unique name for the resource group, and then select **OK**. - * **Region**: select a location for the resource group. Example: **Central US**. - * **Acr Name**: accept the generated name for the registry, or enter a name. It must be globally unique. - * **Acr Admin User Enabled**: accept the default value. - * **Location**: accept the generated location for the registry's home replica, or enter a location such as **Central US**. - * **Acr Sku**: accept the default value. 
- * **Acr Replica Location**: enter a location for the registry replica, using the region's short name. It must be different from the home registry location. Example: **westeurope**. -- :::image type="content" source="media/container-registry-get-started-geo-replication-template/template-properties.png" alt-text="Template properties"::: --1. Select **Review + Create**, then review the terms and conditions. If you agree, select **Create**. --1. After the registry has been created successfully, you get a notification: -- :::image type="content" source="media/container-registry-get-started-geo-replication-template/deployment-notification.png" alt-text="Portal notification"::: -- The Azure portal is used to deploy the template. In addition to the Azure portal, you can use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-cli.md). --## Review deployed resources --Use the Azure portal or a tool such as the Azure CLI to review the properties of the container registry. --1. In the portal, search for Container Registries, and select the container registry you created. --1. On the **Overview** page, note the **Login server** of the registry. Use this URI when you use Docker to tag and push images to your registry. For information, see [Push your first image using the Docker CLI](container-registry-get-started-docker-cli.md). -- :::image type="content" source="media/container-registry-get-started-geo-replication-template/registry-overview.png" alt-text="Registry overview"::: --1. On the **Replications** page, confirm the locations of the home replica and the replica added through the template. If desired, add more replicas on this page. -- :::image type="content" source="media/container-registry-get-started-geo-replication-template/registry-replications.png" alt-text="Registry replications"::: --## Clean up resources --When you no longer need them, delete the resource group, the registry, and the registry replica. To do so, go to the Azure portal, select the resource group that contains the registry, and then select **Delete resource group**. ---## Next steps --In this quickstart, you created an Azure Container Registry with an ARM template, and configured a registry replica in another location. Continue to the Azure Container Registry tutorials for a deeper look at ACR. --> [!div class="nextstepaction"] -> [Azure Container Registry tutorials](container-registry-tutorial-prepare-registry.md) --For a step-by-step tutorial that guides you through the process of creating a template, see: --> [!div class="nextstepaction"] -> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md) |
container-registry | Container Registry Get Started Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-portal.md | - Title: Quickstart - Create registry in portal -description: Quickly learn to create a private Azure container registry using the Azure portal. -- Previously updated : 10/31/2023-----# Quickstart: Create an Azure container registry using the Azure portal --Azure Container Registry is a private registry service for building, storing, and managing container images and related artifacts. In this quickstart, you create an Azure container registry instance with the Azure portal. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry. --### [Azure CLI](#tab/azure-cli) --To log in to the registry to work with container images, this quickstart requires that you are running the Azure CLI (version 2.0.55 or later recommended). Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. --### [Azure PowerShell](#tab/azure-powershell) --To log in to the registry to work with container images, this quickstart requires that you are running the Azure PowerShell (version 7.5.0 or later recommended). Run `Get-Module Az -ListAvailable` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module][azure-powershell-install]. ----You must also have Docker installed locally with the daemon running. Docker provides packages that easily configure Docker on any [Mac][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system. --## Sign in to Azure --Sign in to the [Azure portal](https://portal.azure.com). --## Create a container registry --Select **Create a resource** > **Containers** > **Container Registry**. ---In the **Basics** tab, enter values for **Resource group** and **Registry name**. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. For this quickstart create a new resource group in the `West US` location named `myResourceGroup`, and for **SKU**, select 'Basic'. ---Accept default values for the remaining settings. Then select **Review + create**. After reviewing the settings, select **Create**. ---When the **Deployment succeeded** message appears, select the container registry in the portal. ---Take note of the registry name and the value of the **Login server**, which is a fully qualified name ending with `azurecr.io` in the Azure cloud. You use these values in the following steps when you push and pull images with Docker. --## Log in to registry --### [Azure CLI](#tab/azure-cli) --Before pushing and pulling container images, you must log in to the registry instance. [Sign into the Azure CLI][get-started-with-azure-cli] on your local machine, then run the [az acr login][az-acr-login] command. Specify only the registry resource name when logging in with the Azure CLI. Don't use the fully qualified login server name. --```azurecli -az acr login --name <registry-name> -``` --Example: --```azurecli -az acr login --name mycontainerregistry -``` --The command returns `Login Succeeded` once completed. --### [Azure PowerShell](#tab/azure-powershell) --Before pushing and pulling container images, you must log in to the registry instance. [Sign into the Azure PowerShell][get-started-with-azure-powershell] on your local machine, then run the [Connect-AzContainerRegistry][connect-azcontainerregistry] cmdlet. 
Specify only the registry resource name when logging in with the Azure PowerShell. Don't use the fully qualified login server name. --```azurepowershell -Connect-AzContainerRegistry -Name <registry-name> -``` --Example: --```azurepowershell -Connect-AzContainerRegistry -Name mycontainerregistry -``` --The command returns `Login Succeeded` once completed. -----## List container images --To list the images in your registry, navigate to your registry in the portal and select **Repositories**, then select the **hello-world** repository you created with `docker push`. ---By selecting the **hello-world** repository, you see the `v1`-tagged image under **Tags**. ---## Clean up resources --To clean up your resources, navigate to the **myResourceGroup** resource group in the portal. Once the resource group is loaded, click on **Delete resource group** to remove the resource group, the container registry, and the container images stored there. ----## Next steps --In this quickstart, you created an Azure Container Registry with the Azure portal, pushed a container image, and pulled and ran the image from the registry. Continue to the Azure Container Registry tutorials for a deeper look at ACR. --> [!div class="nextstepaction"] -> [Azure Container Registry tutorials][container-registry-tutorial-prepare-registry] --> [!div class="nextstepaction"] -> [Azure Container Registry Tasks tutorials][container-registry-tutorial-quick-task] --<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-pull]: https://docs.docker.com/engine/reference/commandline/pull/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ -[docker-rmi]: https://docs.docker.com/engine/reference/commandline/rmi/ -[docker-run]: https://docs.docker.com/engine/reference/commandline/run/ -[docker-tag]: https://docs.docker.com/engine/reference/commandline/tag/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ --<!-- LINKS - internal --> -[container-registry-tutorial-prepare-registry]: container-registry-tutorial-prepare-registry.md -[container-registry-skus]: container-registry-skus.md -[azure-cli-install]: /cli/azure/install-azure-cli -[azure-powershell-install]: /powershell/azure/install-az-ps -[get-started-with-azure-cli]: /cli/azure/get-started-with-azure-cli -[get-started-with-azure-powershell]: /powershell/azure/get-started-azureps -[az-acr-login]: /cli/azure/acr#az_acr_login -[connect-azcontainerregistry]: /powershell/module/az.containerregistry/connect-azcontainerregistry -[container-registry-tutorial-quick-task]: container-registry-tutorial-quick-task.md |
container-registry | Container Registry Get Started Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-powershell.md | - Title: Quickstart - Create registry - PowerShell -description: Quickly learn to create a private Docker registry in Azure Container Registry with PowerShell -- Previously updated : 10/31/2023------# Quickstart: Create a private container registry using Azure PowerShell --Azure Container Registry is a private registry service for building, storing, and managing container images and related artifacts. In this quickstart, you create an Azure container registry instance with Azure PowerShell. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry. --## Prerequisites ---This quickstart requires Azure PowerShell module. Run `Get-Module -ListAvailable Az` to determine your installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). --You must also have Docker installed locally. Docker provides packages for [macOS][docker-mac], [Windows][docker-windows], and [Linux][docker-linux] systems. --Because the Azure Cloud Shell doesn't include all required Docker components (the `dockerd` daemon), you can't use the Cloud Shell for this quickstart. --## Sign in to Azure --Sign in to your Azure subscription with the [Connect-AzAccount][Connect-AzAccount] command, and follow the on-screen directions. --```powershell -Connect-AzAccount -``` --## Create resource group --Once you're authenticated with Azure, create a resource group with [New-AzResourceGroup][New-AzResourceGroup]. A resource group is a logical container in which you deploy and manage your Azure resources. --```powershell -New-AzResourceGroup -Name myResourceGroup -Location EastUS -``` --## Create container registry --Next, create a container registry in your new resource group with the [New-AzContainerRegistry][New-AzContainerRegistry] command. --The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. The following example creates a registry named "mycontainerregistry." Replace *mycontainerregistry* in the following command, then run it to create the registry: --```powershell -$registry = New-AzContainerRegistry -ResourceGroupName "myResourceGroup" -Name "mycontainerregistry" -EnableAdminUser -Sku Basic -``` ---## Log in to registry --Before pushing and pulling container images, you must log in to your registry with the [Connect-AzContainerRegistry][connect-azcontainerregistry] cmdlet. The following example uses the same credentials you logged in with when authenticating to Azure with the `Connect-AzAccount` cmdlet. --> [!NOTE] -> In the following example, the value of `$registry.Name` is the resource name, not the fully qualified registry name. --```powershell -Connect-AzContainerRegistry -Name $registry.Name -``` --The command returns `Login Succeeded` once completed. ----## Clean up resources --Once you're done working with the resources you created in this quickstart, use the [Remove-AzResourceGroup][Remove-AzResourceGroup] command to remove the resource group, the container registry, and the container images stored there: --```powershell -Remove-AzResourceGroup -Name myResourceGroup -``` --## Next steps --In this quickstart, you created an Azure Container Registry with Azure PowerShell, pushed a container image, and pulled and ran the image from the registry. 
Continue to the Azure Container Registry tutorials for a deeper look at ACR. --> [!div class="nextstepaction"] -> [Azure Container Registry tutorials][container-registry-tutorial-prepare-registry] --> [!div class="nextstepaction"] -> [Azure Container Registry Tasks tutorials][container-registry-tutorial-quick-task] --<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-login]: https://docs.docker.com/engine/reference/commandline/login/ -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ -[docker-tag]: https://docs.docker.com/engine/reference/commandline/tag/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ --<!-- Links - internal --> -[Connect-AzAccount]: /powershell/module/az.accounts/connect-azaccount -[Get-Module]: /powershell/module/microsoft.powershell.core/get-module -[New-AzContainerRegistry]: /powershell/module/az.containerregistry/New-AzContainerRegistry -[New-AzResourceGroup]: /powershell/module/az.resources/new-azresourcegroup -[Remove-AzResourceGroup]: /powershell/module/az.resources/remove-azresourcegroup -[container-registry-tutorial-quick-task]: container-registry-tutorial-quick-task.md -[container-registry-skus]: container-registry-skus.md -[container-registry-tutorial-prepare-registry]: container-registry-tutorial-prepare-registry.md -[connect-azcontainerregistry]: /powershell/module/az.containerregistry/connect-azcontainerregistry |
container-registry | Container Registry Health Error Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-health-error-reference.md | - Title: Error reference for registry health checks -description: Error codes and possible solutions to problems found by running the az acr check-health diagnostic command in Azure Container Registry ---- Previously updated : 10/31/2023--# Health check error reference --Following are details about error codes returned by the [az acr check-health][az-acr-check-health] command. For each error, possible solutions are listed. --For information about running `az acr check-health`, see [Check the health of an Azure container registry](container-registry-check-health.md). --## DOCKER_COMMAND_ERROR --This error means that the Docker client couldn't be found by the CLI. As a result, the following additional checks aren't run: finding the Docker version, evaluating the Docker daemon status, and running a Docker pull command. --*Potential solutions*: Install the Docker client; add the Docker path to the system variables. --## DOCKER_DAEMON_ERROR --This error means that the Docker daemon status is unavailable, or that it couldn't be reached using the CLI. As a result, Docker operations (such as `docker login` and `docker pull`) are unavailable through the CLI. --*Potential solutions*: Restart the Docker daemon, or validate that it is properly installed. --## DOCKER_VERSION_ERROR --This error means that the CLI wasn't able to run the command `docker --version`. --*Potential solutions*: Try running the command manually, make sure you have the latest CLI version, and investigate the error message. --## DOCKER_PULL_ERROR --This error means that the CLI wasn't able to pull a sample image to your environment. --*Potential solutions*: Validate that all components necessary to pull an image are running properly. --## HELM_COMMAND_ERROR --This error means that the Helm client couldn't be found by the CLI, which precludes other Helm operations. --*Potential solutions*: Verify that the Helm client is installed, and that its path is added to the system environment variables. --## HELM_VERSION_ERROR --This error means that the CLI was unable to determine the Helm version installed. This can happen if the Azure CLI version (or the Helm version) being used is obsolete. --*Potential solutions*: Update to the latest Azure CLI version or to the recommended Helm version; run the command manually and investigate the error message. --## CMK_ERROR --This error means that the registry can't access the user-assigned or system-assigned managed identity used to configure registry encryption with a customer-managed key. The managed identity might have been deleted. --*Potential solution*: To resolve the issue and rotate the key using a different managed identity, see steps to troubleshoot [the user-assigned identity](tutorial-troubleshoot-customer-managed-keys.md). --## CONNECTIVITY_DNS_ERROR --This error means that the DNS for the given registry login server was pinged but did not respond, which means it is unavailable. This can indicate a connectivity issue. Alternatively, the registry might not exist, the user might not have permissions on the registry (to retrieve its login server properly), or the target registry is in a different cloud than the one used in the Azure CLI. 
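For example, you can confirm the login server name that the CLI resolves for your registry and check that the name resolves in DNS (a minimal check; *myregistry* is a placeholder registry name):

```azurecli
# Confirm that the registry exists and get its login server name
az acr show --name myregistry --query loginServer --output tsv

# Check that the login server name resolves in DNS
nslookup myregistry.azurecr.io
```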
--*Potential solutions*: Validate connectivity; verify spelling of the registry, and that registry exists; verify that the user has the right permissions on it and that the registry's cloud is the same that is used in the Azure CLI. --## CONNECTIVITY_FORBIDDEN_ERROR --This error means that the challenge endpoint for the given registry responded with a 403 Forbidden HTTP status. This error means that users don't have access to the registry, most likely because of a virtual network configuration or because access to the registry's public endpoint is not allowed. To see the currently configured firewall rules, run `az acr show --query networkRuleSet --name <registry>`. --*Potential solutions*: Remove virtual network rules, or add the current client IP address to the allowed list. --## CONNECTIVITY_CHALLENGE_ERROR --This error means that the challenge endpoint of the target registry did not issue a challenge. --*Potential solutions*: Try again after some time. If the error persists, open an issue at https://aka.ms/acr/issues. --## CONNECTIVITY_AAD_LOGIN_ERROR --This error means that the challenge endpoint of the target registry issued a challenge, but the registry does not support Microsoft Entra authentication. --*Potential solutions*: Try a different way to authenticate, for example, with admin credentials. If users need to authenticate using Microsoft Entra ID, open an issue at https://aka.ms/acr/issues. --## CONNECTIVITY_REFRESH_TOKEN_ERROR --This error means that the registry login server did not respond with a refresh token, so access to the target registry was denied. This error can occur if the user does not have the right permissions on the registry or if the user credentials for the Azure CLI are stale. --*Potential solutions*: Verify if the user has the right permissions on the registry; run `az login` to refresh permissions, tokens, and credentials. --## CONNECTIVITY_ACCESS_TOKEN_ERROR --This error means that the registry login server did not respond with an access token, so that the access to the target registry was denied. This error can occur if the user does not have the right permissions on the registry or if the user credentials for the Azure CLI are stale. --*Potential solutions*: Verify if the user has the right permissions on the registry; run `az login` to refresh permissions, tokens, and credentials. --## CONNECTIVITY_SSL_ERROR --This error means that the client was unable to establish a secure connection to the container registry. This error generally occurs if you're running or using a proxy server. --*Potential solutions*: More information on working behind a proxy can be [found here](/cli/azure/use-cli-effectively). --## LOGIN_SERVER_ERROR --This error means that the CLI was unable to find the login server of the given registry, and no default suffix was found for the current cloud. This error can occur if the registry does not exist, if the user does not have the right permissions on the registry, if the registry's cloud and the current Azure CLI cloud do not match, or if the Azure CLI version is obsolete. --*Potential solutions*: Verify that the spelling is correct and that the registry exists; verify that user has the right permissions on the registry, and that the clouds of the registry and the CLI environment match; update Azure CLI to the latest version. --## NOTARY_VERSION_ERROR --This error means that the CLI is not compatible with the currently installed version of Docker/Notary. 
To resolve this issue, manually replace the Notary client in your Docker installation with a version earlier than 0.6.0. You can download and install a precompiled Notary binary earlier than 0.6.0 for 64-bit Linux or macOS from the Notary repository's releases page on GitHub. For Windows, download the .exe, place it in the default path (C:\Program Files\Docker\Docker\resources\bin), and rename it to notary.exe. --## CONNECTIVITY_TOOMANYREQUESTS_ERROR --This error means that too many requests were sent in a short period, causing the authentication system to block further requests to prevent overload. The error occurs when a configured limit in your registry service tier or environment is reached. We recommend waiting a moment before sending another request; this allows the authentication system's block to lift, and you can then try the request again. --## Next steps --For options to check the health of a registry, see [Check the health of an Azure container registry](container-registry-check-health.md). --See the [FAQ](container-registry-faq.yml) for frequently asked questions and other known issues about Azure Container Registry. ------<!-- LINKS - internal --> -[az-acr-check-health]: /cli/azure/acr#az_acr_check_health |
container-registry | Container Registry Helm Repos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md | - Title: Store Helm charts -description: Learn how to store Helm charts for your Kubernetes applications using repositories in Azure Container Registry ----- Previously updated : 10/31/2023---# Push and pull Helm charts to an Azure container registry --To quickly manage and deploy applications for Kubernetes, you can use the [open-source Helm package manager][helm]. With Helm, application packages are defined as [charts](https://helm.sh/docs/topics/charts/), which are collected and stored in a [Helm chart repository](https://helm.sh/docs/topics/chart_repository/). --This article shows you how to host Helm charts repositories in an Azure container registry, using Helm 3 commands and storing charts as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). In many scenarios, you would build and upload your own charts for the applications you develop. For more information on how to build your own Helm charts, see the [Chart Template Developer's Guide][develop-helm-charts]. You can also store an existing Helm chart from another Helm repo. --> [!IMPORTANT] -> This article has been updated with Helm 3 commands. Helm 3.7 includes changes to Helm CLI commands and OCI support introduced in earlier versions of Helm 3. By design `helm` moves forward with version. We recommend to use **3.7.2** or later. --## Helm 3 or Helm 2? --To store, manage, and install Helm charts, you use commands in the Helm CLI. Major Helm releases include Helm 3 and Helm 2. For details on the version differences, see the [version FAQ](https://helm.sh/docs/faq/). --Helm 3 should be used to host Helm charts in Azure Container Registry. With Helm 3, you: --* Can store and manage Helm charts in repositories in an Azure container registry -* Store Helm charts in your registry as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). Azure Container Registry provides GA support for OCI artifacts, including Helm charts. -* Authenticate with your registry using the `helm registry login` or `az acr login` command. -* Use `helm` commands to push, pull, and manage Helm charts in a registry -* Use `helm install` to install charts to a Kubernetes cluster from the registry. --### Feature support --Azure Container Registry supports specific Helm chart management features depending on whether you are using Helm 3 (current) or Helm 2 (deprecated). --| Feature | Helm 2 | Helm 3 | -| - | - | - | -| Manage charts using `az acr helm` commands | :heavy_check_mark: | | -| Store charts as OCI artifacts | | :heavy_check_mark: | -| Manage charts using `az acr repository` commands and the **Repositories** blade in Azure portal| | :heavy_check_mark: | ---> [!NOTE] -> As of Helm 3, [az acr helm][az-acr-helm] commands for use with the Helm 2 client are being deprecated. A minimum of 3 months' notice will be provided in advance of command removal. --### Chart version compatibility --The following Helm [chart versions](https://helm.sh/docs/topics/charts/#the-apiversion-field) can be stored in Azure Container Registry and are installable by the Helm 2 and Helm 3 clients. 
--| Version | Helm 2 | Helm 3 | -| - | - | - | -| apiVersion v1 | :heavy_check_mark: | :heavy_check_mark: | -| apiVersion v2 | | :heavy_check_mark: | --### Migrate from Helm 2 to Helm 3 --If you've previously stored and deployed charts using Helm 2 and Azure Container Registry, we recommend migrating to Helm 3. See: --* [Migrating Helm 2 to 3](https://helm.sh/docs/topics/v2_v3_migration/) in the Helm documentation. -* [Migrate your registry to store Helm OCI artifacts](#migrate-your-registry-to-store-helm-oci-artifacts), later in this article --## Prerequisites --The following resources are needed for the scenario in this article: --- **An Azure container registry** in your Azure subscription. If needed, create a registry using the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).-- **Helm client version 3.7 or later** - Run `helm version` to find your current version. For more information on how to install and upgrade Helm, see [Installing Helm][helm-install]. If you upgrade from an earlier version of Helm 3, review the [release notes](https://github.com/helm/helm/releases).-- **A Kubernetes cluster** where you will install a Helm chart. If needed, create an AKS cluster [using the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](/azure/aks/learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal).-- **Azure CLI version 2.0.71 or later** - Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].--## Set up Helm client --Use the `helm version` command to verify that you have installed Helm 3: --```console -helm version -``` --> [!NOTE] -> The version indicated must be at least 3.8.0, as OCI support in earlier versions was experimental. --Set the following environment variables for the target registry. The ACR_NAME is the registry resource name. If the ACR registry url is myregistry.azurecr.io, set the ACR_NAME to myregistry --```console -ACR_NAME=<container-registry-name> -``` --## Create a sample chart --Create a test chart using the following commands: --```console -mkdir helmtest --cd helmtest -helm create hello-world -``` --As a basic example, change directory to the `templates` folder and first delete the contents there: --```console -cd hello-world/templates -rm -rf * -``` --In the `templates` folder, create a file called `configmap.yaml`, by running the following command: --```console -cat <<EOF > configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: hello-world-configmap -data: - myvalue: "Hello World" -EOF -``` --For more about creating and running this example, see [Getting Started](https://helm.sh/docs/chart_template_guide/getting_started/) in the Helm Docs. --## Save chart to local archive --Change directory to the `hello-world` subdirectory. Then, run `helm package` to save the chart to a local archive. --In the following example, the chart is saved with the name and version in `Chart.yaml`. --```console -cd .. -helm package . -``` --Output is similar to: --```output -Successfully packaged chart and saved it to: /my/path/hello-world-0.1.0.tgz -``` --## Authenticate with the registry --Run `helm registry login` to authenticate with the registry. You may pass [registry credentials](container-registry-authentication.md) appropriate for your scenario, such as service principal credentials, user identity, or a repository-scoped token. 
--- Authenticate with a Microsoft Entra [service principal with pull and push permissions](container-registry-auth-service-principal.md#create-a-service-principal) (AcrPush role) to the registry.- ```azurecli - SERVICE_PRINCIPAL_NAME=<acr-helm-sp> - ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv) - PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME \ - --scopes $(az acr show --name $ACR_NAME --query id --output tsv) \ - --role acrpush \ - --query "password" --output tsv) - USER_NAME=$(az identity show -n $SERVICE_PRINCIPAL_NAME -g $RESOURCE_GROUP_NAME --subscription $SUBSCRIPTION_ID --query "clientId" -o tsv) - ``` -- Authenticate with your [individual Microsoft Entra identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) to push and pull Helm charts using an AD token.- ```azurecli - USER_NAME="00000000-0000-0000-0000-000000000000" - PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken) - ``` -- Authenticate with a [repository scoped token](container-registry-repository-scoped-permissions.md) (Preview).- ```azurecli - USER_NAME="helmtoken" - PASSWORD=$(az acr token create -n $USER_NAME \ - -r $ACR_NAME \ - --scope-map _repositories_admin \ - --only-show-errors \ - --query "credentials.passwords[0].value" -o tsv) - ``` -- Then supply the credentials to `helm registry login`.- ```bash - helm registry login $ACR_NAME.azurecr.io \ - --username $USER_NAME \ - --password $PASSWORD - ``` --## Push chart to registry as OCI artifact --Run the `helm push` command in the Helm 3 CLI to push the chart archive to the fully qualified target repository. Separate the words in the chart names and use only lower case letters and numbers. In the following example, the target repository namespace is `helm/hello-world`, and the chart is tagged `0.1.0`: --```console -helm push hello-world-0.1.0.tgz oci://$ACR_NAME.azurecr.io/helm -``` --After a successful push, output is similar to: --```output -Pushed: <registry>.azurecr.io/helm/hello-world:0.1.0 -digest: sha256:5899db028dcf96aeaabdadfa5899db02589b2899b025899b059db02 -``` --## List charts in the repository --As with images stored in an Azure container registry, you can use [az acr repository][az-acr-repository] commands to show the repositories hosting your charts, and chart tags and manifests. --For example, run [az acr repository show][az-acr-repository-show] to see the properties of the repo you created in the previous step: --```azurecli -az acr repository show \ - --name $ACR_NAME \ - --repository helm/hello-world -``` --Output is similar to: --```output -{ - "changeableAttributes": { - "deleteEnabled": true, - "listEnabled": true, - "readEnabled": true, - "writeEnabled": true - }, - "createdTime": "2021-10-05T12:11:37.6701689Z", - "imageName": "helm/hello-world", - "lastUpdateTime": "2021-10-05T12:11:37.7637082Z", - "manifestCount": 1, - "registry": "mycontainerregistry.azurecr.io", - "tagCount": 1 -} -``` --Run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command to see details of the chart stored in the repository. For example: --```azurecli -az acr manifest list-metadata \ - --registry $ACR_NAME \ - --name helm/hello-world -``` --Output, abbreviated in this example, shows a `configMediaType` of `application/vnd.cncf.helm.config.v1+json`: --```output -[ - { - [...] 
- "configMediaType": "application/vnd.cncf.helm.config.v1+json", - "createdTime": "2021-10-05T12:11:37.7167893Z", - "digest": "sha256:0c03b71c225c3ddff53660258ea16ca7412b53b1f6811bf769d8c85a1f0663ee", - "imageSize": 3301, - "lastUpdateTime": "2021-10-05T12:11:37.7167893Z", - "mediaType": "application/vnd.oci.image.manifest.v1+json", - "tags": [ - "0.1.0" - ] -``` --## Install Helm chart --Run `helm install` to install the Helm chart you pushed to the registry. The chart tag is passed using the `--version` parameter. Specify a release name such as *myhelmtest*, or pass the `--generate-name` parameter. For example: --```console -helm install myhelmtest oci://$ACR_NAME.azurecr.io/helm/hello-world --version 0.1.0 -``` --Output after successful chart installation is similar to: --```console -NAME: myhelmtest -LAST DEPLOYED: Tue Oct 4 16:59:51 2021 -NAMESPACE: default -STATUS: deployed -REVISION: 1 -TEST SUITE: None -``` --To verify the installation, run the `helm get manifest` command. --```console -helm get manifest myhelmtest -``` --The command returns the YAML data in your `configmap.yaml` template file. --Run `helm uninstall` to uninstall the chart release on your cluster: --```console -helm uninstall myhelmtest -``` --## Pull chart to local archive --You can optionally pull a chart from the container registry to a local archive using `helm pull`. The chart tag is passed using the `--version` parameter. If a local archive exists at the current path, this command overwrites it. --```console -helm pull oci://$ACR_NAME.azurecr.io/helm/hello-world --version 0.1.0 -``` --## Delete chart from the registry --To delete a chart from the container registry, use the [az acr repository delete][az-acr-repository-delete] command. Run the following command and confirm the operation when prompted: --```azurecli -az acr repository delete --name $ACR_NAME --image helm/hello-world:0.1.0 -``` --## Migrate your registry to store Helm OCI artifacts --If you previously set up your Azure container registry as a chart repository using Helm 2 and the `az acr helm` commands, we recommend that you [upgrade][helm-install] to the Helm 3 client. Then, follow these steps to store the charts as OCI artifacts in your registry. --> [!IMPORTANT] -> * After you complete migration from a Helm 2-style (index.yaml-based) chart repository to OCI artifact repositories, use the Helm CLI and `az acr repository` commands to manage the charts. See previous sections in this article. -> * The Helm OCI artifact repositories are not discoverable using Helm commands such as `helm search` and `helm repo list`. For more information about Helm commands used to store charts as OCI artifacts, see the [Helm documentation](https://helm.sh/docs/topics/registries/). --### Enable OCI support (enabled by default in Helm v3.8.0) --Ensure that you are using the Helm 3 client: --```console -helm version -``` --If you are using Helm v3.8.0 or higher, this is enabled by default. If you are using a lower version, you can enable OCI support setting the environment variable: --```console -export HELM_EXPERIMENTAL_OCI=1 -``` --### List current charts --List the charts currently stored in the registry, here named *myregistry*: --```console -helm search repo myregistry -``` --Output shows the charts and chart versions: --``` -NAME CHART VERSION APP VERSION DESCRIPTION -myregistry/ingress-nginx 3.20.1 0.43.0 Ingress controller for Kubernetes... -myregistry/wordpress 9.0.3 5.3.2 Web publishing platform for building... -[...] 
-``` --### Pull chart archives locally --For each chart in the repo, pull the chart archive locally, and take note of the filename: --```console -helm pull myregistry/ingress-nginx -ls *.tgz -``` --A local chart archive such as `ingress-nginx-3.20.1.tgz` is created. --### Push charts as OCI artifacts to registry --Log in to the registry: --```azurecli -az acr login --name $ACR_NAME -``` --Push each chart archive to the registry. Example: --```console -helm push ingress-nginx-3.20.1.tgz oci://$ACR_NAME.azurecr.io/helm -``` --After pushing a chart, confirm it is stored in the registry: --```azurecli -az acr repository list --name $ACR_NAME -``` --After pushing all of the charts, optionally remove the Helm 2-style chart repository from the registry. Doing so reduces storage in your registry: --```console -helm repo remove $ACR_NAME -``` --## Next steps --* For more information on how to create and deploy Helm charts, see [Developing Helm charts][develop-helm-charts]. -* Learn more about installing applications with Helm in [Azure Kubernetes Service (AKS)](/azure/aks/kubernetes-helm). -* Helm charts can be used as part of the container build process. For more information, see [Use Azure Container Registry Tasks][acr-tasks]. --<!-- LINKS - external --> -[helm]: https://helm.sh/ -[helm-install]: https://helm.sh/docs/intro/install/ -[develop-helm-charts]: https://helm.sh/docs/chart_template_guide/ --<!-- LINKS - internal --> -[azure-cli-install]: /cli/azure/install-azure-cli -[aks-quickstart]: ../aks/kubernetes-walkthrough.md -[acr-bestpractices]: container-registry-best-practices.md -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-helm]: /cli/azure/acr/helm -[az-acr-repository]: /cli/azure/acr/repository -[az-acr-repository-show]: /cli/azure/acr/repository#az_acr_repository_show -[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete -[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata -[acr-tasks]: container-registry-tasks-overview.md |
container-registry | Container Registry Image Formats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-formats.md | - Title: Supported content formats -description: Learn about content formats supported by Azure Container Registry, including Docker-compatible container images, Helm charts, OCI images, and OCI artifacts. ---- Previously updated : 10/31/2023---# Content formats supported in Azure Container Registry --Use a private repository in Azure Container Registry to manage one of the following content formats. --## Docker-compatible container images --The following Docker container image formats are supported: --* [Docker Image Manifest V2, Schema 1](https://docs.docker.com/registry/spec/manifest-v2-1/) --* [Docker Image Manifest V2, Schema 2](https://docs.docker.com/registry/spec/manifest-v2-2/) - includes Manifest Lists which allow registries to store [multi-architecture images](push-multi-architecture-images.md) under a single `image:tag` reference --## OCI images --Azure Container Registry supports images that meet the [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md), including the optional [image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md) specification. Packaging formats include [Singularity Image Format (SIF)](https://github.com/sylabs/sif). --## OCI artifacts --Azure Container Registry supports the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec), a vendor-neutral, cloud-agnostic spec to store, share, secure, and deploy container images and other content types (artifacts). The specification allows a registry to store a wide range of artifacts in addition to container images. You use tooling appropriate to the artifact to push and pull artifacts. For examples, see: --* [Push and pull an OCI artifact using an Azure container registry](container-registry-manage-artifact.md) -* [Push and pull Helm charts to an Azure container registry](container-registry-helm-repos.md) --To learn more about OCI artifacts, see the [OCI Registry as Storage (ORAS)](https://github.com/deislabs/oras) repo and the [OCI Artifacts](https://github.com/opencontainers/artifacts) repo on GitHub. --## Helm charts --Azure Container Registry can host repositories for [Helm charts](https://helm.sh/), a packaging format used to quickly manage and deploy applications for Kubernetes. [Helm client](https://docs.helm.sh/using_helm/#installing-helm) version 3 is recommended. See [Push and pull Helm charts to an Azure container registry](container-registry-helm-repos.md). --## Next steps --* See how to [push and pull](container-registry-get-started-docker-cli.md) images with Azure Container Registry. --* Use [ACR tasks](container-registry-tasks-overview.md) to build and test container images. --* Use the [Moby BuildKit](https://github.com/moby/buildkit) to build and package containers in OCI format. --* Set up a [Helm repository](container-registry-helm-repos.md) hosted in Azure Container Registry. -- |
container-registry | Container Registry Image Lock | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md | - Title: Lock images -description: Set attributes for a container image or repository so it can't be deleted or overwritten in an Azure container registry. ----- Previously updated : 10/31/2023---# Lock a container image in an Azure container registry --In an Azure container registry, you can lock an image version or a repository so that it can't be deleted or updated. To lock an image or a repository, update its attributes using the Azure CLI command [az acr repository update][az-acr-repository-update]. --This article requires that you run the Azure CLI in Azure Cloud Shell or locally (version 2.0.55 or later recommended). Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --> [!IMPORTANT] -> This article doesn't apply to locking an entire registry, for example, using **Settings > Locks** in the Azure portal, or `az lock` commands in the Azure CLI. Locking a registry resource doesn't prevent you from creating, updating, or deleting data in repositories. Locking a registry only affects management operations such as adding or deleting replications, or deleting the registry itself. More information in [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md). --## Scenarios --By default, a tagged image in Azure Container Registry is *mutable*, so with appropriate permissions you can repeatedly update and push an image with the same tag to a registry. Container images can also be [deleted](container-registry-delete.md) as needed. This behavior is useful when you develop images and need to maintain a size for your registry. --However, when you deploy a container image to production, you might need an *immutable* container image. An immutable image is one that you can't accidentally delete or overwrite. --See [Recommendations for tagging and versioning container images](container-registry-image-tag-version.md) for strategies to tag and version images in your registry. --Use the [az acr repository update][az-acr-repository-update] command to set repository attributes so you can: --* Lock an image version, or an entire repository --* Protect an image version or repository from deletion, but allow updates --* Prevent read (pull) operations on an image version, or an entire repository --See the following sections for examples. --## Lock an image or repository --### Show the current repository attributes --To see the current attributes of a repository, run the following [az acr repository show][az-acr-repository-show] command: --```azurecli -az acr repository show \ - --name myregistry --repository myrepo \ - --output jsonc -``` --### Show the current image attributes --To see the current attributes of a tag, run the following [az acr repository show][az-acr-repository-show] command: --```azurecli -az acr repository show \ - --name myregistry --image myrepo:tag \ - --output jsonc -``` --### Lock an image by tag --To lock the *myrepo:tag* image in *myregistry*, run the following [az acr repository update][az-acr-repository-update] command: --```azurecli -az acr repository update \ - --name myregistry --image myrepo:tag \ - --write-enabled false -``` --### Lock an image by manifest digest --To lock a *myrepo* image identified by manifest digest (SHA-256 hash, represented as `sha256:...`), run the following command. 
(To find the manifest digest associated with one or more image tags, run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command.) --```azurecli -az acr repository update \ - --name myregistry --image myrepo@sha256:123456abcdefg \ - --write-enabled false -``` --### Lock a repository --To lock the *myrepo* repository and all images in it, run the following command: --```azurecli -az acr repository update \ - --name myregistry --repository myrepo \ - --write-enabled false -``` --### Prevent a repository from being listed --To hide the *myrepo* repository from repository listings, set its `list-enabled` attribute to `false` by running the [az acr repository update][az-acr-repository-update] command. --```azurecli -az acr repository update \ - --name myregistry --repository myrepo \ - --list-enabled false -``` --### Show tags of images with listing disabled - -To query the tags of images in the repository after setting `--list-enabled false`, run the following command: --```azurecli -az acr repository show-manifests \ - --name myregistry --repository myrepo \ - --query "[?listEnabled==null].tags" \ - --output table -``` --## Check image attributes for a tag and its corresponding manifest --> [!NOTE] -> * The changeable attributes of tags and manifests are managed separately. That is, setting the `deleteEnabled=false` attribute on a tag doesn't set it on the corresponding manifest. -->* Query the attributes using the following script: --```bash -registry="myregistry" -repo="myrepo" -tag="mytag" --az login -az acr repository show -n $registry --repository $repo -az acr manifest show-metadata -r $registry -n "$repo:$tag" -digest=$(az acr manifest show-metadata -r $registry -n "$repo:$tag" --query digest -o tsv) -az acr manifest show-metadata -r $registry -n "$repo@$digest" -``` --> [!NOTE] -> If the image attributes are set with `writeEnabled=false` or `deleteEnabled=false`, image deletion is blocked. --## Protect an image or repository from deletion --### Protect an image from deletion --To allow the *myrepo:tag* image to be updated but not deleted, run the following command: --```azurecli -az acr repository update \ - --name myregistry --image myrepo:tag \ - --delete-enabled false --write-enabled true -``` --### Protect a repository from deletion --The following command sets the *myrepo* repository so it can't be deleted. Individual images can still be updated or deleted. 
--```azurecli -az acr repository update \ - --name myregistry --repository myrepo \ - --delete-enabled false --write-enabled true -``` --## Prevent read operations on an image or repository --To prevent read (pull) operations on the *myrepo:tag* image, run the following command: --```azurecli -az acr repository update \ - --name myregistry --image myrepo:tag \ - --read-enabled false -``` --To prevent read operations on all images in the *myrepo* repository, run the following command: --```azurecli -az acr repository update \ - --name myregistry --repository myrepo \ - --read-enabled false -``` --## Unlock an image or repository --To restore the default behavior of the *myrepo:tag* image so that it can be deleted and updated, run the following command: --```azurecli -az acr repository update \ - --name myregistry --image myrepo:tag \ - --delete-enabled true --write-enabled true -``` --To restore the default behavior of the *myrepo* repository, enabling individual images to be deleted and updated, run the following command: --```azurecli -az acr repository update \ - --name myregistry --repository myrepo \ - --delete-enabled true --write-enabled true -``` --However, if there is a lock on the manifest, you need to run an additional command to unlock the manifest. - -```azurecli -az acr repository update \ - --name myregistry --image $repo@$digest \ - --delete-enabled true --write-enabled true -``` --## Next steps --In this article, you learned about using the [az acr repository update][az-acr-repository-update] command to prevent deletion or updating of image versions in a repository. To set additional attributes, see the [az acr repository update][az-acr-repository-update] command reference. --To see the attributes set for an image version or repository, use the [az acr repository show][az-acr-repository-show] command. --For details about delete operations, see [Delete container images in Azure Container Registry][container-registry-delete]. --<!-- LINKS - Internal --> -[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata -[az-acr-repository-update]: /cli/azure/acr/repository#az_acr_repository_update -[az-acr-repository-show]: /cli/azure/acr/repository#az_acr_repository_show -[azure-cli]: /cli/azure/install-azure-cli -[container-registry-delete]: container-registry-delete.md |
container-registry | Container Registry Image Tag Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-tag-version.md | - Title: Image tag best practices -description: Best practices for tagging and versioning Docker container images when pushing images to and pulling images from an Azure container registry -- Previously updated : 10/31/2023-----# Recommendations for tagging and versioning container images --When pushing container images to a container registry and then deploying them, you need a strategy for image tagging and versioning. This article discusses two approaches and where each fits during the container lifecycle: --* **Stable tags** - Tags that you reuse, for example, to indicate a major or minor version such as *mycontainerimage:1.0*. -* **Unique tags** - A different tag for each image you push to a registry, such as *mycontainerimage:abc123*. --## Stable tags --**Recommendation**: Use stable tags to maintain **base images** for your container builds. Avoid deployments with stable tags, because those tags continue to receive updates and can introduce inconsistencies in production environments. --*Stable tags* mean a developer, or a build system, can continue to pull a specific tag, which continues to get updates. Stable doesn't mean the contents are frozen. Rather, stable implies the image should be stable for the intent of that version. To stay "stable", it might be serviced to apply security patches or framework updates. --### Example --A framework team ships version 1.0. They know they'll ship updates, including minor updates. To support stable tags for a given major and minor version, they have two sets of stable tags. --* `:1` - a stable tag for the major version. `1` represents the "newest" or "latest" 1.* version. -* `:1.0` - a stable tag for version 1.0, allowing a developer to bind to updates of 1.0, and not be rolled forward to 1.1 when it is released. --When base image updates are available, or any type of servicing release of the framework, images with the stable tags are updated to the newest digest that represents the most current stable release of that version. --In this case, both the major and minor tags are continually being serviced. From a base image scenario, this allows the image owner to provide serviced images. --### Delete untagged manifests --If an image with a stable tag is updated, the previously tagged image is untagged, resulting in an orphaned image. The previous image's manifest and unique layer data remain in the registry. To maintain your registry size, you can periodically delete untagged manifests resulting from stable image updates. For example, [auto-purge](container-registry-auto-purge.md) untagged manifests older than a specified duration, or set a [retention policy](container-registry-retention-policy.md) for untagged manifests. --## Unique tags --**Recommendation**: Use unique tags for **deployments**, especially in an environment that could scale on multiple nodes. You likely want deliberate deployments of a consistent version of components. If your container restarts or an orchestrator scales out more instances, your hosts won't accidentally pull a newer version, inconsistent with the other nodes. --Unique tagging simply means that every image pushed to a registry has a unique tag. Tags are not reused. 
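For example, a CI pipeline might tag every image it pushes with an identifier that's never reused, such as a build ID (a hypothetical sketch; *myregistry*, *myimage*, and the `$BUILD_ID` variable are placeholders):

```console
# Tag the locally built image with a unique, per-build tag and push it
docker tag myimage myregistry.azurecr.io/myimage:build-$BUILD_ID
docker push myregistry.azurecr.io/myimage:build-$BUILD_ID
```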
There are several patterns you can follow to generate unique tags, including: --* **Date-time stamp** - This approach is fairly common, since you can clearly tell when the image was built. But, how to correlate it back to your build system? Do you have to find the build that was completed at the same time? What time zone are you in? Are all your build systems calibrated to UTC? -* **Git commit** - This approach works until you start supporting base image updates. If a base image update happens, your build system kicks off with the same Git commit as the previous build. However, the base image has new content. In general, a Git commit provides a *semi*-stable tag. -* **Manifest digest** - Each container image pushed to a container registry is associated with a manifest, identified by a unique SHA-256 hash, or digest. While unique, the digest is long, difficult to read, and uncorrelated with your build environment. -* **Build ID** - This option may be best since it's likely incremental, and it allows you to correlate back to the specific build to find all the artifacts and logs. However, like a manifest digest, it might be difficult for a human to read. -- If your organization has several build systems, prefixing the tag with the build system name is a variation on this option: `<build-system>-<build-id>`. For example, you could differentiate builds from the API team's Jenkins build system and the web team's Azure Pipelines build system. --### Lock deployed image tags --As a best practice, we recommend that you [lock](container-registry-image-lock.md) any deployed image tag, by setting its `write-enabled` attribute to `false`. This practice prevents you from inadvertently removing an image from the registry and possibly disrupting your deployments. You can include the locking step in your release pipeline. --Locking a deployed image still allows you to remove other, undeployed images from your registry using Azure Container Registry features to maintain your registry. For example, [auto-purge](container-registry-auto-purge.md) untagged manifests or unlocked images older than a specified duration, or set a [retention policy](container-registry-retention-policy.md) for untagged manifests. --## Next steps --For a more detailed discussion of the concepts in this article, see the blog post [Docker Tagging: Best practices for tagging and versioning docker images](https://stevelasker.blog/2018/03/01/docker-tagging-best-practices-for-tagging-and-versioning-docker-images/). --To help maximize the performance and cost-effective use of your Azure container registry, see [Best practices for Azure Container Registry](container-registry-best-practices.md). --<!-- IMAGES --> ---<!-- LINKS - Internal --> - |
container-registry | Container Registry Import Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md | - Title: Import container images -description: Import container images to an Azure container registry by using Azure APIs, without needing to run Docker commands. --- Previously updated : 10/31/2023-----# Import container images to a container registry --You can easily import (copy) container images to an Azure container registry, without using Docker commands. For example, import images from a development registry to a production registry, or copy base images from a public registry. --Azure Container Registry handles many common scenarios to copy images and other artifacts from an existing registry: --* Import images from a public registry --* Import images or OCI artifacts including Helm 3 charts from another Azure container registry, in the same, or a different Azure subscription or tenant --* Import from a non-Azure private container registry --Image import into an Azure container registry has the following benefits over using Docker CLI commands: --* If your client environment doesn't need a local Docker installation, you can Import any container image, regardless of the supported OS type. --* If you import multi-architecture images (such as official Docker images), images for all architectures and platforms specified in the manifest list get copied. --* If you have access to the target registry, you don't require the registry's public endpoint. --> [!IMPORTANT] ->* Importing images requires the external registry support [RFC 7233](https://www.rfc-editor.org/rfc/rfc7233#section-2.3). We recommend using a registry that supports RFC 7233 ranges while using az acr import command with the registry URI to avoid failures. --## Limitations --* The maximum number of manifests for an imported image is 50. -* The maximum layer size for an image imported from a public registry is 2 GiB. --### [Azure CLI](#tab/azure-cli) --To import container images, this article requires that you run the Azure CLI in Azure Cloud Shell or locally (version 2.0.55 or later recommended). Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --### [Azure PowerShell](#tab/azure-powershell) --To import container images, this article requires that you run Azure PowerShell in Azure Cloud Shell or locally (version 5.9.0 or later recommended). Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install the Azure Az PowerShell module][install-the-azure-az-powershell-module]. -----> [!IMPORTANT] -> Changes to image import between two Azure container registries have been introduced as of January 2021: -> * Import to or from a network-restricted Azure container registry requires the restricted registry to [**allow access by trusted services**](allow-access-trusted-services.md) to bypass the network. By default, the setting is enabled, allowing import. If the setting isn't enabled in a newly created registry with a private endpoint or with registry firewall rules, import will fail. -> * In an existing network-restricted Azure container registry that is used as an import source or target, enabling this network security feature is optional but recommended. --## Prerequisites --### [Azure CLI](#tab/azure-cli) --If you don't already have an Azure container registry, create a registry. 
For steps, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md). --### [Azure PowerShell](#tab/azure-powershell) --If you don't already have an Azure container registry, create a registry. For steps, see [Quickstart: Create a private container registry using Azure PowerShell](container-registry-get-started-powershell.md). ----To import an image to an Azure container registry, your identity must have write permissions to the target registry (at least Contributor role, or a custom role that allows the importImage action). See [Azure Container Registry roles and permissions](container-registry-roles.md#custom-roles). --## Import from a public registry --> [!IMPORTANT] -> To import from a public registry to a network-restricted Azure container registry requires the restricted registry to [**allow access by trusted services**](allow-access-trusted-services.md) to bypass the network.By default, the setting is enabled, allowing import. If the setting isn't enabled in a newly created registry with a private endpoint or with registry firewall rules, import will fail. --### Import from Docker Hub --### [Azure CLI](#tab/azure-cli) --For example, use the [az acr import][az-acr-import] command to import the multi-architecture `hello-world:latest` image from Docker Hub to a registry named *myregistry*. Because `hello-world` is an official image from Docker Hub, this image is in the default `library` repository. Include the repository name and optionally a tag in the value of the `--source` image parameter. (You can optionally identify an image by its manifest digest instead of by tag, which guarantees a particular version of an image.) --```azurecli -az acr import \ - --name myregistry \ - --source docker.io/library/hello-world:latest \ - --image hello-world:latest -``` --You can verify that multiple manifests are associated with this image by running the [az acr manifest list-metadata](/cli/azure/acr/manifest#az-acr-manifest-list-metadata) command: --```azurecli -az acr manifest list-metadata \ - --name hello-world \ - --registry myregistry -``` --To import an artifact by digest without adding a tag: --```azurecli -az acr import \ - --name myregistry \ - --source docker.io/library/hello-world@sha256:abc123 \ - --repository hello-world -``` --If you have a [Docker Hub account](https://www.docker.com/pricing), we recommend that you use the credentials when importing an image from Docker Hub. Pass the Docker Hub user name and the password or a [personal access token](https://docs.docker.com/docker-hub/access-tokens/) as parameters to `az acr import`. The following example imports a public image from the `tensorflow` repository in Docker Hub, using Docker Hub credentials: --```azurecli -az acr import \ - --name myregistry \ - --source docker.io/tensorflow/tensorflow:latest-gpu \ - --image tensorflow:latest-gpu - --username <Docker Hub user name> - --password <Docker Hub token> -``` --### [Azure PowerShell](#tab/azure-powershell) --For example, use the [Import-AzContainerRegistryImage][import-azcontainerregistryimage] command to import the multi-architecture `hello-world:latest` image from Docker Hub to a registry named *myregistry*. Because `hello-world` is an official image from Docker Hub, this image is in the default `library` repository. Include the repository name and optionally a tag in the value of the `-SourceImage` parameter. 
(You can optionally identify an image by its manifest digest instead of by tag, which guarantees a particular version of an image.) --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io -SourceImage library/hello-world:latest -``` --You can verify that multiple manifests are associated with this image by running the `Get-AzContainerRegistryManifest` cmdlet: --```azurepowershell -Get-AzContainerRegistryManifest -RepositoryName library/hello-world -RegistryName myregistry -``` --If you have a [Docker Hub account](https://www.docker.com/pricing), we recommend that you use the credentials when importing an image from Docker Hub. Pass the Docker Hub user name and the password or a [personal access token](https://docs.docker.com/docker-hub/access-tokens/) as parameters to `Import-AzContainerRegistryImage`. The following example imports a public image from the `tensorflow` repository in Docker Hub, using Docker Hub credentials: --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io -SourceImage tensorflow/tensorflow:latest-gpu -Username <Docker Hub user name> -Password <Docker Hub token> -``` ----### Import from Microsoft Container Registry --For example, import the `ltsc2019` Windows Server Core image from the `windows` repository in Microsoft Container Registry. --### [Azure CLI](#tab/azure-cli) --```azurecli -az acr import \ - --name myregistry \ - --source mcr.microsoft.com/windows/servercore:ltsc2019 \ - --image servercore:ltsc2019 -``` --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri mcr.microsoft.com -SourceImage windows/servercore:ltsc2019 -``` ----## Import from an Azure container registry in the same AD tenant --You can import an image from an Azure container registry in the same AD tenant using integrated Microsoft Entra permissions. --* Your identity must have Microsoft Entra permissions to read from the source registry (Reader role) and to import to the target registry (Contributor role, or a [custom role](container-registry-roles.md#custom-roles) that allows the importImage action). --* The registry can be in the same or a different Azure subscription in the same Active Directory tenant. --* [Public access](container-registry-access-selected-networks.md#disable-public-network-access) to the source registry is disabled. If public access is disabled, specify the source registry by resource ID instead of by registry login server name. --* The source registry and/or the target registry with a private endpoint or registry firewall rules must ensure the restricted registry [allows trusted services](allow-access-trusted-services.md) to access the network. --### Import from a registry in the same subscription --For example, import the `aci-helloworld:latest` image from a source registry *mysourceregistry* to *myregistry* in the same Azure subscription. --### [Azure CLI](#tab/azure-cli) --```azurecli -az acr import \ - --name myregistry \ - --source mysourceregistry.azurecr.io/aci-helloworld:latest \ - --image aci-helloworld:latest -``` --The following example imports the `aci-helloworld:latest` image to *myregistry* from a source registry *mysourceregistry* in which access to the registry's public endpoint is disabled. Supply the resource ID of the source registry with the `--registry` parameter. 
Notice that the `--source` parameter specifies only the source repository and tag, not the registry login server name. --```azurecli -az acr import \ - --name myregistry \ - --source aci-helloworld:latest \ - --image aci-helloworld:latest \ - --registry /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sourceResourceGroup/providers/Microsoft.ContainerRegistry/registries/mysourceregistry -``` --The following example imports an image by manifest digest (SHA-256 hash, represented as `sha256:...`) instead of by tag: --```azurecli -az acr import \ - --name myregistry \ - --source mysourceregistry.azurecr.io/aci-helloworld@sha256:123456abcdefg -``` --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri mysourceregistry.azurecr.io -SourceImage aci-helloworld:latest -``` --The following example imports the `aci-helloworld:latest` image to *myregistry* from a source registry *mysourceregistry* in which access to the registry's public endpoint is disabled. Supply the resource ID of the source registry with the `--registry` parameter. Notice that the `--source` parameter specifies only the source repository and tag, not the registry login server name. --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryResourceId '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sourceResourceGroup/providers/Microsoft.ContainerRegistry/registries/mysourceregistry' -SourceImage aci-helloworld:latest -``` --The following example imports an image by manifest digest (SHA-256 hash, represented as `sha256:...`) instead of by tag: --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri mysourceregistry.azurecr.io -SourceImage aci-helloworld@sha256:123456abcdefg -``` ----### Import from a registry in a different subscription --> [!NOTE] -> To import an image from one registry to another, the source and target registries must ensure that both regions are registered for Azure Container Registry (ACR) under the subscriptionΓÇÖs resource providers. --### [Azure CLI](#tab/azure-cli) --In the following example, *mysourceregistry* is in a different subscription from *myregistry* in the same Active Directory tenant. Supply the resource ID of the source registry with the `--registry` parameter. Notice that the `--source` parameter specifies only the source repository and tag, not the registry login server name. --```azurecli -az acr import \ - --name myregistry \ - --source aci-helloworld:latest \ - --image aci-hello-world:latest \ - --registry /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sourceResourceGroup/providers/Microsoft.ContainerRegistry/registries/mysourceregistry -``` --### [Azure PowerShell](#tab/azure-powershell) --In the following example, *mysourceregistry* is in a different subscription from *myregistry* in the same Active Directory tenant. Supply the resource ID of the source registry with the `--registry` parameter. Notice that the `--source` parameter specifies only the source repository and tag, not the registry login server name. 
--```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryResourceId '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sourceResourceGroup/providers/Microsoft.ContainerRegistry/registries/mysourceregistry' -SourceImage aci-helloworld:latest -``` ----### Import from a registry using service principal credentials --To import from a registry that you can't access using integrated Active Directory permissions, you can use service principal credentials (if available) to the source registry. Supply the appID and password of an Active Directory [service principal](container-registry-auth-service-principal.md) that has ACRPull access to the source registry. Using a service principal is useful for build systems and other unattended systems that need to import images to your registry. ---### [Azure CLI](#tab/azure-cli) --```azurecli -az acr import \ - --name myregistry \ - --source sourceregistry.azurecr.io/sourcerrepo:tag \ - --image targetimage:tag \ - --username <SP_App_ID> \ - --password <SP_Passwd> -``` --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri sourceregistry.azurecr.io -SourceImage sourcerrepo:tag -Username <SP_App_ID> -Password <SP_Passwd> -``` ----## Import from an Azure container registry in a different AD tenant --To import from an Azure container registry in a different Microsoft Entra tenant, specify the source registry by login server name, and provide credentials that enable pull access to the registry. --* Cross-tenant import over public access disabled registry is not supported. --### Cross-tenant import with username and password --For example, use a [repository-scoped token](container-registry-repository-scoped-permissions.md) and password, or the appID and password of an Active Directory [service principal](container-registry-auth-service-principal.md) that has ACRPull access to the source registry. --### [Azure CLI](#tab/azure-cli) --```azurecli -az acr import \ - --name myregistry \ - --source sourceregistry.azurecr.io/sourcerrepo:tag \ - --image targetimage:tag \ - --username <SP_App_ID> \ - --password <SP_Passwd> -``` --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri sourceregistry.azurecr.io -SourceImage sourcerrepo:tag -Username <SP_App_ID> -Password <SP_Passwd> -``` ----### Cross-tenant import with access token --* Cross-tenant import over public access disabled registry is not supported. --To access the source registry using an identity in the source tenant that has registry permissions, you can get an access token: --### [Azure CLI](#tab/azure-cli) --```azurecli -# Login to Azure CLI with the identity, for example a user-assigned managed identity -az login --identity --username <identity_ID> --# Get access token returned by `az account get-access-token` -az account get-access-token -``` --In the target tenant, pass the access token as a password to the `az acr import` command. The source registry specifies the login server name. 
Notice that no username is needed in this command: --```azurecli -az acr import \ - --name myregistry \ - --source sourceregistry.azurecr.io/sourcerrepo:tag \ - --image targetimage:tag \ - --password <access-token> -``` --### [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -# Login to Azure PowerShell with the identity, for example a user-assigned managed identity -Connect-AzAccount -Identity -AccountId <identity_ID> --# Get access token returned by `Get-AzAccessToken` -Get-AzAccessToken -``` --In the target tenant, pass the access token as a password to the `Import-AzContainerRegistryImage` cmdlet. The source registry specifies login server name. Notice that no username is needed in this command: --```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri sourceregistry.azurecr.io -SourceImage sourcerrepo:tag -Password <access-token> -``` ----## Import from a non-Azure private container registry --Import an image from a non-Azure private registry by specifying credentials that enable pull access to the registry. For example, pull an image from a private Docker registry: --### [Azure CLI](#tab/azure-cli) --```azurecli -az acr import \ - --name myregistry \ - --source docker.io/sourcerepo/sourceimage:tag \ - --image sourceimage:tag \ - --username <username> \ - --password <password> -``` --### [Azure PowerShell](#tab/azure-powershell) -```azurepowershell -Import-AzContainerRegistryImage -RegistryName myregistry -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io/sourcerepo -SourceImage sourcerrepo:tag -Username <username> -Password <password> -``` -> [!NOTE] -> If you're importing from a non-Azure private registry with IP rules, [follow these steps.](container-registry-access-selected-networks.md) --### Troubleshoot Import Container Images --#### Symptoms and Causes -- `The remote server may not be RFC 7233 compliant`- - The [distribution-spec](https://github.com/opencontainers/distribution-spec/blob/main/spec.md) allows range header form of `Range: bytes=<start>-<end>`. However, the remote server may not be RFC 7233 compliant. -- `Unexpected response status code`- - Get an unexpected response status code from source repository when doing range query. -- `Unexpected length of body in response`- - The received content length does not match the size expected. Expected size is decided by blob size and range header. ----## Next steps --In this article, you learned about importing container images to an Azure container registry from a public registry or another private registry. --### [Azure CLI](#tab/azure-cli) --* For additional image import options, see the [az acr import][az-acr-import] command reference. --### [Azure PowerShell](#tab/azure-powershell) --* For additional image import options, see the [Import-AzContainerRegistryImage][import-azcontainerregistryimage] cmdlet reference. ----* Image import can help you move content to a container registry in a different Azure region, subscription, or Microsoft Entra tenant. For more information, see [Manually move a container registry to another region](manual-regional-move.md). --* [Disable artifact export](data-loss-prevention.md) from a network-restricted container registry. 
---<!-- LINKS - Internal --> -[az-login]: /cli/azure/reference-index#az_login -[az-acr-import]: /cli/azure/acr#az_acr_import -[azure-cli]: /cli/azure/install-azure-cli -[install-the-azure-az-powershell-module]: /powershell/azure/install-az-ps -[import-azcontainerregistryimage]: /powershell/module/az.containerregistry/import-azcontainerregistryimage |
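As a quick check after any of the imports described in the article above, you can list what landed in the target registry. A minimal sketch with placeholder registry and repository names:

```azurecli
# List repositories in the target registry
az acr repository list --name myregistry --output table

# List the tags now present in an imported repository
az acr repository show-tags --name myregistry --repository hello-world --output table
```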
container-registry | Container Registry Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-intro.md | - Title: Introduction to Azure Container Registry -description: Get basic information about the Azure service that provides cloud-based, managed container registries. -- Previously updated : 10/31/2023------# Introduction to Azure Container Registry --Azure Container Registry is a managed registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your container images and related artifacts. --Use container registries with your existing container development and deployment pipelines, or use Azure Container Registry tasks to build container images in Azure. Build on demand, or fully automate builds with triggers such as source code commits and base image updates. --To learn more about Docker and registry concepts, see the [Docker overview on Docker Docs](https://docs.docker.com/engine/docker-overview/) and [About registries, repositories, and images](container-registry-concepts.md). --## Use cases --Pull images from an Azure container registry to various deployment targets: --* *Scalable orchestration systems* that manage containerized applications across clusters of hosts, including [Kubernetes](https://kubernetes.io/docs/), [DC/OS](https://dcos.io/), and [Docker Swarm](https://docs.docker.com/get-started/swarm-deploy/). -* *Azure services* that support building and running applications at scale, such as [Azure Kubernetes Service (AKS)](/azure/aks/), [App Service](../app-service/index.yml), [Batch](../batch/index.yml), and [Service Fabric](/azure/service-fabric/). --Developers can also push to a container registry as part of a container development workflow. For example, you can target a container registry from a continuous integration and continuous delivery (CI/CD) tool such as [Azure Pipelines](/azure/devops/pipelines/ecosystems/containers/acr-template) or [Jenkins](https://jenkins.io/). --Configure Azure Container Registry tasks to automatically rebuild application images when their base images are updated, or automate image builds when your team commits code to a Git repository. Create multi-step tasks to automate building, testing, and patching container images in parallel in the cloud. --Azure provides tooling like the Azure CLI, the Azure portal, and API support to manage your container registries. Optionally, install the [Docker extension](https://code.visualstudio.com/docs/azure/docker) and the [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account) for Visual Studio Code. You can use these extensions to pull images from a container registry, push images to a container registry, or run Azure Container Registry tasks, all within Visual Studio Code. --## Key features --* **Registry service tiers**: Create one or more container registries in your Azure subscription. Registries are available in three tiers: [Basic, Standard, and Premium](container-registry-skus.md). Each tier supports webhook integration, registry authentication with Microsoft Entra ID, and delete functionality. -- Take advantage of local, network-close storage of your container images by creating a registry in the same Azure location as your deployments. Use the [geo-replication](container-registry-geo-replication.md) feature of Premium registries for advanced replication and container image distribution. 
--* **Security and access**: You log in to a registry by using the Azure CLI or the standard `docker login` command. Azure Container Registry transfers container images over HTTPS, and it supports TLS to help secure client connections. -- > [!IMPORTANT] - > As of January 13, 2020, Azure Container Registry requires all secure connections from servers and applications to use TLS 1.2. Enable TLS 1.2 by using any recent Docker client (version 18.03.0 or later). -- You [control access](container-registry-authentication.md) to a container registry by using an Azure identity, a Microsoft Entra [service principal](../active-directory/develop/app-objects-and-service-principals.md), or a provided admin account. Use Azure role-based access control (RBAC) to assign specific registry permissions to users or systems. -- Security features of the Premium service tier include [content trust](container-registry-content-trust.md) for image tag signing, and [firewalls and virtual networks (preview)](container-registry-vnet.md) to restrict access to the registry. Microsoft Defender for Cloud optionally integrates with Azure Container Registry to [scan images](/azure/container-registry/scan-images-defender) whenever you push an image to a registry. --* **Supported images and artifacts**: When images are grouped in a repository, each image is a read-only snapshot of a Docker-compatible container. Azure container registries can include both Windows and Linux images. You control image names for all your container deployments. -- Use standard [Docker commands](https://docs.docker.com/engine/reference/commandline/) to push images into a repository or pull an image from a repository. In addition to Docker container images, Azure Container Registry stores [related content formats](container-registry-image-formats.md) such as [Helm charts](container-registry-helm-repos.md) and images built to the [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md). --* **Automated image builds**: Use [Azure Container Registry tasks](container-registry-tasks-overview.md) to streamline building, testing, pushing, and deploying images in Azure. For example, use Azure Container Registry tasks to extend your development inner loop to the cloud by offloading `docker build` operations to Azure. Configure build tasks to automate your container OS and framework patching pipeline, and build images automatically when your team commits code to source control. -- [Multi-step tasks](container-registry-tasks-overview.md#multi-step-tasks) provide step-based task definition and execution for building, testing, and patching container images in the cloud. Task steps define individual build and push operations for container images. They can also define the execution of one or more containers, in which each step uses a container as its execution environment. --## Related content --* [Create a container registry by using the Azure portal](container-registry-get-started-portal.md) -* [Create a container registry by using the Azure CLI](container-registry-get-started-azure-cli.md) -* [Automate container builds and maintenance by using Azure Container Registry tasks](container-registry-tasks-overview.md) |
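As a brief, hedged illustration of the push workflow described above (registry and image names are placeholders), signing in with your Azure identity and pushing a local image looks like this:

```bash
# Sign in to the registry with your Azure identity, then tag and push a local image
az acr login --name myregistry
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1
```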
container-registry | Container Registry Java Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-java-quickstart.md | - Title: Quickstart - Build and push container images of the Java Spring Boot App to Azure Container Registry -description: Learn to build and push a containerized Java Spring Boot app to the Azure Container Registry using Maven and Jib plugin. -- Previously updated : 10/31/2023------# Quickstart: Build and push container images of the Java Spring Boot app to Azure Container Registry --You can use this Quickstart to build container images of Java Spring Boot app and push it to Azure Container Registry using Maven and Jib. Maven and Jib are one way of using developer tooling to interact with an Azure container registry. --## Prerequisites --* An Azure subscription; Sign up for a [free Azure account](https://azure.microsoft.com/pricing/free-trial) or activate [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) if you don't already have an Azure subscription. -* A supported Java Development Kit (JDK); For more information on available JDKs when developing on Azure, see [Java support on Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure). -* The [Azure CLI](/cli/azure/overview). -* The Apache's [Maven](http://maven.apache.org) build tool (Version 3 or above). -* A [Git](https://git-scm.com) client. -* A [Docker](https://www.docker.com) client. -* The [ACR Docker credential helper](https://github.com/Azure/acr-docker-credential-helper). --## Create and build a Spring Boot application on Docker --The following steps walk you through building a containerized Java Spring Boot web application and testing it locally. --1. From the command prompt, use the following command to clone the [Spring Boot on Docker Getting Started](https://github.com/spring-guides/gs-spring-boot-docker) sample project. -- ```bash - git clone https://github.com/spring-guides/gs-spring-boot-docker.git - ``` --1. Change directory to the complete project. -- ```bash - cd gs-spring-boot-docker/complete - ``` --1. Use Maven to build and run the sample app. -- ```bash - mvn package spring-boot:run - ``` --1. Test the web app by browsing to `http://localhost:8080`, or with the following `curl` command: -- ```bash - curl http://localhost:8080 - ``` --You should see the following message displayed: **Hello Docker World** --## Create an Azure Container Registry using the Azure CLI --Next, you'll create an Azure resource group and your ACR using the following steps: --1. Log in to your Azure account by using the following command: -- ```azurecli - az login - ``` --1. Specify the Azure subscription to use: -- ```azurecli - az account set -s <subscription ID> - ``` --1. Create a resource group for the Azure resources used in this tutorial. In the following command, be sure to replace the placeholders with your own resource name and a location such as `eastus`. -- ```azurecli - az group create \ - --name=<your resource group name> \ - --location=<location> - ``` --1. Create a private Azure container registry in the resource group, using the following command. Be sure to replace the placeholders with actual values. The tutorial pushes the sample app as a Docker image to this registry in later steps. 
-- ```azurecli - az acr create \ - --resource-group <your resource group name> \ - --location <location> \ - --name <your registry name> \ - --sku Basic - ``` --## Push your app to the container registry via Jib --Finally, you'll update your project configuration and use the command prompt to build and deploy your image. --> [!NOTE] -> To log in the Azure container registry that you just created, you will need to have the Docker daemon running. To install Docker on your machine, [here is the official Docker documentation](https://docs.docker.com/install/). --1. Log in to your Azure Container Registry from the Azure CLI using the following command. Be sure to replace the placeholder with your own registry name. -- ```azurecli - az config set defaults.acr=<your registry name> - az acr login - ``` -- The `az config` command sets the default registry name to use with `az acr` commands. --1. Navigate to the completed project directory for your Spring Boot application (for example, "*C:\SpringBoot\gs-spring-boot-docker\complete*" or "*/users/robert/SpringBoot/gs-spring-boot-docker/complete*"), and open the *pom.xml* file with a text editor. --1. Update the `<properties>` collection in the *pom.xml* file with the following XML. Replace the placeholder with your registry name, and add a `<jib-maven-plugin.version>` property with value `2.2.0`, or a newer version of the [jib-maven-plugin](https://github.com/GoogleContainerTools/jib/tree/master/jib-maven-plugin). -- ```xml - <properties> - <docker.image.prefix><your registry name>.azurecr.io</docker.image.prefix> - <java.version>1.8</java.version> - <jib-maven-plugin.version>2.2.0</jib-maven-plugin.version> - </properties> - ``` --1. Update the `<plugins>` collection in the *pom.xml* file so that the `<plugin>` element contains and an entry for the `jib-maven-plugin`, as shown in the following example. Note that we are using a base image from the Microsoft Container Registry (MCR): `mcr.microsoft.com/openjdk/jdk:11-ubuntu`, which contains an officially supported JDK for Azure. For other MCR base images with officially supported JDKs, see [Install the Microsoft Build of OpenJDK.](/java/openjdk/install) -- ```xml - <plugin> - <artifactId>jib-maven-plugin</artifactId> - <groupId>com.google.cloud.tools</groupId> - <version>${jib-maven-plugin.version}</version> - <configuration> - <from> - <image>mcr.microsoft.com/openjdk/jdk:11-ubuntu</image> - </from> - <to> - <image>${docker.image.prefix}/${project.artifactId}</image> - </to> - </configuration> - </plugin> - ``` --1. Navigate to the complete project directory for your Spring Boot application and run the following command to build the image and push the image to the registry: -- ```azurecli - az acr login && mvn compile jib:build - ``` --> [!NOTE] -> -> For security reasons, the credential created by `az acr login` is valid for 1 hour only. If you receive a *401 Unauthorized* error, you can run the `az acr login -n <your registry name>` command again to reauthenticate. --## Verify your container image --Congratulations! Now you have your containerized Java App build on Azure supported JDK pushed to your ACR. 
You can now test the image by deploying it to Azure App Service, or pulling it to local with command (replacing the placeholder): --```bash -docker pull <your registry name>.azurecr.io/gs-spring-boot-docker -``` --## Next steps --For other versions of the official Microsoft-supported Java base images, see: --* [Install the Microsoft Build of OpenJDK](/java/openjdk/install) --To learn more about Spring and Azure, continue to the Spring on Azure documentation center. --> [!div class="nextstepaction"] -> [Spring on Azure](/azure/developer/java/spring-framework) --### Additional Resources --For more information, see the following resources: --* [Azure for Java Developers](/azure/java) -* [Working with Azure DevOps and Java](/azure/devops/pipelines/ecosystems/java) -* [Spring Boot on Docker Getting Started](https://spring.io/guides/gs/spring-boot-docker/) -* [Spring Initializr](https://start.spring.io) -* [Deploy a Spring Boot Application to the Azure App Service](/azure/developer/java/spring-framework/deploy-spring-boot-java-app-on-linux#configure-maven-to-build-image-to-your-azure-container-registry) -* [Using a custom Docker image for Azure Web App on Linux](../app-service/tutorial-custom-container.md) |
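If you pull the image locally as shown above, one way to smoke-test it, assuming the sample app still listens on port 8080, is to run the container and call the endpoint (replace the registry placeholder):

```bash
# Run the pulled image locally and verify the HTTP endpoint responds
docker run --rm -d -p 8080:8080 <your registry name>.azurecr.io/gs-spring-boot-docker
curl http://localhost:8080
```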
container-registry | Container Registry Manage Artifact | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-manage-artifact.md | - Title: Manage OCI Artifacts and Supply Chain Artifacts with ORAS -description: A comprehensive guide on how to use Azure Container Registry to store, manage, and retrieve OCI and supply chain artifacts. -- Previously updated : 01/24/2024---#customer intent: As a developer, I want a comprehensive guide on using Azure Container Registry to manage OCI and supply chain artifacts so that I can effectively store and retrieve them. ---# Manage OCI Artifacts and Supply Chain Artifacts with ORAS --Azure container registry (ACR) helps you manage both the Open container initiative (OCI) artifacts and supply chain artifacts. This article guides you how to use ACR for managing OCI artifacts and supply chain artifacts effectively. Learn to store, manage, and retrieve both OCI artifacts and a graph of supply chain artifacts, including signatures, software bill of materials (SBOM), security scan results, and other types. --This article is divided into two main sections: --* [Push and pull OCI artifacts with ORAS](container-registry-manage-artifact.md#push-and-pull-oci-artifacts-with-oras) -* [Attach, push, and pull supply chain artifacts with ORAS](container-registry-manage-artifact.md#attach-push-and-pull-supply-chain-artifacts-with-oras) --## Prerequisites --* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI][az-acr-create]. -* **Azure CLI** - Version `2.29.1` or later is required. See [Install Azure CLI][azure-cli-install] for installation and/or upgrade. -* **ORAS CLI** - Version `v1.1.0` or later version is required. See: [ORAS installation][oras-install-docs]. -* **Docker (Optional)** - To complete the walkthrough, a container image is referenced. -You can use [Docker installed locally][docker-install] to build and push a container image, or use [`acr build`][az-acr-build] to build remotely in Azure. -While Docker Desktop isn't required, the `oras` cli utilizes the Docker desktop credential store for storing credentials. If Docker Desktop is installed, it must be running for `oras login`. --## Configure the registry --To configure your environment for easy command execution, follow these steps: --1. Set the `ACR_NAME` variable to your registry name. -2. Set the `REGISTRY` variable to `$ACR_NAME.azurecr.io`. -3. Set the `REPO` variable to your repository name. -4. Set the `TAG` variable to your desired tag. -5. Set the `IMAGE` variable to `$REGISTRY/${REPO}:$TAG`. --### Set environment variables --Configure a registry name, login credentials, a repository name, and tag to push and pull artifacts. The following example uses the `net-monitor` repository name and `v1` tag. Replace with your own repository name and tag. --```bash -ACR_NAME=myregistry -REGISTRY=$ACR_NAME.azurecr.io -REPO=net-monitor -TAG=v1 -IMAGE=$REGISTRY/${REPO}:$TAG -``` --### Sign in to a registry --Authenticate with the ACR, for allowing you to pull and push container images. --```azurecli -az login -az acr login -n $REGISTRY -``` --If Docker isn't available, you can utilize the AD token provided for authentication. Authenticate with your [individual Microsoft Entra identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) using an AD token. Always use "000..." 
for the `USER_NAME` as the token is parsed through the `PASSWORD` variable. --```azurecli -# Login to Azure -az login -``` --### Sign in with ORAS --Provide the credentials to `oras login`. --```bash -oras login $REGISTRY \ - --username $USER_NAME \ - --password $PASSWORD -``` --This setup enables you to seamlessly push and pull artifacts to and from your Azure Container Registry. Adjust the variables as needed for your specific configuration. --## Push and Pull OCI Artifacts with ORAS --You can use an [Azure container registry][acr-landing] to store and manage [Open Container Initiative (OCI) artifacts](container-registry-image-formats.md#oci-artifacts) as well as Docker and OCI container images. --To demonstrate this capability, this section shows how to use the [OCI Registry as Storage (ORAS)][oras-cli] CLI to push and pull OCI artifacts to/from an Azure container registry. You can manage various OCI artifacts in an Azure container registry using different command-line tools appropriate to each artifact. --> [!NOTE] -> ACR and ORAS support multiple authentication options for users and system automation. This article uses individual identity, using an Azure token. For more authentication options see [Authenticate with an Azure container registry.][acr-authentication] --### Push an artifact --A single file artifact that has no `subject` parent can be anything from a container image, a helm chart, a readme file for the repository. Reference artifacts can be anything from a signature, software bill of materials, scan reports, or other evolving types. Reference artifacts, described in [Attach, push, and pull supply chain artifacts](container-registry-manage-artifact.md#attach-push-and-pull-supply-chain-artifacts-with-oras) are artifacts that refer to another artifact. --#### Push a Single-File Artifact --For this example, create content that represents a markdown file: --```bash -echo 'Readme Content' > readme.md -``` --The following step pushes the `readme.md` file to `<myregistry>.azurecr.io/samples/artifact:readme`. -- The registry is identified with the fully qualified registry name `<myregistry>.azurecr.io` (all lowercase), followed by the namespace and repo: `/samples/artifact`.-- The artifact is tagged `:readme`, to identify it uniquely from other artifacts listed in the repo (`:latest, :v1, :v1.0.1`).-- Setting `--artifact-type readme/example` differentiates the artifact from a container image, which uses `application/vnd.oci.image.config.v1+json`.-- The `./readme.md` identifies the file uploaded, and the `:application/markdown` represents the [IANA `mediaType`][iana-mediatypes] of the file. - For more information, see [OCI Artifact Authors Guidance](https://github.com/opencontainers/artifacts/blob/main/artifact-authors.md). --Use the `oras push` command to push the file to your registry. 
--**Linux, WSL2 or macOS** --```bash -oras push $REGISTRY/samples/artifact:readme \ - --artifact-type readme/example \ - ./readme.md:application/markdown -``` --**Windows** --```cmd -.\oras.exe push $REGISTRY/samples/artifact:readme ^ - --artifact-type readme/example ^ - .\readme.md:application/markdown -``` --Output for a successful push is similar to the following output: --```console -Uploading 2fdeac43552b readme.md -Uploaded 2fdeac43552b readme.md -Pushed <myregistry>.azurecr.io/samples/artifact:readme -Digest: sha256:e2d60d1b171f08bd10e2ed171d56092e39c7bac1 --aec5d9dcf7748dd702682d53 -``` --#### Push a multi-file artifact --When OCI artifacts are pushed to a registry with ORAS, each file reference is pushed as a blob. To push separate blobs, reference the files individually, or collection of files by referencing a directory. -For more information how to push a collection of files, see [Pushing artifacts with multiple files.][oras-push-multifiles] --Create some documentation for the repository: --```bash -echo 'Readme Content' > readme.md -mkdir details/ -echo 'Detailed Content' > details/readme-details.md -echo 'More detailed Content' > details/readme-more-details.md -``` --Push the multi-file artifact: --**Linux, WSL2 or macOS** --```bash -oras push $REGISTRY/samples/artifact:readme \ - --artifact-type readme/example\ - ./readme.md:application/markdown\ - ./details -``` --**Windows** --```cmd -.\oras.exe push $REGISTRY/samples/artifact:readme ^ - --artifact-type readme/example ^ - .\readme.md:application/markdown ^ - .\details -``` --### Discover the manifest --To view the manifest created as a result of `oras push`, use `oras manifest fetch`: --```bash -oras manifest fetch --pretty $REGISTRY/samples/artifact:readme -``` --The output is similar to: --```json -{ - "mediaType": "application/vnd.oci.artifact.manifest.v1+json", - "artifactType": "readme/example", - "blobs": [ - { - "mediaType": "application/markdown", - "digest": "sha256:2fdeac43552b71eb9db534137714c7bad86b53a93c56ca96d4850c9b41b777fc", - "size": 15, - "annotations": { - "org.opencontainers.image.title": "readme.md" - } - }, - { - "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip", - "digest": "sha256:0d6c7434a34f6854f971487621426332e6c0fda08040b9e6cc8a93f354cee0b1", - "size": 189, - "annotations": { - "io.deis.oras.content.digest": "sha256:11eceb2e7ac3183ec9109003a7389468ec73ad5ceaec0c4edad0c1b664c5593a", - "io.deis.oras.content.unpack": "true", - "org.opencontainers.image.title": "details" - } - } - ], - "annotations": { - "org.opencontainers.artifact.created": "2023-01-10T14:44:06Z" - } -} -``` --### Pull an artifact --Create a clean directory for downloading. --```bash -mkdir ./download -``` --Run the `oras pull` command to pull the artifact from your registry. --```bash -oras pull -o ./download $REGISTRY/samples/artifact:readme -``` --### View the pulled files --```bash -tree ./download -``` --### Remove the artifact (optional) --To remove the artifact from your registry, use the `oras manifest delete` command. --```bash - oras manifest delete $REGISTRY/samples/artifact:readme -``` --## Attach, push, and pull supply chain artifacts with ORAS --To demonstrate this capability, this article shows how to use the [OCI Registry as Storage (ORAS)](https://oras.land) CLI to `push`, `discover`, and `pull` a graph of supply chain artifacts to an Azure container registry. 
-Storing individual (subject) OCI Artifacts are covered in [Push and pull OCI artifacts](container-registry-manage-artifact.md#push-and-pull-oci-artifacts-with-oras). --To store a graph of artifacts, a reference to a `subject` artifact is defined using the [OCI image manifest][oci-image-manifest], which is part of the [prerelease OCI 1.1 Distribution specification][oci-1_1-spec]. --### Push a container image --To associate a graph of artifacts with a container image using the Azure CLI: --You can build and push a container image, or skip this step if `$IMAGE` references an existing image in the registry. --```bash -az acr build -r $ACR_NAME -t $IMAGE https://github.com/wabbit-networks/net-monitor.git#main -``` --### Attaching a Signature --```bash -echo '{"artifact": "'${IMAGE}'", "signature": "jayden hancock"}' > signature.json -``` --#### Attach a signature to the registry, as a reference to the container image --The `oras attach` command creates a reference between the file (`./signature.json`) to the `$IMAGE`. The `--artifact-type` provides for differentiating artifacts, similar to file extensions that enable different file types. One or more files can be attached by specifying `[file]:[mediaType]`. --```bash -oras attach $IMAGE \ - --artifact-type signature/example \ - ./signature.json:application/json -``` --For more information on oras attach, see [ORAS documentation][oras-docs]. --### Attach a multi-file artifact as a reference --When OCI artifacts are pushed to a registry with ORAS, each file reference is pushed as a blob. To push separate blobs, reference the files individually, or collection of files by referencing a directory. -For more information how to push a collection of files, see [Pushing artifacts with multiple files][oras-push-multifiles]. --### Discovering artifact references --The [OCI v1.1 Specification][oci-spec] defines a [referrers API][oci-artifact-referrers] for discovering references to a `subject` artifact. The `oras discover` command can show the list of references to the container image. --Using `oras discover`, view the graph of artifacts now stored in the registry. --```bash -oras discover -o tree $IMAGE -``` --The output shows the beginning of a graph of artifacts, where the signature and docs are viewed as children of the container image. --```output -myregistry.azurecr.io/net-monitor:v1 -├── signature/example -│ └── sha256:555ea91f39e7fb30c06f3b7aa483663f067f2950dcb... -└── readme/example - └── sha256:1a118663d1085e229ff1b2d4d89b5f6d67911f22e55... -``` --### Creating Artifacts graphs --The OCI v1.1 Specification enables deep graphs, enabling signed software bill of materials (SBOM) and other artifact types. --Here's how to create and attach an SBOM to the registry: --#### Create a sample SBOM --```bash -echo '{"version": "0.0.0.0", "artifact": "'${IMAGE}'", "contents": "good"}' > sbom.json -``` --#### Attach a sample SBOM to the image in the registry --**Linux, WSL2 or macOS** --```bash -oras attach $IMAGE \ - --artifact-type sbom/example \ - ./sbom.json:application/json -``` --**Windows** --```cmd -.\oras.exe attach $IMAGE ^ - --artifact-type sbom/example ^ - ./sbom.json:application/json -``` --#### Sign the SBOM -->[!IMPORTANT] -> Microsoft recommends using a secure crypto signing tool, like [Notation][Notation] to sign the image and generate a signature for signing SBOMs. --Artifacts that are pushed as references, typically don't have tags as they're considered part of the `subject` artifact. 
To push a signature to an artifact that is a child of another artifact, use the `oras discover` with `--artifact-type` filtering to find the digest. This example uses a simple JSON signature for demonstration purposes. ---```bash -SBOM_DIGEST=$(oras discover -o json \ - --artifact-type sbom/example \ - $IMAGE | jq -r ".manifests[0].digest") -``` --Create a signature of an SBOM. --```bash -echo '{"artifact": "'$IMAGE@$SBOM_DIGEST'", "signature": "jayden hancock"}' > sbom-signature.json -``` --#### Attach the SBOM signature --```bash -oras attach $IMAGE@$SBOM_DIGEST \ - --artifact-type 'signature/example' \ - ./sbom-signature.json:application/json -``` --#### View the graph --```bash -oras discover -o tree $IMAGE -``` --Generates the following output: --```output -myregistry.azurecr.io/net-monitor:v1 -├── sbom/example -│   └── sha256:4f1843833c029ecf0524bc214a0df9a5787409fd27bed2160d83f8cc39fedef5 -│   └── signature/example -│   └── sha256:3c43b8cb0c941ec165c9f33f197d7f75980a292400d340f1a51c6b325764aa93 -├── readme/example -│   └── sha256:5fafd40589e2c980e2864a78818bff51ee641119cf96ebb0d5be83f42aa215af -└── signature/example - └── sha256:00da2c1c3ceea087b16e70c3f4e80dbce6f5b7625d6c8308ad095f7d3f6107b5 -``` --### Promoting the Artifact Graph --A typical DevOps workflow promotes artifacts from dev through staging, to the production environment. Secure supply chain workflows promote public content to privately secured environments. -In either case you want to promote the signatures, SBOMs, scan results, and other related artifact with the subject artifact to have a complete graph of dependencies. --Using the [`oras copy`][oras-cli] command, you can promote a filtered graph of artifacts across registries. --Copy the `net-monitor:v1` image, and related artifacts to `sample-staging/net-monitor:v1`: --```bash -TARGET_REPO=$REGISTRY/sample-staging/$REPO -oras copy -r $IMAGE $TARGET_REPO:$TAG -``` --The output of `oras copy`: --```console -Copying 6bdea3cdc730 sbom-signature.json -Copying 78e159e81c6b sbom.json -Copied 6bdea3cdc730 sbom-signature.json -Copied 78e159e81c6b sbom.json -Copying 7cf1385c7f4d signature.json -Copied 7cf1385c7f4d signature.json -Copying 3e797ecd0697 details -Copying 2fdeac43552b readme.md -Copied 3e797ecd0697 details -Copied 2fdeac43552b readme.md -Copied demo42.myregistry.io/net-monitor:v1 => myregistry.azurecr.io/sample-staging/net-monitor:v1 -Digest: sha256:ff858b2ea3cdf4373cba65d2ca6bcede4da1d620503a547cab5916614080c763 -``` --### Discover the promoted artifact graph --```bash -oras discover -o tree $TARGET_REPO:$TAG -``` --Output of `oras discover`: --```console -myregistry.azurecr.io/sample-staging/net-monitor:v1 -├── sbom/example -│   └── sha256:4f1843833c029ecf0524bc214a0df9a5787409fd27bed2160d83f8cc39fedef5 -│   └── signature/example -│   └── sha256:3c43b8cb0c941ec165c9f33f197d7f75980a292400d340f1a51c6b325764aa93 -├── readme/example -│   └── sha256:5fafd40589e2c980e2864a78818bff51ee641119cf96ebb0d5be83f42aa215af -└── signature/example - └── sha256:00da2c1c3ceea087b16e70c3f4e80dbce6f5b7625d6c8308ad095f7d3f6107b5 -``` --### Pulling Referenced Artifacts --To pull a specific referenced artifact, the digest of reference is discovered with the `oras discover` command: --```bash -DOC_DIGEST=$(oras discover -o json \ - --artifact-type 'readme/example' \ - $TARGET_REPO:$TAG | jq -r ".manifests[0].digest") -``` --#### Create a clean directory for downloading --```bash -mkdir ./download -``` --#### Pull the docs into the download directory --```bash -oras pull -o ./download 
$TARGET_REPO@$DOC_DIGEST -``` --#### View the docs --```bash -tree ./download -``` --The output of `tree`: --```output -./download -├── details -│   ├── readme-details.md -│   └── readme-more-details.md -└── readme.md -``` --### View the repository and tag listing --ORAS enables artifact graphs to be pushed, discovered, pulled, and copied without having to assign tags. It also enables a tag listing to focus on the artifacts users think about, as opposed to the signatures and SBOMs that are associated with the container images, helm charts, and other artifacts. --#### View a list of tags --```bash -oras repo tags $REGISTRY/$REPO -``` --### Deleting all artifacts in the graph --Support for the OCI v1.1 Specification enables deleting the graph of artifacts associated with the subject artifact. Use the [`oras manifest delete`][oras-cli] command to delete the graph of artifacts (signature, SBOM, and the signature of the SBOM). --```azurecli -oras manifest delete -f $REGISTRY/$REPO:$TAG --oras manifest delete -f $REGISTRY/sample-staging/$REPO:$TAG -``` --You can view the list of manifests to confirm the deletion of the subject artifact, and all related artifacts leaving a clean environment. --```azurecli -az acr manifest list-metadata \ - --name $REPO \ - --registry $ACR_NAME -o jsonc -``` --Output: -```output -2023-01-10 18:38:45.366387 Error: repository "net-monitor" is not found. -``` --## Summary --In this article, you learned how to use Azure Container Registry to store, manage, and retrieve both OCI artifacts and supply chain artifacts. You used ORAS CLI to push and pull artifacts to/from an Azure Container Registry. You also discovered the manifest of the pushed artifacts and viewed the graph of artifacts attached to the container image. --## Next steps --* Learn about [Artifact References](https://oras.land/docs/concepts/reftypes), associating signatures, software bill of materials and other reference types. -* Learn more about [the ORAS Project](https://oras.land/), including how to configure a manifest for an artifact. -* Visit the [OCI Artifacts](https://github.com/opencontainers/artifacts) repo for reference information about new artifact types. ---<!-- LINKS - external --> -[docker-install]: https://www.docker.com/get-started/ -[oci-image-manifest]: https://github.com/opencontainers/image-spec/blob/main/manifest.md -[oci-artifact-referrers]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers/ -[oci-spec]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md/ -[oci-1_1-spec]: https://github.com/opencontainers/distribution-spec/releases/tag/v1.1.0-rc1 -[oras-docs]: https://oras.land/ -[oras-install-docs]: https://oras.land/docs/installation -[oras-cli]: https://oras.land/docs/category/oras-commands/ -[oras-push-multifiles]: https://oras.land/docs/how_to_guides/pushing_and_pulling#pushing-artifacts-with-multiple-files ---<!-- LINKS - internal --> -[acr-authentication]: ./container-registry-authentication.md?tabs=azure-cli -[az-acr-create]: ./container-registry-get-started-azure-cli.md -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-manifest-metadata]: /cli/azure/acr/manifest/metadata#az_acr_manifest_list_metadata -[azure-cli-install]: /cli/azure/install-azure-cli -[iana-mediatypes]: https://www.rfc-editor.org/rfc/rfc6838 -[acr-landing]: https://aka.ms/acr -[Notation]: /azure/container-registry/container-registry-tutorial-sign-build-push |
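For the token-based sign-in mentioned in the setup section of this article, one sketch, assuming your Azure CLI version supports `az acr login --expose-token`, is to pass the returned access token to `oras login` with the all-zeros GUID as the user name:

```bash
# Obtain an ACR access token (no Docker daemon required) and use it to sign in with ORAS
USER_NAME="00000000-0000-0000-0000-000000000000"
PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)

oras login $REGISTRY --username $USER_NAME --password $PASSWORD
```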
container-registry | Container Registry Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md | - Title: Set up private endpoint with private link -description: Set up a private endpoint on a container registry and enable access over a private link in a local virtual network. Private link access is a feature of the Premium service tier. ---- Previously updated : 10/31/2023----# Connect privately to an Azure container registry using Azure Private Link --Limit access to a registry by assigning virtual network private IP addresses to the registry endpoints and using [Azure Private Link](../private-link/private-link-overview.md). Network traffic between the clients on the virtual network and the registry's private endpoints traverses the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. Private Link also enables private registry access from on-premises through [Azure ExpressRoute](../expressroute/expressroute-introduction.md), private peering, or a [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). --You can [configure DNS settings](../private-link/private-endpoint-overview.md#dns-configuration) for the registry's private endpoints, so that the settings resolve to the registry's allocated private IP address. With DNS configuration, clients and services in the network can continue to access the registry at the registry's fully qualified domain name, such as *myregistry.azurecr.io*. --This article shows how to configure a private endpoint for your registry using the Azure portal (recommended) or the Azure CLI. This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md). ---> [!NOTE] -> Starting from October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command to see the limit for your registry. Please open a support ticket to increase the limit to 200 private endpoints. --## Prerequisites --* A virtual network and subnet in which to set up the private endpoint. If needed, [create a new virtual network and subnet](../virtual-network/quick-create-portal.md). -* For testing, it's recommended to set up a VM in the virtual network. For steps to create a test virtual machine to access your registry, see [Create a Docker-enabled virtual machine](container-registry-vnet.md#create-a-docker-enabled-virtual-machine). -* To use the Azure CLI steps in this article, Azure CLI version 2.6.0 or later is recommended. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. Or run in [Azure Cloud Shell](../cloud-shell/quickstart.md). -* If you don't already have a container registry, create one (Premium tier required) and [import](container-registry-import-images.md) a sample public image such as `mcr.microsoft.com/hello-world` from Microsoft Container Registry. For example, use the [Azure portal][quickstart-portal] or the [Azure CLI][quickstart-cli] to create a registry. 
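If you still need the sample image mentioned in the last prerequisite, a minimal import into your Premium registry might look like the following; the registry name is a placeholder and the `latest` tag is assumed:

```azurecli
# Import the public hello-world sample image from Microsoft Container Registry
az acr import \
  --name myregistry \
  --source mcr.microsoft.com/hello-world:latest \
  --image hello-world:latest
```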
--### Register container registry resource provider --To configure registry access using a private link in a different Azure subscription or tenant, you need to [register the resource provider](../azure-resource-manager/management/resource-providers-and-types.md) for Azure Container Registry in that subscription. Use the Azure portal, Azure CLI, or other tools. --Example: --```azurecli -az account set --subscription <Name or ID of subscription of private link> --az provider register --namespace Microsoft.ContainerRegistry -``` --## Set up private endpoint - portal (recommended) --Set up a private endpoint when you create a registry, or add a private endpoint to an existing registry. --### Create a private endpoint - new registry --1. When creating a registry in the portal, on the **Basics** tab, in **SKU**, select **Premium**. -1. Select the **Networking** tab. -1. In **Network connectivity**, select **Private endpoint** > **+ Add**. -1. Enter or select the following information: -- | Setting | Value | - | - | -- | - | Subscription | Select your subscription. | - | Resource group | Enter the name of an existing group or create a new one.| - | Name | Enter a unique name. | - | Registry subresource |Select **registry**| - | **Networking** | | - | Virtual network| Select the virtual network for the private endpoint. Example: *myDockerVMVNET*. | - | Subnet | Select the subnet for the private endpoint. Example: *myDockerVMSubnet*. | - |**Private DNS integration**|| - |Integrate with private DNS zone |Select **Yes**. | - |Private DNS Zone |Select *(New) privatelink.azurecr.io* | - ||| -1. Configure the remaining registry settings, and then select **Review + create**. - ----Your private link is now configured and ready for use. --### Create a private endpoint - existing registry --1. In the portal, navigate to your container registry. -1. Under **Settings**, select **Networking**. -1. On the **Private endpoints** tab, select **+ Private endpoint**. - :::image type="content" source="media/container-registry-private-link/private-endpoint-existing-registry.png" alt-text="Add private endpoint to registry"::: --1. In the **Basics** tab, enter or select the following information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Enter the name of an existing group or create a new one.| - | **Instance details** | | - | Name | Enter a name. | - |Region|Select a region.| - ||| -1. Select **Next: Resource**. -1. Enter or select the following information: -- | Setting | Value | - | - | -- | - |Connection method | For this example, select **Connect to an Azure resource in my directory**.| - | Subscription| Select your subscription. | - | Resource type | Select **Microsoft.ContainerRegistry/registries**. | - | Resource |Select the name of your registry| - |Target subresource |Select **registry**| - ||| -1. Select **Next: Configuration**. -1. Enter or select the information: -- | Setting | Value | - | - | -- | - |**Networking**| | - | Virtual network| Select the virtual network for the private endpoint | - | Subnet | Select the subnet for the private endpoint | - |**Private DNS Integration**|| - |Integrate with private DNS zone |Select **Yes**. | - |Private DNS Zone |Select *(New) privatelink.azurecr.io* | - ||| --1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. -1. When you see the **Validation passed** message, select **Create**. 
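Optionally, you can confirm from the Azure CLI that the endpoint was created. This quick check assumes the resource group you selected on the **Basics** tab (here *myResourceGroup*):

```azurecli
az network private-endpoint list \
  --resource-group myResourceGroup \
  --query "[].{Name:name, State:provisioningState}" \
  --output table
```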
--### Confirm endpoint configuration --After the private endpoint is created, DNS settings in the private zone appear with the **Private endpoints** settings in the portal: --1. In the portal, navigate to your container registry and select **Settings > Networking**. -1. On the **Private endpoints** tab, select the private endpoint you created. -1. Select **DNS configuration**. -1. Review the link settings and custom DNS settings. --## Set up private endpoint - CLI --The Azure CLI examples in this article use the following environment variables. You'll need the names of an existing container registry, virtual network, and subnet to set up a private endpoint. Substitute values appropriate for your environment. All examples are formatted for the Bash shell: --```bash -REGISTRY_NAME=<container-registry-name> -REGISTRY_LOCATION=<container-registry-location> # Azure region such as westeurope where registry created -RESOURCE_GROUP=<resource-group-name> # Resource group for your existing virtual network and subnet -NETWORK_NAME=<virtual-network-name> -SUBNET_NAME=<subnet-name> -``` -### Disable network policies in subnet --[Disable network policies](../private-link/disable-private-endpoint-network-policy.md) such as network security groups in the subnet for the private endpoint. Update your subnet configuration with [az network vnet subnet update][az-network-vnet-subnet-update]: --```azurecli -az network vnet subnet update \ - --name $SUBNET_NAME \ - --vnet-name $NETWORK_NAME \ - --resource-group $RESOURCE_GROUP \ - --disable-private-endpoint-network-policies -``` --### Configure the private DNS zone --Create a [private Azure DNS zone](../dns/private-dns-privatednszone.md) for the private Azure container registry domain. In later steps, you create DNS records for your registry domain in this DNS zone. For more information, see [DNS configuration options](#dns-configuration-options), later in this article. --To use a private zone to override the default DNS resolution for your Azure container registry, the zone must be named **privatelink.azurecr.io**. Run the following [az network private-dns zone create][az-network-private-dns-zone-create] command to create the private zone: --```azurecli -az network private-dns zone create \ - --resource-group $RESOURCE_GROUP \ - --name "privatelink.azurecr.io" -``` --### Create an association link --Run [az network private-dns link vnet create][az-network-private-dns-link-vnet-create] to associate your private zone with the virtual network. This example creates a link called *myDNSLink*. --```azurecli -az network private-dns link vnet create \ - --resource-group $RESOURCE_GROUP \ - --zone-name "privatelink.azurecr.io" \ - --name MyDNSLink \ - --virtual-network $NETWORK_NAME \ - --registration-enabled false -``` --### Create a private registry endpoint --In this section, create the registry's private endpoint in the virtual network. First, get the resource ID of your registry: --```azurecli -REGISTRY_ID=$(az acr show --name $REGISTRY_NAME \ - --query 'id' --output tsv) -``` --Run the [az network private-endpoint create][az-network-private-endpoint-create] command to create the registry's private endpoint. --The following example creates the endpoint *myPrivateEndpoint* and service connection *myConnection*. 
To specify a container registry resource for the endpoint, pass `--group-ids registry`: --```azurecli -az network private-endpoint create \ - --name myPrivateEndpoint \ - --resource-group $RESOURCE_GROUP \ - --vnet-name $NETWORK_NAME \ - --subnet $SUBNET_NAME \ - --private-connection-resource-id $REGISTRY_ID \ - --group-ids registry \ - --connection-name myConnection -``` --### Get endpoint IP configuration --To configure DNS records, get the IP configuration of the private endpoint. Associated with the private endpoint's network interface in this example are two private IP addresses for the container registry: one for the registry itself, and one for the registry's data endpoint. If your registry is geo-replicated, an additional IP address is associated with each replica. --First, run [az network private-endpoint show][az-network-private-endpoint-show] to query the private endpoint for the network interface ID: --```azurecli -NETWORK_INTERFACE_ID=$(az network private-endpoint show \ - --name myPrivateEndpoint \ - --resource-group $RESOURCE_GROUP \ - --query 'networkInterfaces[0].id' \ - --output tsv) -``` --The following [az network nic show][az-network-nic-show] commands get the private IP addresses and FQDNs for the container registry and the registry's data endpoint: --```azurecli -REGISTRY_PRIVATE_IP=$(az network nic show \ - --ids $NETWORK_INTERFACE_ID \ - --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry'].privateIPAddress" \ - --output tsv) --DATA_ENDPOINT_PRIVATE_IP=$(az network nic show \ - --ids $NETWORK_INTERFACE_ID \ - --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REGISTRY_LOCATION'].privateIPAddress" \ - --output tsv) --# An FQDN is associated with each IP address in the IP configurations --REGISTRY_FQDN=$(az network nic show \ - --ids $NETWORK_INTERFACE_ID \ - --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry'].privateLinkConnectionProperties.fqdns" \ - --output tsv) --DATA_ENDPOINT_FQDN=$(az network nic show \ - --ids $NETWORK_INTERFACE_ID \ - --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REGISTRY_LOCATION'].privateLinkConnectionProperties.fqdns" \ - --output tsv) -``` --#### Additional endpoints for geo-replicas --If your registry is [geo-replicated](container-registry-geo-replication.md), query for the additional data endpoint for each registry replica. For example, in the *eastus* region: --```azurecli -REPLICA_LOCATION=eastus -GEO_REPLICA_DATA_ENDPOINT_PRIVATE_IP=$(az network nic show \ - --ids $NETWORK_INTERFACE_ID \ - --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REPLICA_LOCATION'].privateIPAddress" \ - --output tsv) --GEO_REPLICA_DATA_ENDPOINT_FQDN=$(az network nic show \ - --ids $NETWORK_INTERFACE_ID \ - --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REPLICA_LOCATION'].privateLinkConnectionProperties.fqdns" \ - --output tsv) -``` --Once a new geo-replication is added, a private endpoint connection is set to be pending. To approve a private endpoint connection configured manually run [az acr private-endpoint-connection approve][az-acr-private-endpoint-connection-approve] command. --### Create DNS records in the private zone --The following commands create DNS records in the private zone for the registry endpoint and its data endpoint. 
For example, if you have a registry named *myregistry* in the *westeurope* region, the endpoint names are `myregistry.azurecr.io` and `myregistry.westeurope.data.azurecr.io`. --First run [az network private-dns record-set a create][az-network-private-dns-record-set-a-create] to create empty A-record sets for the registry endpoint and data endpoint: --```azurecli
-az network private-dns record-set a create \
 - --name $REGISTRY_NAME \
 - --zone-name privatelink.azurecr.io \
 - --resource-group $RESOURCE_GROUP
--# Specify registry region in data endpoint name
-az network private-dns record-set a create \
 - --name ${REGISTRY_NAME}.${REGISTRY_LOCATION}.data \
 - --zone-name privatelink.azurecr.io \
 - --resource-group $RESOURCE_GROUP
-```
--Run the [az network private-dns record-set a add-record][az-network-private-dns-record-set-a-add-record] command to create the A-records for the registry endpoint and data endpoint: --```azurecli
-az network private-dns record-set a add-record \
 - --record-set-name $REGISTRY_NAME \
 - --zone-name privatelink.azurecr.io \
 - --resource-group $RESOURCE_GROUP \
 - --ipv4-address $REGISTRY_PRIVATE_IP
--# Specify registry region in data endpoint name
-az network private-dns record-set a add-record \
 - --record-set-name ${REGISTRY_NAME}.${REGISTRY_LOCATION}.data \
 - --zone-name privatelink.azurecr.io \
 - --resource-group $RESOURCE_GROUP \
 - --ipv4-address $DATA_ENDPOINT_PRIVATE_IP
-```
--#### Additional records for geo-replicas --If your registry is geo-replicated, create additional DNS settings for each replica. Continuing the example in the *eastus* region: --```azurecli
-az network private-dns record-set a create \
 - --name ${REGISTRY_NAME}.${REPLICA_LOCATION}.data \
 - --zone-name privatelink.azurecr.io \
 - --resource-group $RESOURCE_GROUP
--az network private-dns record-set a add-record \
 - --record-set-name ${REGISTRY_NAME}.${REPLICA_LOCATION}.data \
 - --zone-name privatelink.azurecr.io \
 - --resource-group $RESOURCE_GROUP \
 - --ipv4-address $GEO_REPLICA_DATA_ENDPOINT_PRIVATE_IP
-```
--The private link is now configured and ready for use. --## Disable public access --For many scenarios, disable registry access from public networks. This configuration prevents clients outside the virtual network from reaching the registry endpoints. --### Disable public access - portal --1. In the portal, navigate to your container registry and select **Settings > Networking**. -1. On the **Public access** tab, in **Allow public network access**, select **Disabled**. Then select **Save**. --### Disable public access - CLI --> [!NOTE] -> If public access is disabled, `az acr build` commands no longer work. --To disable public access using the Azure CLI, run [az acr update][az-acr-update] and set `--public-network-enabled` to `false`. --```azurecli
-az acr update --name $REGISTRY_NAME --public-network-enabled false
-```
--## Execute the `az acr build` with private endpoint and private registry --> [!NOTE] -> Once you [disable public network access](#disable-public-access), `az acr build` commands no longer work by default. -> Unless you use a dedicated agent pool, `az acr build` typically requires public IP addresses. ACR Tasks reserves a set of public IPs in each region for outbound requests, and `az acr build` uses the same set of IPs. If needed, you can add these IPs to your firewall's allow list. --Consider the following options to run `az acr build` successfully. 
--* Assign a [dedicated agent pool.](./tasks-agent-pools.md) -* If agent pool is not available in the region, add the regional [Azure Container Registry Service Tag IPv4](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) to the [firewall access rules.](./container-registry-firewall-access-rules.md#allow-access-by-ip-address-range). Tasks reserve a set of public IPs in each region (a.k.a. AzureContainerRegistry Service Tag) for outbound requests. You can choose to add the IPs in the firewall allowed list. -* Create an ACR task with a managed identity, and enable trusted services to [access network restricted ACR.](./allow-access-trusted-services.md#example-acr-tasks) --## Disable access to a container registry using a service endpoint --> [!IMPORTANT] -> The container registry does not support enabling both private link and service endpoint features configured from a virtual network. --Once the registry has public access disabled and private link configured, you can disable the service endpoint access to a container registry from a virtual network by [removing virtual network rules.](container-registry-vnet.md#remove-network-rules) --* Run [`az acr network-rule list`](/cli/azure/acr/network-rule#az-acr-network-rule-list) command to list the existing network rules. -* Run [`az acr network-rule remove`](/cli/azure/acr/network-rule#az-acr-network-rule-remove) command to remove the network rule. --## Validate private link connection --You should validate that the resources within the subnet of the private endpoint connect to your registry over a private IP address, and have the correct private DNS zone integration. --To validate the private link connection, connect to the virtual machine you set up in the virtual network. --Run a utility such as `nslookup` or `dig` to look up the IP address of your registry over the private link. For example: --```bash -dig $REGISTRY_NAME.azurecr.io -``` --Example output shows the registry's IP address in the address space of the subnet: --```console -[...] -; <<>> DiG 9.11.3-1ubuntu1.13-Ubuntu <<>> myregistry.azurecr.io -;; global options: +cmd -;; Got answer: -;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52155 -;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 --;; OPT PSEUDOSECTION: -; EDNS: version: 0, flags:; udp: 65494 -;; QUESTION SECTION: -;myregistry.azurecr.io. IN A --;; ANSWER SECTION: -myregistry.azurecr.io. 1783 IN CNAME myregistry.privatelink.azurecr.io. -myregistry.privatelink.azurecr.io. 10 IN A 10.0.0.7 --[...] -``` --Compare this result with the public IP address in `dig` output for the same registry over a public endpoint: --```console -[...] -;; ANSWER SECTION: -myregistry.azurecr.io. 2881 IN CNAME myregistry.privatelink.azurecr.io. -myregistry.privatelink.azurecr.io. 2881 IN CNAME xxxx.xx.azcr.io. -xxxx.xx.azcr.io. 300 IN CNAME xxxx-xxx-reg.trafficmanager.net. -xxxx-xxx-reg.trafficmanager.net. 300 IN CNAME xxxx.westeurope.cloudapp.azure.com. -xxxx.westeurope.cloudapp.azure.com. 10 IN A 20.45.122.144 --[...] -``` --### Registry operations over private link --Also verify that you can perform registry operations from the virtual machine in the network. Make an SSH connection to your virtual machine, and run [az acr login][az-acr-login] to login to your registry. Depending on your VM configuration, you might need to prefix the following commands with `sudo`. 
--```azurecli -az acr login --name $REGISTRY_NAME -``` --Perform registry operations such as `docker pull` to pull a sample image from the registry. Replace `hello-world:v1` with an image and tag appropriate for your registry, prefixed with the registry login server name (all lowercase): --```bash -docker pull myregistry.azurecr.io/hello-world:v1 -``` --Docker successfully pulls the image to the VM. --## Manage private endpoint connections --Manage a registry's private endpoint connections using the Azure portal, or by using commands in the [az acr private-endpoint-connection][az-acr-private-endpoint-connection] command group. Operations include approve, delete, list, reject, or show details of a registry's private endpoint connections. --For example, to list the private endpoint connections of a registry, run the [az acr private-endpoint-connection list][az-acr-private-endpoint-connection-list] command. For example: --```azurecli -az acr private-endpoint-connection list \ - --registry-name $REGISTRY_NAME -``` --When you set up a private endpoint connection using the steps in this article, the registry automatically accepts connections from clients and services that have Azure RBAC permissions on the registry. You can set up the endpoint to require manual approval of connections. For information about how to approve and reject private endpoint connections, see [Manage a Private Endpoint Connection](../private-link/manage-private-endpoint.md). --> [!IMPORTANT] -> Currently, if you delete a private endpoint from a registry, you might also need to delete the virtual network's link to the private zone. If the link isn't deleted, you may see an error similar to `unresolvable host`. --## DNS configuration options --The private endpoint in this example integrates with a private DNS zone associated with a basic virtual network. This setup uses the Azure-provided DNS service directly to resolve the registry's public FQDN to its private IP addresses in the virtual network. --Private link supports additional DNS configuration scenarios that use the private zone, including with custom DNS solutions. For example, you might have a custom DNS solution deployed in the virtual network, or on-premises in a network you connect to the virtual network using a VPN gateway or Azure ExpressRoute. --To resolve the registry's public FQDN to the private IP address in these scenarios, you need to configure a server-level forwarder to the Azure DNS service (168.63.129.16). Exact configuration options and steps depend on your existing networks and DNS. For examples, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md). --> [!IMPORTANT] -> If for high availability you created private endpoints in several regions, we recommend that you use a separate resource group in each region and place the virtual network and the associated private DNS zone in it. This configuration also prevents unpredictable DNS resolution caused by sharing the same private DNS zone. --### Manually configure DNS records --For some scenarios, you may need to manually configure DNS records in a private zone instead of using the Azure-provided private zone. Be sure to create records for each of the following endpoints: the registry endpoint, the registry's data endpoint, and the data endpoint for any additional regional replica. If all records aren't configured, the registry may be unreachable. 
--> [!IMPORTANT] -> If you later add a new replica, you need to manually add a new DNS record for the data endpoint in that region. For example, if you create a replica of *myregistry* in the northeurope location, add a record for `myregistry.northeurope.data.azurecr.io`. --The FQDNs and private IP addresses you need to create DNS records are associated with the private endpoint's network interface. You can obtain this information using the Azure portal or Azure CLI. --* In the portal, navigate to your private endpoint, and select **DNS configuration**. -* Using the Azure CLI, run the [az network nic show][az-network-nic-show] command. For example commands, see [Get endpoint IP configuration](#get-endpoint-ip-configuration), earlier in this article. --After creating DNS records, make sure that the registry FQDNs resolve properly to their respective private IP addresses. --## Clean up resources --To clean up your resources in the portal, navigate to your resource group. Once the resource group is loaded, select **Delete resource group** to remove the resource group and the resources stored there. --If you created all the Azure resources in the same resource group and no longer need them, you can optionally delete the resources by using a single [az group delete](/cli/azure/group) command: --```azurecli
-az group delete --name $RESOURCE_GROUP
-```
--## Integrating with a registry with private link enabled --To pull content from a registry with private link enabled, clients must allow access to the registry REST endpoint as well as all regional data endpoints. The client proxy or firewall must allow access to: --REST endpoint: `{REGISTRY_NAME}.azurecr.io` -Data endpoint(s): `{REGISTRY_NAME}.{REGISTRY_LOCATION}.data.azurecr.io` --For a geo-replicated registry, you need to configure access to the data endpoint for each regional replica. --Update the routing configuration of the client proxy and client firewall with the data endpoints so that pull requests succeed. A client proxy provides central traffic control for [outbound requests][outbound-connection]. A client proxy isn't required for local traffic; you can add the endpoints to the `noProxy` section to bypass the proxy. For more information about integrating with AKS, see [HTTP proxy support in AKS](/azure/aks/http-proxy). --Requests to the token server over the private endpoint connection don't require the data endpoint configuration. --## Next steps --* To learn more about Private Link, see the [Azure Private Link](../private-link/private-link-overview.md) documentation. --* To verify DNS settings in the virtual network that route to a private endpoint, run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command with the `--vnet` parameter (a brief example follows this list). For more information, see [Check the health of an Azure container registry](container-registry-check-health.md). --* If you need to set up registry access rules from behind a client firewall, see [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md). --* [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md). --* If you need to deploy Azure Container Instances that can pull images from an ACR through a private endpoint, see [Deploy to Azure Container Instances from Azure Container Registry using a managed identity](/azure/container-instances/using-azure-container-registry-mi). 
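For example, a health check run from a machine inside the virtual network might look like the following sketch, which reuses the environment variables defined earlier in this article:

```azurecli
# Verify that DNS in the virtual network routes the registry FQDN to the private endpoint
az acr check-health --name $REGISTRY_NAME --vnet $NETWORK_NAME
```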
--<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-login]: https://docs.docker.com/engine/reference/commandline/login/ -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ -[docker-tag]: https://docs.docker.com/engine/reference/commandline/tag/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-acr-repository-show]: /cli/azure/acr/repository#az_acr_repository_show -[az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-private-endpoint-connection]: /cli/azure/acr/private-endpoint-connection -[az-acr-private-endpoint-connection-list]: /cli/azure/acr/private-endpoint-connection#az_acr_private-endpoint-connection-list -[az-acr-private-endpoint-connection-approve]: /cli/azure/acr/private-endpoint-connection#az_acr_private_endpoint_connection_approve -[az-acr-update]: /cli/azure/acr#az_acr_update -[az-group-create]: /cli/azure/group -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create -[az-vm-create]: /cli/azure/vm#az_vm_create -[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet/#az_network_vnet_subnet_show -[az-network-vnet-subnet-update]: /cli/azure/network/vnet/subnet/#az_network_vnet_subnet_update -[az-network-vnet-list]: /cli/azure/network/vnet/#az_network_vnet_list -[az-network-private-endpoint-create]: /cli/azure/network/private-endpoint#az_network_private_endpoint_create -[az-network-private-endpoint-show]: /cli/azure/network/private-endpoint#az_network_private_endpoint_show -[az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone#az_network_private_dns_zone_create -[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create -[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create -[az-network-private-dns-record-set-a-add-record]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_add_record -[az-network-nic-show]: /cli/azure/network/nic#az_network_nic_show -[quickstart-portal]: container-registry-get-started-portal.md -[quickstart-cli]: container-registry-get-started-azure-cli.md -[outbound-connection]: /azure/firewall/rule-processing#outbound-connectivity |
container-registry | Container Registry Quickstart Task Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md | - Title: Quickstart - Build a container image on-demand in Azure -description: Use Azure Container Registry commands to quickly build, push, and run a Docker container image on-demand, in the Azure cloud. --- Previously updated : 10/31/2023-----# Quickstart: Build and run a container image using Azure Container Registry Tasks --In this quickstart, you use [Azure Container Registry Tasks][container-registry-tasks-overview] commands to quickly build, push, and run a Docker container image natively within Azure, without a local Docker installation. ACR Tasks is a suite of features within Azure Container Registry to help you manage and modify container images across the container lifecycle. This example shows how to offload your "inner-loop" container image development cycle to the cloud with on-demand builds using a local Dockerfile. --After this quickstart, explore more advanced features of ACR Tasks using the [tutorials](container-registry-tutorial-quick-task.md). ACR Tasks can automate image builds based on code commits or base image updates, or test multiple containers, in parallel, among other scenarios. --- -- This quickstart requires version 2.0.58 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Create a resource group --If you don't already have a container registry, first create a resource group with the [az group create][az-group-create] command. An Azure resource group is a logical container into which Azure resources are deployed and managed. --The following example creates a resource group named *myResourceGroup* in the *eastus* location. --```azurecli-interactive -az group create --name myResourceGroup --location eastus -``` --## Create a container registry --Create a container registry using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, *mycontainerregistry008* is used. Update this to a unique value. --```azurecli-interactive -az acr create --resource-group myResourceGroup \ - --name mycontainerregistry008 --sku Basic -``` --This example creates a *Basic* registry, a cost-optimized option for developers learning about Azure Container Registry. For details on available service tiers, see [Container registry service tiers][container-registry-skus]. --## Build and push image from a Dockerfile --Now use Azure Container Registry to build and push an image. First, create a local working directory and then create a Dockerfile named *Dockerfile* with the single line: `FROM mcr.microsoft.com/hello-world`. This is a simple example to build a Linux container image from the `hello-world` image hosted at Microsoft Container Registry. You can create your own standard Dockerfile and build images for other platforms. If you are working at a bash shell, create the Dockerfile with the following command: --```bash -echo "FROM mcr.microsoft.com/hello-world" > Dockerfile -``` --Run the [az acr build][az-acr-build] command, which builds the image and, after the image is successfully built, pushes it to your registry. The following example builds and pushes the `sample/hello-world:v1` image. The `.` at the end of the command sets the location of the Dockerfile, in this case the current directory. 
--```azurecli-interactive -az acr build --image sample/hello-world:v1 \ - --registry mycontainerregistry008 \ - --file Dockerfile . -``` --Output from a successful build and push is similar to the following: --```console -Packing source code into tar to upload... -Uploading archived source code from '/tmp/build_archive_b0bc1e5d361b44f0833xxxx41b78c24e.tar.gz'... -Sending context (1.856 KiB) to registry: mycontainerregistry008... -Queued a build with ID: ca8 -Waiting for agent... -2019/03/18 21:56:57 Using acb_vol_4c7ffa31-c862-4be3-xxxx-ab8e615c55c4 as the home volume -2019/03/18 21:56:57 Setting up Docker configuration... -2019/03/18 21:56:58 Successfully set up Docker configuration -2019/03/18 21:56:58 Logging in to registry: mycontainerregistry008.azurecr.io -2019/03/18 21:56:59 Successfully logged into mycontainerregistry008.azurecr.io -2019/03/18 21:56:59 Executing step ID: build. Working directory: '', Network: '' -2019/03/18 21:56:59 Obtaining source code and scanning for dependencies... -2019/03/18 21:57:00 Successfully obtained source code and scanned for dependencies -2019/03/18 21:57:00 Launching container with name: build -Sending build context to Docker daemon 13.82kB -Step 1/1 : FROM mcr.microsoft.com/hello-world -latest: Pulling from hello-world -Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586fxxxx21577a99efb77324b0fe535 -Successfully built fce289e99eb9 -Successfully tagged mycontainerregistry008.azurecr.io/sample/hello-world:v1 -2019/03/18 21:57:01 Successfully executed container: build -2019/03/18 21:57:01 Executing step ID: push. Working directory: '', Network: '' -2019/03/18 21:57:01 Pushing image: mycontainerregistry008.azurecr.io/sample/hello-world:v1, attempt 1 -The push refers to repository [mycontainerregistry008.azurecr.io/sample/hello-world] -af0b15c8625b: Preparing -af0b15c8625b: Layer already exists -v1: digest: sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a size: 524 -2019/03/18 21:57:03 Successfully pushed image: mycontainerregistry008.azurecr.io/sample/hello-world:v1 -2019/03/18 21:57:03 Step ID: build marked as successful (elapsed time in seconds: 2.543040) -2019/03/18 21:57:03 Populating digests for step ID: build... -2019/03/18 21:57:05 Successfully populated digests for step ID: build -2019/03/18 21:57:05 Step ID: push marked as successful (elapsed time in seconds: 1.473581) -2019/03/18 21:57:05 The following dependencies were found: -2019/03/18 21:57:05 -- image:- registry: mycontainerregistry008.azurecr.io - repository: sample/hello-world - tag: v1 - digest: sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a - runtime-dependency: - registry: registry.hub.docker.com - repository: library/hello-world - tag: v1 - digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535 - git: {} --Run ID: ca8 was successful after 10s -``` --## Run the image --Now quickly run the image you built and pushed to your registry. Here you use [az acr run][az-acr-run] to run the container command. In your container development workflow, this might be a validation step before you deploy the image, or you could include the command in a [multi-step YAML file][container-registry-tasks-multi-step]. 
--The following example uses $Registry to specify the endpoint of the registry where you run the command: --```azurecli-interactive -az acr run --registry mycontainerregistry008 \ - --cmd '$Registry/sample/hello-world:v1' -``` --The `cmd` parameter in this example runs the container in its default configuration, but `cmd` supports additional `docker run` parameters or even other `docker` commands. --Output is similar to the following: --```console -Packing source code into tar to upload... -Uploading archived source code from '/tmp/run_archive_ebf74da7fcb04683867b129e2ccad5e1.tar.gz'... -Sending context (1.855 KiB) to registry: mycontainerre... -Queued a run with ID: cab -Waiting for an agent... -2019/03/19 19:01:53 Using acb_vol_60e9a538-b466-475f-9565-80c5b93eaa15 as the home volume -2019/03/19 19:01:53 Creating Docker network: acb_default_network, driver: 'bridge' -2019/03/19 19:01:53 Successfully set up Docker network: acb_default_network -2019/03/19 19:01:53 Setting up Docker configuration... -2019/03/19 19:01:54 Successfully set up Docker configuration -2019/03/19 19:01:54 Logging in to registry: mycontainerregistry008.azurecr.io -2019/03/19 19:01:55 Successfully logged into mycontainerregistry008.azurecr.io -2019/03/19 19:01:55 Executing step ID: acb_step_0. Working directory: '', Network: 'acb_default_network' -2019/03/19 19:01:55 Launching container with name: acb_step_0 --Hello from Docker! -This message shows that your installation appears to be working correctly. --To generate this message, Docker took the following steps: - 1. The Docker client contacted the Docker daemon. - 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. - (amd64) - 3. The Docker daemon created a new container from that image which runs the - executable that produces the output you are currently reading. - 4. The Docker daemon streamed that output to the Docker client, which sent it - to your terminal. --To try something more ambitious, you can run an Ubuntu container with: - $ docker run -it ubuntu bash --Share images, automate workflows, and more with a free Docker ID: - https://hub.docker.com/ --For more examples and ideas, visit: - https://docs.docker.com/get-started/ --2019/03/19 19:01:56 Successfully executed container: acb_step_0 -2019/03/19 19:01:56 Step ID: acb_step_0 marked as successful (elapsed time in seconds: 0.843801) --Run ID: cab was successful after 6s -``` --## Clean up resources --When no longer needed, you can use the [az group delete][az-group-delete] command to remove the resource group, the container registry, and the container images stored there. --```azurecli -az group delete --name myResourceGroup -``` --## Next steps --In this quickstart, you used features of ACR Tasks to quickly build, push, and run a Docker container image natively within Azure, without a local Docker installation. Continue to the Azure Container Registry Tasks tutorials to learn about using ACR Tasks to automate image builds and updates. 
--> [!div class="nextstepaction"] -> [Azure Container Registry Tasks tutorials][container-registry-tutorial-quick-task] --<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ -[docker-pull]: https://docs.docker.com/engine/reference/commandline/pull/ -[docker-rmi]: https://docs.docker.com/engine/reference/commandline/rmi/ -[docker-run]: https://docs.docker.com/engine/reference/commandline/run/ -[docker-tag]: https://docs.docker.com/engine/reference/commandline/tag/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ -[azure-account]: https://azure.microsoft.com/free/ --<!-- LINKS - internal --> -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-run]: /cli/azure/acr#az_acr_run -[az-group-create]: /cli/azure/group#az_group_create -[az-group-delete]: /cli/azure/group#az_group_delete -[azure-cli]: /cli/azure/install-azure-cli -[container-registry-tasks-overview]: container-registry-tasks-overview.md -[container-registry-tasks-multi-step]: container-registry-tasks-multi-step.md -[container-registry-tutorial-quick-task]: container-registry-tutorial-quick-task.md -[container-registry-skus]: container-registry-skus.md -[azure-cli-install]: /cli/azure/install-azure-cli |
container-registry | Container Registry Repositories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repositories.md | - Title: View repositories in portal -description: Use the Azure portal to view Azure Container Registry repositories, which host Docker container images and other supported artifacts. --- Previously updated : 10/31/2023----# View container registry repositories in the Azure portal --Azure Container Registry allows you to store Docker container images in repositories. By storing images in repositories, you can store groups of images (or versions of images) in isolated environments. You can specify these repositories when you push images to your registry, and view their contents in the Azure portal. --## Prerequisites --* **Container registry**: Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md). -* **Docker CLI**: Install [Docker][docker-install] on your local machine, which provides you with the Docker command-line interface. -* **Container image**: Push an image to your container registry. For guidance on how to push and pull images, see [Push and pull an image](container-registry-get-started-docker-cli.md). --## View repositories in Azure portal --You can see a list of the repositories hosting your images, as well as the image tags, in the Azure portal. --If you followed the steps in [Push and pull an image](container-registry-get-started-docker-cli.md) (and didn't subsequently delete the image), you should have an Nginx image in your container registry. The instructions in that article specified that you tag the image with a namespace, the "samples" in `/samples/nginx`. As a refresher, the [docker push][docker-push] command specified in that article was: --```Bash -docker push myregistry.azurecr.io/samples/nginx -``` -- Because Azure Container Registry supports such multilevel repository namespaces, you can scope collections of images related to a specific app, or a collection of apps, to different development or operational teams. To read more about repositories in container registries, see [Private Docker container registries in Azure](container-registry-intro.md). --To view a repository: --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Select the **Azure Container Registry** to which you pushed the Nginx image. -1. Select **Repositories** to see a list of the repositories that contain the images in the registry. -1. Select a repository to see the image tags within that repository. --For example, if you pushed the Nginx image as instructed in [Push and pull an image](container-registry-get-started-docker-cli.md), you should see something similar to: --![Repositories in the portal](./media/container-registry-repositories/container-registry-repositories.png) --## Next steps --Now that you know the basics of viewing and working with repositories in the portal, try using Azure Container Registry with an [Azure Kubernetes Service (AKS)](/azure/aks/tutorial-kubernetes-prepare-app) cluster. --<!-- LINKS - External --> -[docker-install]: https://docs.docker.com/engine/installation/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ |
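If you prefer the command line, the same repository and tag information is available from the Azure CLI. A brief sketch, using the example registry and repository names from the article above:

```azurecli
# List the repositories in the registry
az acr repository list --name myregistry --output table

# List the tags in the samples/nginx repository
az acr repository show-tags --name myregistry --repository samples/nginx --output table
```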
container-registry | Container Registry Repository Scoped Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md | - Title: Permissions to repositories in Azure Container Registry -description: Create a token to grant and manage repository scoped permissions within a container registry. The token helps to perform actions, such as pull images, push images, delete images, read metadata, and write metadata. --- Previously updated : 10/31/2023-----# Create a token with repository-scoped permissions --This article describes how to create tokens and scope maps to manage access to repositories in your container registry. By creating tokens, a registry owner can provide users or services with scoped, time-limited access to repositories to pull or push images or perform other actions. A token provides more fine-grained permissions than other registry [authentication options](container-registry-authentication.md), which scope permissions to an entire registry. --Common scenarios for creating a token include: --* Allow IoT devices with individual tokens to pull an image from a repository. -* Provide an external organization with permissions to a repository path. -* Limit repository access to different user groups in your organization. For example, provide write and read access to developers who build images that target specific repositories, and read access to teams that deploy from those repositories. --This feature is available in all the service tiers. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md) --## Limitations --* You can't currently assign repository-scoped permissions to a Microsoft Entra identity, such as a service principal or managed identity. --## Concepts --To configure repository-scoped permissions, you create a *token* with an associated *scope map*. --* A **token** along with a generated password lets the user authenticate with the registry. You can set an expiration date for a token password, or disable a token at any time. -- After authenticating with a token, the user or service can perform one or more *actions* scoped to one or more repositories. -- |Action |Description | Example | - |||--| - |`content/delete` | Remove data from the repository | Delete a repository or a manifest | - |`content/read` | Read data from the repository | Pull an artifact | - |`content/write` | Write data to the repository | Use with `content/read` to push an artifact | - |`metadata/read` | Read metadata from the repository | List tags or manifests | - |`metadata/write` | Write metadata to the repository | Enable or disable read, write, or delete operations | --> [!NOTE] -> Repository-scoped permissions do not support the ability to list the catalog of all repositories in the registry. --* A **scope map** groups the repository permissions you apply to a token and can reapply to other tokens. Every token is associated with a single scope map. With a scope map, you can: -- * Configure multiple tokens with identical permissions to a set of repositories. - * Update token permissions when you add or remove repository actions in the scope map, or apply a different scope map. -- Azure Container Registry also provides several system-defined scope maps you can apply when creating tokens. 
The permissions of system-defined scope maps apply to all repositories in your registry. The individual *actions* correspond to the limit of [repositories per scope map](container-registry-skus.md). --The following image shows the relationship between tokens and scope maps. --![Registry tokens and scope maps](media/container-registry-repository-scoped-permissions/token-scope-map-concepts.png) --## Prerequisites --* **Azure CLI** - Azure CLI command examples in this article require Azure CLI version 2.17.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). -* **Docker** - To authenticate with the registry to pull or push images, you need a local Docker installation. Docker provides installation instructions for [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms) systems. -* **Container registry** - If you don't have one, create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md). --## Create token - CLI --### Create token and specify repositories --Create a token using the [az acr token create][az-acr-token-create] command. When creating a token, you can specify one or more repositories and associated actions on each repository. The repositories don't need to be in the registry yet. To create a token by specifying an existing scope map, see the [next section](#create-token-and-specify-scope-map). --The following example creates a token in the registry *myregistry* with the following permissions on the `samples/hello-world` repo: `content/write` and `content/read`. By default, the command sets the token status to `enabled`, but you can update the status to `disabled` at any time. --```azurecli
-az acr token create --name MyToken --registry myregistry \
 - --repository samples/hello-world \
 - content/write content/read \
 - --output json
-```
--The output shows details about the token. By default, two passwords are generated that don't expire, but you can optionally set an expiration date. It's recommended to save the passwords in a safe place to use later for authentication. The passwords can't be retrieved again, but new ones can be generated. 
--```console -{ - "creationDate": "2020-01-18T00:15:34.066221+00:00", - "credentials": { - "certificates": [], - "passwords": [ - { - "creationTime": "2020-01-18T00:15:52.837651+00:00", - "expiry": null, - "name": "password1", - "value": "uH54BxxxxK7KOxxxxRbr26dAs8JXxxxx" - }, - { - "creationTime": "2020-01-18T00:15:52.837651+00:00", - "expiry": null, - "name": "password2", - "value": "kPX6Or/xxxxLXpqowxxxxkA0idwLtmxxxx" - } - ], - "username": "MyToken" - }, - "id": "/subscriptions/xxxxxxxx-adbd-4cb4-c864-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.ContainerRegistry/registries/myregistry/tokens/MyToken", - "name": "MyToken", - "objectId": null, - "provisioningState": "Succeeded", - "resourceGroup": "myresourcegroup", - "scopeMapId": "/subscriptions/xxxxxxxx-adbd-4cb4-c864-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.ContainerRegistry/registries/myregistry/scopeMaps/MyToken-scope-map", - "status": "enabled", - "type": "Microsoft.ContainerRegistry/registries/tokens" -} -``` --> [!NOTE] -> To regenerate token passwords and expiration periods, see [Regenerate token passwords](#regenerate-token-passwords) later in this article. --The output includes details about the scope map the command created. You can use the scope map, here named `MyToken-scope-map`, to apply the same repository actions to other tokens. Or, update the scope map later to change the permissions of the associated tokens. --### Create token and specify scope map --An alternative way to create a token is to specify an existing scope map. If you don't already have a scope map, first create one by specifying repositories and associated actions. Then, specify the scope map when creating a token. --To create a scope map, use the [az acr scope-map create][az-acr-scope-map-create] command. The following command creates a scope map with the same permissions on the `samples/hello-world` repository used previously. --```azurecli -az acr scope-map create --name MyScopeMap --registry myregistry \ - --repository samples/hello-world \ - content/write content/read \ - --description "Sample scope map" -``` --Run [az acr token create][az-acr-token-create] to create a token, specifying the *MyScopeMap* scope map. As in the previous example, the command sets the default token status to `enabled`. --```azurecli -az acr token create --name MyToken \ - --registry myregistry \ - --scope-map MyScopeMap -``` --The output shows details about the token. By default, two passwords are generated. It's recommended to save the passwords in a safe place to use later for authentication. The passwords can't be retrieved again, but new ones can be generated. --> [!NOTE] -> To regenerate token passwords and expiration periods, see [Regenerate token passwords](#regenerate-token-passwords) later in this article. --### How to use scope maps to define and assign permissions for multiple repositories --A scope map allows for the use of a wildcard character to define and grant similar permissions for multiple repositories that share a common prefix. Repositories with specific permissions, repositories with a wildcard character can also be used in the same scope map. This provides flexibility in managing permissions for a multiple set of repositories in a single scope map. --Repository permissions can be created when a scope map is created and assigned to a token. Alternatively, a token can be created and directly assigned to a repository. 
--The following example creates a scope map with a wildcard character and then assigns it to a token. --```azurecli
-az acr scope-map create --name MyScopeMapWildcard --registry myregistry \
 - --repository samples/* \
 - content/write content/read \
 - --description "Sample scope map with wildcards"
-az acr token create --name MyTokenWildcard \
 - --registry myregistry \
 - --scope-map MyScopeMapWildcard
-```
--The following example creates a token with a wildcard directly. --```azurecli
-az acr token create --name MyTokenWildcard --registry myregistry \
 - --repository samples/* \
 - content/write content/read
-```
--The wildcard permissions are additive, which means that when a specific repository is accessed, the resulting permissions include the permissions for all the scope map rules that match the wildcard prefix. --In this example, the scope map defines permissions for three different types of repositories: -- |Repository |Permission | - ||| - |`sample/*` | `content/read` | - |`sample/teamA/*` | `content/write` | - |`sample/teamA/projectB` | `content/delete` | --The token is assigned a scope map to grant `[content/read, content/write, content/delete]` permissions for accessing repository `sample/teamA/projectB`. However, when the same token is used to access the `sample/teamA/projectC` repository, it only has `[content/read, content/write]` permissions. --> [!IMPORTANT] -> Wildcard repository entries in a scope map must end with a `/*` suffix and contain only a single wildcard character in the repository name. -> Here are some examples of invalid wildcards: -> -> * `sample/*/teamA`, with a wildcard in the middle of the repository name. -> * `sample/teamA*`, where the wildcard does not end with `/*`. -> * `sample/teamA/*/projectB/*`, with multiple wildcards in the repository name. --#### Root level wildcards --Wildcards can also be applied at the root level. This means that any permissions assigned to the repository defined as `*` apply registry-wide. --The following example creates a token with a root-level wildcard, giving the token `[content/read, content/write]` permissions to all repositories in the registry. This is a simple way to grant permissions to every repository in the registry without specifying each one individually. --```azurecli
-az acr token create --name MyTokenWildcard --registry myregistry \
 - --repository * \
 - content/write content/read
-```
--> [!IMPORTANT] -> If a wildcard rule encompasses a repository that does not exist yet, the wildcard rule's permissions still apply to that repository name. -> For example, consider a token assigned to a scope map that grants `[content/write, metadata/write]` permissions for `sample/*` repositories, and suppose the repository `sample/teamC/teamCimage` does not exist yet. -> The token can push images to `sample/teamC/teamCimage`, which creates the repository on a successful push. --## Create token - portal --You can use the Azure portal to create tokens and scope maps. As with the `az acr token create` CLI command, you can apply an existing scope map, or create a scope map when you create a token by specifying one or more repositories and associated actions. The repositories don't need to be in the registry yet. --The following example creates a token, and creates a scope map with the following permissions on the `samples/hello-world` repository: `content/write` and `content/read`. --1. 
In the portal, navigate to your container registry. -1. Under **Repository permissions**, select **Tokens > +Add**. -- :::image type="content" source="media/container-registry-repository-scoped-permissions/portal-token-add.png" alt-text="Create token in portal"::: -1. Enter a token name. -1. Under **Scope map**, select **Create new**. -1. Configure the scope map: - 1. Enter a name and description for the scope map. - 1. Under **Repositories**, enter `samples/hello-world`, and under **Permissions**, select `content/read` and `content/write`. Then select **+Add**. -- :::image type="content" source="media/container-registry-repository-scoped-permissions/portal-scope-map-add.png" alt-text="Create scope map in portal"::: -- 1. After adding repositories and permissions, select **Add** to add the scope map. -1. Accept the default token **Status** of **Enabled** and then select **Create**. --After the token is validated and created, token details appear in the **Tokens** screen. --### Add token password --To use a token created in the portal, you must generate a password. You can generate one or two passwords, and set an expiration date for each one. New passwords created for tokens are available immediately. Regenerating new passwords for tokens will take 60 seconds to replicate and be available. --1. In the portal, navigate to your container registry. -1. Under **Repository permissions**, select **Tokens**, and select a token. -1. In the token details, select **password1** or **password2**, and select the Generate icon. -1. In the password screen, optionally set an expiration date for the password, and select **Generate**. It's recommended to set an expiration date. -1. After generating a password, copy and save it to a safe location. You can't retrieve a generated password after closing the screen, but you can generate a new one. -- :::image type="content" source="media/container-registry-repository-scoped-permissions/portal-token-password.png" alt-text="Create token password in portal"::: --## Authenticate with token --When a user or service uses a token to authenticate with the target registry, it provides the token name as a user name and one of its generated passwords. --The authentication method depends on the configured action or actions associated with the token. --|Action |How to authenticate | - ||| - |`content/delete` | `az acr repository delete` in Azure CLI<br/><br/>Example: `az acr repository delete --name myregistry --repository myrepo --username MyToken --password xxxxxxxxxx`| - |`content/read` | `docker login`<br/><br/>`az acr login` in Azure CLI<br/><br/>Example: `az acr login --name myregistry --username MyToken --password xxxxxxxxxx` | - |`content/write` | `docker login`<br/><br/>`az acr login` in Azure CLI | - |`metadata/read` | `az acr repository show`<br/><br/>`az acr repository show-tags`<br/><br/>`az acr manifest list-metadata` in Azure CLI | - |`metadata/write` | `az acr repository untag`<br/><br/>`az acr repository update` in Azure CLI | --## Examples: Use token --The following examples use the token created earlier in this article to perform common operations on a repository: push and pull images, delete images, and list repository tags. The token was set up initially with push permissions (`content/write` and `content/read` actions) on the `samples/hello-world` repository. --### Pull and tag test images --For the following examples, pull public `hello-world` and `nginx` images from Microsoft Container Registry, and tag them for your registry and repository. 
--```bash -docker pull mcr.microsoft.com/hello-world -docker pull mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine -docker tag mcr.microsoft.com/hello-world myregistry.azurecr.io/samples/hello-world:v1 -docker tag mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine myregistry.azurecr.io/samples/nginx:v1 -``` --### Authenticate using token --Run `docker login` or `az acr login` to authenticate with the registry to push or pull images. Provide the token name as the user name, and provide one of its passwords. The token must have the `Enabled` status. --The following example is formatted for the bash shell, and provides the values using environment variables. --```bash -TOKEN_NAME=MyToken -TOKEN_PWD=<token password> --echo $TOKEN_PWD | docker login --username $TOKEN_NAME --password-stdin myregistry.azurecr.io -``` --Output should show successful authentication: --```console -Login Succeeded -``` --### Push images to registry --After successful login, attempt to push the tagged images to the registry. Because the token has permissions to push images to the `samples/hello-world` repository, the following push succeeds: --```bash -docker push myregistry.azurecr.io/samples/hello-world:v1 -``` --The token doesn't have permissions to the `samples/nginx` repo, so the following push attempt fails with an error similar to `requested access to the resource is denied`: --```bash -docker push myregistry.azurecr.io/samples/nginx:v1 -``` --### Update token permissions --To update the permissions of a token, update the permissions in the associated scope map. The updated scope map is applied immediately to all associated tokens. --For example, update `MyToken-scope-map` with `content/write` and `content/read` actions on the `samples/ngnx` repository, and remove the `content/write` action on the `samples/hello-world` repository. --To use the Azure CLI, run [az acr scope-map update][az-acr-scope-map-update] to update the scope map: --```azurecli -az acr scope-map update \ - --name MyScopeMap \ - --registry myregistry \ - --add-repository samples/nginx content/write content/read \ - --remove-repository samples/hello-world content/write -``` --In the Azure portal: --1. Navigate to your container registry. -1. Under **Repository permissions**, select **Scope maps**, and select the scope map to update. -1. Under **Repositories**, enter `samples/nginx`, and under **Permissions**, select `content/read` and `content/write`. Then select **+Add**. -1. Under **Repositories**, select `samples/hello-world` and under **Permissions**, deselect `content/write`. Then select **Save**. --After updating the scope map, the following push succeeds: --```bash -docker push myregistry.azurecr.io/samples/nginx:v1 -``` --Because the scope map only has the `content/read` permission on the `samples/hello-world` repository, a push attempt to the `samples/hello-world` repo now fails: - -```bash -docker push myregistry.azurecr.io/samples/hello-world:v1 -``` --Pulling images from both repos succeeds, because the scope map provides `content/read` permissions on both repositories: --```bash -docker pull myregistry.azurecr.io/samples/nginx:v1 -docker pull myregistry.azurecr.io/samples/hello-world:v1 -``` -### Delete images --Update the scope map by adding the `content/delete` action to the `nginx` repository. This action allows deletion of images in the repository, or deletion of the entire repository. 
--For brevity, we show only the [az acr scope-map update][az-acr-scope-map-update] command to update the scope map: --```azurecli -az acr scope-map update \ - --name MyScopeMap \ - --registry myregistry \ - --add-repository samples/nginx content/delete -``` --To update the scope map using the portal, see the [previous section](#update-token-permissions). --Use the following [az acr repository delete][az-acr-repository-delete] command to delete the `samples/nginx` repository. To delete images or repositories, pass the token's name and password to the command. The following example uses the environment variables created earlier in the article: --```azurecli -az acr repository delete \ - --name myregistry --repository samples/nginx \ - --username $TOKEN_NAME --password $TOKEN_PWD -``` --### Show repo tags --Update the scope map by adding the `metadata/read` action to the `samples/hello-world` repository. This action allows reading manifest and tag data in the repository. --For brevity, we show only the [az acr scope-map update][az-acr-scope-map-update] command to update the scope map: --```azurecli -az acr scope-map update \ - --name MyScopeMap \ - --registry myregistry \ - --add-repository samples/hello-world metadata/read -``` --To update the scope map using the portal, see the [previous section](#update-token-permissions). --To read metadata in the `samples/hello-world` repository, run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] or [az acr repository show-tags][az-acr-repository-show-tags] command. --To read metadata, pass the token's name and password to either command. The following example uses the environment variables created earlier in the article: --```azurecli -az acr repository show-tags \ - --name myregistry --repository samples/hello-world \ - --username $TOKEN_NAME --password $TOKEN_PWD -``` --Sample output: --```console -[ - "v1" -] -``` --## Manage tokens and scope maps --### List scope maps --Use the [az acr scope-map list][az-acr-scope-map-list] command, or the **Scope maps** screen in the portal, to list all the scope maps configured in a registry. For example: --```azurecli -az acr scope-map list \ - --registry myregistry --output table -``` --The output consists of the three system-defined scope maps and any scope maps you created. Tokens can be configured with any of these scope maps. --``` -NAME TYPE CREATION DATE DESCRIPTION -- - -- -_repositories_admin SystemDefined 2020-01-20T09:44:24Z Can perform all read, write and delete operations on the ... -_repositories_pull SystemDefined 2020-01-20T09:44:24Z Can pull any repository of the registry -_repositories_push SystemDefined 2020-01-20T09:44:24Z Can push to any repository of the registry -MyScopeMap UserDefined 2019-11-15T21:17:34Z Sample scope map -``` --### Show token details --To view the details of a token, such as its status and password expiration dates, run the [az acr token show][az-acr-token-show] command, or select the token in the **Tokens** screen in the portal. For example: --```azurecli -az acr token show \ - --name MyToken --registry myregistry -``` --### List tokens --Use the [az acr token list][az-acr-token-list] command, or the **Tokens** screen in the portal, to list all the tokens configured in a registry. 
For example: --```azurecli -az acr token list --registry myregistry --output table -``` --### Regenerate token passwords --If you didn't generate a token password, or you want to generate new passwords, run the [az acr token credential generate][az-acr-token-credential-generate] command. Regenerating new passwords for tokens will take 60 seconds to replicate and be available. --The following example generates a new value for password1 for the *MyToken* token, with an expiration period of 30 days. It stores the password in the environment variable `TOKEN_PWD`. This example is formatted for the bash shell. --```azurecli -TOKEN_PWD=$(az acr token credential generate \ - --name MyToken --registry myregistry --expiration-in-days 30 \ - --password1 --query 'passwords[0].value' --output tsv) -``` --To use the Azure portal to generate a token password, see the steps in [Create token - portal](#create-tokenportal) earlier in this article. --### Update token with new scope map --If you want to update a token with a different scope map, run [az acr token update][az-acr-token-update] and specify the new scope map. For example: --```azurecli -az acr token update --name MyToken --registry myregistry \ - --scope-map MyNewScopeMap -``` --In the portal, on the **Tokens** screen, select the token, and under **Scope map**, select a different scope map. --> [!TIP] -> After updating a token with a new scope map, you might want to generate new token passwords. Use the [az acr token credential generate][az-acr-token-credential-generate] command or regenerate a token password in the Azure portal. --## Disable or delete token --You might need to temporarily disable use of the token credentials for a user or service. --Using the Azure CLI, run the [az acr token update][az-acr-token-update] command to set the `status` to `disabled`: --```azurecli -az acr token update --name MyToken --registry myregistry \ - --status disabled -``` --In the portal, select the token in the **Tokens** screen, and select **Disabled** under **Status**. --To delete a token to permanently invalidate access by anyone using its credentials, run the [az acr token delete][az-acr-token-delete] command. --```azurecli -az acr token delete --name MyToken --registry myregistry -``` --In the portal, select the token in the **Tokens** screen, and select **Discard**. --## Next steps --* To manage scope maps and tokens, use additional commands in the [az acr scope-map][az-acr-scope-map] and [az acr token][az-acr-token] command groups. -* See the [authentication overview](container-registry-authentication.md) for other options to authenticate with an Azure container registry, including using a Microsoft Entra identity, a service principal, or an admin account. -* Learn about [connected registries](intro-connected-registry.md) and using tokens for [access](overview-connected-registry-access.md). 
--<!-- LINKS - External --> ---<!-- LINKS - Internal --> -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata -[az-acr-repository]: /cli/azure/acr/repository/ -[az-acr-repository-show-tags]: /cli/azure/acr/repository/#az_acr_repository_show_tags -[az-acr-repository-delete]: /cli/azure/acr/repository/#az_acr_repository_delete -[az-acr-scope-map]: /cli/azure/acr/scope-map/ -[az-acr-scope-map-create]: /cli/azure/acr/scope-map/#az_acr_scope_map_create -[az-acr-scope-map-show]: /cli/azure/acr/scope-map/#az_acr_scope_map_show -[az-acr-scope-map-list]: /cli/azure/acr/scope-map/#az_acr_scope_map_list -[az-acr-scope-map-update]: /cli/azure/acr/scope-map/#az_acr_scope_map_update -[az-acr-token]: /cli/azure/acr/token/ -[az-acr-token-show]: /cli/azure/acr/token/#az_acr_token_show -[az-acr-token-list]: /cli/azure/acr/token/#az_acr_token_list -[az-acr-token-delete]: /cli/azure/acr/token/#az_acr_token_delete -[az-acr-token-create]: /cli/azure/acr/token/#az_acr_token_create -[az-acr-token-update]: /cli/azure/acr/token/#az_acr_token_update -[az-acr-token-credential-generate]: /cli/azure/acr/token/credential/#az_acr_token_credential_generate |
container-registry | Container Registry Retention Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-retention-policy.md | - Title: Policy to retain untagged manifests -description: Learn how to enable a retention policy in your Premium Azure container registry, for automatic deletion of untagged manifests after a defined period. ---- Previously updated : 10/31/2023----# Set a retention policy for untagged manifests --Azure Container Registry gives you the option to set a *retention policy* for stored image manifests that don't have any associated tags (*untagged manifests*). When a retention policy is enabled, untagged manifests in the registry are automatically deleted after a number of days you set. This feature prevents the registry from filling up with artifacts that aren't needed and helps you save on storage costs. --You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --A retention policy for untagged manifests is currently a preview feature of **Premium** container registries. For information about registry service tiers, see [Azure Container Registry service tiers](container-registry-skus.md). --> [!WARNING] -> Set a retention policy with care--deleted image data is UNRECOVERABLE. If you have systems that pull images by manifest digest (as opposed to image name), you should not set a retention policy for untagged manifests. Deleting untagged images will prevent those systems from pulling the images from your registry. Instead of pulling by manifest, consider adopting a *unique tagging* scheme, a [recommended best practice](container-registry-image-tag-version.md). --## About the retention policy --Azure Container Registry does reference counting for manifests in the registry. When a manifest is untagged, it checks the retention policy. If a retention policy is enabled, a manifest delete operation is queued, with a specific date, according to the number of days set in the policy. --A separate queue management job constantly processes messages, scaling as needed. As an example, suppose you untagged two manifests, 1 hour apart, in a registry with a retention policy of 30 days. Two messages would be queued. Then, 30 days later, approximately 1 hour apart, the messages would be retrieved from the queue and processed, assuming the policy was still in effect. --If the `delete-enabled` attribute of an untagged manifest is set to `false`, the manifest is locked and is not deleted by the policy. --> [!IMPORTANT] -> The retention policy applies only to untagged manifests with timestamps *after* the policy is enabled. Untagged manifests in the registry with earlier timestamps aren't subject to the policy. For other options to delete image data, see examples in [Delete container images in Azure Container Registry](container-registry-delete.md). --## Set a retention policy - CLI --The following example shows you how to use the Azure CLI to set a retention policy for untagged manifests in a registry. --### Enable a retention policy --By default, no retention policy is set in a container registry. To set or update a retention policy, run the [az acr config retention update][az-acr-config-retention-update] command in the Azure CLI. 
You can specify a number of days between 0 and 365 to retain the untagged manifests. If you don't specify a number of days, the command sets a default of 7 days. After the retention period, all untagged manifests in the registry are automatically deleted. --The following example sets a retention policy of 30 days for untagged manifests in the registry *myregistry*: --```azurecli -az acr config retention update --registry myregistry --status enabled --days 30 --type UntaggedManifests -``` --The following example sets a policy to delete any manifest in the registry as soon as it's untagged. Create this policy by setting a retention period of 0 days. --```azurecli -az acr config retention update \ - --registry myregistry --status enabled \ - --days 0 --type UntaggedManifests -``` --### Validate a retention policy --If you enable the preceding policy with a retention period of 0 days, you can quickly verify that untagged manifests are deleted: --1. Push a test image `hello-world:latest` image to your registry, or substitute another test image of your choice. -1. Untag the `hello-world:latest` image, for example, using the [az acr repository untag][az-acr-repository-untag] command. The untagged manifest remains in the registry. - ```azurecli - az acr repository untag \ - --name myregistry --image hello-world:latest - ``` -1. Within a few seconds, the untagged manifest is deleted. You can verify the deletion by listing manifests in the repository, for example, using the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command. If the test image was the only one in the repository, the repository itself is deleted. --### Manage a retention policy --To show the retention policy set in a registry, run the [az acr config retention show][az-acr-config-retention-show] command: --```azurecli -az acr config retention show --registry myregistry -``` --To disable a retention policy in a registry, run the [az acr config retention update][az-acr-config-retention-update] command and set `--status disabled`: --```azurecli -az acr config retention update \ - --registry myregistry --status disabled \ - --type UntaggedManifests -``` --## Set a retention policy - portal --You can also set a registry's retention policy in the [Azure portal](https://portal.azure.com). --### Enable a retention policy --1. Navigate to your Azure container registry. Under **Policies**, select **Retention** (Preview). -1. In **Status**, select **Enabled**. -1. Select a number of days between 0 and 365 to retain the untagged manifests. Select **Save**. --![Enable a retention policy in Azure portal](media/container-registry-retention-policy/container-registry-retention-policy01.png) --### Disable a retention policy --1. Navigate to your Azure container registry. Under **Policies**, select **Retention** (Preview). -1. In **Status**, select **Disabled**. Select **Save**. 
--## Next steps --* Learn more about options to [delete images and repositories](container-registry-delete.md) in Azure Container Registry --* Learn how to [automatically purge](container-registry-auto-purge.md) selected images and manifests from a registry --* Learn more about options to [lock images and manifests](container-registry-image-lock.md) in a registry --<!-- LINKS - external --> -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ ---<!-- LINKS - internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-config-retention-update]: /cli/azure/acr/config/retention#az_acr_config_retention_update -[az-acr-config-retention-show]: /cli/azure/acr/config/retention#az_acr_config_retention_show -[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata -[az-acr-repository-untag]: /cli/azure/acr/repository#az_acr_repository_untag |
container-registry | Container Registry Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-roles.md | - Title: Registry roles and permissions -description: Use Azure role-based access control (Azure RBAC) and identity and access management (IAM) to provide fine-grained permissions to resources in an Azure container registry. --- Previously updated : 10/31/2023----# Azure Container Registry roles and permissions --The Azure Container Registry service supports a set of [built-in Azure roles](../role-based-access-control/built-in-roles.md) that provide different levels of permissions to an Azure container registry. Use [Azure role-based access control (Azure RBAC)](../role-based-access-control/index.yml) to assign specific permissions to users, service principals, or other identities that need to interact with a registry, for example to pull or push container images. You can also define [custom roles](#custom-roles) with fine-grained permissions to a registry for different operations. --| Role/Permission | [Access Resource Manager](#access-resource-manager) | [Create/delete registry](#create-and-delete-registry) | [Push image](#push-image) | [Pull image](#pull-image) | [Delete image data](#delete-image-data) | [Change policies](#change-policies) | [Sign images](#sign-images) | -| | | | | | | | | -| Owner | X | X | X | X | X | X | | -| Contributor | X | X | X | X | X | X | | -| Reader | X | | | X | | | | -| AcrPush | | | X | X | | | | -| AcrPull | | | | X | | | | -| AcrDelete | | | | | X | | | -| AcrImageSigner | | | | | | | X | --## Assign roles --See [Steps to add a role assignment](../role-based-access-control/role-assignments-steps.md) for high-level steps to add a role assignment to an existing user, group, service principal, or managed identity. You can use the Azure portal, Azure CLI, Azure PowerShell, or other Azure tools. --When creating a service principal, you also configure its access and permissions to Azure resources such as a container registry. For an example script using the Azure CLI, see [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md#create-a-service-principal). --## Differentiate users and services --Any time permissions are applied, a best practice is to provide the most limited set of permissions for a person, or service, to accomplish a task. The following permission sets represent a set of capabilities that may be used by humans and headless services. --### CI/CD solutions --When automating `docker build` commands from CI/CD solutions, you need `docker push` capabilities. For these headless service scenarios, we recommend assigning the **AcrPush** role. This role, unlike the broader **Contributor** role, prevents the account from performing other registry operations or accessing Azure Resource Manager. --### Container host nodes --Likewise, nodes running your containers need the **AcrPull** role, but shouldn't require **Reader** capabilities. --### Visual Studio Code Docker extension --For tools like the Visual Studio Code [Docker extension](https://code.visualstudio.com/docs/azure/docker), additional resource provider access is required to list the available Azure container registries. In this case, provide your users access to the **Reader** or **Contributor** role. These roles allow `docker pull`, `docker push`, `az acr list`, `az acr build`, and other capabilities. 
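For reference, here's a minimal sketch of assigning the **AcrPush** role to a CI/CD service principal with the Azure CLI. The registry name, resource group, and service principal ID are placeholder values; substitute your own.

```azurecli
# Look up the resource ID of the registry (placeholder registry and resource group names).
ACR_ID=$(az acr show --name myregistry --resource-group myResourceGroup --query id --output tsv)

# Grant the service principal permission to push and pull images, scoped to this registry only.
az role assignment create \
  --assignee <service-principal-app-id> \
  --role AcrPush \
  --scope $ACR_ID
```

Scoping the assignment to the registry's resource ID keeps the pipeline identity from acquiring permissions on other resources in the subscription.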
--## Access Resource Manager --### [Azure CLI](#tab/azure-cli) --Azure Resource Manager access is required for the Azure portal and registry management with the [Azure CLI](/cli/azure/). For example, to get a list of registries by using the `az acr list` command, you need this permission set. --### [Azure PowerShell](#tab/azure-powershell) --Azure Resource Manager access is required for the Azure portal and registry management with [Azure PowerShell](/powershell/azure/). For example, to get a list of registries by using the `Get-AzContainerRegistry` cmdlet, you need this permission set. ----## Create and delete registry --The ability to create and delete Azure container registries. --## Push image --The ability to `docker push` an image, or push another [supported artifact](container-registry-image-formats.md) such as a Helm chart, to a registry. Requires [authentication](container-registry-authentication.md) with the registry using the authorized identity. --## Pull image --The ability to `docker pull` a non-quarantined image, or pull another [supported artifact](container-registry-image-formats.md) such as a Helm chart, from a registry. Requires [authentication](container-registry-authentication.md) with the registry using the authorized identity. --## Delete image data --The ability to [delete container images](container-registry-delete.md), or delete other [supported artifacts](container-registry-image-formats.md) such as Helm charts, from a registry. --## Change policies --The ability to configure policies on a registry. Policies include image purging, enabling quarantine, and image signing. --## Sign images --The ability to sign images, usually assigned to an automated process, which would use a service principal. This permission is typically combined with [push image](#push-image) to allow pushing a trusted image to a registry. For details, see [Content trust in Azure Container Registry](container-registry-content-trust.md). --## Custom roles --As with other Azure resources, you can create [custom roles](../role-based-access-control/custom-roles.md) with fine-grained permissions to Azure Container Registry. Then assign the custom roles to users, service principals, or other identities that need to interact with a registry. --To determine which permissions to apply to a custom role, see the list of Microsoft.ContainerRegistry [actions](../role-based-access-control/resource-provider-operations.md#microsoftcontainerregistry), review the permitted actions of the [built-in ACR roles](../role-based-access-control/built-in-roles.md), or run the following command: --# [Azure CLI](#tab/azure-cli) --```azurecli -az provider operation show --namespace Microsoft.ContainerRegistry -``` --To define a custom role, see [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role). --> [!NOTE] -> In tenants configured with [Azure Resource Manager private link](../azure-resource-manager/management/create-private-link-access-portal.md), Azure Container Registry supports wildcard actions such as `Microsoft.ContainerRegistry/*/read` or `Microsoft.ContainerRegistry/registries/*/write` in custom roles, granting access to all matching actions. In a tenant without an ARM private link, specify all required registry actions individually in a custom role. 
--# [Azure PowerShell](#tab/azure-powershell) --```azurepowershell -Get-AzProviderOperation -OperationSearchString Microsoft.ContainerRegistry/* -``` --To define a custom role, see [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role). --> [!NOTE] -> In tenants configured with [Azure Resource Manager private link](../azure-resource-manager/management/create-private-link-access-portal.md), Azure Container Registry supports wildcard actions such as `Microsoft.ContainerRegistry/*/read` or `Microsoft.ContainerRegistry/registries/*/write` in custom roles, granting access to all matching actions. In a tenant without an ARM private link, specify all required registry actions individually in a custom role. ----### Example: Custom role to import images --For example, the following JSON defines the minimum actions for a custom role that permits [importing images](container-registry-import-images.md) to a registry. --```json -{ - "assignableScopes": [ - "/subscriptions/<optional, but you can limit the visibility to one or more subscriptions>" - ], - "description": "Can import images to registry", - "Name": "AcrImport", - "permissions": [ - { - "actions": [ - "Microsoft.ContainerRegistry/registries/push/write", - "Microsoft.ContainerRegistry/registries/pull/read", - "Microsoft.ContainerRegistry/registries/read", - "Microsoft.ContainerRegistry/registries/importImage/action" - ], - "dataActions": [], - "notActions": [], - "notDataActions": [] - } - ], - "roleType": "CustomRole" - } -``` --To create or update a custom role using the JSON description, use the [Azure CLI](../role-based-access-control/custom-roles-cli.md), [Azure Resource Manager template](../role-based-access-control/custom-roles-template.md), [Azure PowerShell](../role-based-access-control/custom-roles-powershell.md), or other Azure tools. Add or remove role assignments for a custom role in the same way that you manage role assignments for built-in Azure roles. --## Next steps --* Learn more about assigning Azure roles to an Azure identity by using the [Azure portal](../role-based-access-control/role-assignments-portal.yml), the [Azure CLI](../role-based-access-control/role-assignments-cli.md), [Azure PowerShell](../role-based-access-control/role-assignments-powershell.md), or other Azure tools. --* Learn about [authentication options](container-registry-authentication.md) for Azure Container Registry. --* Learn about enabling [repository-scoped permissions](container-registry-repository-scoped-permissions.md) in a container registry. |
container-registry | Container Registry Service Tag | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-service-tag.md | - Title: Service tags for Azure Container Registry -description: Learn about service tags for Azure Container Registry, which you can use to define network access controls for Azure resources. ---- Previously updated : 04/30/2024----# Service tags for Azure Container Registry --Service tags help set rules to allow or deny traffic to a specific Azure service. In Azure Container Registry, a service tag represents a group of IP address prefixes that can be used to access the service either globally or per Azure region. Azure Container Registry generates network traffic that originates from a service tag for features such as image import, webhooks, and Azure Container Registry tasks. --Microsoft manages the address prefixes that a service tag encompasses. Microsoft automatically updates a service tag as addresses change, to minimize the complexity of frequent updates to network security rules. --When you configure a firewall for a registry, Azure Container Registry serves the requests on the IP addresses for its service tags. For the scenarios mentioned in [Firewall access rules](container-registry-firewall-access-rules.md), you can configure the firewall outbound rule to allow access to Azure Container Registry IP addresses for service tags. --## Image import --Azure Container Registry sends requests to the external registry service through service-tag IP addresses to download images. If the external registry service runs behind a firewall, it requires an inbound rule to allow IP addresses for service tags. These IPs fall under the `AzureContainerRegistry` service tag, which includes the necessary IP ranges for importing images from public or Azure registries. --Azure ensures that these IP ranges are updated automatically. Establishing this security protocol is crucial for upholding the registry's integrity and ensuring its availability. --To configure network security rules and allow traffic from the `AzureContainerRegistry` service tag for image import in Azure Container Registry, see [About registry endpoints](container-registry-firewall-access-rules.md#about-registry-endpoints). For detailed steps and guidance on how to use the service tag during image import, see [Import container images to a container registry](container-registry-import-images.md). --## Webhooks --In Azure Container Registry, you use service tags to manage network traffic for features like webhooks to ensure that only trusted sources can trigger these events. When you set up a webhook in Azure Container Registry, it can respond to events at the registry level or be scoped down to a specific repository tag. For geo-replicated registries, you configure each webhook to respond to events in a specific regional replica. --The endpoint for a webhook must be publicly accessible from the registry. You can configure registry webhook requests to authenticate to a secured endpoint. --Azure Container Registry sends the request to the configured webhook endpoint through the IP addresses for service tags. If the webhook endpoint runs behind a firewall, it requires an inbound rule to allow these IP addresses. To help secure the webhook endpoint access, you must also configure the proper authentication to validate the request. --For detailed steps on creating a webhook setup, refer to the [Azure Container Registry documentation](container-registry-webhook.md). 
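As an illustration, if the webhook endpoint runs behind a network security group, an inbound rule that allows traffic from the service tag might look like the following Azure CLI sketch. The resource group, NSG name, rule name, priority, and port are placeholder values for this example.

```azurecli
# Allow inbound webhook requests that originate from Azure Container Registry IP ranges.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myWebhookNsg \
  --name AllowAcrWebhooks \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureContainerRegistry \
  --destination-port-ranges 443
```

You can also use a regional form of the tag, such as `AzureContainerRegistry.WestUS2`, to limit the rule to traffic from a specific region.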
--## Azure Container Registry tasks --When you're using Azure Container Registry tasks, such as when you're building container images or automating workflows, the service tag represents the group of IP address prefixes that Azure Container Registry uses. --During the execution of tasks, Azure Container Registry sends requests to external resources through the IP addresses for service tags. If an external resource runs behind a firewall, it requires an inbound rule to allow these IP addresses. Applying these inbound rules is a common practice to help ensure security and proper access management in cloud environments. --To learn more about Azure Container Registry tasks, see [Automate container image builds and maintenance with Azure Container Registry tasks](container-registry-tasks-overview.md). To learn how to use a service tag to set up firewall access rules for Azure Container Registry tasks, see [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md). --## Best practices --* Configure and customize network security rules to allow traffic from the `AzureContainerRegistry` service tag for features like image import, webhooks, and Azure Container Registry tasks, such as port numbers and protocols. --* Set up firewall rules to permit traffic solely from IP ranges that are associated with Azure Container Registry service tags for each feature. --* Detect and prevent unauthorized traffic that doesn't originate from Azure Container Registry IP addresses for service tags. --* Monitor network traffic continuously and review security configurations periodically to address unexpected traffic for each Azure Container Registry feature by using [Azure Monitor](/azure/azure-monitor/overview) or [Network Watcher](/azure/network-watcher/frequently-asked-questions). |
container-registry | Container Registry Skus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-skus.md | - Title: Registry service tiers and features -description: Learn about the features and limits (quotas) in the Basic, Standard, and Premium service tiers (SKUs) of Azure Container Registry. --- Previously updated : 10/31/2023----# Azure Container Registry service tiers --Azure Container Registry is available in multiple service tiers (also known as SKUs). These tiers provide predictable pricing and several options for aligning to the capacity and usage patterns of your private Docker registry in Azure. --| Tier | Description | -| | -- | -| **Basic** | A cost-optimized entry point for developers learning about Azure Container Registry. Basic registries have the same programmatic capabilities as Standard and Premium (such as Microsoft Entra [authentication integration](container-registry-authentication.md#individual-login-with-azure-ad), [image deletion][container-registry-delete], and [webhooks][container-registry-webhook]). However, the included storage and image throughput are most appropriate for lower usage scenarios. | -| **Standard** | Standard registries offer the same capabilities as Basic, with increased included storage and image throughput. Standard registries should satisfy the needs of most production scenarios. | -| **Premium** | Premium registries provide the highest amount of included storage and concurrent operations, enabling high-volume scenarios. In addition to higher image throughput, Premium adds features such as [geo-replication][container-registry-geo-replication] for managing a single registry across multiple regions, [content trust](container-registry-content-trust.md) for image tag signing, [private link with private endpoints](container-registry-private-link.md) to restrict access to the registry. | --The Basic, Standard, and Premium tiers all provide the same programmatic capabilities. They also all benefit from [image storage][container-registry-storage] managed entirely by Azure. Choosing a higher-level tier provides more performance and scale. With multiple service tiers, you can get started with Basic, then convert to Standard and Premium as your registry usage increases. --For example : --- If you purchase a Basic tier registry, it includes a storage of 10 GB. The price you pay here is $0.167 per day. Prices are calculated based on US dollars.-- If you have a Basic tier registry and use 25 GB storage, you are paying $0.003/day*15 = $0.045 per day for the additional 15 GB.-- So, the pricing for the Basic ACR with 25 GB storage is $0.167+$0.045= 0.212 USD per day with other related charges like networking, builds, etc, according to the [Pricing - Container Registry.](https://azure.microsoft.com/pricing/details/container-registry/)---## Service tier features and limits --The following table details the features and registry limits of the Basic, Standard, and Premium service tiers. ---## Registry throughput and throttling --### Throughput --When generating a high rate of registry operations, use the service tier's limits for read and write operations and bandwidth as a guide for expected maximum throughput. These limits affect data-plane operations including listing, deleting, pushing, and pulling images and other artifacts. 
--To estimate the throughput of image pulls and pushes specifically, consider the registry limits and these factors: --* Number and size of image layers -* Reuse of layers or base images across images -* additional API calls that might be required for each pull or push --For details, see documentation for the [Docker HTTP API V2](https://docs.docker.com/registry/spec/api/). --When evaluating or troubleshooting registry throughput, also consider the configuration of your client environment: --* your Docker daemon configuration for concurrent operations -* your network connection to the registry's data endpoint (or endpoints, if your registry is [geo-replicated](container-registry-geo-replication.md)). --If you experience issues with throughput to your registry, see [Troubleshoot registry performance](container-registry-troubleshoot-performance.md). --#### Example --Pushing a single 133 MB `nginx:latest` image to an Azure container registry requires multiple read and write operations for the image's five layers: --* Read operations to read the image manifest, if it exists in the registry -* Write operations to write the configuration blob of the image -* Write operations to write the image manifest --### Throttling --You may experience throttling of pull or push operations when the registry determines the rate of requests exceeds the limits allowed for the registry's service tier. You may see an HTTP 429 error similar to `Too many requests`. --Throttling could occur temporarily when you generate a burst of image pull or push operations in a very short period, even when the average rate of read and write operations is within registry limits. You may need to implement retry logic with some backoff in your code or reduce the maximum rate of requests to the registry. --## Show registry usage --Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command in the Azure CLI, [Get-AzContainerRegistryUsage](/powershell/module/az.containerregistry/get-azcontainerregistryusage) in Azure PowerShell, or the [List Usages](/rest/api/containerregistry/registries/list-usages) REST API, to get a snapshot of your registry's current consumption of storage and other resources, compared with the limits for that registry's service tier. Storage usage also appears on the registry's **Overview** page in the portal. --Usage information helps you make decisions about [changing the service tier](#changing-tiers) when your registry nears a limit. This information also helps you [manage consumption](container-registry-best-practices.md#manage-registry-size). --> [!NOTE] -> The registry's storage usage should only be used as a guide and may not reflect recent registry operations. Monitor the registry's [StorageUsed metric](monitor-service-reference.md#container-registry-metrics) for up-to-date data. --Depending on your registry's service tier, usage information includes some or all of the following, along with the limit in that tier: --* Storage consumed in bytes<sup>1</sup> -* Number of [webhooks](container-registry-webhook.md) -* Number of [geo-replications](container-registry-geo-replication.md) (includes the home replica) -* Number of [private endpoints](container-registry-private-link.md) -* Number of [IP access rules](container-registry-access-selected-networks.md) -* Number of [virtual network rules](container-registry-vnet.md) --<sup>1</sup>In a geo-replicated registry, storage usage is shown for the home region. Multiply by the number of replications for total storage consumed. 
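For example, a quick way to compare current consumption with your tier's limits from the Azure CLI (the registry name and resource group are placeholders):

```azurecli
# Show current resource consumption for the registry, formatted as a table.
az acr show-usage --resource-group myResourceGroup --name myregistry --output table
```

The table output lists each tracked resource, such as storage and webhooks, alongside the limit for the registry's service tier.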
--## Changing tiers --You can change a registry's service tier with the Azure CLI or in the Azure portal. You can move freely between tiers as long as the tier you're switching to has the required maximum storage capacity. --There is no registry downtime or impact on registry operations when you move between service tiers. --### Azure CLI --To move between service tiers in the Azure CLI, use the [az acr update][az-acr-update] command. For example, to switch to Premium: --```azurecli -az acr update --name myContainerRegistry --sku Premium -``` --### Azure PowerShell --To move between service tiers in Azure PowerShell, use the [Update-AzContainerRegistry][update-azcontainerregistry] cmdlet. For example, to switch to Premium: --```azurepowershell -Update-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myContainerRegistry -Sku Premium -``` --### Azure portal --In the container registry **Overview** in the Azure portal, select **Update**, then select a new **SKU** from the SKU drop-down. --![Update container registry SKU in Azure portal][update-registry-sku] --## Pricing --For pricing information on each of the Azure Container Registry service tiers, see [Container Registry pricing][container-registry-pricing]. --For details about pricing for data transfers, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/). --## Next steps --**Azure Container Registry Roadmap** --Visit the [ACR Roadmap][acr-roadmap] on GitHub to find information about upcoming features in the service. --**Azure Container Registry UserVoice** --Submit and vote on new feature suggestions in [ACR UserVoice][container-registry-uservoice]. --<!-- IMAGES --> -[update-registry-sku]: ./media/container-registry-skus/update-registry-sku.png --<!-- LINKS - External --> -[acr-roadmap]: https://aka.ms/acr/roadmap -[container-registry-pricing]: https://azure.microsoft.com/pricing/details/container-registry/ -[container-registry-uservoice]: https://feedback.azure.com/d365community/forum/180a533d-0d25-ec11-b6e6-000d3a4f0858 --<!-- LINKS - Internal --> -[az-acr-update]: /cli/azure/acr#az_acr_update -[update-azcontainerregistry]: /powershell/module/az.containerregistry/update-azcontainerregistry -[container-registry-geo-replication]: container-registry-geo-replication.md -[container-registry-storage]: container-registry-storage.md -[container-registry-delete]: container-registry-delete.md -[container-registry-webhook]: container-registry-webhook.md |
container-registry | Container Registry Soft Delete Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md | - Title: "Recover deleted artifacts with soft delete policy in Azure Container Registry (Preview)" -description: Learn how to enable the soft delete policy in Azure Container Registry to manage and recover the accidentally deleted artifacts as soft deleted artifacts with a set retention period. ---- Previously updated : 01/22/2024----# Recover deleted artifacts with soft delete policy in Azure Container Registry (Preview) --Azure Container Registry (ACR) allows you to enable the *soft delete policy* to recover any accidentally deleted artifacts for a set retention period. -----## Aspects of soft delete policy --The soft delete policy can be enabled/disabled at any time. Once you enable the soft-delete policy in ACR, it manages the deleted artifacts as soft deleted artifacts with a set retention period. Thereby you have ability to list, filter, and restore the soft deleted artifacts. --### Retention period --The default retention period for soft deleted artifacts is seven days, but itΓÇÖs possible to set the retention period value between one to 90 days. You can set, update, and change the retention policy value. The soft deleted artifacts expire once the retention period is complete. --### Autopurge --The autopurge runs every 24 hours and always considers the current value of retention days before permanently deleting the soft deleted artifacts. For example, after five days of soft deleting the artifact, if you change the value of retention days from seven to 14 days, the artifact will only expire after 14 days from the initial soft delete. --## Availability and pricing information --This feature is available in all the service tiers (also known as SKUs). For information about registry service tiers, see [Azure Container Registry service tiers](container-registry-skus.md). --> [!NOTE] ->The soft deleted artifacts are billed as per active sku pricing for storage. --## Preview limitations --> [!IMPORTANT] -> The soft delete policy is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --* ACR currently doesn't support manually purging soft deleted artifacts. -* The soft delete policy doesn't support a geo-replicated registry. -* ACR doesn't allow enabling both the retention policy and the soft delete policy. See [retention policy for untagged manifests.](container-registry-retention-policy.md) ---## Prerequisites --* The user requires following permissions (at registry level) to perform soft delete operations: --| Permission | Description | -| - | -- | -| Microsoft.ContainerRegistry/registries/deleted/read | List soft-deleted artifacts | -| Microsoft.ContainerRegistry/registries/deleted/restore/action | Restore soft-deleted artifact | --* You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` for the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --* Sign in to the [Azure portal](https://ms.portal.azure.com/). --## Enable soft delete policy for registry - CLI --1. 
Update soft delete policy for a given `MyRegistry` ACR with a retention period set between 1 to 90 days. -- ```azurecli-interactive - az acr config soft-delete update -r MyRegistry --days 7 --status <enabled/disabled> - ``` --2. Show configured soft delete policy for a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr config soft-delete show -r MyRegistry - ``` --### List the soft deleted artifacts- CLI --The `az acr repository list-deleted` commands enable fetching and listing of the soft deleted repositories. For more information use `--help`. --1. List the soft deleted repositories in a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr repository list-deleted -n MyRegistry - ``` --The `az acr manifest list-deleted` commands enable fetching and listing of the soft delete manifests. --2. List the soft deleted manifests of a `hello-world` repository in a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr manifest list-deleted -r MyRegistry -n hello-world - ``` --The `az acr manifest list-deleted-tags` commands enable fetching and listing of the soft delete tags. --3. List the soft delete tags of a `hello-world` repository in a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr manifest list-deleted-tags -r MyRegistry -n hello-world - ``` --4. Filter the soft delete tags of a `hello-world` repository to match tag `latest` in a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr manifest list-deleted-tags -r MyRegistry -n hello-world:latest - ``` --### Restore the soft deleted artifacts - CLI --The `az acr manifest restore` commands restore a single image by tag and digest. --1. Restore the image of a `hello-world` repository by tag `latest`and digest `sha256:abc123` in a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr manifest restore -r MyRegistry -n hello-world:latest -d sha256:abc123 - ``` --2. Restore the most recently deleted manifest of a `hello-world` repository by tag `latest` in a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr manifest restore -r MyRegistry -n hello-world:latest - ``` --Force restore overwrites the existing tag with the same name in the repository. If the soft delete policy is enabled during force restore. The overwritten tag is soft deleted. You can force restore with specific arguments `--force, -f`. --3. Force restore the image of a `hello-world` repository by tag `latest`and digest `sha256:abc123` in a given `MyRegistry` ACR. -- ```azurecli-interactive - az acr manifest restore -r MyRegistry -n hello-world:latest -d sha256:abc123 -f - ``` --> [!IMPORTANT] -> Restoring a [manifest list](push-multi-architecture-images.md#manifest-list) won't recursively restore any underlying soft deleted manifests. -> If you're restoring soft deleted [ORAS artifacts](container-registry-manage-artifact.md), then restoring a subject doesn't recursively restore the referrer chain. Also, the subject has to be restored first, only then a referrer manifest is allowed to restore. Otherwise it throws an error. --## Enable soft delete policy for registry - Portal --You can also enable a registry's soft delete policy in the [Azure portal](https://portal.azure.com). --1. Navigate to your Azure Container Registry. -2. In the **Overview tab**, verify the status of the **Soft Delete** (Preview). -3. If the **Status** is **Disabled**, Select **Update**. -------4. Select the checkbox to **Enable Soft Delete**. -5. Select the number of days between `0` and `90` days for retaining the soft deleted artifacts. -6. 
Select **Save** to save your changes. -------### Restore the soft deleted artifacts - Portal --1. Navigate to your Azure Container Registry. -2. In the **Menu** section, Select **Services**, and Select **Repositories**. -3. In the **Repositories**, Select your preferred **Repository**. -4. Select on the **Manage deleted artifacts** to see all the soft deleted artifacts. --> [!NOTE] -> Once you enable the soft delete policy and perform actions such as untag a manifest or delete an artifact, You will be able to find these tags and artifacts in the Managed delete artifacts before the number of retention days expire. -------5. Filter the deleted artifact you have to restore. -6. Select the artifact, and select on the **Restore** in the right column. -7. A **Restore Artifact** window pops up. -------8. Select the tag to restore, here you have an option to choose, and recover any additional tags. -9. Select on **Restore**. -------### Restore from soft deleted repositories - Portal --1. Navigate to your Azure Container Registry. -2. In the **Menu** section, Select **Services**, -3. In the **Services** tab, Select **Repositories**. -4. In the **Repositories** tab, select on **Manage Deleted Repositories**. -------5. Filter the deleted repository in the **Soft Deleted Repositories**(Preview). -------6. Select the deleted repository, filter the deleted artifact from on the **Manage deleted artifacts**. -7. Select the artifact, and select on the **Restore** in the right column. -8. A **Restore Artifact** window pops up. -------9. Select the tag to restore, here you have an option to choose, and recover any other tags. -10. Select on **Restore**. -------> [!IMPORTANT] -> Importing a soft deleted image at both source and target resources is blocked. -> Pushing an image to the soft deleted repository will restore the soft deleted repository. -> Pushing an image that shares a same manifest digest with the soft deleted image is not allowed. Instead restore the soft deleted image. --## Next steps --* Learn more about options to [delete images and repositories](container-registry-delete.md) in Azure Container Registry. |
container-registry | Container Registry Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-storage.md | - Title: Container image storage -description: Details on how your container images and other artifacts are stored in Azure Container Registry, including security, redundancy, and capacity. --- Previously updated : 10/31/2023-----# Container image storage in Azure Container Registry --Every [Basic, Standard, and Premium](container-registry-skus.md) Azure container registry benefits from advanced Azure storage features including encryption-at-rest. The following sections describe the features and limits of image storage in Azure Container Registry (ACR). --## Encryption-at-rest --All container images and other artifacts in your registry are encrypted at rest. Azure automatically encrypts an image before storing it, and decrypts it on-the-fly when you or your applications and services pull the image. Optionally apply an extra encryption layer with a [customer-managed key](tutorial-enable-customer-managed-keys.md). --## Regional storage --Azure Container Registry stores data in the region where the registry is created, to help customers meet data residency and compliance requirements. In all regions except Brazil South and Southeast Asia, Azure may also store registry data in a paired region in the same geography. In the Brazil South and Southeast Asia regions, registry data is always confined to the region, to accommodate data residency requirements for those regions. --If a regional outage occurs, the registry data may become unavailable and is not automatically recovered. Customers who wish to have their registry data stored in multiple regions for better performance across different geographies or who wish to have resiliency in the event of a regional outage should enable [geo-replication](container-registry-geo-replication.md). --## Geo-replication --For scenarios requiring high-availability assurance, consider using the [geo-replication](container-registry-geo-replication.md) feature of Premium registries. Geo-replication helps guard against losing access to your registry in the event of a regional failure. Geo-replication provides other benefits, too, like network-close image storage for faster pushes and pulls in distributed development or deployment scenarios. --## Zone redundancy --To help create a resilient and high-availability Azure container registry, optionally enable [zone redundancy](zone-redundancy.md) in select Azure regions. A feature of the Premium service tier, zone redundancy uses Azure [availability zones](../availability-zones/az-overview.md) to replicate your registry to a minimum of three separate zones in each enabled region. Combine geo-replication and zone redundancy to enhance both the reliability and performance of a registry. --## Scalable storage --Azure Container Registry allows you to create as many repositories, images, layers, or tags as you need, up to the [registry storage limit](container-registry-skus.md#service-tier-features-and-limits). --High numbers of repositories and tags can affect the performance of your registry. Periodically delete unused repositories, tags, and images as part of your registry maintenance routine, and optionally set a [retention policy](container-registry-retention-policy.md) for untagged manifests. Deleted registry resources such as repositories, images, and tags *cannot* be recovered after deletion. 
For more information about deleting registry resources, see [Delete container images in Azure Container Registry](container-registry-delete.md). --## Storage cost --For full details about pricing, see [Azure Container Registry pricing][pricing]. --## Next steps --For more information about Basic, Standard, and Premium container registries, see [Azure Container Registry service tiers](container-registry-skus.md). --<!-- IMAGES --> --<!-- LINKS - External --> -[pricing]: https://aka.ms/acr/pricing --<!-- LINKS - Internal --> |
container-registry | Container Registry Support Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-support-policies.md | - Title: Azure Container Registry technical support policies -description: Learn about Azure Container Registry (ACR) technical support policies --- Previously updated : 10/31/2023--#Customer intent: As a developer, I want to understand what ACR components I need to manage, and what components are managed by Microsoft. ---# Support policies for Azure Container Registry (ACR) --This article provides details about Azure Container Registry (ACR) support policies, supported features, and limitations. --## Features supported by Azure Container Registry -->* [Connect to ACR using Azure private link](container-registry-private-link.md) ->* [Push and pull Helm charts to ACR](container-registry-helm-repos.md) ->* [Encrypt using Customer managed keys](tutorial-enable-customer-managed-keys.md) ->* [Enable Content trust](container-registry-content-trust.md) ->* [Scan Images using Azure Security Center](/azure/defender-for-cloud/defender-for-container-registries-introduction) ->* [ACR Tasks](./container-registry-tasks-overview.md) ->* [Import container images to ACR](container-registry-import-images.md) ->* [Image locking in ACR](container-registry-image-lock.md) ->* [Synchronize content with ACR using Connected Registry](intro-connected-registry.md) ->* [Geo replication in ACR](container-registry-geo-replication.md) --## Microsoft/ACR can't extend support --* Any local network issues that interrupt the connection to the ACR service. -* Vulnerabilities or issues caused by running third-party container images using ACR Tasks. -* Vulnerabilities or bugs with images in the ACR customer store. --## Microsoft/ACR extends support --* General queries about the supported features of ACR. -* Unable to pull an image due to authentication errors, image size, or client-side issues with the container runtime. -* Unable to push an image to ACR due to authentication errors, image size, or client-side issues with the container runtime. -* Unable to add a VNET/subnet to the ACR firewall across subscriptions. -* Issues with slow push/pull operations due to the client, network, or ACR. -* Issues with integration of ACR with Azure Kubernetes Service (AKS) or with any other Azure service. -* Authentication issues in ACR, authentication errors in integration, and repository-based access roles (RBAC). --## Shared responsibility --* Issues with slow push/pull operations caused by a slow-performing client VM, network, or ACR. Here, customers have to provide the time range, image name, and configuration settings. -* Issues with integration of ACR with any other Azure service. Here, customers have to provide the details of the client used to build the image, push it to ACR, and pull it. For example, the customer uses a DevOps pipeline to build the image and push it to ACR. --## Customers have to self-support - -* Microsoft/ACR can't make any changes if a base image vulnerability is detected in the security center (Microsoft Defender for Cloud). Customers can reach out for guidance. -* Microsoft/ACR can't make any changes to a Dockerfile. Customers have to identify and review it from their end. 
--| ACR support | Link | -| - | -- | -| Create a support ticket | https://aka.ms/acr/support/create-ticket | -| Service updates and releases | [ACR Blog](https://azure.microsoft.com/blog/tag/azure-container-registry/) | -| Roadmap | https://aka.ms/acr/roadmap | -| FAQ | https://aka.ms/acr/faq | -| Audit Logs | https://aka.ms/acr/audit-logs | -| Health-Check-CLI | https://aka.ms/acr/health-check | -| ACR Links | https://aka.ms/acr/links | -### API and SDK reference -->* [SDK for Python](https://pypi.org/project/azure-mgmt-containerregistry/) ->* [SDK for .NET](https://www.nuget.org/packages/Azure.Containers.ContainerRegistry) ->* [REST API Reference](/rest/api/containerregistry/) --## Upstream bugs --The ACR support team identifies the root cause of every issue raised and reports identified bugs as an [issue in the ACR repository](https://github.com/Azure/acr/issues) with supporting details. The engineering team reviews each issue and provides a workaround, a bug fix, or an upgrade with a timeline for a new release. Bug fixes are integrated from upstream. -Customers can watch the issues, add more details, follow bug fixes, and track new releases. |
container-registry | Container Registry Task Run Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-task-run-template.md | - Title: Quick task run with template -description: Queue an ACR task run to build an image using an Azure Resource Manager template ---- Previously updated : 10/31/2023----# Run ACR Tasks using Resource Manager templates --[ACR Tasks](container-registry-tasks-overview.md) is a suite of features within Azure Container Registry to help you manage and modify container images across the container lifecycle. --This article shows Azure Resource Manager template examples to queue a quick task run, similar to one you can create manually using the [az acr build][az-acr-build] command. --A Resource Manager template to queue a task run is useful in automation scenarios and extends the functionality of `az acr build`. For example: --* Use a template to create a container registry and immediately queue a task run to build and push a container image -* Create or enable additional resources you can use in a quick task run, such as a managed identity for Azure resources --## Limitations --* You must specify a remote context such as a GitHub repo as the [source location](container-registry-tasks-overview.md#context-locations) for your task run. You can't use a local source context. -* For task runs using a managed identity, only a *user-assigned* managed identity is permitted. --## Prerequisites --* **GitHub account** - Create an account on https://github.com if you don't already have one. -* **Fork sample repository** - For the task examples shown here, use the GitHub UI to fork the following sample repository into your GitHub account: https://github.com/Azure-Samples/acr-build-helloworld-node. This repo contains sample Dockerfiles and source code to build small container images. --## Example: Create registry and queue task run --This example uses a [sample template](https://github.com/Azure/acr/tree/master/docs/tasks/run-as-deployment/quickdockerbuild) to create a container registry and queue a task run that builds and pushes an image. --### Template parameters --For this example, provide values for the following template parameters: --|Parameter |Value | -||| -|registryName |Unique name of registry that's created | -|repository |Target repository for build task | -|taskRunName |Name of task run, which specifies image tag | -|sourceLocation |Remote context for the build task, for example, https://github.com/Azure-Samples/acr-build-helloworld-node. The Dockerfile in the repo root builds a container image for a small Node.js web app. If desired, use your fork of the repo as the build context. | --### Deploy the template --Deploy the template with the [az deployment group create][az-deployment-group-create] command. This example builds and pushes the *helloworld-node:testrun* image to a registry named *mycontainerregistry*. ---```azurecli -az deployment group create \ - --resource-group myResourceGroup \ - --template-uri https://raw.githubusercontent.com/Azure/acr/master/docs/tasks/run-as-deployment/quickdockerbuild/azuredeploy.json \ - --parameters \ - registryName=mycontainerregistry \ - repository=helloworld-node \ - taskRunName=testrun \ - sourceLocation=https://github.com/Azure-Samples/acr-build-helloworld-node.git#main - ``` --The previous command passes the parameters on the command line. If desired, pass them in a [parameters file](../azure-resource-manager/templates/parameter-files.md). 
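For reference, a minimal parameters file for this deployment might look like the following sketch (the file name and values are illustrative and mirror the command-line parameters shown above):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "registryName": { "value": "mycontainerregistry" },
    "repository": { "value": "helloworld-node" },
    "taskRunName": { "value": "testrun" },
    "sourceLocation": { "value": "https://github.com/Azure-Samples/acr-build-helloworld-node.git#main" }
  }
}
```

You could then replace the individual `--parameters` values in the command above with `--parameters @azuredeploy.parameters.json`.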
--### Verify deployment --After the deployment completes successfully, verify the image is built by running [az acr repository show-tags][az-acr-repository-show-tags]: --```azurecli -az acr repository show-tags \ - --name mycontainerregistry \ - --repository helloworld-node --output table -``` --Output: --```console -Result -testrun -``` --### View run log --To view details about the task run, view the run log. --First, get the run ID with [az acr task list-runs][az-acr-task-list-runs] -```azurecli -az acr task list-runs \ - --registry mycontainerregistry --output table -``` --Output is similar to: --```console -RUN ID TASK PLATFORM STATUS TRIGGER STARTED DURATION - -- --ca1 linux Succeeded Manual 2020-03-23T17:54:28Z 00:00:48 -``` --Run [az acr task logs][az-acr-task-logs] to view task run logs for the run ID, in this case *ca1*: --```azurecli -az acr task logs \ - --registry mycontainerregistry \ - --run-id ca1 -``` --The output shows the task run log. --You can also view the task run log in the Azure portal. --1. Navigate to your container registry -2. Under **Services**, select **Tasks** > **Runs**. -3. Select the run ID, in this case *ca1*. --The portal shows the task run log. --## Example: Task run with managed identity --Use a [sample template](https://github.com/Azure/acr/tree/master/docs/tasks/run-as-deployment/quickdockerbuildwithidentity) to queue a task run that enables a user-assigned managed identity. During the task run, the identity authenticates to pull an image from another Azure container registry. --This scenario is similar to [Cross-registry authentication in an ACR task using an Azure-managed identity](container-registry-tasks-cross-registry-authentication.md). For example, an organization might maintain a centralized registry with base images accessed by multiple development teams. --### Prepare base registry --For demonstration purposes, create a separate container registry as your base registry, and push a Node.js base image pulled from Docker Hub. --1. Create a second container registry, for example *mybaseregistry*, to store base images. -1. Pull the `node:9-alpine` image from Docker Hub, tag it for your base registry, and push it to the base registry: -- ```azurecli - docker pull node:9-alpine - docker tag node:9-alpine mybaseregistry.azurecr.io/baseimages/node:9-alpine - az acr login -n mybaseregistry - docker push mybaseregistry.azurecr.io/baseimages/node:9-alpine - ``` --### Create new Dockerfile --Create a Dockerfile that pulls the base image from your base registry. Perform the following steps in your local fork of the GitHub repo, for example, `https://github.com/myGitHubID/acr-build-helloworld-node.git`. --1. In the GitHub UI, select **Create new file**. -1. Name your file *Dockerfile-test* and paste the following contents. Substitute your registry name for *mybaseregistry*. - ``` - FROM mybaseregistry.azurecr.io/baseimages/node:9-alpine - COPY . /src - RUN cd /src && npm install - EXPOSE 80 - CMD ["node", "/src/server.js"] - ``` - 1. Select **Commit new file**. ---### Give identity pull permissions to the base registry --Give the managed identity permissions to pull from the base registry, *mybaseregistry*. --Use the [az acr show][az-acr-show] command to get the resource ID of the base registry and store it in a variable: --```azurecli -baseregID=$(az acr show \ - --name mybaseregistry \ - --query id --output tsv) -``` --Use the [az role assignment create][az-role-assignment-create] command to assign the identity the Acrpull role to the base registry. 
This role has permissions only to pull images from the registry. --```azurecli -az role assignment create \ - --assignee $principalID \ - --scope $baseregID \ - --role acrpull -``` --### Template parameters --For this example, provide values for the following template parameters: --|Parameter |Value | -||| -|registryName |Name of registry where image is built | -|repository |Target repository for build task | -|taskRunName |Name of task run, which specifies image tag | -|userAssignedIdentity |Resource ID of user-assigned identity enabled in the task| -|customRegistryIdentity | Client ID of user-assigned identity enabled in the task, used to authenticate with custom registry | -|customRegistry |Login server name of the custom registry accessed in the task, for example, *mybaseregistry.azurecr.io*| -|sourceLocation |Remote context for the build task, for example, *https://github.com/\<your-GitHub-ID\>/acr-build-helloworld-node.* | -|dockerFilePath | Path to the Dockerfile at the remote context, used to build the image. | --### Deploy the template --Deploy the template with the [az deployment group create][az-deployment-group-create] command. This example builds and pushes the *helloworld-node:testrun* image to a registry named *mycontainerregistry*. The base image is pulled from *mybaseregistry.azurecr.io*. --```azurecli -az deployment group create \ - --resource-group myResourceGroup \ - --template-uri https://raw.githubusercontent.com/Azure/acr/master/docs/tasks/run-as-deployment/quickdockerbuildwithidentity/azuredeploy.json \ - --parameters \ - registryName=mycontainerregistry \ - repository=helloworld-node \ - taskRunName=basetask \ - userAssignedIdentity=$resourceID \ - customRegistryIdentity=$clientID \ - sourceLocation=https://github.com/<your-GitHub-ID>/acr-build-helloworld-node.git#main \ - dockerFilePath=Dockerfile-test \ - customRegistry=mybaseregistry.azurecr.io -``` --The previous command passes the parameters on the command line. If desired, pass them in a [parameters file](../azure-resource-manager/templates/parameter-files.md). --### Verify deployment --After the deployment completes successfully, verify the image is built by running [az acr repository show-tags][az-acr-repository-show-tags]: --```azurecli -az acr repository show-tags \ - --name mycontainerregistry \ - --repository helloworld-node --output table -``` --Output: --```console -Result -basetask -``` --### View run log --To view the run log, see steps in the [preceding section](#view-run-log). --## Next steps -- * See more template examples in the [ACR GitHub repo](https://github.com/Azure/acr/tree/master/docs/tasks/run-as-deployment). - * For details about template properties, see the template reference for [Task runs](/azure/templates/microsoft.containerregistry/2019-06-01-preview/registries/taskruns) and [Tasks](/azure/templates/microsoft.containerregistry/2019-06-01-preview/registries/tasks). 
---<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-task-logs]: /cli/azure/acr/task#az_acr_task_logs -[az-acr-repository-show-tags]: /cli/azure/acr/repository#az_acr_repository_show_tags -[az-acr-task-list-runs]: /cli/azure/acr/task#az_acr_task_list_runs -[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create -[az-identity-create]: /cli/azure/identity#az_identity_create -[az-identity-show]: /cli/azure/identity#az_identity_show -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create |
container-registry | Container Registry Tasks Authentication Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-key-vault.md | - Title: External authentication from ACR task -description: Configure an Azure Container Registry Task (ACR Task) to read Docker Hub credentials stored in an Azure key vault, by using a managed identity for Azure resources. ---- Previously updated : 10/31/2023----# External authentication in an ACR task using an Azure-managed identity --In an [ACR task](container-registry-tasks-overview.md), you can [enable a managed identity for Azure resources](container-registry-tasks-authentication-managed-identity.md). The task can use the identity to access other Azure resources, without needing to provide or manage credentials. --In this article, you learn how to enable a managed identity in a task that accesses secrets stored in an Azure key vault. --To create the Azure resources, this article requires that you run the Azure CLI version 2.0.68 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --## Scenario overview --The example task reads Docker Hub credentials stored in an Azure key vault. The credentials are for a Docker Hub account with write (push) permissions to a private Docker Hub repository. To read the credentials, you configure the task with a managed identity and assign appropriate permissions to it. The task associated with the identity builds an image, and signs into Docker Hub to push the image to the private repo. --This example shows steps using either a user-assigned or system-assigned managed identity. Your choice of identity depends on your organization's needs. --In a real-world scenario, a company might publish images to a private repo in Docker Hub as part of a build process. --## Prerequisites --You need an Azure container registry in which you run the task. In this article, this registry is named *myregistry*. Replace with your own registry name in later steps. --If you don't already have an Azure container registry, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md). You don't need to push images to the registry yet. --You also need a private repository in Docker Hub, and a Docker Hub account with permissions to write to the repo. In this example, this repo is named *hubuser/hubrepo*. --## Create a key vault and store secrets --First, if you need to, create a resource group named *myResourceGroup* in the *eastus* location with the following [az group create][az-group-create] command: --```azurecli-interactive -az group create --name myResourceGroup --location eastus -``` --Use the [az keyvault create][az-keyvault-create] command to create a key vault. Be sure to specify a unique key vault name. --```azurecli-interactive -az keyvault create --name mykeyvault --resource-group myResourceGroup --location eastus -``` --Store the required Docker Hub credentials in the key vault using the [az keyvault secret set][az-keyvault-secret-set] command. 
In these commands, the values are passed in environment variables: --```azurecli -# Store Docker Hub user name -az keyvault secret set \ - --name UserName \ - --value $USERNAME \ - --vault-name mykeyvault --# Store Docker Hub password -az keyvault secret set \ - --name Password \ - --value $PASSWORD \ - --vault-name mykeyvault -``` --In a real-world scenario, secrets would likely be set and maintained in a separate process. --## Define task steps in YAML file --The steps for this example task are defined in a [YAML file](container-registry-tasks-reference-yaml.md). Create a file named `dockerhubtask.yaml` in a local working directory and paste the following contents. Be sure to replace the key vault name in the file with the name of your key vault. --```yml -version: v1.1.0 -# Replace mykeyvault with the name of your key vault -secrets: - - id: username - keyvault: https://mykeyvault.vault.azure.net/secrets/UserName - - id: password - keyvault: https://mykeyvault.vault.azure.net/secrets/Password -steps: -# Log in to Docker Hub - - cmd: bash echo '{{.Secrets.password}}' | docker login --username '{{.Secrets.username}}' --password-stdin -# Build image - - build: -t {{.Values.PrivateRepo}}:$ID https://github.com/Azure-Samples/acr-tasks.git -f hello-world.dockerfile -# Push image to private repo in Docker Hub - - push: - - {{.Values.PrivateRepo}}:$ID -``` --The task steps do the following: --* Manage secret credentials to authenticate with Docker Hub. -* Authenticate with Docker Hub by passing the secrets to the `docker login` command. -* Build an image using a sample Dockerfile in the [Azure-Samples/acr-tasks](https://github.com/Azure-Samples/acr-tasks.git) repo. -* Push the image to the private Docker Hub repository. ---## Option 1: Create task with user-assigned identity --The steps in this section create a task and enable a user-assigned identity. If you want to enable a system-assigned identity instead, see [Option 2: Create task with system-assigned identity](#option-2-create-task-with-system-assigned-identity). ---### Create task --Create the task *dockerhubtask* by executing the following [az acr task create][az-acr-task-create] command. The task runs without a source code context, and the command references the file `dockerhubtask.yaml` in the working directory. The `--assign-identity` parameter passes the resource ID of the user-assigned identity. --```azurecli -az acr task create \ - --name dockerhubtask \ - --registry myregistry \ - --context \ - --file dockerhubtask.yaml \ - --assign-identity $resourceID -``` ----### Grant identity access to key vault --Run the following [az keyvault set-policy][az-keyvault-set-policy] command to set an access policy on the key vault. The following example allows the identity to read secrets from the key vault. --```azurecli -az keyvault set-policy --name mykeyvault \ - --resource-group myResourceGroup \ - --object-id $principalID \ - --secret-permissions get -``` --Proceed to [Manually run the task](#manually-run-the-task). --## Option 2: Create task with system-assigned identity --The steps in this section create a task and enable a system-assigned identity. If you want to enable a user-assigned identity instead, see [Option 1: Create task with user-assigned identity](#option-1-create-task-with-user-assigned-identity). --### Create task --Create the task *dockerhubtask* by executing the following [az acr task create][az-acr-task-create] command. 
The task runs without a source code context, and the command references the file `dockerhubtask.yaml` in the working directory. The `--assign-identity` parameter with no value enables the system-assigned identity on the task. --```azurecli -az acr task create \ - --name dockerhubtask \ - --registry myregistry \ - --context \ - --file dockerhubtask.yaml \ - --assign-identity -``` ---### Grant identity access to key vault --Run the following [az keyvault set-policy][az-keyvault-set-policy] command to set an access policy on the key vault. The following example allows the identity to read secrets from the key vault. --```azurecli -az keyvault set-policy --name mykeyvault \ - --resource-group myResourceGroup \ - --object-id $principalID \ - --secret-permissions get -``` --## Manually run the task --To verify that the task in which you enabled a managed identity runs successfully, manually trigger the task with the [az acr task run][az-acr-task-run] command. The `--set` parameter is used to pass the private repo name to the task. In this example, the placeholder repo name is *hubuser/hubrepo*. --```azurecli -az acr task run --name dockerhubtask --registry myregistry --set PrivateRepo=hubuser/hubrepo -``` --When the task runs successfully, output shows successful authentication to Docker Hub, and the image is successfully built and pushed to the private repo: --```console -Queued a run with ID: cf24 -Waiting for an agent... -2019/06/20 18:05:55 Using acb_vol_b1edae11-30de-4f2b-a9c7-7d743e811101 as the home volume -2019/06/20 18:05:58 Creating Docker network: acb_default_network, driver: 'bridge' -2019/06/20 18:05:58 Successfully set up Docker network: acb_default_network -2019/06/20 18:05:58 Setting up Docker configuration... -2019/06/20 18:05:59 Successfully set up Docker configuration -2019/06/20 18:05:59 Logging in to registry: myregistry.azurecr.io -2019/06/20 18:06:00 Successfully logged into myregistry.azurecr.io -2019/06/20 18:06:00 Executing step ID: acb_step_0. Timeout(sec): 600, Working directory: '', Network: 'acb_default_network' -2019/06/20 18:06:00 Launching container with name: acb_step_0 -[...] -Login Succeeded -2019/06/20 18:06:02 Successfully executed container: acb_step_0 -2019/06/20 18:06:02 Executing step ID: acb_step_1. Timeout(sec): 600, Working directory: '', Network: 'acb_default_network' -2019/06/20 18:06:02 Scanning for dependencies... -2019/06/20 18:06:04 Successfully scanned dependencies -2019/06/20 18:06:04 Launching container with name: acb_step_1 -Sending build context to Docker daemon 129kB -[...] -2019/06/20 18:06:07 Successfully pushed image: hubuser/hubrepo:cf24 -2019/06/20 18:06:07 Step ID: acb_step_0 marked as successful (elapsed time in seconds: 2.064353) -2019/06/20 18:06:07 Step ID: acb_step_1 marked as successful (elapsed time in seconds: 2.594061) -2019/06/20 18:06:07 Populating digests for step ID: acb_step_1... 
-2019/06/20 18:06:09 Successfully populated digests for step ID: acb_step_1 -2019/06/20 18:06:09 Step ID: acb_step_2 marked as successful (elapsed time in seconds: 2.743923) -2019/06/20 18:06:09 The following dependencies were found: -2019/06/20 18:06:09 -- image:- registry: registry.hub.docker.com - repository: hubuser/hubrepo - tag: cf24 - digest: sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a - runtime-dependency: - registry: registry.hub.docker.com - repository: library/hello-world - tag: latest - digest: sha256:0e11c388b664df8a27a901dce21eb89f11d8292f7fca1b3e3c4321bf7897bffe - git: - git-head-revision: b0ffa6043dd893a4c75644c5fed384c82ebb5f9e --Run ID: cf24 was successful after 15s -``` --To confirm the image is pushed, check for the tag (`cf24` in this example) in the private Docker Hub repo. --## Next steps --* Learn more about [enabling a managed identity in an ACR task](container-registry-tasks-authentication-managed-identity.md). -* See the [ACR Tasks YAML reference](container-registry-tasks-reference-yaml.md) ---<!-- LINKS - Internal --> -[az-login]: /cli/azure/reference-index#az_login -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-repository-show-tags]: /cli/azure/acr/repository#az_acr_repository_show_tags -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-identity-create]: /cli/azure/identity#az_identity_create -[az-identity-show]: /cli/azure/identity#az_identity_show -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-show]: /cli/azure/acr/task#az_acr_task_show -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-task-list-runs]: /cli/azure/acr/task#az_acr_task_list_runs -[az-acr-task-credential-add]: /cli/azure/acr/task/credential#az_acr_task_credential_add -[az-group-create]: /cli/azure/group?#az_group_create -[az-keyvault-create]: /cli/azure/keyvault?#az_keyvault_create -[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set -[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy |
container-registry | Container Registry Tasks Authentication Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-managed-identity.md | - Title: Managed identity in ACR task -description: Enable a managed identity for Azure Resources in an Azure Container Registry task to allow the task to access other Azure resources including other private container registries. ------ Previously updated : 10/31/2023---# Use an Azure-managed identity in ACR Tasks --Enable a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) in an [ACR task](container-registry-tasks-overview.md), so the task can access other Azure resources, without needing to provide or manage credentials. For example, use a managed identity to enable a task step to pull or push container images to another registry. --In this article, you learn how to use the Azure CLI to enable a user-assigned or system-assigned managed identity on an ACR task. You can use the Azure Cloud Shell or a local installation of the Azure CLI. If you'd like to use it locally, version 2.0.68 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. --For illustration purposes, the example commands in this article use [az acr task create][az-acr-task-create] to create a basic image build task that enables a managed identity. For sample scenarios to access secured resources from an ACR task using a managed identity, see: --* [Cross-registry authentication](container-registry-tasks-cross-registry-authentication.md) -* [Access external resources with secrets stored in Azure Key Vault](container-registry-tasks-authentication-key-vault.md) --## Why use a managed identity? --A managed identity for Azure resources provides selected Azure services with an automatically managed identity in Microsoft Entra ID. You can configure an ACR task with a managed identity so that the task can access other secured Azure resources, without passing credentials in the task steps. --Managed identities are of two types: --* *User-assigned identities*, which you can assign to multiple resources and persist for as long as you want. User-assigned identities are currently in preview. --* A *system-assigned identity*, which is unique to a specific resource such as an ACR task and lasts for the lifetime of that resource. --You can enable either or both types of identity in an ACR task. Grant the identity access to another resource, just like any security principal. When the task runs, it uses the identity to access the resource in any task steps that require access. --## Steps to use a managed identity --Follow these high-level steps to use a managed identity with an ACR task. --### 1. (Optional) Create a user-assigned identity --If you plan to use a user-assigned identity, use an existing identity, or create the identity using the Azure CLI or other Azure tools. For example, use the [az identity create][az-identity-create] command. --If you plan to use only a system-assigned identity, skip this step. You create a system-assigned identity when you create the ACR task. --### 2. Enable identity on an ACR task --When you create an ACR task, optionally enable a user-assigned identity, a system-assigned identity, or both. For example, pass the `--assign-identity` parameter when you run the [az acr task create][az-acr-task-create] command in the Azure CLI. 
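If you plan to use a user-assigned identity, you need its resource ID before you can pass it to `--assign-identity`. As a minimal sketch (the identity and resource group names are illustrative), you might create the identity and capture the ID to substitute for `<resourceID>` in the later command:

```azurecli
# Create a user-assigned identity (skip this if you're reusing an existing one)
az identity create \
  --resource-group myResourceGroup \
  --name myACRTasksIdentity

# Capture the identity's resource ID for use with --assign-identity
resourceID=$(az identity show \
  --resource-group myResourceGroup \
  --name myACRTasksIdentity \
  --query id --output tsv)
```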
--To enable a system-assigned identity, pass `--assign-identity` with no value or `--assign-identity [system]`. The following example command creates a Linux task from a public GitHub repository which builds the `hello-world` image and enables a system-assigned managed identity: --```azurecli -az acr task create \ - --image hello-world:{{.Run.ID}} \ - --name hello-world --registry MyRegistry \ - --context https://github.com/Azure-Samples/acr-build-helloworld-node.git#main \ - --file Dockerfile \ - --commit-trigger-enabled false \ - --assign-identity -``` --To enable a user-assigned identity, pass `--assign-identity` with a value of the *resource ID* of the identity. The following example command creates a Linux task from a public GitHub repository which builds the `hello-world` image and enables a user-assigned managed identity: --```azurecli -az acr task create \ - --image hello-world:{{.Run.ID}} \ - --name hello-world --registry MyRegistry \ - --context https://github.com/Azure-Samples/acr-build-helloworld-node.git#main \ - --file Dockerfile \ - --commit-trigger-enabled false \ - --assign-identity <resourceID> -``` --You can get the resource ID of the identity by running the [az identity show][az-identity-show] command. The resource ID for the identity *myUserAssignedIdentity* in resource group *myResourceGroup* is of the form: --``` -"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myUserAssignedIdentity" -``` --### 3. Grant the identity permissions to access other Azure resources --Depending on the requirements of your task, grant the identity permissions to access other Azure resources. Examples include: --* Assign the managed identity a role with pull, push and pull, or other permissions to a target container registry in Azure. For a complete list of registry roles, see [Azure Container Registry roles and permissions](container-registry-roles.md). -* Assign the managed identity a role to read secrets in an Azure key vault. --Use the [Azure CLI](../role-based-access-control/role-assignments-cli.md) or other Azure tools to manage role-based access to resources. For example, run the [az role assignment create][az-role-assignment-create] command to assign the identity a role to the resource. --The following example assigns a managed identity the permissions to pull from a container registry. The command specifies the *principal ID* of the task identity and the *resource ID* of the target registry. ---```azurecli -az role assignment create \ - --assignee <principalID> \ - --scope <registryID> \ - --role acrpull -``` --### 4. (Optional) Add credentials to the task --If your task needs credentials to pull or push images to another custom registry, or to access other resources, add credentials to the task. Run the [az acr task credential add][az-acr-task-credential-add] command to add credentials, and pass the `--use-identity` parameter to indicate that the identity can access the credentials. --For example, to add credentials for a system-assigned identity to authenticate with the Azure container registry *targetregistry*, pass `--use-identity [system]`: --```azurecli -az acr task credential add \ - --name helloworld \ - --registry myregistry \ - --login-server targetregistry.azurecr.io \ - --use-identity [system] -``` --To add credentials for a user-assigned identity to authenticate with the registry *targetregistry*, pass `--use-identity` with a value of the *client ID* of the identity. 
For example: --```azurecli -az acr task credential add \ - --name helloworld \ - --registry myregistry \ - --login-server targetregistry.azurecr.io \ - --use-identity <clientID> -``` --You can get the client ID of the identity by running the [az identity show][az-identity-show] command. The client ID is a GUID of the form `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. --The `--use-identity` parameter isn't optional if the registry has public network access disabled and relies only on certain trusted services to run ACR tasks. See the [example of ACR Tasks](allow-access-trusted-services.md#example-acr-tasks) as a trusted service. --### 5. Run the task --After configuring a task with a managed identity, run the task. For example, to test one of the tasks created in this article, manually trigger it using the [az acr task run][az-acr-task-run] command. If you configured additional automated task triggers, the task also runs when those triggers fire. --## Next steps --In this article, you learned how to enable and use a user-assigned or system-assigned managed identity on an ACR task. For scenarios to access secured resources from an ACR task using a managed identity, see: --* [Cross-registry authentication](container-registry-tasks-cross-registry-authentication.md) -* [Access external resources with secrets stored in Azure Key Vault](container-registry-tasks-authentication-key-vault.md) ---<!-- LINKS - Internal --> -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create -[az-identity-create]: /cli/azure/identity#az_identity_create -[az-identity-show]: /cli/azure/identity#az_identity_show -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-task-credential-add]: /cli/azure/acr/task/credential#az_acr_task_credential_add -[azure-cli-install]: /cli/azure/install-azure-cli |
container-registry | Container Registry Tasks Base Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-base-images.md | - Title: Base image updates - Tasks -description: Learn about base images for application container images, and about how a base image update can trigger an Azure Container Registry task. --- Previously updated : 10/31/2023----# About base image updates for ACR Tasks --This article provides background information about updates to an application's base image and how these updates can trigger an Azure Container Registry task. --## What are base images? --Dockerfiles defining most container images specify a parent image from which the image is based, often referred to as its *base image*. Base images typically contain the operating system, for example [Alpine Linux][base-alpine] or Windows Nano Server, on which the rest of the container's layers are applied. They might also include application frameworks such as [Node.js][base-node] or [.NET Core][base-dotnet]. These base images are themselves typically based on public upstream images. Several of your application images might share a common base image. --A base image is often updated by the image maintainer to include new features or improvements to the OS or framework in the image. Security patches are another common cause for a base image update. When these upstream updates occur, you must also update your base images to include the critical fix. Each application image must then also be rebuilt to include these upstream fixes now included in your base image. --In some cases, such as a private development team, a base image might specify more than OS or framework. For example, a base image could be a shared service component image that needs to be tracked. Members of a team might need to track this base image for testing, or need to regularly update the image when developing application images. --## Maintain copies of base images --For any content in your registries that depends on base content maintained in a public registry such as Docker Hub, we recommend that you copy the content to an Azure container registry or another private registry. Then, ensure that you build your application images by referencing the private base images. Azure Container Registry provides an [image import](container-registry-import-images.md) capability to easily copy content from public registries or other Azure container registries. The next section describes using ACR Tasks to track base image updates when building application updates. You can track base image updates in your own Azure container registries and optionally in upstream public registries. --## Track base image updates --ACR Tasks includes the ability to automatically build images for you when a container's base image is updated. You can use this ability to maintain and update copies of public base images in your Azure container registries, and then to rebuild application images that depend on base images. --ACR Tasks dynamically discovers base image dependencies when it builds a container image. As a result, it can detect when an application image's base image is updated. With one pre-configured build task, ACR Tasks can automatically rebuild every application image that references the base image. With this automatic detection and rebuilding, ACR Tasks saves you the time and effort normally required to manually track and update each and every application image referencing your updated base image. 
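As an illustrative sketch (registry, repository, and tag names are placeholders), an application Dockerfile that references a Node.js base image maintained in a private registry might look like the following; when ACR Tasks builds the image, it discovers the dependency declared in the `FROM` line:

```
FROM myregistry.azurecr.io/baseimages/node:15-alpine
COPY . /src
RUN cd /src && npm install
EXPOSE 80
CMD ["node", "/src/server.js"]
```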
--## Base image locations --For image builds from a Dockerfile, an ACR task detects dependencies on base images in the following locations: --* The same Azure container registry where the task runs -* Another private Azure container registry in the same or a different region -* A public repo in Docker Hub -* A public repo in Microsoft Container Registry --If the base image specified in the `FROM` statement resides in one of these locations, the ACR task adds a hook to ensure the image is rebuilt anytime its base is updated. --## Base image notifications --The time between when a base image is updated and when the dependent task is triggered depends on the base image location: --* **Base images from a public repo in Docker Hub or MCR** - For base images in public repositories, an ACR task checks for image updates at a random interval of between 10 and 60 minutes. Dependent tasks are run accordingly. -* **Base images from an Azure container registry** - For base images in Azure container registries, an ACR task immediately triggers a run when its base image is updated. The base image may be in the same ACR where the task runs or in a different ACR in any region. --## Additional considerations --* **Base images for application images** - Currently, an ACR task only tracks base image updates for application (*runtime*) images. It doesn't track base image updates for intermediate (*buildtime*) images used in multi-stage Dockerfiles. --* **Enabled by default** - When you create an ACR task with the [az acr task create][az-acr-task-create] command, by default the task is *enabled* for trigger by a base image update. That is, the `base-image-trigger-enabled` property is set to True. If you want to disable this behavior in a task, update the property to False. For example, run the following [az acr task update][az-acr-task-update] command: -- ```azurecli - az acr task update --registry myregistry --name mytask --base-image-trigger-enabled False - ``` --* **Trigger to track dependencies** - To enable an ACR task to determine and track a container image's dependencies -- which include its base image -- you must first trigger the task to build the image **at least once**. For example, trigger the task manually using the [az acr task run][az-acr-task-run] command. --* **Stable tag for base image** - To trigger a task on base image update, the base image must have a *stable* tag, such as `node:9-alpine`. This tagging is typical for a base image that is updated with OS and framework patches to a latest stable release. If the base image is updated with a new version tag, it does not trigger a task. For more information about image tagging, see the [best practices guidance](container-registry-image-tag-version.md). --* **Other task triggers** - In a task triggered by base image updates, you can also enable triggers based on [source code commit](container-registry-tutorial-build-task.md) or [a schedule](container-registry-tasks-scheduled.md). A base image update can also trigger a [multi-step task](container-registry-tasks-multi-step.md). 
--## Next steps --See the following tutorials for scenarios to automate application image builds after a base image is updated: --* [Automate container image builds when a base image is updated in the same registry](container-registry-tutorial-base-image-update.md) --* [Automate container image builds when a base image is updated in a different registry](container-registry-tutorial-private-base-image-update.md) ---<!-- LINKS - External --> -[base-alpine]: https://hub.docker.com/_/alpine/ -[base-dotnet]: https://hub.docker.com/_/microsoft-dotnet -[base-node]: https://hub.docker.com/_/node/ -[sample-archive]: https://github.com/Azure-Samples/acr-build-helloworld-node/archive/master.zip -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-build]: /cli/azure/acr#az-acr-build -[az-acr-pack-build]: /cli/azure/acr/pack#az-acr-pack-build -[az-acr-task]: /cli/azure/acr/task -[az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create -[az-acr-task-run]: /cli/azure/acr/task#az-acr-task-run -[az-acr-task-update]: /cli/azure/acr/task#az-acr-task-update -[az-login]: /cli/azure/reference-index#az-login -[az-login-service-principal]: /cli/azure/authenticate-azure-cli --<!-- IMAGES --> -[quick-build-01-fork]: ./media/container-registry-tutorial-quick-build/quick-build-01-fork.png -[quick-build-02-browser]: ./media/container-registry-tutorial-quick-build/quick-build-02-browser.png |
container-registry | Container Registry Tasks Cross Registry Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-cross-registry-authentication.md | - Title: Cross-registry authentication from ACR task -description: Configure an Azure Container Registry Task (ACR Task) to access another private Azure container registry by using a managed identity for Azure resources ---- Previously updated : 10/31/2023----# Cross-registry authentication in an ACR task using an Azure-managed identity --In an [ACR task](container-registry-tasks-overview.md), you can [enable a managed identity for Azure resources](container-registry-tasks-authentication-managed-identity.md). The task can use the identity to access other Azure resources, without needing to provide or manage credentials. --In this article, you learn how to enable a managed identity in a task to pull an image from a registry different from the one used to run the task. --To create the Azure resources, this article requires that you run the Azure CLI version 2.0.68 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --## Scenario overview --The example task pulls a base image from another Azure container registry to build and push an application image. To pull the base image, you configure the task with a managed identity and assign appropriate permissions to it. --This example shows steps using either a user-assigned or system-assigned managed identity. Your choice of identity depends on your organization's needs. --In a real-world scenario, an organization might maintain a set of base images used by all development teams to build their applications. These base images are stored in a corporate registry, with each development team having only pull rights. --## Prerequisites --For this article, you need two Azure container registries: --* You use the first registry to create and execute ACR tasks. In this article, this registry is named *myregistry*. -* The second registry hosts a base image used for the task to build an image. In this article, the second registry is named *mybaseregistry*. --Replace with your own registry names in later steps. --If you don't already have the needed Azure container registries, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md). You don't need to push images to the registry yet. --## Prepare base registry --For demonstration purposes, as a one-time operation, run [az acr import][az-acr-import] to import a public Node.js image from Docker Hub to your base registry. In practice, another team or process in the organization might maintain images in the base registry. --```azurecli -az acr import --name mybaseregistry \ - --source docker.io/library/node:15-alpine \ - --image baseimages/node:15-alpine -``` --## Define task steps in YAML file --The steps for this example [multi-step task](container-registry-tasks-multi-step.md) are defined in a [YAML file](container-registry-tasks-reference-yaml.md). Create a file named `helloworldtask.yaml` in your local working directory and paste the following contents. Update the value of `REGISTRY_NAME` in the build step with the server name of your base registry. 
--```yml -version: v1.1.0 -steps: -# Replace mybaseregistry with the name of your registry containing the base image - - build: -t $Registry/hello-world:$ID https://github.com/Azure-Samples/acr-build-helloworld-node.git#main -f Dockerfile-app --build-arg REGISTRY_NAME=mybaseregistry.azurecr.io - - push: ["$Registry/hello-world:$ID"] -``` --The build step uses the `Dockerfile-app` file in the [Azure-Samples/acr-build-helloworld-node](https://github.com/Azure-Samples/acr-build-helloworld-node.git) repo to build an image. The `--build-arg` references the base registry to pull the base image. When successfully built, the image is pushed to the registry used to run the task. --## Option 1: Create task with user-assigned identity --The steps in this section create a task and enable a user-assigned identity. If you want to enable a system-assigned identity instead, see [Option 2: Create task with system-assigned identity](#option-2-create-task-with-system-assigned-identity). ---### Create task --Create the task *helloworldtask* by executing the following [az acr task create][az-acr-task-create] command. The task runs without a source code context, and the command references the file `helloworldtask.yaml` in the working directory. The `--assign-identity` parameter passes the resource ID of the user-assigned identity. --```azurecli -az acr task create \ - --registry myregistry \ - --name helloworldtask \ - --context \ - --file helloworldtask.yaml \ - --assign-identity $resourceID -``` ---### Give identity pull permissions to the base registry --In this section, give the managed identity permissions to pull from the base registry, *mybaseregistry*. --Use the [az acr show][az-acr-show] command to get the resource ID of the base registry and store it in a variable: --```azurecli -baseregID=$(az acr show --name mybaseregistry --query id --output tsv) -``` --Use the [az role assignment create][az-role-assignment-create] command to assign the identity the `acrpull` role to the base registry. This role has permissions only to pull images from the registry. --```azurecli -az role assignment create \ - --assignee $principalID \ - --scope $baseregID \ - --role acrpull -``` --Proceed to [Add target registry credentials to task](#add-target-registry-credentials-to-task). --## Option 2: Create task with system-assigned identity --The steps in this section create a task and enable a system-assigned identity. If you want to enable a user-assigned identity instead, see [Option 1: Create task with user-assigned identity](#option-1-create-task-with-user-assigned-identity). --### Create task --Create the task *helloworldtask* by executing the following [az acr task create][az-acr-task-create] command. The task runs without a source code context, and the command references the file `helloworldtask.yaml` in the working directory. The `--assign-identity` parameter with no value enables the system-assigned identity on the task. --```azurecli -az acr task create \ - --registry myregistry \ - --name helloworldtask \ - --context \ - --file helloworldtask.yaml \ - --assign-identity -``` --### Give identity pull permissions to the base registry --In this section, give the managed identity permissions to pull from the base registry, *mybaseregistry*. 
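The role assignment that follows uses a `$principalID` variable for the task's system-assigned identity. One way to populate it, sketched here on the assumption that the task resource exposes its identity through `identity.principalId`, is to query the task with [az acr task show][az-acr-task-show]:

```azurecli
# Get the principal ID of the task's system-assigned identity
principalID=$(az acr task show \
  --name helloworldtask \
  --registry myregistry \
  --query identity.principalId --output tsv)
```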
--Use the [az acr show][az-acr-show] command to get the resource ID of the base registry and store it in a variable: --```azurecli -baseregID=$(az acr show --name mybaseregistry --query id --output tsv) -``` --Use the [az role assignment create][az-role-assignment-create] command to assign the identity the `acrpull` role to the base registry. This role has permissions only to pull images from the registry. --```azurecli -az role assignment create \ - --assignee $principalID \ - --scope $baseregID \ - --role acrpull -``` --## Add target registry credentials to task --Now use the [az acr task credential add][az-acr-task-credential-add] command to enable the task to authenticate with the base registry using the identity's credentials. Run the command corresponding to the type of managed identity you enabled in the task. If you enabled a user-assigned identity, pass `--use-identity` with the client ID of the identity. If you enabled a system-assigned identity, pass `--use-identity [system]`. --```azurecli -# Add credentials for user-assigned identity to the task -az acr task credential add \ - --name helloworldtask \ - --registry myregistry \ - --login-server mybaseregistry.azurecr.io \ - --use-identity $clientID --# Add credentials for system-assigned identity to the task -az acr task credential add \ - --name helloworldtask \ - --registry myregistry \ - --login-server mybaseregistry.azurecr.io \ - --use-identity [system] -``` --## Manually run the task --To verify that the task in which you enabled a managed identity runs successfully, manually trigger the task with the [az acr task run][az-acr-task-run] command. --```azurecli -az acr task run \ - --name helloworldtask \ - --registry myregistry -``` --If the task runs successfully, output is similar to: --``` -Queued a run with ID: cf10 -Waiting for an agent... -2019/06/14 22:47:32 Using acb_vol_dbfbe232-fd76-4ca3-bd4a-687e84cb4ce2 as the home volume -2019/06/14 22:47:39 Creating Docker network: acb_default_network, driver: 'bridge' -2019/06/14 22:47:40 Successfully set up Docker network: acb_default_network -2019/06/14 22:47:40 Setting up Docker configuration... -2019/06/14 22:47:41 Successfully set up Docker configuration -2019/06/14 22:47:41 Logging in to registry: myregistry.azurecr.io -2019/06/14 22:47:42 Successfully logged into myregistry.azurecr.io -2019/06/14 22:47:42 Logging in to registry: mybaseregistry.azurecr.io -2019/06/14 22:47:43 Successfully logged into mybaseregistry.azurecr.io -2019/06/14 22:47:43 Executing step ID: acb_step_0. Timeout(sec): 600, Working directory: '', Network: 'acb_default_network' -2019/06/14 22:47:43 Scanning for dependencies... -2019/06/14 22:47:45 Successfully scanned dependencies -2019/06/14 22:47:45 Launching container with name: acb_step_0 -Sending build context to Docker daemon 25.6kB -Step 1/6 : ARG REGISTRY_NAME -Step 2/6 : FROM ${REGISTRY_NAME}/baseimages/node:15-alpine -15-alpine: Pulling from baseimages/node -[...] -Successfully built 41b49a112663 -Successfully tagged myregistry.azurecr.io/hello-world:cf10 -2019/06/14 22:47:56 Successfully executed container: acb_step_0 -2019/06/14 22:47:56 Executing step ID: acb_step_1. Timeout(sec): 600, Working directory: '', Network: 'acb_default_network' -2019/06/14 22:47:56 Pushing image: myregistry.azurecr.io/hello-world:cf10, attempt 1 -The push refers to repository [myregistry.azurecr.io/hello-world] -[...] 
-2019/06/14 22:48:00 Step ID: acb_step_1 marked as successful (elapsed time in seconds: 2.517011) -2019/06/14 22:48:00 The following dependencies were found: -2019/06/14 22:48:00 -- image:- registry: myregistry.azurecr.io - repository: hello-world - tag: cf10 - digest: sha256:611cf6e3ae3cb99b23fadcd89fa144e18aa1b1c9171ad4a0da4b62b31b4e38d1 - runtime-dependency: - registry: mybaseregistry.azurecr.io - repository: baseimages/node - tag: 15-alpine - digest: sha256:e8e92cffd464fce3be9a3eefd1b65dc9cbe2484da31c11e813a4effc6105c00f - git: - git-head-revision: 0f988779c97fe0bfc7f2f74b88531617f4421643 --Run ID: cf10 was successful after 32s -``` --Run the [az acr repository show-tags][az-acr-repository-show-tags] command to verify that the image built and was successfully pushed to *myregistry*: --```azurecli -az acr repository show-tags --name myregistry --repository hello-world --output tsv -``` --Example output: --```console -cf10 -``` --## Next steps --* Learn more about [enabling a managed identity in an ACR task](container-registry-tasks-authentication-managed-identity.md). -* See the [ACR Tasks YAML reference](container-registry-tasks-reference-yaml.md) --<!-- LINKS - Internal --> -[az-login]: /cli/azure/reference-index#az_login -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-repository-show-tags]: /cli/azure/acr/repository#az_acr_repository_show_tags -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create -[az-acr-login]: /cli/azure/acr#az_acr_login -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-show]: /cli/azure/acr/task#az_acr_task_show -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-task-list-runs]: /cli/azure/acr/task#az_acr_task_list_runs -[az-acr-task-credential-add]: /cli/azure/acr/task/credential#az_acr_task_credential_add -[az-group-create]: /cli/azure/group?#az_group_create |
container-registry | Container Registry Tasks Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-logs.md | - Title: View task run logs - Tasks -description: How to view and manage run logs generated by ACR Tasks. --- Previously updated : 10/31/2023----# View and manage task run logs --Each task run in [Azure Container Registry tasks](container-registry-tasks-overview.md) generates log output that you can inspect to determine whether the task steps ran successfully. --This article explains how to view and manage task run logs. --## View streamed logs --When you trigger a task manually, log output is streamed directly to the console. For example, when you trigger a task manually by using the [az acr build](/cli/azure/acr#az-acr-build), [az acr run](/cli/azure/acr#az-acr-run), or [az acr task run](/cli/azure/acr/task#az-acr-task-run) command, you see log output streamed to the console. --The following sample [az acr run](/cli/azure/acr#az-acr-run) command manually triggers a task that runs a container pulled from the same registry: --```azurecli -az acr run --registry mycontainerregistry1220 \ - --cmd '$Registry/samples/hello-world:v1' -``` --Streamed log: --```console -Queued a run with ID: cf4 -Waiting for an agent... -2020/03/09 20:30:10 Alias support enabled for version >= 1.1.0, please see https://aka.ms/acr/tasks/task-aliases for more information. -2020/03/09 20:30:10 Creating Docker network: acb_default_network, driver: 'bridge' -2020/03/09 20:30:10 Successfully set up Docker network: acb_default_network -2020/03/09 20:30:10 Setting up Docker configuration... -2020/03/09 20:30:11 Successfully set up Docker configuration -2020/03/09 20:30:11 Logging in to registry: mycontainerregistry1220azurecr.io -2020/03/09 20:30:12 Successfully logged into mycontainerregistry1220azurecr.io -2020/03/09 20:30:12 Executing step ID: acb_step_0. Timeout(sec): 600, Working directory: '', Network: 'acb_default_network' -2020/03/09 20:30:12 Launching container with name: acb_step_0 -Unable to find image 'mycontainerregistry1220azurecr.io/samples/hello-world:v1' locally -v1: Pulling from samples/hello-world -Digest: sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e888a -Status: Downloaded newer image for mycontainerregistry1220azurecr.io/samples/hello-world:v1 --Hello from Docker! -This message shows that your installation appears to be working correctly. -[...] --2020/03/09 20:30:13 Successfully executed container: acb_step_0 -2020/03/09 20:30:13 Step ID: acb_step_0 marked as successful (elapsed time in seconds: 1.180081) --Run ID: cf4 was successful after 5s -``` --## View stored logs --Azure Container Registry stores run logs for all tasks. You can view stored run logs in the Azure portal. Or, use the [az acr task logs](/cli/azure/acr/task#az-acr-task-logs) command to view a selected log. By default, logs are retained for 30 days. --If a task is automatically triggered, for example by a source code update, accessing the stored logs is the *only* way to view the run logs. Automatic task triggers include source code commits or pull requests, base image updates, and timer triggers. --To view run logs in the portal: --1. Navigate to your container registry. -1. In **Services**, select **Tasks** > **Runs**. -1. Select a **Run Id** to view the run status and run logs. The log contains the same information as a streamed log, if one is generated. 
--![View task run log in portal](./media/container-registry-tasks-logs/portal-task-run-logs.png) --To view a log using the Azure CLI, run [az acr task logs](/cli/azure/acr/task#az-acr-task-logs) and specify a run ID, a task name, or a specific image created by a build task. If a task name is specified, the command shows the log for the last created run. --The following example outputs the log for the run with ID *cf4*: --```azurecli -az acr task logs --registry mycontainerregistry1220 \ - --run-id cf4 -``` --## Alternative log storage --You might want to store task run logs on a local file system or use an alternative archiving solution such as Azure Storage. --For example, create a local *tasklogs* directory, and redirect the output of [az acr task logs](/cli/azure/acr/task#az-acr-task-logs) to a local file: --```azurecli -mkdir ~/tasklogs --az acr task logs --registry mycontainerregistry1220 \ - --run-id cf4 > ~/tasklogs/cf4.log -``` --You can also save local log files to Azure Storage. For example, use the [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md), the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md), or other methods to upload files to a storage account. --## Next steps --* Learn more about [Azure Container Registry Tasks](container-registry-tasks-overview.md) ---<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-build]: /cli/azure/acr#az-acr-build -[az-acr-pack-build]: /cli/azure/acr/pack#az-acr-pack-build -[az-acr-task]: /cli/azure/acr/task -[az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create -[az-acr-task-run]: /cli/azure/acr/task#az-acr-task-run -[az-acr-task-update]: /cli/azure/acr/task#az-acr-task-update -[az-login]: /cli/azure/reference-index#az-login -[az-login-service-principal]: /cli/azure/authenticate-azure-cli --<!-- IMAGES --> -[quick-build-01-fork]: ./media/container-registry-tutorial-quick-build/quick-build-01-fork.png -[quick-build-02-browser]: ./media/container-registry-tutorial-quick-build/quick-build-02-browser.png |
container-registry | Container Registry Tasks Multi Step | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-multi-step.md | - Title: Multi-step task to build, test & patch image -description: Introduction to multi-step tasks, a feature of ACR Tasks in Azure Container Registry that provides task-based workflows for building, testing, and patching container images in the cloud. --- Previously updated : 10/31/2023----# Run multi-step build, test, and patch tasks in ACR Tasks --Multi-step tasks extend the single image build-and-push capability of ACR Tasks with multi-step, multi-container-based workflows. Use multi-step tasks to build and push several images, in series or in parallel. Then run those images as commands within a single task run. Each step defines a container image build or push operation, and can also define the execution of a container. Each step in a multi-step task uses a container as its execution environment. --> [!IMPORTANT] -> If you previously created tasks during the preview with the `az acr build-task` command, those tasks need to be re-created using the [az acr task][az-acr-task] command. --For example, you can run a task with steps that automate the following logic: --1. Build a web application image -1. Run the web application container -1. Build a web application test image -1. Run the web application test container which performs tests against the running application container -1. If the tests pass, build a Helm chart archive package -1. Perform a `helm upgrade` using the new Helm chart archive package --All steps are performed within Azure, offloading the work to Azure's compute resources and freeing you from infrastructure management. Besides your Azure container registry, you pay only for the resources you use. For information on pricing, see the **Container Build** section in [Azure Container Registry pricing][pricing]. ---## Common task scenarios --Multi-step tasks enable scenarios like the following logic: --* Build, tag, and push one or more container images, in series or in parallel. -* Run and capture unit test and code coverage results. -* Run and capture functional tests. ACR Tasks supports running more than one container, executing a series of requests between them. -* Perform task-based execution, including pre/post steps of a container image build. -* Deploy one or more containers with your favorite deployment engine to your target environment. --## Multi-step task definition --A multi-step task in ACR Tasks is defined as a series of steps within a YAML file. Each step can specify dependencies on the successful completion of one or more previous steps. The following task step types are available: --* [`build`](container-registry-tasks-reference-yaml.md#build): Build one or more container images using familiar `docker build` syntax, in series or in parallel. -* [`push`](container-registry-tasks-reference-yaml.md#push): Push built images to a container registry. Private registries like Azure Container Registry are supported, as is the public Docker Hub. -* [`cmd`](container-registry-tasks-reference-yaml.md#cmd): Run a container, such that it can operate as a function within the context of the running task. You can pass parameters to the container's `[ENTRYPOINT]`, and specify properties like env, detach, and other familiar `docker run` parameters. The `cmd` step type enables unit and functional testing, with concurrent container execution. 
--The following snippets show how to combine these task step types. Multi-step tasks can be as simple as building a single image from a Dockerfile and pushing to your registry, with a YAML file similar to: --```yml -version: v1.1.0 -steps: - - build: -t $Registry/hello-world:$ID . - - push: ["$Registry/hello-world:$ID"] -``` --Or more complex, such as this fictitious multi-step definition which includes steps for build, test, helm package, and helm deploy (container registry and Helm repository configuration not shown): --```yml -version: v1.1.0 -steps: - - id: build-web - build: -t $Registry/hello-world:$ID . - when: ["-"] - - id: build-tests - build: -t $Registry/hello-world-tests ./funcTests - when: ["-"] - - id: push - push: ["$Registry/helloworld:$ID"] - when: ["build-web", "build-tests"] - - id: hello-world-web - cmd: $Registry/helloworld:$ID - - id: funcTests - cmd: $Registry/helloworld:$ID - env: ["host=helloworld:80"] - - cmd: $Registry/functions/helm package --app-version $ID -d ./helm ./helm/helloworld/ - - cmd: $Registry/functions/helm upgrade helloworld ./helm/helloworld/ --reuse-values --set helloworld.image=$Registry/helloworld:$ID -``` --See [task examples](container-registry-tasks-samples.md) for multi-step task YAML files and Dockerfiles for several scenarios. --## Run a sample task --Tasks support both manual execution, called a "quick run," and automated execution on Git commit or base image update. --To run a task, you first define the task's steps in a YAML file, then execute the Azure CLI command [az acr run][az-acr-run]. --Here's an example Azure CLI command that runs a task using a sample task YAML file. Its steps build and then push an image. Update `\<acrName\>` with the name of your own Azure container registry before running the command. --```azurecli -az acr run --registry <acrName> -f build-push-hello-world.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --When you run the task, the output should show the progress of each step defined in the YAML file. In the following output, the steps appear as `acb_step_0` and `acb_step_1`. --```azurecli -az acr run --registry myregistry -f build-push-hello-world.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --```output -Sending context to registry: myregistry... -Queued a run with ID: yd14 -Waiting for an agent... -2018/09/12 20:08:44 Using acb_vol_0467fe58-f6ab-4dbd-a022-1bb487366941 as the home volume -2018/09/12 20:08:44 Creating Docker network: acb_default_network -2018/09/12 20:08:44 Successfully set up Docker network: acb_default_network -2018/09/12 20:08:44 Setting up Docker configuration... -2018/09/12 20:08:45 Successfully set up Docker configuration -2018/09/12 20:08:45 Logging in to registry: myregistry.azurecr-test.io -2018/09/12 20:08:46 Successfully logged in -2018/09/12 20:08:46 Executing step: acb_step_0 -2018/09/12 20:08:46 Obtaining source code and scanning for dependencies... 
-2018/09/12 20:08:47 Successfully obtained source code and scanned for dependencies -Sending build context to Docker daemon 109.6kB -Step 1/1 : FROM hello-world - > 4ab4c602aa5e -Successfully built 4ab4c602aa5e -Successfully tagged myregistry.azurecr-test.io/hello-world:yd14 -2018/09/12 20:08:48 Executing step: acb_step_1 -2018/09/12 20:08:48 Pushing image: myregistry.azurecr-test.io/hello-world:yd14, attempt 1 -The push refers to repository [myregistry.azurecr-test.io/hello-world] -428c97da766c: Preparing -428c97da766c: Layer already exists -yd14: digest: sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c077007f6979b4812 size: 524 -2018/09/12 20:08:55 Successfully pushed image: myregistry.azurecr-test.io/hello-world:yd14 -2018/09/12 20:08:55 Step id: acb_step_0 marked as successful (elapsed time in seconds: 2.035049) -2018/09/12 20:08:55 Populating digests for step id: acb_step_0... -2018/09/12 20:08:57 Successfully populated digests for step id: acb_step_0 -2018/09/12 20:08:57 Step id: acb_step_1 marked as successful (elapsed time in seconds: 6.832391) -The following dependencies were found: -- image:- registry: myregistry.azurecr-test.io - repository: hello-world - tag: yd14 - digest: sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c077007f6979b4812 - runtime-dependency: - registry: registry.hub.docker.com - repository: library/hello-world - tag: latest - digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788 - git: {} ---Run ID: yd14 was successful after 19s -``` --For more information about automated builds on Git commit or base image update, see the [Automate image builds](container-registry-tutorial-build-task.md) and [Base image update builds](container-registry-tutorial-base-image-update.md) tutorial articles. --## Next steps --You can find multi-step task reference and examples here: --* [Task reference](container-registry-tasks-reference-yaml.md) - Task step types, their properties, and usage. -* [Task examples](container-registry-tasks-samples.md) - Example `task.yaml` and Docker files for several scenarios, simple to complex. -* [Cmd repo](https://github.com/AzureCR/cmd) - A collection of containers as commands for ACR tasks. --<!-- IMAGES --> --<!-- LINKS - External --> -[pricing]: https://azure.microsoft.com/pricing/details/container-registry/ -[task-examples]: https://github.com/Azure-Samples/acr-tasks -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ --<!-- LINKS - Internal --> -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-run]: /cli/azure/acr#az_acr_run -[az-acr-task]: /cli/azure/acr/task |
container-registry | Container Registry Tasks Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-overview.md | - Title: Overview of Azure Container Registry tasks -description: Learn about Azure Container Registry tasks, a suite of features that provide automated building, management, and patching of container images in the cloud. --- Previously updated : 01/24/2024----# Automate container image builds and maintenance with Azure Container Registry tasks --Containers provide new levels of virtualization by isolating application and developer dependencies from infrastructure and operational requirements. What remains is the need to address how this application virtualization is managed and patched over the container lifecycle. --Azure Container Registry tasks are a suite of features that: --- Provide cloud-based container image building for [platforms](#image-platforms) like Linux, Windows, and ARM.-- Extend the early parts of an application development cycle to the cloud with on-demand container image builds.-- Enable automated builds triggered by source code updates, updates to a container's base image, or timers.--For example, with triggers for updates to a base image, you can automate [OS and framework patching](#automate-os-and-framework-patching) for your Docker containers. These triggers can help you maintain secure environments while adhering to the principles of immutable containers. --> [!IMPORTANT] -> Azure Container Registry task runs are temporarily paused from Azure free credits. This pause might affect existing task runs. If you encounter problems, open a [support case](../azure-portal/supportability/how-to-create-azure-support-request.md) for our team to provide additional guidance. --> [!WARNING] -> Please be advised that any information provided on the command line or as part of a URI may be logged as part of Azure Container Registry (ACR) diagnostic tracing. This includes sensitive data such as credentials, GitHub personal access tokens, and other secure information. Exercise caution to prevent any potential security risks, it is crucial to avoid including sensitive details in command lines or URIs that are subject to diagnostic logging. --## Task scenarios --Azure Container Registry tasks support several scenarios to build and maintain container images and other artifacts. This article describes [quick tasks](#quick-tasks), [automatically triggered tasks](#automatically-triggered-tasks), and [multi-step tasks](#multi-step-tasks). --Each task has an associated [source code context](#context-locations), which is the location of source files that are used to build a container image or other artifact. Example contexts include a Git repository and a local file system. --Tasks can also take advantage of [run variables](container-registry-tasks-reference-yaml.md#run-variables), so you can reuse task definitions and standardize tags for images and artifacts. --## Quick tasks --The *inner-loop* development cycle is the iterative process of writing code, building, and testing your application before committing to source control. It's really the beginning of container lifecycle management. --The *quick task* feature in Azure Container Registry tasks can provide an integrated development experience by offloading your container image builds to Azure. You can build and push a single container image to a container registry on demand, in Azure, without needing a local Docker Engine installation. 
Think `docker build`, `docker push` in the cloud. With quick tasks, you can verify your automated build definitions and catch potential problems before committing your code. --By using the familiar `docker build` format, the [az acr build][az-acr-build] command in the Azure CLI takes a [context](#context-locations). The command then sends the context to Azure Container Registry and (by default) pushes the built image to its registry upon completion. --Azure Container Registry tasks are designed as a container lifecycle primitive. For example, you can integrate Azure Container Registry tasks into your continuous integration and continuous delivery (CI/CD) solution. If you run [az login][az-login] with a [service principal][az-login-service-principal], your CI/CD solution can then issue [az acr build][az-acr-build] commands to start image builds. --To learn how to use quick tasks, see the [quickstart](container-registry-quickstart-task-cli.md) and [tutorial](container-registry-tutorial-quick-task.md) for building and deploying container images by using Azure Container Registry tasks. --> [!TIP] -> If you want to build and push an image directly from source code, without a Dockerfile, Azure Container Registry provides the [az acr pack build][az-acr-pack-build] command (preview). This tool builds and pushes an image from application source code by using [Cloud Native Buildpacks](https://buildpacks.io/). --## Automatically triggered tasks --Enable one or more *triggers* to build an image. --### Trigger a task on a source code update --You can trigger a container image build or multi-step task when code is committed, or a pull request is made or updated, to a public or private Git repository in GitHub or Azure DevOps. For example, configure a build task with the Azure CLI command [az acr task create][az-acr-task-create] by specifying a Git repository and optionally a branch and Dockerfile. When your team updates code in the repository, a webhook created in Azure Container Registry tasks triggers a build of the container image defined in the repo. --Azure Container Registry tasks support the following triggers when you set a Git repo as a task's context: --| Trigger | Enabled by default | -| - | | -| Commit | Yes | -| Pull request | No | --> [!NOTE] -> Currently, Azure Container Registry tasks don't support commit or pull-request triggers in GitHub Enterprise repos. --To learn how to trigger builds on source code commits, see [Automate container image builds with Azure Container Registry tasks](container-registry-tutorial-build-task.md). --#### Personal access token --To configure a trigger for source code updates, you need to provide the task a personal access token to set the webhook in the public or private GitHub or Azure DevOps repo. Required scopes for the personal access token are as follows: --| Repo type |GitHub |Azure DevOps | -|||| -|Public repo | repo:status<br/>public_repo | Code (Read) | -|Private repo | repo (Full control) | Code (Read) | --To create a personal access token, see the [GitHub](https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token) or [Azure DevOps](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate) documentation. --### Automate OS and framework patching --The power of Azure Container Registry tasks to enhance your container build workflow comes from their ability to detect an update to a *base image*. A base image is a feature of most container images. 
It's a parent image on which one or more application images are based. Base images typically contain the operating system and sometimes application frameworks. --You can set up an Azure Container Registry task to track a dependency on a base image when it builds an application image. When the updated base image is pushed to your registry, or a base image is updated in a public repo such as in Docker Hub, Azure Container Registry tasks can automatically build any application images based on it. With this automatic detection and rebuilding, Azure Container Registry tasks save you the time and effort that's normally required to manually track and update every application image that references your updated base image. --For more information, see [About base image updates for Azure Container Registry tasks](container-registry-tasks-base-images.md) and [Tutorial: Automate container image builds when a base image is updated in an Azure container registry](container-registry-tutorial-base-image-update.md). --### Schedule a task --You can schedule a task by setting up one or more *timer triggers* when you create or update the task. Scheduling a task is useful for running container workloads on a defined schedule, or running maintenance operations or tests on images pushed regularly to your registry. For more information, see [Run an Azure Container Registry task on a defined schedule](container-registry-tasks-scheduled.md). --## Multi-step tasks --Extend the single-image build-and-push capability of Azure Container Registry tasks with multi-step workflows that are based on multiple containers. --Multi-step tasks provide step-based task definition and execution for building, testing, and patching container images in the cloud. Task steps defined in a [YAML file](container-registry-tasks-reference-yaml.md) specify individual build and push operations for container images or other artifacts. They can also define the execution of one or more containers, with each step using the container as its execution environment. --For example, you can create a multi-step task that automates the following steps: --1. Build a web application image. -1. Run the web application container. -1. Build a web application test image. -1. Run the web application test container, which performs tests against the running application container. -1. If the tests pass, build a Helm chart archive package. -1. Perform a `helm upgrade` task by using the new Helm chart archive package. --Multi-step tasks enable you to split the building, running, and testing of an image into more composable steps, with dependency support between steps. With multi-step tasks in Azure Container Registry tasks, you have more granular control over workflows for image building, testing, and OS and framework patching. --[Learn more about running multi-step build, test, and patch tasks in Azure Container Registry tasks](container-registry-tasks-multi-step.md). --## Context locations --The following table shows examples of supported context locations for Azure Container Registry tasks: --| Context location | Description | Example | -| - | -- | - | -| Local file system | Files within a directory on the local file system. | `/home/user/projects/myapp` | -| GitHub main branch | Files within the main (or other default) branch of a public or private GitHub repository. 
| `https://github.com/gituser/myapp-repo.git` | -| GitHub branch | Specific branch of a public or private GitHub repo.| `https://github.com/gituser/myapp-repo.git#mybranch` | -| GitHub subfolder | Files within a subfolder in a public or private GitHub repo. The example shows combination of a branch and subfolder specification. | `https://github.com/gituser/myapp-repo.git#mybranch:myfolder` | -| GitHub commit | Specific commit in a public or private GitHub repo. The example shows combination of a commit hash (SHA) and subfolder specification. | `https://github.com/gituser/myapp-repo.git#git-commit-hash:myfolder` | -| Azure DevOps subfolder | Files within a subfolder in a public or private Azure repo. The example shows combination of branch and subfolder specification. | `https://dev.azure.com/user/myproject/_git/myapp-repo#mybranch:myfolder` | -| Remote tarball | Files in a compressed archive on a remote web server. | `http://remoteserver/myapp.tar.gz` | -| Artifact in container registry | [OCI artifact](container-registry-manage-artifact.md) files in a container registry repository. | `oci://myregistry.azurecr.io/myartifact:mytag` | --> [!NOTE] -> When you're using a Git repo as a context for a task that's triggered by a source code update, you need to provide a [personal access token](#personal-access-token). --## Image platforms --By default, Azure Container Registry tasks build images for the Linux OS and the AMD64 architecture. Specify the `--platform` tag to build Windows images or Linux images for other architectures. Specify the OS and optionally a supported architecture in *OS/architecture* format (for example, `--platform Linux/arm`). For ARM architectures, optionally specify a variant in *OS/architecture/variant* format (for example, `--platform Linux/arm64/v8`). --| OS | Architecture| -| | - | -| Linux | AMD64<br/>ARM<br/>ARM64<br/>386 | -| Windows | AMD64 | --## Task output --Each task run generates log output that you can inspect to determine whether the task steps ran successfully. When you trigger a task manually, log output for the task run is streamed to the console and stored for later retrieval. When a task is triggered automatically (for example, by a source code commit or a base image update), task logs are only stored. View the run logs in the Azure portal, or use the [az acr task logs](/cli/azure/acr/task#az-acr-task-logs) command. --[Learn more about viewing and managing task logs](container-registry-tasks-logs.md). --## Related content --- When you're ready to automate container image builds and maintenance in the cloud, see [Tutorial: Build and deploy container images in the cloud with Azure Container Registry tasks](container-registry-tutorial-quick-task.md).--- Optionally, learn about the [Docker extension](https://code.visualstudio.com/docs/azure/docker) and the [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account) for Visual Studio Code. You can use these extensions to pull images from a container registry, push images to a container registry, or run Azure Container Registry tasks, all within Visual Studio Code.--<!-- LINKS - Internal --> -[az-acr-build]: /cli/azure/acr#az-acr-build -[az-acr-pack-build]: /cli/azure/acr/pack#az-acr-pack-build -[az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create -[az-login]: /cli/azure/reference-index#az-login -[az-login-service-principal]: /cli/azure/authenticate-azure-cli |
container-registry | Container Registry Tasks Pack Build | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-pack-build.md | - Title: Build image with Cloud Native Buildpack -description: Use the az acr pack build command to build a container image from an app and push to Azure Container Registry, without using a Dockerfile. --- Previously updated : 10/31/2023-----# Build and push an image from an app using a Cloud Native Buildpack --The Azure CLI command `az acr pack build` uses the [`pack`](https://github.com/buildpack/pack) CLI tool, from [Buildpacks](https://buildpacks.io/), to build an app and push its image into an Azure container registry. This feature provides an option to quickly build a container image from your application source code in Node.js, Java, and other languages without having to define a Dockerfile. --You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the examples in this article. If you'd like to use it locally, version 2.0.70 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. --> [!IMPORTANT] -> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA). --## Use the build command --To build and push a container image using Cloud Native Buildpacks, run the [az acr pack build][az-acr-pack-build] command. Whereas the [az acr build][az-acr-build] command builds and pushes an image from a Dockerfile source and related code, with `az acr pack build` you specify an application source tree directly. --At a minimum, specify the following when you run `az acr pack build`: --* An Azure container registry where you run the command -* An image name and tag for the resulting image -* One of the [supported context locations](container-registry-tasks-overview.md#context-locations) for ACR Tasks, such as a local directory, a GitHub repo, or a remote tarball -* The name of a Buildpack builder image suitable for your application. If not cached by Azure Container Registry, the builder image must be pulled using the `--pull` parameter. --`az acr pack build` supports other features of ACR Tasks commands including [run variables](container-registry-tasks-reference-yaml.md#run-variables) and [task run logs](container-registry-tasks-logs.md) that are streamed and also saved for later retrieval. --## Example: Build Node.js image with Cloud Foundry builder --The following example builds a container image from a Node.js app in the [Azure-Samples/nodejs-docs-hello-world](https://github.com/Azure-Samples/nodejs-docs-hello-world) repo, using the `cloudfoundry/cnb:cflinuxfs3` builder. --```azurecli -az acr pack build \ - --registry myregistry \ - --image node-app:1.0 \ - --pull --builder cloudfoundry/cnb:cflinuxfs3 \ - https://github.com/Azure-Samples/nodejs-docs-hello-world.git -``` --This example builds the `node-app` image with the `1.0` tag and pushes it to the *myregistry* container registry. In this example, the target registry name is explicitly prepended to the image name. If not specified, the registry login server name is automatically prepended to the image name. --Command output shows the progress of building and pushing the image. --After the image is successfully built, you can run it with Docker, if you have it installed. 
First sign into your registry: --```azurecli -az acr login --name myregistry -``` --Run the image: --```console -docker run --rm -p 1337:1337 myregistry.azurecr.io/node-app:1.0 -``` --Browse to `localhost:1337` in your favorite browser to see the sample web app. Press `[Ctrl]+[C]` to stop the container. --## Example: Build Java image with Heroku builder --The following example builds a container image from the Java app in the [buildpack/sample-java-app](https://github.com/buildpack/sample-java-app) repo, using the `heroku/buildpacks:18` builder. --```azurecli -az acr pack build \ - --registry myregistry \ - --image java-app:{{.Run.ID}} \ - --pull --builder heroku/buildpacks:18 \ - https://github.com/buildpack/sample-java-app.git -``` --This example builds the `java-app` image tagged with the run ID of the command and pushes it to the *myregistry* container registry. --Command output shows the progress of building and pushing the image. --After the image is successfully built, you can run it with Docker, if you have it installed. First sign into your registry: --```azurecli -az acr login --name myregistry -``` --Run the image, substituting your image tag for *runid*: --```console -docker run --rm -p 8080:8080 myregistry.azurecr.io/java-app:runid -``` --Browse to `localhost:8080` in your favorite browser to see the sample web app. Press `[Ctrl]+[C]` to stop the container. ---## Next steps --After you build and push a container image with `az acr pack build`, you can deploy it like any image to a target of your choice. Azure deployment options include running it in [App Service](../app-service/tutorial-custom-container.md) or [Azure Kubernetes Service](/azure/aks/tutorial-kubernetes-deploy-cluster), among others. --For more information about ACR Tasks features, see [Automate container image builds and maintenance with ACR Tasks](container-registry-tasks-overview.md). ---<!-- LINKS - External --> -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ --<!-- LINKS - Internal --> -[azure-cli-install]: /cli/azure/install-azure-cli -[az-acr-build]: /cli/azure/acr/task -[az-acr-pack-build]: /cli/azure/acr/pack#az_acr_pack_build |
container-registry | Container Registry Tasks Reference Yaml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-reference-yaml.md | - Title: YAML reference - ACR Tasks -description: Reference for defining tasks in YAML for ACR Tasks, including task properties, step types, step properties, and built-in variables. ---- Previously updated : 10/31/2023----# ACR Tasks reference: YAML --Multi-step task definition in ACR Tasks provides a container-centric compute primitive focused on building, testing, and patching containers. This article covers the commands, parameters, properties, and syntax for the YAML files that define your multi-step tasks. --This article contains reference for creating multi-step task YAML files for ACR Tasks. If you'd like an introduction to ACR Tasks, see the [ACR Tasks overview](container-registry-tasks-overview.md). --## acr-task.yaml file format --ACR Tasks supports multi-step task declaration in standard YAML syntax. You define a task's steps in a YAML file. You can then run the task manually by passing the file to the [az acr run][az-acr-run] command. Or, use the file to create a task with [az acr task create][az-acr-task-create] that's triggered automatically on a Git commit, a base image update, or a schedule. Although this article refers to `acr-task.yaml` as the file containing the steps, ACR Tasks supports any valid filename with a [supported extension](#supported-task-filename-extensions). --The top-level `acr-task.yaml` primitives are **task properties**, **step types**, and **step properties**: --* [Task properties](#task-properties) apply to all steps throughout task execution. There are several global task properties, including: - * `version` - * `stepTimeout` - * `workingDirectory` -* [Task step types](#task-step-types) represent the types of actions that can be performed in a task. There are three step types: - * `build` - * `push` - * `cmd` -* [Task step properties](#task-step-properties) are parameters that apply to an individual step. There are several step properties, including: - * `startDelay` - * `timeout` - * `when` - * ...and many more. --The base format of an `acr-task.yaml` file, including some common step properties, follows. While not an exhaustive representation of all available step properties or step type usage, it provides a quick overview of the basic file format. --```yml -version: # acr-task.yaml format version. -stepTimeout: # Seconds each step may take. -steps: # A collection of image or container actions. - - build: # Equivalent to "docker build," but in a multi-tenant environment - - push: # Push a newly built or retagged image to a registry. - when: # Step property that defines either parallel or dependent step execution. - - cmd: # Executes a container, supports specifying an [ENTRYPOINT] and parameters. - startDelay: # Step property that specifies the number of seconds to wait before starting execution. -``` --### Supported task filename extensions --ACR Tasks has reserved several filename extensions, including `.yaml`, that it will process as a task file. Any extension *not* in the following list is considered by ACR Tasks to be a Dockerfile: .yaml, .yml, .toml, .json, .sh, .bash, .zsh, .ps1, .ps, .cmd, .bat, .ts, .js, .php, .py, .rb, .lua --YAML is the only file format currently supported by ACR Tasks. The other filename extensions are reserved for possible future support. 
--## Run the sample tasks --There are several sample task files referenced in the following sections of this article. The sample tasks are in a public GitHub repository, [Azure-Samples/acr-tasks][acr-tasks]. You can run them with the Azure CLI command [az acr run][az-acr-run]. The sample commands are similar to: --```azurecli -az acr run -f build-push-hello-world.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --The formatting of the sample commands assumes you've configured a default registry in the Azure CLI, so they omit the `--registry` parameter. To configure a default registry, use the [az config][az-config] command with the `set` command, which accepts an `defaults.acr=REGISTRY_NAME` key value pair. --For example, to configure the Azure CLI with a default registry named "myregistry": --```azurecli -az config set defaults.acr=myregistry -``` --## Task properties --Task properties typically appear at the top of an `acr-task.yaml` file, and are global properties that apply throughout the full execution of the task steps. Some of these global properties can be overridden within an individual step. --| Property | Type | Optional | Description | Override supported | Default value | -| -- | - | -- | -- | | - | -| `version` | string | Yes | The version of the `acr-task.yaml` file as parsed by the ACR Tasks service. While ACR Tasks strives to maintain backward compatibility, this value allows ACR Tasks to maintain compatibility within a defined version. If unspecified, defaults to `v1.0.0`. | N/A | `v1.0.0` | -| `stepTimeout` | int (seconds) | Yes | The maximum number of seconds a step can run. If the `stepTimeout` property is specified on a task, it sets the default `timeout` property of all the steps. If the `timeout` property is specified on a step, it overrides the `stepTimeout` property provided by the task.<br/><br/>The sum of the step timeout values for a task should equal the value of the task's run `timeout` property (for example, set by passing `--timeout` to the `az acr task create` command). If the tasks's run `timeout` value is smaller, it takes priority. | Yes | 600 (10 minutes) | -| `workingDirectory` | string | Yes | The working directory of the container during runtime. If the property is specified on a task, it sets the default `workingDirectory` property of all the steps. If specified on a step, it overrides the property provided by the task. | Yes | `c:\workspace` in Windows or `/workspace` in Linux | -| `env` | [string, string, ...] | Yes | Array of strings in `key=value` format that define the environment variables for the task. If the property is specified on a task, it sets the default `env` property of all the steps. If specified on a step, it overrides any environment variables inherited from the task. | Yes | None | -| `secrets` | [secret, secret, ...] | Yes | Array of [secret](#secret) objects. | No | None | -| `networks` | [network, network, ...] | Yes | Array of [network](#network) objects. | No | None | -| `volumes` | [volume, volume, ...] | Yes | Array of [volume](#volume) objects. Specifies volumes with source content to mount to a step. | No | None | --### secret --The secret object has the following properties. --| Property | Type | Optional | Description | Default value | -| -- | - | -- | -- | - | -| `id` | string | No | The identifier of the secret. | None | -| `keyvault` | string | Yes | The Azure Key Vault Secret URL. 
| None | -| `clientID` | string | Yes | The client ID of the [user-assigned managed identity](container-registry-tasks-authentication-managed-identity.md) for Azure resources. | None | --### network --The network object has the following properties. --| Property | Type | Optional | Description | Default value | -| -- | - | -- | -- | - | -| `name` | string | No | The name of the network. | None | -| `driver` | string | Yes | The driver to manage the network. | None | -| `ipv6` | bool | Yes | Whether IPv6 networking is enabled. | `false` | -| `skipCreation` | bool | Yes | Whether to skip network creation. | `false` | -| `isDefault` | bool | Yes | Whether the network is a default network provided with Azure Container Registry. | `false` | --### volume --The volume object has the following properties. --| Property | Type | Optional | Description | Default value | -| -- | - | -- | -- | - | -| `name` | string | No | The name of the volume to mount. Can contain only alphanumeric characters, '-', and '_'. | None | -| `secret` | map[string]string | No | Each key of the map is the name of a file created and populated in the volume. Each value is the string version of the secret. Secret values must be Base64 encoded. | None | --## Task step types --ACR Tasks supports three step types. Each step type supports several properties, detailed in the section for each step type. --| Step type | Description | -| | -- | -| [`build`](#build) | Builds a container image using familiar `docker build` syntax. | -| [`push`](#push) | Executes a `docker push` of newly built or retagged images to a container registry. Azure Container Registry, other private registries, and the public Docker Hub are supported. | -| [`cmd`](#cmd) | Runs a container as a command, with parameters passed to the container's `[ENTRYPOINT]`. The `cmd` step type supports parameters like `env`, `detach`, and other familiar `docker run` command options, enabling unit and functional testing with concurrent container execution. | --## build --Build a container image. The `build` step type represents a multi-tenant, secure means of running `docker build` in the cloud as a first-class primitive. --### Syntax: build --```yml -version: v1.1.0 -steps: - - [build]: -t [imageName]:[tag] -f [Dockerfile] [context] - [property]: [value] -``` --Run the [az acr run][az-acr-run] command to get the Docker version. --```azurecli -az acr run -r $ACR_NAME --cmd "docker version" -``` --To enable BuildKit and use `secret` with BuildKit, add the environment variable `DOCKER_BUILDKIT=1` in the YAML file. --The `build` step type supports the parameters in the following table. The `build` step type also supports all build options of the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command, such as `--build-arg` to set build-time variables. ---| Parameter | Description | Optional | -| | -- | :-: | -| `-t`, `--image` | Defines the fully qualified `image:tag` of the built image.<br /><br />As images may be used for inner task validations, such as functional tests, not all images require `push` to a registry. However, to instance an image within a Task execution, the image does need a name to reference.<br /><br />Unlike `az acr build`, running ACR Tasks doesn't provide default push behavior. With ACR Tasks, the default scenario assumes the ability to build, validate, then push an image. See [push](#push) for how to optionally push built images. | Yes | -| `-f`, `--file` | Specifies the Dockerfile passed to `docker build`. 
If not specified, the default Dockerfile in the root of the context is assumed. To specify a Dockerfile, pass the filename relative to the root of the context. | Yes | -| `context` | The root directory passed to `docker build`. The root directory of each task is set to a shared [workingDirectory](#task-step-properties), and includes the root of the associated Git cloned directory. | No | --### Properties: build --The `build` step type supports the following properties. Find details of these properties in the [Task step properties](#task-step-properties) section of this article. --| Properties | Type | Required | -| -- | - | -- | -| `detach` | bool | Optional | -| `disableWorkingDirectoryOverride` | bool | Optional | -| `entryPoint` | string | Optional | -| `env` | [string, string, ...] | Optional | -| `expose` | [string, string, ...] | Optional | -| `id` | string | Optional | -| `ignoreErrors` | bool | Optional | -| `isolation` | string | Optional | -| `keep` | bool | Optional | -| `network` | object | Optional | -| `ports` | [string, string, ...] | Optional | -| `pull` | bool | Optional | -| `repeat` | int | Optional | -| `retries` | int | Optional | -| `retryDelay` | int (seconds) | Optional | -| `secret` | object | Optional | -| `startDelay` | int (seconds) | Optional | -| `timeout` | int (seconds) | Optional | -| `volumeMount` | object | Optional | -| `when` | [string, string, ...] | Optional | -| `workingDirectory` | string | Optional | --### Examples: build --#### Build image - context in root --```azurecli -az acr run -f build-hello-world.yaml https://github.com/AzureCR/acr-tasks-sample.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/build-hello-world.yaml --> -[!code-yml[task](~/acr-tasks/build-hello-world.yaml)] --#### Build image - context in subdirectory --```yml -version: v1.1.0 -steps: - - build: -t $Registry/hello-world -f hello-world.dockerfile ./subDirectory -``` --### Dynamic variable passing in ACR Tasks --When working with Azure container registry (ACR) tasks, you may find yourself needing to pass different values to your build process without changing the task definition by using the `--set` flag with the `az acr task run` command. --#### Example: Setting image tag at runtime --Suppose you have an ACR task defined in a `acr-task.yml` file with a placeholder for the image tag: --```yaml -steps: - - build: -t $Registry/hello-world:{{.Values.tag}} -``` --You can trigger the task and set the `tag` variable to `v2` at runtime using the following Azure CLI command: --```azurecli -az acr task run --registry myregistry --name mytask --set tag=v2 -``` --This command will start the ACR task named `mytask` and build the image using the `v2` tag, overriding the placeholder in the `acr-task.yml` file. --This approach allows for customization in your CI/CD pipelines, enabling you to dynamically adjust parameters based on your current needs without altering the task definitions. --## push --Push one or more built or retagged images to a container registry. Supports pushing to private registries like Azure Container Registry, or to the public Docker Hub. --### Syntax: push --The `push` step type supports a collection of images. YAML collection syntax supports inline and nested formats. 
Pushing a single image is typically represented using inline syntax: --```yml -version: v1.1.0 -steps: - # Inline YAML collection syntax - - push: ["$Registry/hello-world:$ID"] -``` --For increased readability, use nested syntax when pushing multiple images: --```yml -version: v1.1.0 -steps: - # Nested YAML collection syntax - - push: - - $Registry/hello-world:$ID - - $Registry/hello-world:latest -``` --### Properties: push --The `push` step type supports the following properties. Find details of these properties in the [Task step properties](#task-step-properties) section of this article. --| Property | Type | Required | -| -- | - | -- | -| `env` | [string, string, ...] | Optional | -| `id` | string | Optional | -| `ignoreErrors` | bool | Optional | -| `startDelay` | int (seconds) | Optional | -| `timeout` | int (seconds) | Optional | -| `when` | [string, string, ...] | Optional | --### Examples: push --#### Push multiple images --```azurecli -az acr run -f build-push-hello-world.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/build-push-hello-world.yaml --> -[!code-yml[task](~/acr-tasks/build-push-hello-world.yaml)] --#### Build, push, and run --```azurecli -az acr run -f build-run-hello-world.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/build-run-hello-world.yaml --> -[!code-yml[task](~/acr-tasks/build-run-hello-world.yaml)] --## cmd --The `cmd` step type runs a container. --### Syntax: cmd --```yml -version: v1.1.0 -steps: - - [cmd]: [containerImage]:[tag (optional)] [cmdParameters to the image] -``` --### Properties: cmd --The `cmd` step type supports the following properties: --| Property | Type | Required | -| -- | - | -- | -| `detach` | bool | Optional | -| `disableWorkingDirectoryOverride` | bool | Optional | -| `entryPoint` | string | Optional | -| `env` | [string, string, ...] | Optional | -| `expose` | [string, string, ...] | Optional | -| `id` | string | Optional | -| `ignoreErrors` | bool | Optional | -| `isolation` | string | Optional | -| `keep` | bool | Optional | -| `network` | object | Optional | -| `ports` | [string, string, ...] | Optional | -| `pull` | bool | Optional | -| `repeat` | int | Optional | -| `retries` | int | Optional | -| `retryDelay` | int (seconds) | Optional | -| `secret` | object | Optional | -| `startDelay` | int (seconds) | Optional | -| `timeout` | int (seconds) | Optional | -| `volumeMount` | object | Optional | -| `when` | [string, string, ...] | Optional | -| `workingDirectory` | string | Optional | --You can find details of these properties in the [Task step properties](#task-step-properties) section of this article. --### Examples: cmd --#### Run hello-world image --This command executes the `hello-world.yaml` task file, which references the [hello-world](https://hub.docker.com/_/hello-world/) image on Docker Hub. --```azurecli -az acr run -f hello-world.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/hello-world.yaml --> -[!code-yml[task](~/acr-tasks/hello-world.yaml)] --#### Run bash image and echo "hello world" --This command executes the `bash-echo.yaml` task file, which references the [bash](https://hub.docker.com/_/bash/) image on Docker Hub. 
--```azurecli -az acr run -f bash-echo.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/bash-echo.yaml --> -[!code-yml[task](~/acr-tasks/bash-echo.yaml)] --#### Run specific bash image tag --To run a specific image version, specify the tag in the `cmd`. --This command executes the `bash-echo-3.yaml` task file, which references the [bash:3.0](https://hub.docker.com/_/bash/) image on Docker Hub. --```azurecli -az acr run -f bash-echo-3.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/bash-echo-3.yaml --> -[!code-yml[task](~/acr-tasks/bash-echo-3.yaml)] --#### Run custom images --The `cmd` step type references images using the standard `docker run` format. Images not prefaced with a registry are assumed to originate from docker.io. The previous example could equally be represented as: --```yml -version: v1.1.0 -steps: - - cmd: docker.io/bash:3.0 echo hello world -``` --By using the standard `docker run` image reference convention, `cmd` can run images from any private registry or the public Docker Hub. If you're referencing images in the same registry in which ACR Task is executing, you don't need to specify any registry credentials. --* Run an image that's from an Azure container registry. The following example assumes you have a registry named `myregistry`, and a custom image `myimage:mytag`. -- ```yml - version: v1.1.0 - steps: - - cmd: myregistry.azurecr.io/myimage:mytag - ``` --* Generalize the registry reference with a Run variable or alias -- Instead of hard-coding your registry name in an `acr-task.yaml` file, you can make it more portable by using a [Run variable](#run-variables) or [alias](#aliases). The `Run.Registry` variable or `$Registry` alias expands at runtime to the name of the registry in which the task is executing. -- For example, to generalize the preceding task so that it works in any Azure container registry, reference the $Registry variable in the image name: -- ```yml - version: v1.1.0 - steps: - - cmd: $Registry/myimage:mytag - ``` --#### Access secret volumes --The `volumes` property allows volumes and their secret contents to be specified for `build` and `cmd` steps in a task. Inside each step, an optional `volumeMounts` property lists the volumes and corresponding container paths to mount into the container at that step. Secrets are provided as files at each volume's mount path. --Execute a task and mount two secrets to a step: one stored in a key vault and one specified on the command line: --```azurecli -az acr run -f mounts-secrets.yaml --set-secret mysecret=abcdefg123456 https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/mounts-secrets.yaml --> -[!code-yml[task](~/acr-tasks/mounts-secrets.yaml)] ---## Task step properties --Each step type supports several properties appropriate for its type. The following table defines all of the available step properties. Not all step types support all properties. To see which of these properties are available for each step type, see the [cmd](#cmd), [build](#build), and [push](#push) step type reference sections. --| Property | Type | Optional | Description | Default value | -| -- | - | -- | -- | - | -| `detach` | bool | Yes | Whether the container should be detached when running. 
| `false` | -| `disableWorkingDirectoryOverride` | bool | Yes | Whether to disable `workingDirectory` override functionality. Use this in combination with `workingDirectory` to have complete control over the container's working directory. | `false` | -| `entryPoint` | string | Yes | Overrides the `[ENTRYPOINT]` of a step's container. | None | -| `env` | [string, string, ...] | Yes | Array of strings in `key=value` format that define the environment variables for the step. | None | -| `expose` | [string, string, ...] | Yes | Array of ports that are exposed from the container. | None | -| [`id`](#example-id) | string | Yes | Uniquely identifies the step within the task. Other steps in the task can reference a step's `id`, such as for dependency checking with `when`.<br /><br />The `id` is also the running container's name. Processes running in other containers in the task can refer to the `id` as its DNS host name, or for accessing it with docker logs [id], for example. | `acb_step_%d`, where `%d` is the 0-based index of the step top-down in the YAML file | -| `ignoreErrors` | bool | Yes | Whether to mark the step as successful regardless of whether an error occurred during container execution. | `false` | -| `isolation` | string | Yes | The isolation level of the container. | `default` | -| `keep` | bool | Yes | Whether the step's container should be kept after execution. | `false` | -| `network` | object | Yes | Identifies a network in which the container runs. | None | -| `ports` | [string, string, ...] | Yes | Array of ports that are published from the container to the host. | None | -| `pull` | bool | Yes | Whether to force a pull of the container before executing it to prevent any caching behavior. | `false` | -| `privileged` | bool | Yes | Whether to run the container in privileged mode. | `false` | -| `repeat` | int | Yes | The number of retries to repeat the execution of a container. | 0 | -| `retries` | int | Yes | The number of retries to attempt if a container fails its execution. A retry is only attempted if a container's exit code is non-zero. | 0 | -| `retryDelay` | int (seconds) | Yes | The delay in seconds between retries of a container's execution. | 0 | -| `secret` | object | Yes | Identifies an Azure Key Vault secret or [managed identity for Azure resources](container-registry-tasks-authentication-managed-identity.md). | None | -| `startDelay` | int (seconds) | Yes | Number of seconds to delay a container's execution. | 0 | -| `timeout` | int (seconds) | Yes | Maximum number of seconds a step may execute before being terminated. | 600 | -| [`when`](#example-when) | [string, string, ...] | Yes | Configures a step's dependency on one or more other steps within the task. | None | -| `user` | string | Yes | The user name or UID of a container | None | -| `workingDirectory` | string | Yes | Sets the working directory for a step. By default, ACR Tasks creates a root directory as the working directory. However, if your build has several steps, earlier steps can share artifacts with later steps by specifying the same working directory. | `c:\workspace` in Windows or `/workspace` in Linux | --### volumeMount --The volumeMount object has the following properties. --| Property | Type | Optional | Description | Default value | -| -- | - | -- | -- | - | -| `name` | string | No | The name of the volume to mount. Must exactly match the name from a `volumes` property. | None | -| `mountPath` | string | no | The absolute path to mount files in the container. 
| None | --### Examples: Task step properties --#### Example: id --Build two images, instancing a functional test image. Each step is identified by a unique `id` which other steps in the task reference in their `when` property. --```azurecli -az acr run -f when-parallel-dependent.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/when-parallel-dependent.yaml --> -[!code-yml[task](~/acr-tasks/when-parallel-dependent.yaml)] --#### Example: when --The `when` property specifies a step's dependency on other steps within the task. It supports two parameter values: --* `when: ["-"]` - Indicates no dependency on other steps. A step specifying `when: ["-"]` will begin execution immediately, and enables concurrent step execution. -* `when: ["id1", "id2"]` - Indicates the step is dependent upon steps with `id` "id1" and `id` "id2". This step won't be executed until both "id1" and "id2" steps complete. --If `when` isn't specified in a step, that step is dependent on completion of the previous step in the `acr-task.yaml` file. --Sequential step execution without `when`: --```azurecli -az acr run -f when-sequential-default.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/when-sequential-default.yaml --> -[!code-yml[task](~/acr-tasks/when-sequential-default.yaml)] --Sequential step execution with `when`: --```azurecli -az acr run -f when-sequential-id.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/when-sequential-id.yaml --> -[!code-yml[task](~/acr-tasks/when-sequential-id.yaml)] --Parallel image builds: --```azurecli -az acr run -f when-parallel.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/when-parallel.yaml --> -[!code-yml[task](~/acr-tasks/when-parallel.yaml)] --Parallel image build and dependent testing: --```azurecli -az acr run -f when-parallel-dependent.yaml https://github.com/Azure-Samples/acr-tasks.git -``` --<!-- SOURCE: https://github.com/Azure-Samples/acr-tasks/blob/master/when-parallel-dependent.yaml --> -[!code-yml[task](~/acr-tasks/when-parallel-dependent.yaml)] --## Run variables --ACR Tasks includes a default set of variables that are available to task steps when they execute. These variables can be accessed by using the format `{{.Run.VariableName}}`, where `VariableName` is one of the following: --* `Run.ID` -* `Run.SharedVolume` -* `Run.Registry` -* `Run.RegistryName` -* `Run.Date` -* `Run.OS` -* `Run.Architecture` -* `Run.Commit` -* `Run.Branch` -* `Run.TaskName` --The variable names are generally self-explanatory. Details follow for commonly used variables. As of YAML version `v1.1.0`, you can use an abbreviated, predefined [task alias](#aliases) in place of most run variables. For example, in place of `{{.Run.Registry}}`, use the `$Registry` alias. --### Run.ID --Each Run, whether through `az acr run` or trigger-based execution of tasks created through `az acr task create`, has a unique ID. The ID represents the Run currently being executed. --Typically used for uniquely tagging an image: --```yml -version: v1.1.0 -steps: - - build: -t $Registry/hello-world:$ID . -``` --### Run.SharedVolume --The unique identifier for a shared volume that is accessible by all task steps. The volume is mounted to `c:\workspace` in Windows or `/workspace` in Linux. 
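For example, a minimal sketch of passing a file between steps through this shared volume, assuming the public `bash` image used elsewhere in the ACR Tasks samples (the file name `build-info.txt` is hypothetical):

```yml
version: v1.1.0
steps:
  # Write a file into the shared volume (mounted at /workspace on Linux)
  - cmd: bash -c "echo hello > /workspace/build-info.txt"
  # Runs after the previous step completes and reads from the same volume
  - cmd: bash -c "cat /workspace/build-info.txt"
```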
--### Run.Registry --The fully qualified server name of the registry. Typically used to generically reference the registry where the task is being run. --```yml -version: v1.1.0 -steps: - - build: -t $Registry/hello-world:$ID . -``` --### Run.RegistryName --The name of the container registry. Typically used in task steps that don't require a fully qualified server name, for example, `cmd` steps that run Azure CLI commands on registries. --```yml -version 1.1.0 -steps: -# List repositories in registry -- cmd: az login --identity-- cmd: az acr repository list --name $RegistryName-``` --### Run.Date --The current UTC time the run began. --### Run.Commit --For a task triggered by a commit to a GitHub repository, the commit identifier. --### Run.Branch --For a task triggered by a commit to a GitHub repository, the branch name. --## Aliases --As of `v1.1.0`, ACR Tasks supports aliases that are available to task steps when they execute. Aliases are similar in concept to aliases (command shortcuts) supported in bash and some other command shells. --With an alias, you can launch any command or group of commands (including options and filenames) by entering a single word. --ACR Tasks supports several predefined aliases and also custom aliases you create. --### Predefined aliases --The following task aliases are available to use in place of [run variables](#run-variables): --| Alias | Run variable | -| -- | | -| `ID` | `Run.ID` | -| `SharedVolume` | `Run.SharedVolume` | -| `Registry` | `Run.Registry` | -| `RegistryName` | `Run.RegistryName` | -| `Date` | `Run.Date` | -| `OS` | `Run.OS` | -| `Architecture` | `Run.Architecture` | -| `Commit` | `Run.Commit` | -| `Branch` | `Run.Branch` | --In task steps, precede an alias with the `$` directive, as in this example: --```yml -version: v1.1.0 -steps: - - build: -t $Registry/hello-world:$ID -f hello-world.dockerfile . -``` --### Image aliases --Each of the following aliases points to a stable image in Microsoft Container Registry (MCR). You can refer to each of them in the `cmd` section of a Task file without using a directive. --| Alias | Image | -| -- | -- | -| `acr` | `mcr.microsoft.com/acr/acr-cli:0.5` | -| `az` | `mcr.microsoft.com/acr/azure-cli:7ee1d7f` | -| `bash` | `mcr.microsoft.com/acr/bash:7ee1d7f` | -| `curl` | `mcr.microsoft.com/acr/curl:7ee1d7f` | --The following example task uses several aliases to [purge](container-registry-auto-purge.md) image tags older than 7 days in the repo `samples/hello-world` in the run registry: --```yml -version: v1.1.0 -steps: - - cmd: acr tag list --registry $RegistryName --repository samples/hello-world - - cmd: acr purge --registry $RegistryName --filter samples/hello-world:.* --ago 7d -``` --### Custom alias --Define a custom alias in your YAML file and use it as shown in the following example. An alias can contain only alphanumeric characters. The default directive to expand an alias is the `$` character. --```yml -version: v1.1.0 -alias: - values: - repo: myrepo -steps: - - build: -t $Registry/$repo/hello-world:$ID -f Dockerfile . -``` --You can link to a remote or local YAML file for custom alias definitions. The following example links to a YAML file in Azure blob storage: --```yml -version: v1.1.0 -alias: - src: # link to local or remote custom alias files - - 'https://link/to/blob/remoteAliases.yml?readSasToken' -[...] -``` --## Next steps --For an overview of multi-step tasks, see the [Run multi-step build, test, and patch tasks in ACR Tasks](container-registry-tasks-multi-step.md). 
--For single-step builds, see the [ACR Tasks overview](container-registry-tasks-overview.md). ----<!-- IMAGES --> --<!-- LINKS - External --> -[acr-tasks]: https://github.com/Azure-Samples/acr-tasks --<!-- LINKS - Internal --> -[az-acr-run]: /cli/azure/acr#az_acr_run -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-config]: /cli/azure/reference-index#az_config |
container-registry | Container Registry Tasks Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-samples.md | - Title: ACR task samples -description: Sample Azure Container Registry Tasks (ACR Tasks) to build, run, and patch container images --- Previously updated : 10/31/2023----# ACR Tasks samples --This article links to example `task.yaml` files and associated Dockerfiles for several [Azure Container Registry Tasks](container-registry-tasks-overview.md) (ACR Tasks) scenarios. --For additional examples, see the [Azure samples][task-examples] repo. --## Scenarios --* **Build image** - [YAML](https://github.com/Azure-Samples/acr-tasks/blob/master/build-hello-world.yaml), [Dockerfile](https://github.com/Azure-Samples/acr-tasks/blob/master/hello-world.dockerfile) --* **Run container** - [YAML](https://github.com/Azure-Samples/acr-tasks/blob/master/bash-echo.yaml) --* **Build and push image** - [YAML](https://github.com/Azure-Samples/acr-tasks/blob/master/build-push-hello-world.yaml), [Dockerfile](https://github.com/Azure-Samples/acr-tasks/blob/master/hello-world.dockerfile) --* **Build and run image** - [YAML](https://github.com/Azure-Samples/acr-tasks/blob/master/build-run-hello-world.yaml), [Dockerfile](https://github.com/Azure-Samples/acr-tasks/blob/master/hello-world.dockerfile) --* **Build and push multiple images** - [YAML](https://github.com/Azure-Samples/acr-tasks/blob/master/build-push-hello-world-multi.yaml), [Dockerfile](https://github.com/Azure-Samples/acr-tasks/blob/master/hello-world.dockerfile) --* **Build and test images in parallel** - [YAML](https://github.com/Azure-Samples/acr-tasks/blob/master/when-parallel.yaml), [Dockerfile](https://github.com/Azure-Samples/acr-tasks/blob/master/hello-world.dockerfile) --* **Build and push images to multiple registries** - [YAML](https://github.com/Azure-Samples/acr-tasks/blob/master/multipleRegistries/testtask.yaml), [Dockerfile](https://github.com/Azure-Samples/acr-tasks/blob/master/multipleRegistries/hello-world.dockerfile) ---## Next steps --Learn more about ACR Tasks: --* [Multi-step tasks](container-registry-tasks-multi-step.md) - ACR Task-based workflows for building, testing, and patching container images in the cloud. -* [Task reference](container-registry-tasks-reference-yaml.md) - Task step types, their properties, and usage. -* [Cmd repo](https://github.com/AzureCR/cmd) - A collection of containers as commands for ACR Tasks. ---<!-- LINKS - External --> -[task-examples]: https://github.com/Azure-Samples/acr-tasks |
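To try any of the scenarios above, you can run the linked task file directly from the samples repository with `az acr run`, the same pattern used in the task reference article. A minimal sketch; `myregistry` is a placeholder for your own registry name:

```azurecli
# Run the "Build image" sample task straight from the Azure-Samples/acr-tasks repo
az acr run --registry myregistry -f build-hello-world.yaml https://github.com/Azure-Samples/acr-tasks.git
```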
container-registry | Container Registry Tasks Scheduled | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md | - Title: Tutorial - Schedule an ACR task -description: In this tutorial, learn how to run an Azure Container Registry Task on a defined schedule by setting one or more timer triggers ---- Previously updated : 10/31/2023---# Tutorial: Run an ACR task on a defined schedule --This tutorial shows you how to run an [ACR Task](container-registry-tasks-overview.md) on a schedule. Schedule a task by setting up one or more *timer triggers*. Timer triggers can be used alone, or in combination with other task triggers. --In this tutorial, learn about scheduling tasks and: --> [!div class="checklist"] -> * Create a task with a timer trigger -> * Manage timer triggers --Scheduling a task is useful for scenarios like the following: --* Run a container workload for scheduled maintenance operations. For example, run a containerized app to remove unneeded images from your registry. -* Run a set of tests on a production image during the workday as part of your live-site monitoring. ---## About scheduling a task --* **Trigger with cron expression** - The timer trigger for a task uses a *cron expression*. The expression is a string with five fields specifying the minute, hour, day, month, and day of week to trigger the task. Frequencies of up to once per minute are supported. -- For example, the expression `"0 12 * * Mon-Fri"` triggers a task at noon UTC on each weekday. See [details](#cron-expressions) later in this article. -* **Multiple timer triggers** - Adding multiple timers to a task is allowed, as long as the schedules differ. - * Specify multiple timer triggers when you create the task, or add them later. - * Optionally name the triggers for easier management, or ACR Tasks will provide default trigger names. - * If timer schedules overlap at a time, ACR Tasks triggers the task at the scheduled time for each timer. -* **Other task triggers** - In a timer-triggered task, you can also enable triggers based on [source code commit](container-registry-tutorial-build-task.md) or [base image updates](container-registry-tutorial-base-image-update.md). Like other ACR tasks, you can also [manually run][az-acr-task-run] a scheduled task. --## Create a task with a timer trigger --### Task command --First, populate the following shell environment variable with a value appropriate for your environment. This step isn't strictly required, but makes executing the multiline Azure CLI commands in this tutorial a bit easier. If you don't populate the environment variable, you must manually replace each value wherever it appears in the example commands. ---```console -ACR_NAME=<registry-name> # The name of your Azure container registry -``` --When you create a task with the [az acr task create][az-acr-task-create] command, you can optionally add a timer trigger. Add the `--schedule` parameter and pass a cron expression for the timer. --As a simple example, the following task triggers running the `hello-world` image from Microsoft Container Registry every day at 21:00 UTC. The task runs without a source code context. --```azurecli -az acr task create \ - --name timertask \ - --registry $ACR_NAME \ - --cmd mcr.microsoft.com/hello-world \ - --schedule "0 21 * * *" \ - --context -``` --Run the [az acr task show][az-acr-task-show] command to see that the timer trigger is configured. By default, the base image update trigger is also enabled. 
--```azurecli -az acr task show --name timertask --registry $ACR_NAME --output table -``` --```output -NAME PLATFORM STATUS SOURCE REPOSITORY TRIGGERS - --------- ---------- -------- ------------------- ----------------- -timertask linux Enabled BASE_IMAGE, TIMER -``` ---Here's a similar example that creates a task with a source code context. The following task builds and pushes the `timertask` image from a GitHub repository every day at 21:00 UTC. --Follow the [Prerequisites](./container-registry-tutorial-quick-task.md#prerequisites) to build the source code context and then create a scheduled task with context. - -```azurecli -az acr task create \ - --name timertask \ - --registry $ACR_NAME \ - --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#master \ - --file Dockerfile \ - --image timertask:{{.Run.ID}} \ - --git-access-token $GIT_PAT \ - --schedule "0 21 * * *" -``` --Run the [az acr task show][az-acr-task-show] command to see that the timer trigger is configured. By default, the base image update trigger is also enabled. --```azurecli -az acr task show --name timertask --registry $ACR_NAME --output table -``` --Run the [az acr task run][az-acr-task-run] command to trigger the task manually. --```azurecli -az acr task run --name timertask --registry $ACR_NAME -``` --## Trigger the task --Trigger the task manually with [az acr task run][az-acr-task-run] to ensure that it is set up properly: --```azurecli -az acr task run --name timertask --registry $ACR_NAME -``` --If the container runs successfully, the output is similar to the following. The output is condensed to show key steps: --```output -Queued a run with ID: cf2a -Waiting for an agent... -2020/11/20 21:03:36 Using acb_vol_2ca23c46-a9ac-4224-b0c6-9fde44eb42d2 as the home volume -2020/11/20 21:03:36 Creating Docker network: acb_default_network, driver: 'bridge' -[...] -2020/11/20 21:03:38 Launching container with name: acb_step_0 --Hello from Docker! -This message shows that your installation appears to be working correctly. -[...] -``` --After the scheduled time, run the [az acr task list-runs][az-acr-task-list-runs] command to verify that the timer triggered the task as expected: --```azurecli -az acr task list-runs --name timertask --registry $ACR_NAME --output table -``` --When the timer is successful, output is similar to the following: --```output -RUN ID TASK PLATFORM STATUS TRIGGER STARTED DURATION - -------- --------- ---------- --------- --------- -------------------- ---------- -ca15 timertask linux Succeeded Timer 2020-11-20T21:00:23Z 00:00:06 -ca14 timertask linux Succeeded Manual 2020-11-20T20:53:35Z 00:00:06 -``` --## Manage timer triggers --Use the [az acr task timer][az-acr-task-timer] commands to manage the timer triggers for an ACR task. --### Add or update a timer trigger --After a task is created, optionally add a timer trigger by using the [az acr task timer add][az-acr-task-timer-add] command. The following example adds a timer trigger named *timer2* to *timertask* created previously. This timer triggers the task every day at 10:30 UTC. --```azurecli -az acr task timer add \ - --name timertask \ - --registry $ACR_NAME \ - --timer-name timer2 \ - --schedule "30 10 * * *" -``` --Update the schedule of an existing trigger, or change its status, by using the [az acr task timer update][az-acr-task-timer-update] command. 
For example, update the trigger named *timer2* to trigger the task at 11:30 UTC: --```azurecli -az acr task timer update \ - --name timertask \ - --registry $ACR_NAME \ - --timer-name timer2 \ - --schedule "30 11 * * *" -``` --### List timer triggers --The [az acr task timer list][az-acr-task-timer-list] command shows the timer triggers set up for a task: --```azurecli -az acr task timer list --name timertask --registry $ACR_NAME -``` --Example output: --```JSON -[ - { - "name": "timer2", - "schedule": "30 11 * * *", - "status": "Enabled" - }, - { - "name": "t1", - "schedule": "0 21 * * *", - "status": "Enabled" - } -] -``` --### Remove a timer trigger --Use the [az acr task timer remove][az-acr-task-timer-remove] command to remove a timer trigger from a task. The following example removes the *timer2* trigger from *timertask*: --```azurecli -az acr task timer remove \ - --name timertask \ - --registry $ACR_NAME \ - --timer-name timer2 -``` --## Cron expressions --ACR Tasks uses the [NCronTab](https://github.com/atifaziz/NCrontab) library to interpret cron expressions. Supported expressions in ACR Tasks have five required fields separated by white space: --`{minute} {hour} {day} {month} {day-of-week}` --The time zone used with the cron expressions is Coordinated Universal Time (UTC). Hours are in 24-hour format. --> [!NOTE] -> ACR Tasks does not support the `{second}` or `{year}` field in cron expressions. If you copy a cron expression used in another system, be sure to remove those fields, if they are used. --Each field can have one of the following types of values: --|Type |Example |When triggered | -|||| -|A specific value |<nobr>`"5 * * * *"`</nobr>|every hour at 5 minutes past the hour| -|All values (`*`)|<nobr>`"* 5 * * *"`</nobr>|every minute of the hour beginning 5:00 UTC (60 times a day)| -|A range (`-` operator)|<nobr>`"0 1-3 * * *"`</nobr>|3 times per day, at 1:00, 2:00, and 3:00 UTC| -|A set of values (`,` operator)|<nobr>`"20,30,40 * * * *"`</nobr>|3 times per hour, at 20 minutes, 30 minutes, and 40 minutes past the hour| -|An interval value (`/` operator)|<nobr>`"*/10 * * * *"`</nobr>|6 times per hour, at 10 minutes, 20 minutes, and so on, past the hour ---### Cron examples --|Example|When triggered | -||| -|`"*/5 * * * *"`|once every five minutes| -|`"0 * * * *"`|once at the top of every hour| -|`"0 */2 * * *"`|once every two hours| -|`"0 9-17 * * *"`|once every hour from 9:00 to 17:00 UTC| -|`"30 9 * * *"`|at 9:30 UTC every day| -|`"30 9 * * 1-5"`|at 9:30 UTC every weekday| -|`"30 9 * Jan Mon"`|at 9:30 UTC every Monday in January| --## Clean up resources --To remove all resources you've created in this tutorial series, including the container registry or registries, container instance, key vault, and service principal, issue the following commands: --```azurecli -az group delete --resource-group $RES_GROUP -az ad sp delete --id http://$ACR_NAME-pull -``` --## Next steps --In this tutorial, you learned how to create Azure Container Registry tasks that are automatically triggered by a timer. --For an example of using a scheduled task to clean up repositories in a registry, see [Automatically purge images from an Azure container registry](container-registry-auto-purge.md). --For examples of tasks triggered by source code commits or base image updates, see other articles in the [ACR Tasks tutorial series](container-registry-tutorial-quick-task.md). 
----<!-- LINKS - External --> -[task-examples]: https://github.com/Azure-Samples/acr-tasks ---<!-- LINKS - Internal --> -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-show]: /cli/azure/acr/task#az_acr_task_show -[az-acr-task-list-runs]: /cli/azure/acr/task#az_acr_task_list_runs -[az-acr-task-timer]: /cli/azure/acr/task/timer -[az-acr-task-timer-add]: /cli/azure/acr/task/timer#az_acr_task_timer_add -[az-acr-task-timer-remove]: /cli/azure/acr/task/timer#az_acr_task_timer_remove -[az-acr-task-timer-list]: /cli/azure/acr/task/timer#az_acr_task_timer_list -[az-acr-task-timer-update]: /cli/azure/acr/task/timer#az_acr_task_timer_update -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-task]: /cli/azure/acr/task -[azure-cli-install]: /cli/azure/install-azure-cli |
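As a worked example tying the cron table to the timer commands in the tutorial above, the following adds one more trigger to *timertask* that fires at 9:30 UTC on weekdays only (the trigger name is arbitrary):

```azurecli
# Add a weekday-only timer trigger using the "30 9 * * 1-5" cron expression
az acr task timer add \
  --name timertask \
  --registry $ACR_NAME \
  --timer-name weekday-timer \
  --schedule "30 9 * * 1-5"
```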
container-registry | Container Registry Transfer Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-cli.md | - Title: ACR Transfer with Az CLI -description: Use ACR Transfer with Az CLI --- Previously updated : 10/31/2023-----# ACR Transfer with Az CLI --This article shows how to use the ACR Transfer feature with the acrtransfer Az CLI extension. --## Complete prerequisites --Complete the prerequisites outlined [here](./container-registry-transfer-prerequisites.md) before attempting the actions in this article. This means that: --- You have an existing Premium SKU Registry in both clouds.-- You have an existing Storage Account Container in both clouds.-- You have an existing Keyvault with a secret containing a valid SAS token with the necessary permissions in both clouds.-- You have a recent version of Az CLI installed in both clouds.--## Install the Az CLI extension --In AzureCloud, you can install the extension with the following command: --```azurecli -az extension add --name acrtransfer -``` --In AzureCloud and other clouds, you can also install the extension directly from a blob in a public storage account container. The blob is hosted in the `acrtransferext` storage account, `dist` container, `acrtransfer-1.0.0-py2.py3-none-any.whl` blob. You may need to change the storage URI suffix depending on which cloud you are in. The following will install in AzureCloud: --```azurecli -az extension add --source https://acrtransferext.blob.core.windows.net/dist/acrtransfer-1.0.0-py2.py3-none-any.whl -``` --## Create ExportPipeline with the acrtransfer Az CLI extension --Create an ExportPipeline resource for your AzureCloud container registry using the acrtransfer Az CLI extension. --Create an export pipeline with no options and a system-assigned identity: --```azurecli -az acr export-pipeline create \ - --resource-group $MyRG \ - --registry $MyReg \ - --name $MyPipeline \ - --secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \ - --storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer -``` --Create an export pipeline with all possible options and a user-assigned identity: --```azurecli -az acr export-pipeline create \ - --resource-group $MyRG \ - --registry $MyReg \ - --name $MyPipeline \ - --secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \ - --storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer \ - --options OverwriteBlobs ContinueOnErrors \ - --assign-identity /subscriptions/$MySubID/resourceGroups/$MyRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$MyIdentity -``` --### Export options --The `options` property for the export pipelines supports optional boolean values. The following values are recommended: --|Parameter |Value | -||| -|options | OverwriteBlobs - Overwrite existing target blobs<br/>ContinueOnErrors - Continue export of remaining artifacts in the source registry if one artifact export fails. --### Give the ExportPipeline identity keyvault policy access --If you created your pipeline with a user-assigned identity, simply give this user-assigned identity `secret get` access policy permissions on the keyvault. --If you created your pipeline with a system-assigned identity, you will first need to retrieve the principalId that the system has assigned to your pipeline resource.
--Run the following command to retrieve your pipeline resource: --```azurecli -az acr export-pipeline show --resource-group $MyRG --registry $MyReg --name $MyPipeline -``` --From the output, copy the value of the `principalId` field. --Then, run the following command to give this principal `secret get` access policy permissions on your keyvault. --```azurecli -az keyvault set-policy --name $MyKeyvault --secret-permissions get --object-id $MyPrincipalID -``` --## Create ImportPipeline with the acrtransfer Az CLI extension --Create an ImportPipeline resource in your target container registry using the acrtransfer Az CLI extension. By default, the pipeline is enabled to create an Import PipelineRun automatically when the attached storage account container receives a new artifact blob. --Create an import pipeline with no options and a system-assigned identity: --```azurecli -az acr import-pipeline create \ - --resource-group $MyRG \ - --registry $MyReg \ - --name $MyPipeline \ - --secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \ - --storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer -``` --Create an import pipeline with all possible options, source-trigger disabled, and a user-assigned identity: --```azurecli -az acr import-pipeline create \ - --resource-group $MyRG \ - --registry $MyReg \ - --name $MyPipeline \ - --secret-uri https://$MyKV.vault.azure.net/secrets/$MySecret \ - --storage-container-uri https://$MyStorage.blob.core.windows.net/$MyContainer \ - --options DeleteSourceBlobOnSuccess OverwriteTags ContinueOnErrors \ - --assign-identity /subscriptions/$MySubID/resourceGroups/$MyRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$MyIdentity \ - --source-trigger-enabled False -``` --### Import options --The `options` property for the import pipeline supports optional boolean values. The following values are recommended: --|Parameter |Value | -||| -|options | OverwriteTags - Overwrite existing target tags<br/>DeleteSourceBlobOnSuccess - Delete the source storage blob after successful import to the target registry<br/>ContinueOnErrors - Continue import of remaining artifacts in the target registry if one artifact import fails. --### Give the ImportPipeline identity keyvault policy access --If you created your pipeline with a user-assigned identity, simply give this user-assigned identity `secret get` access policy permissions on the keyvault. --If you created your pipeline with a system-assigned identity, you will first need to retrieve the principalId that the system has assigned to your pipeline resource. 
--Create an export pipeline-run: --```azurecli -az acr pipeline-run create \ - --resource-group $MyRG \ - --registry $MyReg \ - --pipeline $MyPipeline \ - --name $MyPipelineRun \ - --pipeline-type export \ - --storage-blob $MyBlob \ - --artifacts hello-world:latest hello-world@sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042 \ - --force-redeploy -``` --If redeploying a PipelineRun resource with identical properties, you must use the `--force-redeploy` flag. --It can take several minutes for artifacts to export. When deployment completes successfully, verify artifact export by listing the exported blob in the container of the source storage account. For example, run the [az storage blob list][az-storage-blob-list] command: --```azurecli -az storage blob list --account-name $MyStorageAccount --container $MyContainer --output table -``` --## Transfer blob across domain --In most use-cases, you will now use a Cross Domain Solution or other method to transfer your blob from the storage account in your source domain (the storage account associated with your export pipeline) to the storage account in your target domain (the storage account associated with your import pipeline). At this point, we will assume that the blob has arrived in the target domain storage account associated with your import pipeline. --## Trigger ImportPipeline resource --If you did not use the `--source-trigger-enabled False` parameter when creating your import pipeline, the pipeline will be triggered within 15 minutes after the blob arrives in the storage account container. It can take several minutes for artifacts to import. When the import completes successfully, verify artifact import by listing the tags on the repository you are importing in the target container registry. For example, run [az acr repository show-tags][az-acr-repository-show-tags]: --```azurecli -az acr repository show-tags --name $MyRegistry --repository $MyRepository -``` --> [!Note] -> Source Trigger will only import blobs that have a Last Modified time within the last 60 days. If you intend to use Source Trigger to import blobs older than that, refresh the Last Modified time of the blobs by adding blob metadata to them, or else import them with manually created pipeline runs. --If you did use the `--source-trigger-enabled False` parameter when creating your ImportPipeline, you will need to create a PipelineRun manually, as shown in the following section. --## Create PipelineRun for import with the acrtransfer Az CLI extension --Create a PipelineRun resource for your container registry using the acrtransfer Az CLI extension. This resource runs the ImportPipeline resource you created previously and imports specified blobs from your storage account into your container registry. --Create an import pipeline-run: --```azurecli -az acr pipeline-run create \ - --resource-group $MyRG \ - --registry $MyReg \ - --pipeline $MyPipeline \ - --name $MyPipelineRun \ - --pipeline-type import \ - --storage-blob $MyBlob \ - --force-redeploy -``` --If redeploying a PipelineRun resource with identical properties, you must use the `--force-redeploy` flag. --It can take several minutes for artifacts to import. When the import completes successfully, verify artifact import by listing the repositories in the target container registry. 
For example, run [az acr repository show-tags][az-acr-repository-show-tags]: --```azurecli -az acr repository show-tags --name $MyRegistry --repository $MyRepository -``` --## Delete ACR Transfer resources --Delete an ExportPipeline: --```azurecli -az acr export-pipeline delete --resource-group $MyRG --registry $MyReg --name $MyPipeline -``` --Delete an ImportPipeline: --```azurecli -az acr import-pipeline delete --resource-group $MyRG --registry $MyReg --name $MyPipeline -``` --Delete a PipelineRun resource. Note that this does not reverse the action taken by the PipelineRun. This is more like deleting the log of the PipelineRun. --```azurecli -az acr pipeline-run delete --resource-group $MyRG --registry $MyReg --name $MyPipelineRun -``` --## ACR Transfer troubleshooting --View [ACR Transfer Troubleshooting](container-registry-transfer-troubleshooting.md) for troubleshooting guidance. --## Next steps --* Learn how to [block creation of export pipelines](data-loss-prevention.md) from a network-restricted container registry. --<!-- LINKS - External --> -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-login]: /cli/azure/reference-index#az-login -[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az-keyvault-secret-set -[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az-keyvault-secret-show -[az-keyvault-set-policy]: /cli/azure/keyvault#az-keyvault-set-policy -[az-storage-container-generate-sas]: /cli/azure/storage/container#az-storage-container-generate-sas -[az-storage-blob-list]: /cli/azure/storage/blob#az-storage-blob-list -[az-deployment-group-create]: /cli/azure/deployment/group#az-deployment-group-create -[az-deployment-group-delete]: /cli/azure/deployment/group#az-deployment-group-delete -[az-deployment-group-show]: /cli/azure/deployment/group#az-deployment-group-show -[az-acr-repository-show-tags]: /cli/azure/acr/repository##az_acr_repository_show_tags -[az-acr-import]: /cli/azure/acr#az-acr-import -[az-resource-delete]: /cli/azure/resource#az-resource-delete |
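A follow-up to the CLI walkthrough above: after creating a PipelineRun, you can inspect the run resource to confirm whether it succeeded and to read any error message (the troubleshooting article refers to the `pipelineRunErrorMessage` property on the run). A hedged sketch; it assumes the acrtransfer extension exposes a `show` subcommand for pipeline runs alongside `create` and `delete`:

```azurecli
# Inspect a previously created pipeline run; look for provisioningState
# and pipelineRunErrorMessage in the JSON output
az acr pipeline-run show \
  --resource-group $MyRG \
  --registry $MyReg \
  --name $MyPipelineRun
```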
container-registry | Container Registry Transfer Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-images.md | - Title: ACR Transfer with ARM Templates -description: Use ACR Transfer with ARM templates --- Previously updated : 10/31/2023-----# ACR Transfer with ARM templates --## Complete Prerequisites --Complete the prerequisites outlined [here](./container-registry-transfer-prerequisites.md) before attempting the actions in this article. This means that: --- You have an existing Premium SKU Registry in both clouds.-- You have an existing Storage Account Container in both clouds.-- You have an existing Keyvault with a secret containing a valid SAS token with the necessary permissions in both clouds.-- You have a recent version of Az CLI installed in both clouds.--> [!IMPORTANT] -> ACR Transfer supports artifacts with layer sizes of up to 8 GB due to technical limitations. --## Consider using the Az CLI extension --For most nonautomated use-cases, we recommend using the Az CLI extension if possible. You can view documentation for the Az CLI extension [here](./container-registry-transfer-cli.md). --## Create ExportPipeline with Resource Manager --Create an ExportPipeline resource for your source container registry using Azure Resource Manager template deployment. --Copy ExportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/ExportPipelines) to a local folder. --Enter the following parameter values in the file `azuredeploy.parameters.json`: --|Parameter |Value | -||| -|registryName | Name of your source container registry | -|exportPipelineName | Name you choose for the export pipeline | -|targetUri | URI of the storage container in your source environment (the target of the export pipeline).<br/>Example: `https://sourcestorage.blob.core.windows.net/transfer` | -|keyVaultName | Name of the source key vault | -|sasTokenSecretName | Name of the SAS token secret in the source key vault <br/>Example: acrexportsas --### Export options --The `options` property for the export pipelines supports optional boolean values. The following values are recommended: --|Parameter |Value | -||| -|options | OverwriteBlobs - Overwrite existing target blobs<br/>ContinueOnErrors - Continue export of remaining artifacts in the source registry if one artifact export fails. --### Create the resource --Run [az deployment group create][az-deployment-group-create] to create a resource named *exportPipeline* as shown in the following examples. By default, with the first option, the example template enables a system-assigned identity in the ExportPipeline resource. --With the second option, you can provide the resource with a user-assigned identity. (Creation of the user-assigned identity not shown.) --With either option, the template configures the identity to access the SAS token in the export key vault. --#### Option 1: Create resource and enable system-assigned identity --```azurecli -az deployment group create \ - --resource-group $SOURCE_RG \ - --template-file azuredeploy.json \ - --name exportPipeline \ - --parameters azuredeploy.parameters.json -``` --#### Option 2: Create resource and provide user-assigned identity --In this command, provide the resource ID of the user-assigned identity as an additional parameter.
--```azurecli -az deployment group create \ - --resource-group $SOURCE_RG \ - --template-file azuredeploy.json \ - --name exportPipeline \ - --parameters azuredeploy.parameters.json \ - --parameters userAssignedIdentity="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myUserAssignedIdentity" -``` --In the command output, take note of the resource ID (`id`) of the pipeline. You can store this value in an environment variable for later use by running the [az deployment group show][az-deployment-group-show]. For example: --```azurecli -EXPORT_RES_ID=$(az deployment group show \ - --resource-group $SOURCE_RG \ - --name exportPipeline \ - --query 'properties.outputResources[1].id' \ - --output tsv) -``` --## Create ImportPipeline with Resource Manager --Create an ImportPipeline resource in your target container registry using Azure Resource Manager template deployment. By default, the pipeline is enabled to import automatically when the storage account in the target environment has an artifact blob. --Copy ImportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/ImportPipelines) to a local folder. --Enter the following parameter values in the file `azuredeploy.parameters.json`: --Parameter |Value | -||| -|registryName | Name of your target container registry | -|importPipelineName | Name you choose for the import pipeline | -|sourceUri | URI of the storage container in your target environment (the source for the import pipeline).<br/>Example: `https://targetstorage.blob.core.windows.net/transfer`| -|keyVaultName | Name of the target key vault | -|sasTokenSecretName | Name of the SAS token secret in the target key vault<br/>Example: acr importsas | --### Import options --The `options` property for the import pipeline supports optional boolean values. The following values are recommended: --|Parameter |Value | -||| -|options | OverwriteTags - Overwrite existing target tags<br/>DeleteSourceBlobOnSuccess - Delete the source storage blob after successful import to the target registry<br/>ContinueOnErrors - Continue import of remaining artifacts in the target registry if one artifact import fails. --### Create the resource --Run [az deployment group create][az-deployment-group-create] to create a resource named *importPipeline* as shown in the following examples. By default, with the first option, the example template enables a system-assigned identity in the ImportPipeline resource. --With the second option, you can provide the resource with a user-assigned identity. (Creation of the user-assigned identity not shown.) --With either option, the template configures the identity to access the SAS token in the import key vault. --#### Option 1: Create resource and enable system-assigned identity --```azurecli -az deployment group create \ - --resource-group $TARGET_RG \ - --template-file azuredeploy.json \ - --name importPipeline \ - --parameters azuredeploy.parameters.json -``` --#### Option 2: Create resource and provide user-assigned identity --In this command, provide the resource ID of the user-assigned identity as an additional parameter. 
--```azurecli -az deployment group create \ - --resource-group $TARGET_RG \ - --template-file azuredeploy.json \ - --name importPipeline \ - --parameters azuredeploy.parameters.json \ - --parameters userAssignedIdentity="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myUserAssignedIdentity" -``` --If you plan to run the import manually, take note of the resource ID (`id`) of the pipeline. You can store this value in an environment variable for later use by running the [az deployment group show][az-deployment-group-show] command. For example: --```azurecli -IMPORT_RES_ID=$(az deployment group show \ - --resource-group $TARGET_RG \ - --name importPipeline \ - --query 'properties.outputResources[1].id' \ - --output tsv) -``` --## Create PipelineRun for export with Resource Manager --Create a PipelineRun resource for your source container registry using Azure Resource Manager template deployment. This resource runs the ExportPipeline resource you created previously, and exports specified artifacts from your container registry as a blob to your source storage account. --Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/PipelineRun/PipelineRun-Export) to a local folder. --Enter the following parameter values in the file `azuredeploy.parameters.json`: --|Parameter |Value | -||| -|registryName | Name of your source container registry | -|pipelineRunName | Name you choose for the run | -|pipelineResourceId | Resource ID of the export pipeline.<br/>Example: `/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.ContainerRegistry/registries/<sourceRegistryName>/exportPipelines/myExportPipeline`| -|targetName | Name you choose for the artifacts blob exported to your source storage account, such as *myblob* -|artifacts | Array of source artifacts to transfer, as tags or manifest digests<br/>Example: `[samples/hello-world:v1", "samples/nginx:v1" , "myrepository@sha256:0a2e01852872..."]` | --If redeploying a PipelineRun resource with identical properties, you must also use the [forceUpdateTag](#redeploy-pipelinerun-resource) property. --Run [az deployment group create][az-deployment-group-create] to create the PipelineRun resource. The following example names the deployment *exportPipelineRun*. --```azurecli -az deployment group create \ - --resource-group $SOURCE_RG \ - --template-file azuredeploy.json \ - --name exportPipelineRun \ - --parameters azuredeploy.parameters.json -``` --For later use, store the resource ID of the pipeline run in an environment variable: --```azurecli -EXPORT_RUN_RES_ID=$(az deployment group show \ - --resource-group $SOURCE_RG \ - --name exportPipelineRun \ - --query 'properties.outputResources[0].id' \ - --output tsv) -``` --It can take several minutes for artifacts to export. When deployment completes successfully, verify artifact export by listing the exported blob in the *transfer* container of the source storage account. For example, run the [az storage blob list][az-storage-blob-list] command: --```azurecli -az storage blob list \ - --account-name $SOURCE_SA \ - --container transfer \ - --output table -``` --## Transfer blob (optional) --Use the AzCopy tool or other methods to [transfer blob data](../storage/common/storage-use-azcopy-v10.md#transfer-data) from the source storage account to the target storage account. 
--For example, the following [`azcopy copy`](../storage/common/storage-ref-azcopy-copy.md) command copies myblob from the *transfer* container in the source account to the *transfer* container in the target account. If the blob exists in the target account, it's overwritten. Authentication uses SAS tokens with appropriate permissions for the source and target containers. (Steps to create tokens aren't shown.) --```console -azcopy copy \ - 'https://<source-storage-account-name>.blob.core.windows.net/transfer/myblob'$SOURCE_SAS \ - 'https://<destination-storage-account-name>.blob.core.windows.net/transfer/myblob'$TARGET_SAS \ - --overwrite true -``` --## Trigger ImportPipeline resource --If you enabled the `sourceTriggerStatus` parameter of the ImportPipeline (the default value), the pipeline is triggered after the blob is copied to the target storage account. It can take several minutes for artifacts to import. When the import completes successfully, verify artifact import by listing the repositories in the target container registry. For example, run [az acr repository list][az-acr-repository-list]: --```azurecli -az acr repository list --name <target-registry-name> -``` --> [!Note] -> Source Trigger will only import blobs that have a Last Modified time within the last 60 days. If you intend to use Source Trigger to import blobs older than that, please refresh the Last Modified time of the blobs by add blob metadata to them or else import them with manually created pipeline runs. --If you didn't enable the `sourceTriggerStatus` parameter of the import pipeline, run the ImportPipeline resource manually, as shown in the following section. --## Create PipelineRun for import with Resource Manager (optional) --You can also use a PipelineRun resource to trigger an ImportPipeline for artifact import to your target container registry. --Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/PipelineRun/PipelineRun-Import) to a local folder. --Enter the following parameter values in the file `azuredeploy.parameters.json`: --|Parameter |Value | -||| -|registryName | Name of your target container registry | -|pipelineRunName | Name you choose for the run | -|pipelineResourceId | Resource ID of the import pipeline.<br/>Example: `/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.ContainerRegistry/registries/<sourceRegistryName>/importPipelines/myImportPipeline` | -|sourceName | Name of the existing blob for exported artifacts in your storage account, such as *myblob* --If redeploying a PipelineRun resource with identical properties, you must also use the [forceUpdateTag](#redeploy-pipelinerun-resource) property. --Run [az deployment group create][az-deployment-group-create] to run the resource. --```azurecli -az deployment group create \ - --resource-group $TARGET_RG \ - --name importPipelineRun \ - --template-file azuredeploy.json \ - --parameters azuredeploy.parameters.json -``` --For later use, store the resource ID of the pipeline run in an environment variable: --```azurecli -IMPORT_RUN_RES_ID=$(az deployment group show \ - --resource-group $TARGET_RG \ - --name importPipelineRun \ - --query 'properties.outputResources[0].id' \ - --output tsv) -``` --When deployment completes successfully, verify artifact import by listing the repositories in the target container registry. 
For example, run [az acr repository list][az-acr-repository-list]: --```azurecli -az acr repository list --name <target-registry-name> -``` --## Redeploy PipelineRun resource --If redeploying a PipelineRun resource with *identical properties*, you must use the **forceUpdateTag** property. This property indicates that the PipelineRun resource should be recreated even if the configuration has not changed. Ensure forceUpdateTag is different each time you redeploy the PipelineRun resource. The example below recreates a PipelineRun for export. The current datetime is used to set forceUpdateTag, thereby ensuring this property is always unique. --```console -CURRENT_DATETIME=`date +"%Y-%m-%d:%T"` -``` --```azurecli -az deployment group create \ - --resource-group $SOURCE_RG \ - --template-file azuredeploy.json \ - --name exportPipelineRun \ - --parameters azuredeploy.parameters.json \ - --parameters forceUpdateTag=$CURRENT_DATETIME -``` --## Delete pipeline resources --The following example commands use [az resource delete][az-resource-delete] to delete the pipeline resources created in this article. The resource IDs were previously stored in environment variables. --``` -# Delete export resources -az resource delete \ - --resource-group $SOURCE_RG \ - --ids $EXPORT_RES_ID $EXPORT_RUN_RES_ID \ - --api-version 2019-12-01-preview --# Delete import resources -az resource delete \ - --resource-group $TARGET_RG \ - --ids $IMPORT_RES_ID $IMPORT_RUN_RES_ID \ - --api-version 2019-12-01-preview -``` --## ACR Transfer troubleshooting --View [ACR Transfer Troubleshooting](container-registry-transfer-troubleshooting.md) for troubleshooting guidance. --## Next steps --* Learn how to [block creation of export pipelines](data-loss-prevention.md) from a network-restricted container registry. --<!-- LINKS - External --> -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ ----<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-login]: /cli/azure/reference-index#az-login -[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az-keyvault-secret-set -[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az-keyvault-secret-show -[az-keyvault-set-policy]: /cli/azure/keyvault#az-keyvault-set-policy -[az-storage-container-generate-sas]: /cli/azure/storage/container#az-storage-container-generate-sas -[az-storage-blob-list]: /cli/azure/storage/blob#az-storage-blob-list -[az-deployment-group-create]: /cli/azure/deployment/group#az-deployment-group-create -[az-deployment-group-delete]: /cli/azure/deployment/group#az-deployment-group-delete -[az-deployment-group-show]: /cli/azure/deployment/group#az-deployment-group-show -[az-acr-repository-list]: /cli/azure/acr/repository#az-acr-repository-list -[az-acr-import]: /cli/azure/acr#az-acr-import -[az-resource-delete]: /cli/azure/resource#az-resource-delete |
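As a small addendum to the ARM walkthrough above: because the resource IDs of the runs were stored in environment variables, you can poll a run's status with the generic `az resource show` command before deleting anything. A sketch; the exact property layout can vary by API version, and `properties.provisioningState` is assumed here:

```azurecli
# Check whether the export run has finished; the full JSON output also carries
# the pipelineRunErrorMessage described in the troubleshooting article
az resource show \
  --ids $EXPORT_RUN_RES_ID \
  --query 'properties.provisioningState' \
  --output tsv
```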
container-registry | Container Registry Transfer Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-prerequisites.md | - Title: Transfer artifacts -description: Overview of ACR Transfer and prerequisites --- Previously updated : 10/31/2023----# Transfer artifacts to another registry --This article shows how to transfer collections of images or other registry artifacts from one Azure container registry to another registry. The source and target registries can be in the same or different subscriptions, Active Directory tenants, Azure clouds, or physically disconnected clouds. --To transfer artifacts, you create a *transfer pipeline* that replicates artifacts between two registries by using [blob storage](../storage/blobs/storage-blobs-introduction.md): --* Artifacts from a source registry are exported to a blob in a source storage account -* The blob is copied from the source storage account to a target storage account -* The blob in the target storage account gets imported as artifacts in the target registry. You can set up the import pipeline to trigger whenever the artifact blob updates in the target storage. --In this article, you create the prerequisite resources to create and run the transfer pipeline. The Azure CLI is used to provision the associated resources such as storage secrets. Azure CLI version 2.2.0 or later is recommended. If you need to install or upgrade the CLI, see [Install Azure CLI][azure-cli]. --This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md). --> [!IMPORTANT] -> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA). --## Consider your use-case --Transfer is ideal for copying content between two Azure container registries in physically disconnected clouds, mediated by storage accounts in each cloud. If instead you want to copy images from container registries in connected clouds including Docker Hub and other cloud vendors, [image import](container-registry-import-images.md) is recommended. --## Prerequisites --* **Container registries** - You need an existing source registry with artifacts to transfer, and a target registry. ACR transfer is intended for movement across physically disconnected clouds. For testing, the source and target registries can be in the same or a different Azure subscription, Active Directory tenant, or cloud. -- If you need to create a registry, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md). -* **Storage accounts** - Create source and target storage accounts in a subscription and location of your choice. For testing purposes, you can use the same subscription or subscriptions as your source and target registries. For cross-cloud scenarios, typically you create a separate storage account in each cloud. -- If needed, create the storage accounts with the [Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli) or other tools. -- Create a blob container for artifact transfer in each account. For example, create a container named *transfer*. --* **Key vaults** - Key vaults are needed to store SAS token secrets used to access source and target storage accounts. 
Create the source and target key vaults in the same Azure subscription or subscriptions as your source and target registries. For demonstration purposes, the templates and commands used in this article also assume that the source and target key vaults are located in the same resource groups as the source and target registries, respectively. This use of common resource groups isn't required, but it simplifies the templates and commands used in this article. -- If needed, create key vaults with the [Azure CLI](/azure/key-vault/secrets/quick-create-cli) or other tools. --* **Environment variables** - For example commands in this article, set the following environment variables for the source and target environments. All examples are formatted for the Bash shell. - ```console - SOURCE_RG="<source-resource-group>" - TARGET_RG="<target-resource-group>" - SOURCE_KV="<source-key-vault>" - TARGET_KV="<target-key-vault>" - SOURCE_SA="<source-storage-account>" - TARGET_SA="<target-storage-account>" - ``` --## Scenario overview --You create the following three pipeline resources for image transfer between registries. All are created using PUT operations. These resources operate on your *source* and *target* registries and storage accounts. --Storage authentication uses SAS tokens, managed as secrets in key vaults. The pipelines use managed identities to read the secrets in the vaults. --* **[ExportPipeline](./container-registry-transfer-cli.md#create-exportpipeline-with-the-acrtransfer-az-cli-extension)** - Long-lasting resource that contains high-level information about the *source* registry and storage account. This information includes the source storage blob container URI and the key vault managing the source SAS token. -* **[ImportPipeline](./container-registry-transfer-cli.md#create-importpipeline-with-the-acrtransfer-az-cli-extension)** - Long-lasting resource that contains high-level information about the *target* registry and storage account. This information includes the target storage blob container URI and the key vault managing the target SAS token. An import trigger is enabled by default, so the pipeline runs automatically when an artifact blob lands in the target storage container. -* **[PipelineRun](./container-registry-transfer-cli.md#create-pipelinerun-for-export-with-the-acrtransfer-az-cli-extension)** - Resource used to invoke either an ExportPipeline or ImportPipeline resource. - * You run the ExportPipeline manually by creating a PipelineRun resource and specify the artifacts to export. - * If an import trigger is enabled, the ImportPipeline runs automatically. It can also be run manually using a PipelineRun. - * Currently a maximum of **50 artifacts** can be transferred with each PipelineRun. --### Things to know -* The ExportPipeline and ImportPipeline will typically be in different Active Directory tenants associated with the source and destination clouds. This scenario requires separate managed identities and key vaults for the export and import resources. For testing purposes, these resources can be placed in the same cloud, sharing identities. -* By default, the ExportPipeline and ImportPipeline templates each enable a system-assigned managed identity to access key vault secrets. The ExportPipeline and ImportPipeline templates also support a user-assigned identity that you provide. --## Create and store SAS keys --Transfer uses shared access signature (SAS) tokens to access the storage accounts in the source and target environments. 
Generate and store tokens as described in the following sections. -> [!IMPORTANT] -> While ACR Transfer will work with a manually generated SAS token stored in a Keyvault Secret, for production workloads we *strongly* recommend using [Keyvault Managed Storage SAS Definition Secrets][kv-managed-sas] instead. ---### Generate SAS token for export --Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the source storage account, used for artifact export. --*Recommended token permissions*: Read, Write, List, Add. --In the following example, command output is assigned to the EXPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment: --```azurecli -EXPORT_SAS=?$(az storage container generate-sas \ - --name transfer \ - --account-name $SOURCE_SA \ - --expiry 2021-01-01 \ - --permissions alrw \ - --https-only \ - --output tsv) -``` --### Store SAS token for export --Store the SAS token in your source Azure key vault using [az keyvault secret set][az-keyvault-secret-set]: --```azurecli -az keyvault secret set \ - --name acrexportsas \ - --value $EXPORT_SAS \ - --vault-name $SOURCE_KV -``` --### Generate SAS token for import --Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the target storage account, used for artifact import. --*Recommended token permissions*: Read, Delete, List --In the following example, command output is assigned to the IMPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment: --```azurecli -IMPORT_SAS=?$(az storage container generate-sas \ - --name transfer \ - --account-name $TARGET_SA \ - --expiry 2021-01-01 \ - --permissions dlr \ - --https-only \ - --output tsv) -``` --### Store SAS token for import --Store the SAS token in your target Azure key vault using [az keyvault secret set][az-keyvault-secret-set]: --```azurecli -az keyvault secret set \ - --name acrimportsas \ - --value $IMPORT_SAS \ - --vault-name $TARGET_KV -``` --## Next steps --* Follow one of the below tutorials to create your ACR Transfer resources. For most non-automated use-cases, we recommend using the Az CLI Extension. 
-- * [ACR Transfer with Az CLI](./container-registry-transfer-cli.md) - * [ACR Transfer with ARM templates](./container-registry-transfer-images.md) --<!-- LINKS - External --> -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ ---<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-login]: /cli/azure/reference-index#az_login -[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set -[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az_keyvault_secret_show -[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy -[az-storage-container-generate-sas]: /cli/azure/storage/container#az_storage_container_generate_sas -[az-storage-blob-list]: /cli/azure/storage/blob#az_storage-blob-list -[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create -[az-deployment-group-delete]: /cli/azure/deployment/group#az_deployment_group_delete -[az-deployment-group-show]: /cli/azure/deployment/group#az_deployment_group_show -[az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list -[az-acr-import]: /cli/azure/acr#az_acr_import -[az-resource-delete]: /cli/azure/resource#az_resource_delete -[kv-managed-sas]: /azure/key-vault/secrets/overview-storage-keys |
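The prerequisites above call for a blob container named *transfer* in each storage account. If you haven't created them yet, one way to do so with the Azure CLI, reusing the environment variables defined earlier (a sketch; authentication options such as `--auth-mode login` are omitted):

```azurecli
# Create the transfer container in the source and target storage accounts
az storage container create --name transfer --account-name $SOURCE_SA
az storage container create --name transfer --account-name $TARGET_SA
```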
container-registry | Container Registry Transfer Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-troubleshooting.md | - Title: ACR Transfer Troubleshooting -description: Troubleshoot ACR Transfer -- Previously updated : 10/31/2023-----# ACR Transfer troubleshooting --## Template deployment failures or errors - * If a pipeline run fails, look at the `pipelineRunErrorMessage` property of the run resource. - * For common template deployment errors, see [Troubleshoot ARM template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md) -## Problems accessing Key Vault - * If your pipelineRun deployment fails with a `403 Forbidden` error when accessing Azure Key Vault, verify that your pipeline managed identity has adequate permissions. - * A pipelineRun uses the exportPipeline or importPipeline managed identity to fetch the SAS token secret from your Key Vault. ExportPipelines and importPipelines are provisioned with either a system-assigned or user-assigned managed identity. This managed identity is required to have `secret get` permissions on the Key Vault in order to read the SAS token secret. Ensure that an access policy for the managed identity was added to the Key Vault. For more information, reference [Give the ExportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-exportpipeline-identity-keyvault-policy-access) and [Give the ImportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-importpipeline-identity-keyvault-policy-access). -## Problems accessing storage - * If you see a `403 Forbidden` error from storage, you likely have a problem with your SAS token. - * The SAS token might not currently be valid. The SAS token might be expired or the storage account keys might have changed since the SAS token was created. Verify that the SAS token is valid by attempting to use the SAS token to authenticate for access to the storage account container. For example, put an existing blob endpoint followed by the SAS token in the address bar of a new Microsoft Edge InPrivate window or upload a blob to the container with the SAS token by using `az storage blob upload`. - * The SAS token might not have sufficient Allowed Resource Types. Verify that the SAS token has been given permissions to Service, Container, and Object under Allowed Resource Types (`srt=sco` in the SAS token). - * The SAS token might not have sufficient permissions. For export pipelines, the required SAS token permissions are Read, Write, List, and Add. For import pipelines, the required SAS token permissions are Read, Delete, and List. (The Delete permission is required only if the import pipeline has the `DeleteSourceBlobOnSuccess` option enabled.) - * The SAS token might not be configured to work with HTTPS only. Verify that the SAS token is configured to work with HTTPS only (`spr=https` in the SAS token). -## Problems with export or import of storage blobs - * SAS token may be invalid, or may have insufficient permissions for the specified export or import run. See [Problems accessing storage](#problems-accessing-storage). - * Existing storage blob in source storage account might not be overwritten during multiple export runs. Confirm that the OverwriteBlob option is set in the export run and the SAS token has sufficient permissions. - * Storage blob in target storage account might not be deleted after successful import run. 
Confirm that the DeleteSourceBlobOnSuccess option is set in the import run and the SAS token has sufficient permissions. - * Storage blob not created or deleted. Confirm that the container specified in the export or import run exists, or that the specified storage blob exists for a manual import run. -## Problems with Source Trigger Imports - * The SAS token must have the List permission for Source Trigger imports to work. - * Source Trigger imports will only fire if the Storage Blob has a Last Modified time within the last 60 days. - * The Storage Blob must have a valid ContentMD5 property in order to be imported by the Source Trigger feature. - * The Storage Blob must have the `"category":"acr-transfer-blob"` blob metadata in order to be imported by the Source Trigger feature. This metadata is added automatically during an Export Pipeline Run, but may be stripped when moved from storage account to storage account depending on the method of copy. -## AzCopy issues - * See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md). -## Artifacts transfer problems - * Not all artifacts, or none, are transferred. Confirm spelling of artifacts in the export run, and the name of the blob in the export and import runs. Confirm you're transferring a maximum of 50 artifacts. - * Pipeline run might not have completed. An export or import run can take some time. - * For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team. - * To create ACR Transfer resources such as `exportPipelines`, `importPipelines`, and `pipelineRuns`, the user must have at least Contributor access on the ACR subscription. Otherwise, they'll see "authorization to perform the transfer denied" or "scope is invalid" errors. -## Problems pulling the image in a physically isolated environment - * If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If so, you'll need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. 
For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-) |
container-registry | Container Registry Troubleshoot Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-access.md | - Title: Troubleshoot network issues with registry -description: Symptoms, causes, and resolution of common problems when accessing an Azure container registry in a virtual network or behind a firewall --- Previously updated : 10/31/2023----# Troubleshoot network issues with registry --This article helps you troubleshoot problems you might encounter when accessing an Azure container registry in a virtual network or behind a firewall or proxy server. --## Symptoms --May include one or more of the following: --* Unable to push or pull images and you receive error `dial tcp: lookup myregistry.azurecr.io` -* Unable to push or pull images and you receive error `Client.Timeout exceeded while awaiting headers` -* Unable to push or pull images and you receive Azure CLI error `Could not connect to the registry login server` -* Unable to pull images from registry to Azure Kubernetes Service or another Azure service -* Unable to access a registry behind an HTTPS proxy and you receive error `Error response from daemon: login attempt failed with status: 403 Forbidden` or `Error response from daemon: Get <registry>: proxyconnect tcp: EOF Login failed` -* Unable to configure virtual network settings and you receive error `Failed to save firewall and virtual network settings for container registry` -* Unable to access or view registry settings in Azure portal or manage registry using the Azure CLI -* Unable to add or modify virtual network settings or public access rules -* ACR Tasks is unable to push or pull images -* Microsoft Defender for Cloud can't scan images in registry, or scan results don't appear in Microsoft Defender for Cloud -* You receive error `host is not reachable` when attempting to access a registry configured with a private endpoint. --## Causes --* A client firewall or proxy prevents access - [solution](#configure-client-firewall-access) -* Public network access rules on the registry prevent access - [solution](#configure-public-access-to-registry) -* Virtual network or private endpoint configuration prevents access - [solution](#configure-vnet-access) -* You attempt to integrate Microsoft Defender for Cloud or certain other Azure services with a registry that has a private endpoint, service endpoint, or public IP access rules - [solution](#configure-service-access) --## Further diagnosis --Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get more information about the health of the registry environment and optionally access to a target registry. For example, diagnose certain network connectivity or configuration problems. --See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions. --If you're experiencing problems using an Azure Kubernetes Service with an integrated registry, run the [az aks check-acr](/cli/azure/aks#az-aks-check-acr) command to validate that the AKS cluster can reach the registry. --> [!NOTE] -> Some network connectivity symptoms can also occur when there are issues with registry authentication or authorization. See [Troubleshoot registry login](container-registry-troubleshoot-login.md). 
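For example, assuming a registry named *myregistry*, an AKS cluster named *myAKSCluster*, and a resource group named *myResourceGroup* (replace these placeholders with your own names), the diagnostic commands mentioned above can be run as follows:

```azurecli
# Check the local environment and connectivity to the target registry
az acr check-health --name myregistry --ignore-errors --yes

# Validate that an AKS cluster can reach and authenticate to the registry
az aks check-acr --name myAKSCluster --resource-group myResourceGroup --acr myregistry.azurecr.io
```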
--## Potential solutions --### Configure client firewall access --To access a registry from behind a client firewall or proxy server, configure firewall rules to access the registry's public REST and data endpoints. If [dedicated data endpoints](container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints) are enabled, you need rules to access: --* REST endpoint: `<registryname>.azurecr.io` -* Data endpoint(s): `<registry-name>.<region>.data.azurecr.io` --For a geo-replicated registry, configure access to the data endpoint for each regional replica. --Behind an HTTPS proxy, ensure that both your Docker client and Docker daemon are configured for proxy behavior. If you change your proxy settings for the Docker daemon, be sure to restart the daemon. --Registry resource logs in the ContainerRegistryLoginEvents table may help diagnose an attempted connection that is blocked. --Related links: --* [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md) -* [HTTP/HTTPS proxy configuration](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy) -* [Geo-replication in Azure Container Registry](container-registry-geo-replication.md) -* [Monitor Azure Container Registry](monitor-service.md) --### Configure public access to registry --If accessing a registry over the internet, confirm the registry allows public network access from your client. By default, an Azure container registry allows access to the public registry endpoints from all networks. A registry can limit access to selected networks, or selected IP addresses. --If the registry is configured for a virtual network with a service endpoint, disabling public network access also disables access over the service endpoint. If your registry is configured for a virtual network with Private Link, IP network rules don't apply to the registry's private endpoints. --Related links: --* [Configure public IP network rules](container-registry-access-selected-networks.md) -* [Connect privately to an Azure container registry using Azure Private Link](container-registry-private-link.md) -* [Restrict access to a container registry using a service endpoint in an Azure virtual network](container-registry-vnet.md) ---### Configure VNet access --Confirm that the virtual network is configured with either a private endpoint for Private Link or a service endpoint (preview). Currently an Azure Bastion endpoint isn't supported. --If a private endpoint is configured, confirm that DNS resolves the registry's public FQDN such as *myregistry.azurecr.io* to the registry's private IP address. -- * Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command with the `--vnet` parameter to confirm the DNS routing to the private endpoint in the virtual network. - * Use a network utility such as `dig` or `nslookup` for DNS lookup. - * Ensure that [DNS records are configured](container-registry-private-link.md#dns-configuration-options) for the registry FQDN and for each of the data endpoint FQDNs. --Review NSG rules and service tags used to limit traffic from other resources in the network to the registry. --If a service endpoint to the registry is configured, confirm that a network rule is added to the registry that allows access from that network subnet. The service endpoint only supports access from virtual machines and AKS clusters in the network. 
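As a quick sketch of the checks described above (the registry and virtual network names are placeholders), you might run:

```console
# Confirm DNS routing to the private endpoint from within the virtual network
az acr check-health --name myregistry --vnet myVirtualNetwork

# From a VM in the virtual network, the registry FQDN should resolve to a private IP address
nslookup myregistry.azurecr.io

# If the registry uses a service endpoint instead, list the configured network rules
az acr network-rule list --name myregistry --output table
```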
--If you want to restrict registry access using a virtual network in a different Azure subscription, ensure that you register the `Microsoft.ContainerRegistry` resource provider in that subscription. [Register the resource provider](../azure-resource-manager/management/resource-providers-and-types.md) for Azure Container Registry using the Azure portal, Azure CLI, or other Azure tools. --If Azure Firewall or a similar solution is configured in the network, check that egress traffic from other resources such as an AKS cluster is enabled to reach the registry endpoints. --Related links: --* [Connect privately to an Azure container registry using Azure Private Link](container-registry-private-link.md) -* [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md) -* [Restrict access to a container registry using a service endpoint in an Azure virtual network](container-registry-vnet.md) -* [Required outbound network rules and FQDNs for AKS clusters](/azure/aks/outbound-rules-control-egress#required-outbound-network-rules-and-fqdns-for-aks-clusters) -* [Kubernetes: Debugging DNS resolution](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) -* [Virtual network service tags](../virtual-network/service-tags-overview.md) --### Configure service access --Currently, access to a container registry with network restrictions isn't allowed from several Azure services: --* Microsoft Defender for Cloud can't perform [image vulnerability scanning](../security-center/defender-for-container-registries-introduction.md?bc=%2fazure%2fcontainer-registry%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcontainer-registry%2ftoc.json) in a registry that restricts access to private endpoints, selected subnets, or IP addresses. -* Resources of certain Azure services are unable to access a container registry with network restrictions, including Azure App Service and Azure Container Instances. --If access or integration of these Azure services with your container registry is required, remove the network restriction. For example, remove the registry's private endpoints, or remove or modify the registry's public access rules. --Starting January 2021, you can configure a network-restricted registry to [allow access](allow-access-trusted-services.md) from select trusted services. --Related links: --* [Azure Container Registry image scanning by Microsoft Defender for container registries](../security-center/defender-for-container-registries-introduction.md) -* Provide [feedback](https://feedback.azure.com/d365community/idea/cbe6351a-0525-ec11-b6e6-000d3a4f07b8) -* [Allow trusted services to securely access a network-restricted container registry](allow-access-trusted-services.md) ---## Advanced troubleshooting --If [collection of resource logs](monitor-service.md) is enabled in the registry, review the ContainerRegistryLoginEvents log. This log stores authentication events and status, including the incoming identity and IP address. Query the log for [registry authentication failures](monitor-service.md#registry-authentication-failures). --Related links: --* [Logs for diagnostic evaluation and auditing](./monitor-service.md) -* [Container registry FAQ](container-registry-faq.yml) -* [Azure Security Baseline for Azure Container Registry](security-baseline.md) -* [Best practices for Azure Container Registry](container-registry-best-practices.md) --## Next steps --If you don't resolve your problem here, see the following options.
--* Other registry troubleshooting topics include: - * [Troubleshoot registry login](container-registry-troubleshoot-login.md) - * [Troubleshoot registry performance](container-registry-troubleshoot-performance.md) -* [Community support](https://azure.microsoft.com/support/community/) options -* [Microsoft Q&A](/answers/products/) -* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) |
container-registry | Container Registry Troubleshoot Login | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-login.md | - Title: Troubleshoot login to registry -description: Symptoms, causes, and resolution of common problems when logging into an Azure container registry --- Previously updated : 10/31/2023----# Troubleshoot registry login --This article helps you troubleshoot problems you might encounter when logging into an Azure container registry. --## Symptoms --May include one or more of the following: --* Unable to login to registry using `docker login`, `az acr login`, or both -* Unable to login to registry and you receive error `unauthorized: authentication required` or `unauthorized: Application not registered with AAD` -* Unable to login to registry and you receive Azure CLI error `Could not connect to the registry login server` -* Unable to push or pull images and you receive Docker error `unauthorized: authentication required` -* Unable to access a registry using `az acr login` and you receive error `CONNECTIVITY_REFRESH_TOKEN_ERROR. Access to registry was denied. Response code: 403. Unable to get admin user credentials with message: Admin user is disabled. Unable to authenticate using AAD or admin login credentials.` -* Unable to access registry from Azure Kubernetes Service, Azure DevOps, or another Azure service -* Unable to access registry and you receive error `Error response from daemon: login attempt failed with status: 403 Forbidden` - See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md) -* Unable to access or view registry settings in Azure portal or manage registry using the Azure CLI --## Causes --* Docker isn't configured properly in your environment - [solution](#check-docker-configuration) -* The registry doesn't exist or the name is incorrect - [solution](#specify-correct-registry-name) -* The registry credentials aren't valid - [solution](#confirm-credentials-to-access-registry) -* The registry public access is disabled. Public network access rules on the registry prevent access - [solution](container-registry-troubleshoot-access.md#configure-public-access-to-registry) -* The credentials aren't authorized for push, pull, or Azure Resource Manager operations - [solution](#confirm-credentials-are-authorized-to-access-registry) -* The credentials are expired - [solution](#check-that-credentials-arent-expired) --## Further diagnosis --Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get more information about the health of the registry environment and optionally access to a target registry. For example, diagnose Docker configuration errors or Microsoft Entra login problems. --See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions. --Follow the instructions from the [AKS support doc](/troubleshoot/azure/azure-kubernetes/cannot-pull-image-from-acr-to-aks-cluster) if you fail to pull images from ACR to the AKS cluster. --> [!NOTE] -> Some authentication or authorization errors can also occur if there are firewall or network configurations that prevent registry access. See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md). 
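For instance, assuming a registry named *myregistry* (a placeholder), the health check and a verbose retry of the failing login look like this:

```azurecli
# Check Docker configuration, CLI versions, and connectivity/authentication to the registry
az acr check-health --name myregistry --ignore-errors --yes

# Re-run the failing login with debug output to inspect the underlying token exchange
az acr login --name myregistry --debug
```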
--## Potential solutions --### Check Docker configuration --Most Azure Container Registry authentication flows require a local Docker installation so you can authenticate with your registry for operations such as pushing and pulling images. Confirm that the Docker CLI client and daemon (Docker Engine) are running in your environment. You need Docker client version 18.03 or later. --Related links: --* [Authentication overview](container-registry-authentication.md#authentication-options) -* [Container registry FAQ](container-registry-faq.yml) --### Specify correct registry name --When using `docker login`, provide the full login server name of the registry, such as *myregistry.azurecr.io*. Ensure that you use only lowercase letters. Example: --```console -docker login myregistry.azurecr.io -``` --When using [az acr login](/cli/azure/acr#az-acr-login) with a Microsoft Entra identity, first [sign in to the Azure CLI](/cli/azure/authenticate-azure-cli), and then specify the Azure resource name of the registry. The resource name is the name provided when the registry was created, such as *myregistry* (without a domain suffix). Example: --```azurecli -az acr login --name myregistry -``` --Related links: --* [az acr login succeeds but docker fails with error: unauthorized: authentication required](container-registry-faq.yml#az-acr-login-succeeds-but-docker-fails-with-error--unauthorized--authentication-required) --### Confirm credentials to access registry --Check the validity of the credentials you use for your scenario, or were provided to you by a registry owner. Some possible issues: --* If using an Active Directory service principal, ensure you use the correct credentials in the Active Directory tenant: - * User name - service principal application ID (also called *client ID*) - * Password - service principal password (also called *client secret*) -* If using an Azure service such as Azure Kubernetes Service or Azure DevOps to access the registry, confirm the registry configuration for your service. -* If you ran `az acr login` with the `--expose-token` option, which enables registry login without using the Docker daemon, ensure that you authenticate with the username `00000000-0000-0000-0000-000000000000`. -* If your registry is configured for [anonymous pull access](container-registry-faq.yml#how-do-i-enable-anonymous-pull-access-), existing Docker credentials stored from a previous Docker login can prevent anonymous access. Run `docker logout` before attempting an anonymous pull operation on the registry. --Related links: --* [Authentication overview](container-registry-authentication.md#authentication-options) -* [Individual login with Microsoft Entra ID](container-registry-authentication.md#individual-login-with-azure-ad) -* [Login with service principal](container-registry-auth-service-principal.md) -* [Login with managed identity](container-registry-authentication-managed-identity.md) -* [Login with repository-scoped token](container-registry-repository-scoped-permissions.md) -* [Login with admin account](container-registry-authentication.md#admin-account) -* [Microsoft Entra authentication and authorization error codes](../active-directory/develop/reference-aadsts-error-codes.md) -* [az acr login](/cli/azure/acr#az-acr-login) reference --### Confirm credentials are authorized to access registry --Confirm the registry permissions that are associated with the credentials, such as the `AcrPull` Azure role to pull images from the registry, or the `AcrPush` role to push images. 
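One way to confirm this, sketched here with a placeholder registry name and identity, is to list the role assignments scoped to the registry:

```azurecli
# Look up the registry's resource ID
ACR_ID=$(az acr show --name myregistry --query id --output tsv)

# List the roles assigned on the registry, such as AcrPull or AcrPush
az role assignment list --scope $ACR_ID --output table

# Optionally filter to a specific identity, for example a service principal's appId
az role assignment list --scope $ACR_ID --assignee <service-principal-appId> --output table
```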
--Access to a registry in the portal or registry management using the Azure CLI requires at least the `Reader` role or equivalent permissions to perform Azure Resource Manager operations. --If your permissions recently changed to allow registry access though the portal, you might need to try an incognito or private session in your browser to avoid any stale browser cache or cookies. --You or a registry owner must have sufficient privileges in the subscription to add or remove role assignments. --Related links: --* [Azure roles and permissions - Azure Container Registry](container-registry-roles.md) -* [Login with repository-scoped token](container-registry-repository-scoped-permissions.md) -* [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.yml) -* [Use the portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) -* [Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret) -* [Microsoft Entra authentication and authorization codes](../active-directory/develop/reference-aadsts-error-codes.md) --### Check that credentials aren't expired --Tokens and Active Directory credentials may expire after defined periods, preventing registry access. To enable access, credentials might need to be reset or regenerated. --* If using an individual AD identity, a managed identity, or service principal for registry login, the AD token expires after 3 hours. Log in again to the registry. -* If using an AD service principal with an expired client secret, a subscription owner or account administrator needs to reset credentials or generate a new service principal. -* If using a [repository-scoped token](container-registry-repository-scoped-permissions.md), a registry owner might need to reset a password or generate a new token. --Related links: --* [Reset service principal credentials](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) -* [Regenerate token passwords](container-registry-repository-scoped-permissions.md#regenerate-token-passwords) -* [Individual login with Microsoft Entra ID](container-registry-authentication.md#individual-login-with-azure-ad) --## Advanced troubleshooting --If [collection of resource logs](monitor-service.md) is enabled in the registry, review the ContainerRegistryLoginEvents log. This log stores authentication events and status, including the incoming identity and IP address. Query the log for [registry authentication failures](monitor-service.md#registry-authentication-failures). --Related links: --* [Logs for diagnostic evaluation and auditing](./monitor-service.md) -* [Container registry FAQ](container-registry-faq.yml) -* [Best practices for Azure Container Registry](container-registry-best-practices.md) --## Next steps --If you don't resolve your problem here, see the following options. --* Other registry troubleshooting topics include: - * [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md) - * [Troubleshoot registry performance](container-registry-troubleshoot-performance.md) -* [Community support](https://azure.microsoft.com/support/community/) options -* [Microsoft Q&A](/answers/products/) -* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) - based on information you provide, a quick diagnostic might be run for authentication failures in your registry |
container-registry | Container Registry Troubleshoot Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-performance.md | - Title: Troubleshoot registry performance -description: Symptoms, causes, and resolution of common problems with the performance of a registry --- Previously updated : 10/31/2023----# Troubleshoot registry performance --This article helps you troubleshoot problems you might encounter with the performance of an Azure container registry. --## Symptoms --May include one or more of the following: --* Pull or push images with the Docker CLI takes longer than expected -* Deployment of images to a service such as Azure Kubernetes Service takes longer than expected -* You're not able to complete a large number of concurrent pull or push operations in the expected time -* You see an HTTP 429 error similar to `Too many requests` -* Pull or push operations in a geo-replicated registry take longer than expected, or push fails with error `Error writing blob` or `Error writing manifest` --## Causes --* Your network connection speed may slow registry operations - [solution](#check-expected-network-speed) -* Image layer compression or extraction may be slow on the client - [solution](#check-client-hardware) -* You're reaching a configured limit in your registry service tier or environment - [solution](#review-configured-limits) -* Your geo-replicated registry has replicas in nearby regions - [solution](#configure-geo-replicated-registry) -* You're pulling from a geographically distant registry replica - [solution](#configure-dns-for-geo-replicated-registry) --If you don't resolve your problem here, see [Advanced troubleshooting](#advanced-troubleshooting) and [Next steps](#next-steps) for other options. --## Potential solutions --### Check expected network speed --Check your internet upload and download speed, or use a tool such as AzureSpeed to test [upload](https://www.azurespeed.com/Azure/Upload) and [download](https://www.azurespeed.com/Azure/Download) from Azure blob storage, which hosts registry image layers. --Check your image size against the maximum supported size and the supported download or upload bandwidth for your registry service tier. If your registry is in the Basic or Standard tier, consider upgrading to improve performance. --For image deployment to other services, check the regions where the registry and target are located. Consider locating the registry and the deployment target in the same or network-close regions to improve performance. --Related links: --* [Azure Container Registry service tiers](container-registry-skus.md) -* [Container registry FAQ](container-registry-faq.yml) -* [Performance and scalability targets for Azure Blob Storage](../storage/blobs/scalability-targets.md) --### Check client hardware --The disk type and CPU on the Docker client can affect the speed of extracting or compressing image layers on the client as part of pull or push operations. For example, layer extraction on a hard disk drive will take longer than on a solid-state disk. Compare pull operations for comparable images from your Azure container registry and a public registry such as Docker Hub. --### Review configured limits --If you're concurrently pushing or pulling many multi-layered images to your registry, review the supported ReadOps and WriteOps limits for the registry service tier. If your registry is in the Basic or Standard tier, consider upgrading to increase the limits.
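For example, assuming a registry named *myregistry* (a placeholder), you can check its current tier and, if appropriate, upgrade it with the Azure CLI:

```azurecli
# Show the registry's current service tier
az acr show --name myregistry --query sku.name --output tsv

# Upgrade to Premium for higher throughput and concurrency limits
az acr update --name myregistry --sku Premium
```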
Also check with your networking provider about network throttling that may occur with many concurrent operations. --Review your Docker daemon configuration for the maximum concurrent uploads or downloads for each push or pull operation on the client. Configure higher limits if needed. --Because each image layer requires a separate registry read or write operation, check the number of layers in your images. Consider strategies to reduce the number of image layers. --Related links: --* [Azure Container Registry service tiers](container-registry-skus.md) -* [dockerd](https://docs.docker.com/engine/reference/commandline/dockerd/) --### Configure geo-replicated registry --A Docker client that pushes an image to a geo-replicated registry might not push all image layers and its manifest to a single replicated region. This situation may occur because Azure Traffic Manager routes registry requests to the network-closest replicated registry. If the registry has two nearby replication regions, image layers and the manifest could be distributed to the two sites, and the push operation fails when the manifest is validated. --To optimize DNS resolution to the closest replica when pushing images, configure a geo-replicated registry in the same Azure regions as the source of the push operations, or the closest region when working outside of Azure. --To troubleshoot operations with a geo-replicated registry, you can also temporarily disable Traffic Manager routing to one or more replications. --Related links: --* [Geo-replication in Azure Container Registry](container-registry-geo-replication.md) --### Configure DNS for geo-replicated registry --If pull operations from a geo-replicated registry appear slow, the DNS configuration on the client might resolve to a geographically distant DNS server. In this case, Traffic Manager might be routing requests to a replica that is network-close to the DNS server but distant from the client. Run a tool such as `nslookup` or `dig` (on Linux) to determine the replica that Traffic Manager routes registry requests to. For example: --```console -nslookup myregistry.azurecr.io -``` --A potential solution is to configure a closer DNS server. --Related links: --* [Geo-replication in Azure Container Registry](container-registry-geo-replication.md) -* [Troubleshoot push operations with geo-replicated registries](container-registry-geo-replication.md#troubleshoot-push-operations-with-geo-replicated-registries) -* [Temporarily disable routing to replication](container-registry-geo-replication.md#temporarily-disable-routing-to-replication) -* [Traffic Manager FAQs](../traffic-manager/traffic-manager-faqs.md) --### Advanced troubleshooting --If your permissions to registry resources allow, [check the health of the registry environment](container-registry-check-health.md). If errors are reported, review the [error reference](container-registry-health-error-reference.md) for potential solutions. --If [collection of resource logs](monitor-service.md) is enabled in the registry, review the ContainerRegistryRepositoryEvents log. This log stores information for operations such as push or pull events. Query the log for [repository-level operation failures](monitor-service.md#repository-level-operation-failures).
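If resource logs aren't being collected yet, the following is a minimal sketch of enabling them with the Azure CLI; the setting name, registry name, and Log Analytics workspace ID are placeholders you'd replace with your own values:

```azurecli
# Send repository-level events from the registry to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name acr-diagnostics \
  --resource $(az acr show --name myregistry --query id --output tsv) \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category": "ContainerRegistryRepositoryEvents", "enabled": true}]'
```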
--Related links: --* [Logs for diagnostic evaluation and auditing](./monitor-service.md) -* [Container registry FAQ](container-registry-faq.yml) -* [Best practices for Azure Container Registry](container-registry-best-practices.md) --## Next steps --If you don't resolve your problem here, see the following options. --* Other registry troubleshooting topics include: - * [Troubleshoot registry login](container-registry-troubleshoot-login.md) - * [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md) -* [Community support](https://azure.microsoft.com/support/community/) options -* [Microsoft Q&A](/answers/products/) -* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) |
container-registry | Container Registry Tutorial Base Image Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-base-image-update.md | - Title: Tutorial - Trigger image build on base image update -description: In this tutorial, you learn how to configure an Azure Container Registry Task to automatically trigger container image builds in the cloud when a base image is updated in the same registry. --- Previously updated : 10/31/2023---# Customer intent: As a developer or devops engineer, I want container images to be built automatically when the base image of a container is updated in the registry. ---# Tutorial: Automate container image builds when a base image is updated in an Azure container registry --[ACR Tasks](container-registry-tasks-overview.md) supports automated container image builds when a container's [base image is updated](container-registry-tasks-base-images.md), such as when you patch the OS or application framework in one of your base images. --In this tutorial, you learn how to create an ACR task that triggers a build in the cloud when a container's base image is pushed to the same registry. You can also try a tutorial to create an ACR task that triggers an image build when a base image is pushed to [another Azure container registry](container-registry-tutorial-private-base-image-update.md). --In this tutorial: --> [!div class="checklist"] -> * Build the base image -> * Create an application image in the same registry to track the base image -> * Update the base image to trigger an application image task -> * Display the triggered task -> * Verify updated application image --## Prerequisites --### Complete the previous tutorials --This tutorial assumes you've already configured your environment and completed the steps in the first two tutorials in the series, in which you: --- Create Azure container registry-- Fork sample repository-- Clone sample repository-- Create GitHub personal access token--If you haven't already done so, complete the following tutorials before proceeding: --[Build container images in the cloud with Azure Container Registry Tasks](container-registry-tutorial-quick-task.md) --[Automate container image builds with Azure Container Registry Tasks](container-registry-tutorial-build-task.md) --### Configure the environment --- This article requires version 2.0.46 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--Populate these shell environment variables with values appropriate for your environment. This step isn't strictly required, but makes executing the multiline Azure CLI commands in this tutorial a bit easier. If you don't populate these environment variables, you must manually replace each value wherever it appears in the example commands. --```console -ACR_NAME=<registry-name> # The name of your Azure container registry -GIT_USER=<github-username> # Your GitHub user account name -GIT_PAT=<personal-access-token> # The PAT you generated in the second tutorial -``` ---### Base image update scenario --This tutorial walks you through a base image update scenario in which a base image and an application image are maintained in a single registry. --The [code sample][code-sample] includes two Dockerfiles: an application image, and an image it specifies as its base. 
In the following sections, you create an ACR task that automatically triggers a build of the application image when a new version of the base image is pushed to the same container registry. --* [Dockerfile-app][dockerfile-app]: A small Node.js web application that renders a static web page displaying the Node.js version on which it's based. The version string is simulated: it displays the contents of an environment variable, `NODE_VERSION`, that's defined in the base image. --* [Dockerfile-base][dockerfile-base]: The image that `Dockerfile-app` specifies as its base. It is itself based on a [Node][base-node] image, and includes the `NODE_VERSION` environment variable. --In the following sections, you create a task, update the `NODE_VERSION` value in the base image Dockerfile, then use ACR Tasks to build the base image. When the ACR task pushes the new base image to your registry, it automatically triggers a build of the application image. Optionally, you run the application container image locally to see the different version strings in the built images. --In this tutorial, your ACR task builds and pushes an application container image specified in a Dockerfile. ACR Tasks can also run [multi-step tasks](container-registry-tasks-multi-step.md), using a YAML file to define steps to build, push, and optionally test multiple containers. --## Build the base image --Start by building the base image with an ACR Tasks *quick task*, using [az acr build][az-acr-build]. As discussed in the [first tutorial](container-registry-tutorial-quick-task.md) in the series, this process not only builds the image, but pushes it to your container registry if the build is successful. --```azurecli -az acr build --registry $ACR_NAME --image baseimages/node:15-alpine --file Dockerfile-base . -``` --## Create a task --Next, create a task with [az acr task create][az-acr-task-create]: --```azurecli -az acr task create \ - --registry $ACR_NAME \ - --name baseexample1 \ - --image helloworld:{{.Run.ID}} \ - --arg REGISTRY_NAME=$ACR_NAME.azurecr.io \ - --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#master \ - --file Dockerfile-app \ - --git-access-token $GIT_PAT -``` --This task is similar to the task created in the [previous tutorial](container-registry-tutorial-build-task.md). It instructs ACR Tasks to trigger an image build when commits are pushed to the repository specified by `--context`. While the Dockerfile used to build the image in the previous tutorial specifies a public base image (`FROM node:15-alpine`), the Dockerfile in this task, [Dockerfile-app][dockerfile-app], specifies a base image in the same registry: --```dockerfile -FROM ${REGISTRY_NAME}/baseimages/node:15-alpine -``` --This configuration makes it easy to simulate a framework patch in the base image later in this tutorial. --## Build the application container --Use [az acr task run][az-acr-task-run] to manually trigger the task and build the application image. This step is needed so that the task tracks the application image's dependency on the base image. --```azurecli -az acr task run --registry $ACR_NAME --name baseexample1 -``` --Once the task has completed, take note of the **Run ID** (for example, "da6") if you wish to complete the following optional step. --### Optional: Run application container locally --If you're working locally (not in the Cloud Shell), and you have Docker installed, run the container to see the application rendered in a web browser before you rebuild its base image. 
If you're using the Cloud Shell, skip this section (Cloud Shell does not support `az acr login` or `docker run`). --First, authenticate to your container registry with [az acr login][az-acr-login]: --```azurecli -az acr login --name $ACR_NAME -``` --Now, run the container locally with `docker run`. Replace **\<run-id\>** with the Run ID found in the output from the previous step (for example, "da6"). This example names the container `myapp` and includes the `--rm` parameter to remove the container when you stop it. --```bash -docker run -d -p 8080:80 --name myapp --rm $ACR_NAME.azurecr.io/helloworld:<run-id> -``` --Navigate to `http://localhost:8080` in your browser, and you should see the Node.js version number rendered in the web page, similar to the following. In a later step, you bump the version by adding an "a" to the version string. ---To stop and remove the container, run the following command: --```bash -docker stop myapp -``` --## List the builds --Next, list the task runs that ACR Tasks has completed for your registry using the [az acr task list-runs][az-acr-task-list-runs] command: --```azurecli -az acr task list-runs --registry $ACR_NAME --output table -``` --If you completed the previous tutorial (and didn't delete the registry), you should see output similar to the following. Take note of the number of task runs, and the latest RUN ID, so you can compare the output after you update the base image in the next section. --```output -RUN ID TASK PLATFORM STATUS TRIGGER STARTED DURATION -- - -- --cax baseexample1 linux Succeeded Manual 2020-11-20T23:33:12Z 00:00:30 -caw taskhelloworld linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:29 -cav example2 linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:55 -cau example1 linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:40 -cat taskhelloworld linux Succeeded Manual 2020-11-20T23:07:29Z 00:00:27 -``` --## Update the base image --Here you simulate a framework patch in the base image. Edit **Dockerfile-base**, and add an "a" after the version number defined in `NODE_VERSION`: --```dockerfile -ENV NODE_VERSION 15.2.1a -``` --Run a quick task to build the modified base image. Take note of the **Run ID** in the output. --```azurecli -az acr build --registry $ACR_NAME --image baseimages/node:15-alpine --file Dockerfile-base . -``` --Once the build is complete and the ACR task has pushed the new base image to your registry, it triggers a build of the application image. It may take few moments for the task you created earlier to trigger the application image build, as it must detect the newly built and pushed base image. --## List updated build --Now that you've updated the base image, list your task runs again to compare to the earlier list. If at first the output doesn't differ, periodically run the command to see the new task run appear in the list. --```azurecli -az acr task list-runs --registry $ACR_NAME --output table -``` --Output is similar to the following. The TRIGGER for the last-executed build should be "Image Update", indicating that the task was kicked off by your quick task of the base image. 
--```output -Run ID TASK PLATFORM STATUS TRIGGER STARTED DURATION -- - -- --ca11 baseexample1 linux Succeeded Image Update 2020-11-20T23:38:24Z 00:00:34 -ca10 taskhelloworld linux Succeeded Image Update 2020-11-20T23:38:24Z 00:00:24 -cay linux Succeeded Manual 2020-11-20T23:38:08Z 00:00:22 -cax baseexample1 linux Succeeded Manual 2020-11-20T23:33:12Z 00:00:30 -caw taskhelloworld linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:29 -cav example2 linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:55 -cau example1 linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:40 -cat taskhelloworld linux Succeeded Manual 2020-11-20T23:07:29Z 00:00:27 -``` --If you'd like to perform the following optional step of running the newly built container to see the updated version number, take note of the **RUN ID** value for the Image Update-triggered build (in the preceding output, it's "ca11"). --### Optional: Run newly built image --If you're working locally (not in the Cloud Shell), and you have Docker installed, run the new application image once its build has completed. Replace `<run-id>` with the RUN ID you obtained in the previous step. If you're using the Cloud Shell, skip this section (Cloud Shell does not support `docker run`). --```bash -docker run -d -p 8081:80 --name updatedapp --rm $ACR_NAME.azurecr.io/helloworld:<run-id> -``` --Navigate to http://localhost:8081 in your browser, and you should see the updated Node.js version number (with the "a") in the web page: ----What's important to note is that you updated your **base** image with a new version number, but the last-built **application** image displays the new version. ACR Tasks picked up your change to the base image, and rebuilt your application image automatically. --To stop and remove the container, run the following command: --```bash -docker stop updatedapp -``` --## Next steps --In this tutorial, you learned how to use a task to automatically trigger container image builds when the image's base image has been updated. --For a complete workflow to manage base images originating from a public source, see [How to consume and maintain public content with Azure Container Registry Tasks](tasks-consume-public-content.md). --Now, move on to the next tutorial to learn how to trigger tasks on a defined schedule. --> [!div class="nextstepaction"] -> [Run a task on a schedule](container-registry-tasks-scheduled.md) --<!-- LINKS - External --> -[base-node]: https://hub.docker.com/_/node/ -[code-sample]: https://github.com/Azure-Samples/acr-build-helloworld-node -[dockerfile-app]: https://github.com/Azure-Samples/acr-build-helloworld-node/blob/master/Dockerfile-app -[dockerfile-base]: https://github.com/Azure-Samples/acr-build-helloworld-node/blob/master/Dockerfile-base --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-update]: /cli/azure/acr/task#az_acr_task_update -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-task-list-runs]: /cli/azure/acr -[az-acr-task]: /cli/azure/acr |
container-registry | Container Registry Tutorial Build Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-build-task.md | - Title: Tutorial - Build image on code commit -description: In this tutorial, you learn how to configure an Azure Container Registry Task to automatically trigger container image builds in the cloud when you commit source code to a Git repository. --- Previously updated : 10/31/2023---# Customer intent: As a developer or devops engineer, I want to trigger container image builds automatically when I commit code to a Git repo. ---# Tutorial: Automate container image builds in the cloud when you commit source code --In addition to a [quick task](container-registry-tutorial-quick-task.md), ACR Tasks supports automated Docker container image builds in the cloud when you commit source code to a Git repository. Supported Git contexts for ACR Tasks include public or private GitHub or Azure Repos. --> [!NOTE] -> Currently, ACR Tasks doesn't support commit or pull request triggers in GitHub Enterprise repos. --In this tutorial, your ACR task builds and pushes a single container image specified in a Dockerfile when you commit source code to a Git repo. To create a [multi-step task](container-registry-tasks-multi-step.md) that uses a YAML file to define steps to build, push, and optionally test multiple containers on code commit, see [Tutorial: Run a multi-step container workflow in the cloud when you commit source code](container-registry-tutorial-multistep-task.md). For an overview of ACR Tasks, see [Automate OS and framework patching with ACR Tasks](container-registry-tasks-overview.md) --In this tutorial: --> [!div class="checklist"] -> * Create a task -> * Test the task -> * View task status -> * Trigger the task with a code commit --This tutorial assumes you've already completed the steps in the [previous tutorial](container-registry-tutorial-quick-task.md). If you haven't already done so, complete the steps in the [Prerequisites](container-registry-tutorial-quick-task.md#prerequisites) section of the previous tutorial before proceeding. ---## Create the build task --Now that you've completed the steps required to enable ACR Tasks to read commit status and create webhooks in a repository, you can create a task that triggers a container image build on commits to the repo. --First, populate these shell environment variables with values appropriate for your environment. This step isn't strictly required, but makes executing the multiline Azure CLI commands in this tutorial a bit easier. If you don't populate these environment variables, you must manually replace each value wherever it appears in the example commands. --```console -ACR_NAME=<registry-name> # The name of your Azure container registry -GIT_USER=<github-username> # Your GitHub user account name -GIT_PAT=<personal-access-token> # The PAT you generated in the previous section -``` --Now, create the task by executing the following [az acr task create][az-acr-task-create] command. ---```azurecli -az acr task create \ - --registry $ACR_NAME \ - --name taskhelloworld \ - --image helloworld:{{.Run.ID}} \ - --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#master \ - --file Dockerfile \ - --git-access-token $GIT_PAT -``` ---This task specifies that any time code is committed to the *main* branch in the repository specified by `--context`, ACR Tasks will build the container image from the code in that branch. 
The Dockerfile specified by `--file` from the repository root is used to build the image. The `--image` argument specifies a parameterized value of `{{.Run.ID}}` for the version portion of the image's tag, ensuring the built image correlates to a specific build, and is tagged uniquely. --Output from a successful [az acr task create][az-acr-task-create] command is similar to the following: --```output -{ - "agentConfiguration": { - "cpu": 2 - }, - "creationDate": "2010-11-19T22:42:32.972298+00:00", - "id": "/subscriptions/<Subscription ID>/resourceGroups/myregistry/providers/Microsoft.ContainerRegistry/registries/myregistry/tasks/taskhelloworld", - "location": "westcentralus", - "name": "taskhelloworld", - "platform": { - "architecture": "amd64", - "os": "Linux", - "variant": null - }, - "provisioningState": "Succeeded", - "resourceGroup": "myregistry", - "status": "Enabled", - "step": { - "arguments": [], - "baseImageDependencies": null, - "contextPath": "https://github.com/gituser/acr-build-helloworld-node#main", - "dockerFilePath": "Dockerfile", - "imageNames": [ - "helloworld:{{.Run.ID}}" - ], - "isPushEnabled": true, - "noCache": false, - "type": "Docker" - }, - "tags": null, - "timeout": 3600, - "trigger": { - "baseImageTrigger": { - "baseImageTriggerType": "Runtime", - "name": "defaultBaseimageTriggerName", - "status": "Enabled" - }, - "sourceTriggers": [ - { - "name": "defaultSourceTriggerName", - "sourceRepository": { - "branch": "main", - "repositoryUrl": "https://github.com/gituser/acr-build-helloworld-node#main", - "sourceControlAuthProperties": null, - "sourceControlType": "GitHub" - }, - "sourceTriggerEvents": [ - "commit" - ], - "status": "Enabled" - } - ] - }, - "type": "Microsoft.ContainerRegistry/registries/tasks" -} -``` --## Test the build task --You now have a task that defines your build. To test the build pipeline, trigger a build manually by executing the [az acr task run][az-acr-task-run] command: --```azurecli -az acr task run --registry $ACR_NAME --name taskhelloworld -``` --By default, the `az acr task run` command streams the log output to your console when you execute the command. The output is condensed to show key steps. --```output -2020/11/19 22:51:00 Using acb_vol_9ee1f28c-4fd4-43c8-a651-f0ed027bbf0e as the home volume -2020/11/19 22:51:00 Setting up Docker configuration... -2020/11/19 22:51:02 Successfully set up Docker configuration -2020/11/19 22:51:02 Logging in to registry: myregistry.azurecr.io -2020/11/19 22:51:03 Successfully logged in -2020/11/19 22:51:03 Executing step: build -2020/11/19 22:51:03 Obtaining source code and scanning for dependencies... -2020/11/19 22:51:05 Successfully obtained source code and scanned for dependencies -Sending build context to Docker daemon 23.04kB -Step 1/5 : FROM node:15-alpine -[...] -Step 5/5 : CMD ["node", "/src/server.js"] - > Running in 7382eea2a56a -Removing intermediate container 7382eea2a56a - > e33cd684027b -Successfully built e33cd684027b -Successfully tagged myregistry.azurecr.io/helloworld:da2 -2020/11/19 22:51:11 Executing step: push -2020/11/19 22:51:11 Pushing image: myregistry.azurecr.io/helloworld:da2, attempt 1 -The push refers to repository [myregistry.azurecr.io/helloworld] -4a853682c993: Preparing -[...] -4a853682c993: Pushed -[...] 
-da2: digest: sha256:c24e62fd848544a5a87f06ea60109dbef9624d03b1124bfe03e1d2c11fd62419 size: 1366 -2020/11/19 22:51:21 Successfully pushed image: myregistry.azurecr.io/helloworld:da2 -2020/11/19 22:51:21 Step id: build marked as successful (elapsed time in seconds: 7.198937) -2020/11/19 22:51:21 Populating digests for step id: build... -2020/11/19 22:51:22 Successfully populated digests for step id: build -2020/11/19 22:51:22 Step id: push marked as successful (elapsed time in seconds: 10.180456) -The following dependencies were found: -- image:- registry: myregistry.azurecr.io - repository: helloworld - tag: da2 - digest: sha256:c24e62fd848544a5a87f06ea60109dbef9624d03b1124bfe03e1d2c11fd62419 - runtime-dependency: - registry: registry.hub.docker.com - repository: library/node - tag: 9-alpine - digest: sha256:8dafc0968fb4d62834d9b826d85a8feecc69bd72cd51723c62c7db67c6dec6fa - git: - git-head-revision: 68cdf2a37cdae0873b8e2f1c4d80ca60541029bf ---Run ID: ca6 was successful after 27s -``` --## Trigger a build with a commit --Now that you've tested the task by manually running it, trigger it automatically with a source code change. --First, ensure you're in the directory containing your local clone of the [repository][sample-repo]: --```console -cd acr-build-helloworld-node -``` --Next, execute the following commands to create, commit, and push a new file to your fork of the repo on GitHub: --```console -echo "Hello World!" > hello.txt -git add hello.txt -git commit -m "Testing ACR Tasks" -git push origin main -``` --You may be asked to provide your GitHub credentials when you execute the `git push` command. Provide your GitHub username, and enter the personal access token (PAT) that you created earlier for the password. --```azurecli -Username for 'https://github.com': <github-username> -Password for 'https://githubuser@github.com': <personal-access-token> -``` --Once you've pushed a commit to your repository, the webhook created by ACR Tasks fires and kicks off a build in Azure Container Registry. Display the logs for the currently running task to verify and monitor the build progress: --```azurecli -az acr task logs --registry $ACR_NAME -``` --Output is similar to the following, showing the currently executing (or last-executed) task: --```output -Showing logs of the last created run. -Run ID: ca7 --[...] --Run ID: ca7 was successful after 38s -``` --## List builds --To see a list of the task runs that ACR Tasks has completed for your registry, run the [az acr task list-runs][az-acr-task-list-runs] command: --```azurecli -az acr task list-runs --registry $ACR_NAME --output table -``` --Output from the command should appear similar to the following. The runs that ACR Tasks has executed are displayed, and "Git Commit" appears in the TRIGGER column for the most recent task: --```output -RUN ID TASK PLATFORM STATUS TRIGGER STARTED DURATION -- - -- --ca7 taskhelloworld linux Succeeded Commit 2020-11-19T22:54:34Z 00:00:29 -ca6 taskhelloworld linux Succeeded Manual 2020-11-19T22:51:47Z 00:00:24 -ca5 linux Succeeded Manual 2020-11-19T22:23:42Z 00:00:23 -``` --## Next steps --In this tutorial, you learned how to use a task to automatically trigger container image builds in Azure when you commit source code to a Git repository. Move on to the next tutorial to learn how to create tasks that trigger builds when a container image's base image is updated. 
--> [!div class="nextstepaction"] -> [Automate builds on base image update](container-registry-tutorial-base-image-update.md) --<!-- LINKS - External --> -[sample-repo]: https://github.com/Azure-Samples/acr-build-helloworld-node --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-task]: /cli/azure/acr/task -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-task-list-runs]: /cli/azure/acr/task#az_acr_task_list_runs -[az-login]: /cli/azure/reference-index#az_login |
container-registry | Container Registry Tutorial Deploy App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-deploy-app.md | - Title: Tutorial - Deploy from geo-replicated registry -description: Deploy a Linux-based web app to two different Azure regions using a container image from a geo-replicated Azure container registry. Part two of a three-part series. --- Previously updated : 10/31/2023-----# Tutorial: Deploy a web app from a geo-replicated Azure container registry --This is part two in a three-part tutorial series. In [part one](container-registry-tutorial-prepare-registry.md), a private, geo-replicated container registry was created, and a container image was built from source and pushed to the registry. In this article, you take advantage of the network-close aspect of the geo-replicated registry by deploying the container to Web App instances in two different Azure regions. Each instance then pulls the container image from the closest registry. --In this tutorial, part two in the series: --> [!div class="checklist"] -> * Deploy a container image to two *Web Apps for Containers* instances -> * Verify the deployed application --If you haven't yet created a geo-replicated registry and pushed the image of the containerized sample application to the registry, return to the previous tutorial in the series, [Prepare a geo-replicated Azure container registry](container-registry-tutorial-prepare-registry.md). --In the next article in the series, you update the application, then push the updated container image to the registry. Finally, you browse to each running Web App instance to see the change automatically reflected in both, showing Azure Container Registry geo-replication and webhooks in action. --## Automatic deployment to Web Apps for Containers --Azure Container Registry provides support for deploying containerized applications directly to [Web Apps for Containers](../app-service/index.yml). In this tutorial, you use the Azure portal to deploy the container image created in the previous tutorial to two web app plans located in different Azure regions. --When you deploy a web app from a container image in your registry, and you have a geo-replicated registry in the same region, Azure Container Registry creates an image deployment [webhook](container-registry-webhook.md) for you. When you push a new image to your container repository, the webhook picks up the change and automatically deploys the new container image to your web app. --## Deploy a Web App for Containers instance --In this step, you create a Web App for Containers instance in the *West US* region. --Sign in to the [Azure portal](https://portal.azure.com) and navigate to the registry you created in the previous tutorial. --Select **Repositories** > **acr-helloworld**, then right-click on the **v1** tag under **Tags** and select **Deploy to web app**: --![Deploy to app service in the Azure portal][deploy-app-portal-01] --If "Deploy to web app" is disabled, you might not have enabled the registry admin user as directed in [Create a container registry](container-registry-tutorial-prepare-registry.md#create-a-container-registry) in the first tutorial. You can enable the admin user in **Settings** > **Access keys** in the Azure portal. --Under **Web App for Containers** that's displayed after you select "Deploy to web app," specify the following values for each setting: --| Setting | Value | -||| -| **Site Name** | A globally unique name for the web app. 
In this example, we use the format `<acrName>-westus` to easily identify the registry and region the web app is deployed from. | -| **Resource Group** | **Use existing** > `myResourceGroup` | -| **App service plan/Location** | Create a new plan named `plan-westus` in the **West US** region. | -| **Image** | `acr-helloworld:v1` | -| **Operating system** | Linux | --> [!NOTE] -> When you create a new app service plan to deploy your containerized app, a default plan is automatically selected to host your application. The default plan depends on the operating system setting. --Select **Create** to provision the web app to the *West US* region. --![Screenshot shows the Web App for Containers with the Create button highlighted.][deploy-app-portal-02] --## View the deployed web app --When deployment is complete, you can view the running application by navigating to its URL in your browser. --In the portal, select **App Services**, then the web app you provisioned in the previous step. In this example, the web app is named *uniqueregistryname-westus*. --Select the hyperlinked URL of the web app in the top-right of the **App Service** overview to view the running application in your browser. --![Screenshot shows the App Service Overview with web app URL highlighted.][deploy-app-portal-04] --Once the Docker image is deployed from your geo-replicated container registry, the site displays an image representing the Azure region hosting the container registry. --![Screenshot shows the deployed web application viewed in a browser.][deployed-app-westus] --## Deploy second Web App for Containers instance --Use the procedure outlined in the previous section to deploy a second web app to the *East US* region. Under **Web App for Containers**, specify the following values: --| Setting | Value | -||| -| **Site Name** | A globally unique name for the web app. In this example, we use the format `<acrName>-eastus` to easily identify the registry and region the web app is deployed from. | -| **Resource Group** | **Use existing** > `myResourceGroup` | -| **App service plan/Location** | Create a new plan named `plan-eastus` in the **East US** region. | -| **Image** | `acr-helloworld:v1` | -| **Operating system** | Linux | --Select **Create** to provision the web app to the *East US* region. --![Screenshot shows the Web App for Containers Create window with the Create button highlighted.][deploy-app-portal-06] --## View the second deployed web app --As before, you can view the running application by navigating to its URL in your browser. --In the portal, select **App Services**, then the web app you provisioned in the previous step. In this example, the web app is named *uniqueregistryname-eastus*. --Select the hyperlinked URL of the web app in the top-right of the **App Service overview** to view the running application in your browser. --![Web app on Linux configuration in the Azure portal][deploy-app-portal-07] --Once the Docker image is deployed from your geo-replicated container registry, the site displays an image representing the Azure region hosting the container registry. --![Deployed web application viewed in a browser][deployed-app-eastus] --## Next steps --In this tutorial, you deployed two Web App for Containers instances from a geo-replicated Azure container registry. --Advance to the next tutorial to update and then deploy a new container image to the container registry, then verify that the web apps running in both regions were updated automatically. 
--> [!div class="nextstepaction"] -> [Deploy an update to geo-replicated container image](./container-registry-tutorial-deploy-update.md) --<!-- IMAGES --> -[deploy-app-portal-01]: ./media/container-registry-tutorial-deploy-app/deploy-app-portal-01.png -[deploy-app-portal-02]: ./media/container-registry-tutorial-deploy-app/deploy-app-portal-02.png -[deploy-app-portal-03]: ./media/container-registry-tutorial-deploy-app/deploy-app-portal-03.png -[deploy-app-portal-04]: ./media/container-registry-tutorial-deploy-app/deploy-app-portal-04.png -[deploy-app-portal-05]: ./media/container-registry-tutorial-deploy-app/deploy-app-portal-05.png -[deploy-app-portal-06]: ./media/container-registry-tutorial-deploy-app/deploy-app-portal-06.png -[deploy-app-portal-07]: ./media/container-registry-tutorial-deploy-app/deploy-app-portal-07.png -[deployed-app-westus]: ./media/container-registry-tutorial-deploy-app/deployed-app-westus.png -[deployed-app-eastus]: ./media/container-registry-tutorial-deploy-app/deployed-app-eastus.png |
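The portal's **Deploy to web app** flow described above can be approximated from the Azure CLI. The following is a minimal sketch, not the tutorial's own method: it reuses the tutorial's `myResourceGroup`, `plan-westus`, and `<acrName>-westus` names, the `S1` SKU is illustrative, and the `--deployment-container-image-name` parameter name varies across Azure CLI versions. Unlike the portal flow, this sketch doesn't create the registry webhook for you unless continuous deployment is also enabled, as shown in the last command.

```azurecli
# Sketch: create a Linux App Service plan in West US (SKU is illustrative)
az appservice plan create \
  --name plan-westus \
  --resource-group myResourceGroup \
  --location westus \
  --is-linux \
  --sku S1

# Sketch: create the web app from the image in your registry (admin user must be enabled)
az webapp create \
  --name <acrName>-westus \
  --resource-group myResourceGroup \
  --plan plan-westus \
  --deployment-container-image-name <acrName>.azurecr.io/acr-helloworld:v1

# Sketch: turn on continuous deployment so new image pushes are picked up
az webapp deployment container config \
  --name <acrName>-westus \
  --resource-group myResourceGroup \
  --enable-cd true
```

Repeat the same pattern with the East US names (`plan-eastus`, `<acrName>-eastus`) for the second instance.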
container-registry | Container Registry Tutorial Deploy Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-deploy-update.md | - Title: Tutorial - Push update to geo-replicated registry -description: Push an updated Docker image to your geo-replicated Azure container registry, then see the changes automatically deployed to web apps running in multiple regions. Part three of a three-part series. --- Previously updated : 10/31/2023-----# Tutorial: Push an updated container image to a geo-replicated container registry for regional web app deployments --This is part three in a three-part tutorial series. In the [previous tutorial](container-registry-tutorial-deploy-app.md), geo-replication was configured for two different regional Web App deployments. In this tutorial, you first modify the application, then build a new container image and push it to your geo-replicated registry. Finally, you view the change, deployed automatically by Azure Container Registry webhooks, in both Web App instances. --In this tutorial, the final part in the series: --> [!div class="checklist"] -> * Modify the web application HTML -> * Build and tag the Docker image -> * Push the change to Azure Container Registry -> * View the updated app in two different regions --If you've not yet configured the two *Web App for Containers* regional deployments, return to the previous tutorial in the series, [Deploy web app from Azure Container Registry](container-registry-tutorial-deploy-app.md). --## Modify the web application --In this step, make a change to the web application that will be highly visible once you push the updated container image to Azure Container Registry. --Find the `AcrHelloworld/Views/Home/Index.cshtml` file in the application source you [cloned from GitHub](container-registry-tutorial-prepare-registry.md#get-application-code) in a previous tutorial and open it in your favorite text editor. Add the following line below the existing `<h1>` line: --```html -<h1>MODIFIED</h1> -``` --Your modified `Index.cshtml` should look similar to: --```html -@{ - ViewData["Title"] = "Azure Container Registry :: Geo-replication"; -} -<style> - body { - background-image: url('images/azure-regions.png'); - background-size: cover; - } - .footer { - position: fixed; - bottom: 0px; - width: 100%; - } -</style> --<h1 style="text-align:center;color:blue">Hello World from: @ViewData["REGION"]</h1> -<h1>MODIFIED</h1> -<div class="footer"> - <ul> - <li>Registry URL: @ViewData["REGISTRYURL"]</li> - <li>Registry IP: @ViewData["REGISTRYIP"]</li> - <li>Registry Region: @ViewData["REGION"]</li> - </ul> -</div> -``` --## Rebuild the image --Now that you've updated the web application, rebuild its container image. As before, use the fully qualified image name, including the login server's fully qualified domain name (FQDN), for the tag: --```bash -docker build . -f ./AcrHelloworld/Dockerfile -t <acrName>.azurecr.io/acr-helloworld:v1 -``` --## Push image to Azure Container Registry --Next, push the updated *acr-helloworld* container image to your geo-replicated registry. Here, you're executing a single `docker push` command to deploy the updated image to the registry replicas in both the *West US* and *East US* regions. 
--```bash -docker push <acrName>.azurecr.io/acr-helloworld:v1 -``` --Your `docker push` output should be similar to the following: --```console -$ docker push uniqueregistryname.azurecr.io/acr-helloworld:v1 -The push refers to a repository [uniqueregistryname.azurecr.io/acr-helloworld] -5b9454e91555: Pushed -d6803756744a: Layer already exists -b7b1f3a15779: Layer already exists -a89567dff12d: Layer already exists -59c7b561ff56: Layer already exists -9a2f9413d9e4: Layer already exists -a75caa09eb1f: Layer already exists -v1: digest: sha256:4c3f2211569346fbe2d1006c18cbea2a4a9dcc1eb3a078608cef70d3a186ec7a size: 1792 -``` --## View the webhook logs --While the image is being replicated, you can see the Azure Container Registry webhooks being triggered. --To see the regional webhooks that were created when you deployed the container to *Web Apps for Containers* in a previous tutorial, navigate to your container registry in the Azure portal, then select **Webhooks** under **SERVICES**. --![Container registry Webhooks in the Azure portal][tutorial-portal-01] --Select each Webhook to see the history of its calls and responses. You should see a row for the **push** action in the logs of both Webhooks. Here, the log for the Webhook located in the *West US* region shows the **push** action triggered by the `docker push` in the previous step: --![Container registry Webhook log in the Azure portal (West US)][tutorial-portal-02] --## View the updated web app --The Webhooks notify Web Apps that a new image has been pushed to the registry, which automatically deploys the updated container to the two regional web apps. --Verify that the application has been updated in both deployments by navigating to both regional Web App deployments in your web browser. As a reminder, you can find the URL for the deployed web app in the top-right of each App Service overview tab. --![App Service overview in the Azure portal][tutorial-portal-03] --To see the updated application, select the link in the App Service overview. Here's an example view of the app running in *West US*: --![Browser view of modified web app running in West US region][deployed-app-westus-modified] --Verify that the updated container image was also deployed to the *East US* deployment by viewing it in your browser. --![Browser view of modified web app running in East US region][deployed-app-eastus-modified] --With a single `docker push`, you've automatically updated the web application running in both regional Web App deployments. And, Azure Container Registry served the container images from the repositories located closest to each deployment. --## Next steps --In this tutorial, you updated and pushed a new version of the web application container to your geo-replicated registry. Webhooks in Azure Container Registry notified Web Apps for Containers of the update, which triggered a local pull from the nearest registry replica. --### ACR Build: Automated image build and patch --In addition to geo-replication, ACR Build is another feature of Azure Container Registry that can help optimize your container deployment pipeline. 
Start with the ACR Build overview to get an idea of its capabilities: --[Automate OS and framework patching with ACR Build](container-registry-tasks-overview.md) --<!-- IMAGES --> -[deployed-app-eastus-modified]: ./media/container-registry-tutorial-deploy-update/deployed-app-eastus-modified.png -[deployed-app-westus-modified]: ./media/container-registry-tutorial-deploy-update/deployed-app-westus-modified.png -[local-container-01]: ./media/container-registry-tutorial-deploy-update/local-container-01.png -[tutorial-portal-01]: ./media/container-registry-tutorial-deploy-update/tutorial-portal-01.png -[tutorial-portal-02]: ./media/container-registry-tutorial-deploy-update/tutorial-portal-02.png -[tutorial-portal-03]: ./media/container-registry-tutorial-deploy-update/tutorial-portal-03.png |
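If you prefer the CLI to the portal for inspecting the webhook activity described above, a rough sketch follows. The webhook names are generated for you when you use **Deploy to web app**, so substitute the actual name reported by the first command for `<webhookName>`.

```azurecli
# List the webhooks in the registry, including the regional ones created by the web app deployments
az acr webhook list --registry <acrName> --output table

# Show recent events (push notifications and delivery status) for one webhook
az acr webhook list-events --registry <acrName> --name <webhookName> --output table
```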
container-registry | Container Registry Tutorial Multistep Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-multistep-task.md | - Title: Tutorial - Multi-step ACR task -description: In this tutorial, you learn how to configure an Azure Container Registry Task to automatically trigger a multi-step workflow to build, run, and push container images in the cloud when you commit source code to a Git repository. --- Previously updated : 10/31/2023---# Customer intent: As a developer or devops engineer, I want to trigger a multi-step container workflow automatically when I commit code to a Git repo. ---# Tutorial: Run a multi-step container workflow in the cloud when you commit source code --In addition to a [quick task](container-registry-tutorial-quick-task.md), ACR Tasks supports multi-step, multi-container-based workflows that can automatically trigger when you commit source code to a Git repository. --In this tutorial, you learn how to use example YAML files to define multi-step tasks that build, run, and push one or more container images to a registry when you commit source code. To create a task that only automates a single image build on code commit, see [Tutorial: Automate container image builds in the cloud when you commit source code](container-registry-tutorial-build-task.md). For an overview of ACR Tasks, see [Automate OS and framework patching with ACR Tasks](container-registry-tasks-overview.md), --In this tutorial: --> [!div class="checklist"] -> -> * Define a multi-step task using a YAML file -> * Create a task -> * Optionally add credentials to the task to enable access to another registry -> * Test the task -> * View task status -> * Trigger the task with a code commit --This tutorial assumes you've already completed the steps in the [previous tutorial](container-registry-tutorial-quick-task.md). If you haven't already done so, complete the steps in the [Prerequisites](container-registry-tutorial-quick-task.md#prerequisites) section of the previous tutorial before proceeding. ---## Create a multi-step task --Now that you've completed the steps required to enable ACR Tasks to read commit status and create webhooks in a repository, create a multi-step task that triggers building, running, and pushing a container image. --### YAML file --You define the steps for a multi-step task in a [YAML file](container-registry-tasks-reference-yaml.md). The first example multi-step task for this tutorial is defined in the file `taskmulti.yaml`, which is in the root of the GitHub repo that you cloned: --```yml -version: v1.1.0 -steps: -# Build target image -- build: -t {{.Run.Registry}}/hello-world:{{.Run.ID}} -f Dockerfile .-# Run image -- cmd: -t {{.Run.Registry}}/hello-world:{{.Run.ID}}- id: test - detach: true - ports: ["8080:80"] -- cmd: docker stop test-# Push image -- push:- - {{.Run.Registry}}/hello-world:{{.Run.ID}} -``` --This multi-step task does the following: --1. Runs a `build` step to build an image from the Dockerfile in the working directory. The image targets the `Run.Registry`, the registry where the task is run, and is tagged with a unique ACR Tasks run ID. -1. Runs a `cmd` step to run the image in a temporary container. This example starts a long-running container in the background and returns the container ID, then stops the container. In a real-world scenario, you might include steps to test the running container to ensure it runs correctly. -1. 
In a `push` step, pushes the image that was built to the run registry. --### Task command --First, populate these shell environment variables with values appropriate for your environment. This step isn't strictly required, but makes executing the multiline Azure CLI commands in this tutorial a bit easier. If you don't populate these environment variables, you must manually replace each value wherever it appears in the example commands. --```console -ACR_NAME=<registry-name> # The name of your Azure container registry -GIT_USER=<github-username> # Your GitHub user account name -GIT_PAT=<personal-access-token> # The PAT you generated in the previous section -``` --Now, create the task by executing the following [az acr task create][az-acr-task-create] command: --```azurecli-interactive -az acr task create \ - --registry $ACR_NAME \ - --name example1 \ - --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#main \ - --file taskmulti.yaml \ - --git-access-token $GIT_PAT -``` --This task specifies that any time code is committed to the *main* branch in the repository specified by `--context`, ACR Tasks will run the multi-step task from the code in that branch. The YAML file specified by `--file` from the repository root defines the steps. --Output from a successful [az acr task create][az-acr-task-create] command is similar to the following: --```output -{ - "agentConfiguration": { - "cpu": 2 - }, - "creationDate": "2020-11-20T03:14:31.763887+00:00", - "credentials": null, - "id": "/subscriptions/<Subscription ID>/resourceGroups/myregistry/providers/Microsoft.ContainerRegistry/registries/myregistry/tasks/taskmulti", - "location": "westus", - "name": "example1", - "platform": { - "architecture": "amd64", - "os": "linux", - "variant": null - }, - "provisioningState": "Succeeded", - "resourceGroup": "myresourcegroup", - "status": "Enabled", - "step": { - "baseImageDependencies": null, - "contextAccessToken": null, - "contextPath": "https://github.com/gituser/acr-build-helloworld-node.git#main", - "taskFilePath": "taskmulti.yaml", - "type": "FileTask", - "values": [], - "valuesFilePath": null - }, - "tags": null, - "timeout": 3600, - "trigger": { - "baseImageTrigger": { - "baseImageTriggerType": "Runtime", - "name": "defaultBaseimageTriggerName", - "status": "Enabled" - }, - "sourceTriggers": [ - { - "name": "defaultSourceTriggerName", - "sourceRepository": { - "branch": "main", - "repositoryUrl": "https://github.com/gituser/acr-build-helloworld-node.git#main", - "sourceControlAuthProperties": null, - "sourceControlType": "Github" - }, - "sourceTriggerEvents": [ - "commit" - ], - "status": "Enabled" - } - ] - }, - "type": "Microsoft.ContainerRegistry/registries/tasks" -} -``` --## Test the multi-step workflow --To test the multi-step task, trigger it manually by executing the [az acr task run][az-acr-task-run] command: --```azurecli-interactive -az acr task run --registry $ACR_NAME --name example1 -``` --By default, the `az acr task run` command streams the log output to your console when you execute the command. The output shows the progress of running each of the task steps. The output below is condensed to show key steps. --```output -Queued a run with ID: cab -Waiting for an agent... -2020/11/20 00:03:31 Downloading source code... 
-2020/11/20 00:03:33 Finished downloading source code -2020/11/20 00:03:33 Using acb_vol_cfe6bd55-3076-4215-8091-6a81aec3d1b1 as the home volume -2020/11/20 00:03:33 Creating Docker network: acb_default_network, driver: 'bridge' -2020/11/20 00:03:34 Successfully set up Docker network: acb_default_network -2020/11/20 00:03:34 Setting up Docker configuration... -2020/11/20 00:03:34 Successfully set up Docker configuration -2020/11/20 00:03:34 Logging in to registry: myregistry.azurecr.io -2020/11/20 00:03:35 Successfully logged into myregistry.azurecr.io -2020/11/20 00:03:35 Executing step ID: acb_step_0. Working directory: '', Network: 'acb_default_network' -2020/11/20 00:03:35 Scanning for dependencies... -2020/11/20 00:03:36 Successfully scanned dependencies -2020/11/20 00:03:36 Launching container with name: acb_step_0 -Sending build context to Docker daemon 24.06kB -[...] -Successfully built f669bfd170af -Successfully tagged myregistry.azurecr.io/hello-world:cf19 -2020/11/20 00:03:43 Successfully executed container: acb_step_0 -2020/11/20 00:03:43 Executing step ID: acb_step_1. Working directory: '', Network: 'acb_default_network' -2020/11/20 00:03:43 Launching container with name: acb_step_1 -279b1cb6e092b64c8517c5506fcb45494cd5a0bd10a6beca3ba97f25c5d940cd -2020/11/20 00:03:44 Successfully executed container: acb_step_1 -2020/11/20 00:03:44 Executing step ID: acb_step_2. Working directory: '', Network: 'acb_default_network' -2020/11/20 00:03:44 Pushing image: myregistry.azurecr.io/hello-world:cf19, attempt 1 -[...] -2020/11/20 00:03:46 Successfully pushed image: myregistry.azurecr.io/hello-world:cf19 -2020/11/20 00:03:46 Step ID: acb_step_0 marked as successful (elapsed time in seconds: 7.425169) -2020/11/20 00:03:46 Populating digests for step ID: acb_step_0... -2020/11/20 00:03:47 Successfully populated digests for step ID: acb_step_0 -2020/11/20 00:03:47 Step ID: acb_step_1 marked as successful (elapsed time in seconds: 0.827129) -2020/11/20 00:03:47 Step ID: acb_step_2 marked as successful (elapsed time in seconds: 2.112113) -2020/11/20 00:03:47 The following dependencies were found: -2020/11/20 00:03:47 -- image:- registry: myregistry.azurecr.io - repository: hello-world - tag: cf19 - digest: sha256:6b981a8ca8596e840228c974c929db05c0727d8630465de536be74104693467a - runtime-dependency: - registry: registry.hub.docker.com - repository: library/node - tag: 15-alpine - digest: sha256:8dafc0968fb4d62834d9b826d85a8feecc69bd72cd51723c62c7db67c6dec6fa - git: - git-head-revision: 1a3065388a0238e52865db1c8f3e97492a43444c --Run ID: cab was successful after 18s -``` --## Trigger a build with a commit --Now that you've tested the task by manually running it, trigger it automatically with a source code change. --First, ensure you're in the directory containing your local clone of the [repository][sample-repo]: --```console -cd acr-build-helloworld-node -``` --Next, execute the following commands to create, commit, and push a new file to your fork of the repo on GitHub: --```console -echo "Hello World!" > hello.txt -git add hello.txt -git commit -m "Testing ACR Tasks" -git push origin main -``` --You may be asked to provide your GitHub credentials when you execute the `git push` command. Provide your GitHub username, and enter the personal access token (PAT) that you created earlier for the password. 
--```azurecli-interactive -Username for 'https://github.com': <github-username> -Password for 'https://githubuser@github.com': <personal-access-token> -``` --Once you've pushed a commit to your repository, the webhook created by ACR Tasks fires and kicks off the task in Azure Container Registry. Display the logs for the currently running task to verify and monitor the build progress: --```azurecli-interactive -az acr task logs --registry $ACR_NAME -``` --Output is similar to the following, showing the currently executing (or last-executed) task: --```output -Showing logs of the last created run. -Run ID: cad --[...] --Run ID: cad was successful after 37s -``` --## List builds --To see a list of the task runs that ACR Tasks has completed for your registry, run the [az acr task list-runs][az-acr-task-list-runs] command: --```azurecli-interactive -az acr task list-runs --registry $ACR_NAME --output table -``` --Output from the command should appear similar to the following. The runs that ACR Tasks has executed are displayed, and "Git Commit" appears in the TRIGGER column for the most recent task: --```output -RUN ID TASK PLATFORM STATUS TRIGGER STARTED DURATION -- - -- --cad example1 linux Succeeded Commit 2020-11-20T00:22:15Z 00:00:35 -cac taskhelloworld linux Succeeded Commit 2020-11-20T00:22:15Z 00:00:22 -cab example1 linux Succeeded Manual 2020-11-20T00:18:36Z 00:00:47 -``` --## Create a multi-registry multi-step task --ACR Tasks by default has permissions to push or pull images from the registry where the task runs. You might want to run a multi-step task that targets one or more registries in addition to the run registry. For example, you might need to build images in one registry, and store images with different tags in a second registry that is accessed by a production system. This example shows you how to create such a task and provide credentials for another registry. --If you don't already have a second registry, create one for this example. If you need a registry, see the [previous tutorial](container-registry-tutorial-quick-task.md), or [Quickstart: Create a container registry using the Azure CLI](container-registry-get-started-azure-cli.md). --To create the task, you need the name of the registry login server, which is of the form *mycontainerregistrydate.azurecr.io* (all lowercase). In this example, you use the second registry to store images tagged by build date. --### YAML file --The second example multi-step task for this tutorial is defined in the file `taskmulti-multiregistry.yaml`, which is in the root of the GitHub repo that you cloned: --```yml -version: v1.1.0 -steps: -# Build target images -- build: -t {{.Run.Registry}}/hello-world:{{.Run.ID}} -f Dockerfile .-- build: -t {{.Values.regDate}}/hello-world:{{.Run.Date}} -f Dockerfile .-# Run image -- cmd: -t {{.Run.Registry}}/hello-world:{{.Run.ID}}- id: test - detach: true - ports: ["8080:80"] -- cmd: docker stop test-# Push images -- push:- - {{.Run.Registry}}/hello-world:{{.Run.ID}} - - {{.Values.regDate}}/hello-world:{{.Run.Date}} -``` --This multi-step task does the following: --1. Runs two `build` steps to build images from the Dockerfile in the working directory: - * The first targets the `Run.Registry`, the registry where the task is run, and is tagged with the ACR Tasks run ID. - * The second targets the registry identified by the value of `regDate`, which you set when you create the task (or provide through an external `values.yaml` file passed to `az acr task create`). This image is tagged with the run date. 
-1. Runs a `cmd` step to run one of the built containers. This example starts a long-running container in the background and returns the container ID, then stops the container. In a real-world scenario, you might test a running container to ensure it runs correctly. -1. In a `push` step, pushes the images that were built, the first to the run registry, the second to the registry identified by `regDate`. --### Task command --Using the shell environment variables defined previously, create the task by executing the following [az acr task create][az-acr-task-create] command. Substitute the name of your registry for *mycontainerregistrydate*. --```azurecli-interactive -az acr task create \ - --registry $ACR_NAME \ - --name example2 \ - --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#main \ - --file taskmulti-multiregistry.yaml \ - --git-access-token $GIT_PAT \ - --set regDate=mycontainerregistrydate.azurecr.io -``` --### Add task credential --To push images to the registry identified by the value of `regDate`, use the [az acr task credential add][az-acr-task-credential-add] command to add login credentials for that registry to the task. --For this example, we recommend that you create a [service principal](container-registry-auth-service-principal.md) with access to the registry scoped to the *AcrPush* role, so that it has permissions to push images. To create the service principal, use the following script: ---Pass the service principal application ID and password in the following `az acr task credential add` command. Be sure to update the login server name *mycontainerregistrydate* with the name of your second registry: --```azurecli-interactive -az acr task credential add --name example2 \ - --registry $ACR_NAME \ - --login-server mycontainerregistrydate.azurecr.io \ - --username <service-principal-application-id> \ - --password <service-principal-password> -``` --The CLI returns the name of the registry login server you added. --### Test the multi-step workflow --As in the preceding example, to test the multi-step task, trigger it manually by executing the [az acr task run][az-acr-task-run] command. To trigger the task with a commit to the Git repository, see the section [Trigger a build with a commit](#trigger-a-build-with-a-commit). --```azurecli-interactive -az acr task run --registry $ACR_NAME --name example2 -``` --By default, the `az acr task run` command streams the log output to your console when you execute the command. As before, the output shows the progress of running each of the task steps. The output is condensed to show key steps. --Output: --```output -Queued a run with ID: cf1g -Waiting for an agent... -2020/11/20 04:33:39 Downloading source code... -2020/11/20 04:33:41 Finished downloading source code -2020/11/20 04:33:42 Using acb_vol_4569b017-29fe-42bd-83b2-25c45a8ac807 as the home volume -2020/11/20 04:33:42 Creating Docker network: acb_default_network, driver: 'bridge' -2020/11/20 04:33:43 Successfully set up Docker network: acb_default_network -2020/11/20 04:33:43 Setting up Docker configuration... -2020/11/20 04:33:44 Successfully set up Docker configuration -2020/11/20 04:33:44 Logging in to registry: mycontainerregistry.azurecr.io -2020/11/20 04:33:45 Successfully logged into mycontainerregistry.azurecr.io -2020/11/20 04:33:45 Logging in to registry: mycontainerregistrydate.azurecr.io -2020/11/20 04:33:47 Successfully logged into mycontainerregistrydate.azurecr.io -2020/11/20 04:33:47 Executing step ID: acb_step_0. 
Working directory: '', Network: 'acb_default_network' -2020/11/20 04:33:47 Scanning for dependencies... -2020/11/20 04:33:47 Successfully scanned dependencies -2020/11/20 04:33:47 Launching container with name: acb_step_0 -Sending build context to Docker daemon 25.09kB -[...] -Successfully tagged mycontainerregistry.azurecr.io/hello-world:cf1g -2020/11/20 04:33:55 Successfully executed container: acb_step_0 -2020/11/20 04:33:55 Executing step ID: acb_step_1. Working directory: '', Network: 'acb_default_network' -2020/11/20 04:33:55 Scanning for dependencies... -2020/11/20 04:33:56 Successfully scanned dependencies -2020/11/20 04:33:56 Launching container with name: acb_step_1 -Sending build context to Docker daemon 25.09kB -[...] -Successfully tagged mycontainerregistrydate.azurecr.io/hello-world:20190503-043342z -2020/11/20 04:33:57 Successfully executed container: acb_step_1 -2020/11/20 04:33:57 Executing step ID: acb_step_2. Working directory: '', Network: 'acb_default_network' -2020/11/20 04:33:57 Launching container with name: acb_step_2 -721437ff674051b6be63cbcd2fa8eb085eacbf38d7d632f1a079320133182101 -2020/11/20 04:33:58 Successfully executed container: acb_step_2 -2020/11/20 04:33:58 Executing step ID: acb_step_3. Working directory: '', Network: 'acb_default_network' -2020/11/20 04:33:58 Launching container with name: acb_step_3 -test -2020/11/20 04:34:09 Successfully executed container: acb_step_3 -2020/11/20 04:34:09 Executing step ID: acb_step_4. Working directory: '', Network: 'acb_default_network' -2020/11/20 04:34:09 Pushing image: mycontainerregistry.azurecr.io/hello-world:cf1g, attempt 1 -The push refers to repository [mycontainerregistry.azurecr.io/hello-world] -[...] -2020/11/20 04:34:12 Successfully pushed image: mycontainerregistry.azurecr.io/hello-world:cf1g -2020/11/20 04:34:12 Pushing image: mycontainerregistrydate.azurecr.io/hello-world:20190503-043342z, attempt 1 -The push refers to repository [mycontainerregistrydate.azurecr.io/hello-world] -[...] -2020/11/20 04:34:19 Successfully pushed image: mycontainerregistrydate.azurecr.io/hello-world:20190503-043342z -2020/11/20 04:34:19 Step ID: acb_step_0 marked as successful (elapsed time in seconds: 8.125744) -2020/11/20 04:34:19 Populating digests for step ID: acb_step_0... -2020/11/20 04:34:21 Successfully populated digests for step ID: acb_step_0 -2020/11/20 04:34:21 Step ID: acb_step_1 marked as successful (elapsed time in seconds: 2.009281) -2020/11/20 04:34:21 Populating digests for step ID: acb_step_1... 
-2020/11/20 04:34:23 Successfully populated digests for step ID: acb_step_1 -2020/11/20 04:34:23 Step ID: acb_step_2 marked as successful (elapsed time in seconds: 0.795440) -2020/11/20 04:34:23 Step ID: acb_step_3 marked as successful (elapsed time in seconds: 11.446775) -2020/11/20 04:34:23 Step ID: acb_step_4 marked as successful (elapsed time in seconds: 9.734973) -2020/11/20 04:34:23 The following dependencies were found: -2020/11/20 04:34:23 -- image:- registry: mycontainerregistry.azurecr.io - repository: hello-world - tag: cf1g - digest: sha256:75354e9edb995e8661438bad9913deed87a185fddd0193811f916d684b71a5d2 - runtime-dependency: - registry: registry.hub.docker.com - repository: library/node - tag: 15-alpine - digest: sha256:8dafc0968fb4d62834d9b826d85a8feecc69bd72cd51723c62c7db67c6dec6fa - git: - git-head-revision: 9d9023473c46a5e2c315681b11eb4552ef0faccc -- image:- registry: mycontainerregistrydate.azurecr.io - repository: hello-world - tag: 20190503-043342z - digest: sha256:75354e9edb995e8661438bad9913deed87a185fddd0193811f916d684b71a5d2 - runtime-dependency: - registry: registry.hub.docker.com - repository: library/node - tag: 15-alpine - digest: sha256:8dafc0968fb4d62834d9b826d85a8feecc69bd72cd51723c62c7db67c6dec6fa - git: - git-head-revision: 9d9023473c46a5e2c315681b11eb4552ef0faccc --Run ID: cf1g was successful after 46s -``` --## Next steps --In this tutorial, you learned how to create multi-step, multi-container-based tasks that automatically trigger when you commit source code to a Git repository. For advanced features of multi-step tasks, including parallel and dependent step execution, see the [ACR Tasks YAML reference](container-registry-tasks-reference-yaml.md). Move on to the next tutorial to learn how to create tasks that trigger builds when a container image's base image is updated. --> [!div class="nextstepaction"] -> [Automate builds on base image update](container-registry-tutorial-base-image-update.md) --<!-- LINKS - External --> -[sample-repo]: https://github.com/Azure-Samples/acr-build-helloworld-node --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-task]: /cli/azure/acr/task -[az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create -[az-acr-task-run]: /cli/azure/acr/task#az-acr-task-run -[az-acr-task-list-runs]: /cli/azure/acr/task#az-acr-task-list-runs -[az-acr-task-credential-add]: /cli/azure/acr/task/credential#az-acr-task-credential-add -[az-login]: /cli/azure/reference-index#az-login --<!-- IMAGES --> -[build-task-01-new-token]: ./media/container-registry-tutorial-build-tasks/build-task-01-new-token.png -[build-task-02-generated-token]: ./media/container-registry-tutorial-build-tasks/build-task-02-generated-token.png |
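The service principal creation script referenced in the *Add task credential* section isn't reproduced above. A minimal sketch of what such a script might look like, assuming your second registry is named *mycontainerregistrydate* and using an illustrative service principal name, is shown here; it is not the article's own script.

```azurecli
# Sketch: get the resource ID of the target registry
REG2_ID=$(az acr show --name mycontainerregistrydate --query id --output tsv)

# Sketch: create a service principal scoped to that registry with the AcrPush role
az ad sp create-for-rbac \
  --name example-acr-push-sp \
  --role AcrPush \
  --scopes $REG2_ID
```

The command output includes an `appId` and `password`, which are the values you pass to `az acr task credential add` as the username and password.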
container-registry | Container Registry Tutorial Prepare Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-prepare-registry.md | - Title: Tutorial - Create geo-replicated registry -description: Create an Azure container registry, configure geo-replication, prepare a Docker image, and deploy it to the registry. Part one of a three-part series. --- Previously updated : 10/31/2023-----# Tutorial: Prepare a geo-replicated Azure container registry --An Azure container registry is a private Docker registry deployed in Azure that you can keep network-close to your deployments. In this set of three tutorial articles, you learn how to use geo-replication to deploy an ASP.NET Core web application running in a Linux container to two [Web Apps for Containers](../app-service/index.yml) instances. You'll see how Azure automatically deploys the image to each Web App instance from the closest geo-replicated repository. --In this tutorial, part one in a three-part series: --> [!div class="checklist"] -> * Create a geo-replicated Azure container registry -> * Clone application source code from GitHub -> * Build a Docker container image from application source -> * Push the container image to your registry --In subsequent tutorials, you deploy the container from your private registry to a web app running in two Azure regions. You then update the code in the application, and update both Web App instances with a single `docker push` to your registry. --## Before you begin --This tutorial requires a local installation of the Azure CLI (version 2.0.31 or later). Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --You should be familiar with core Docker concepts such as containers, container images, and basic Docker CLI commands. For a primer on container basics, see [Get started with Docker](https://docs.docker.com/get-started/). --To complete this tutorial, you need a local Docker installation. Docker provides installation instructions for [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms) systems. --Azure Cloud Shell does not include the Docker components required to complete every step of this tutorial. Therefore, we recommend a local installation of the Azure CLI and a Docker development environment. --## Create a container registry --For this tutorial, you need an Azure container registry in the Premium service tier. To create a new Azure container registry, follow the steps in this section. --> [!TIP] -> If you previously created a registry and need to upgrade, see [Changing tiers](container-registry-skus.md#changing-tiers). --Sign in to the [Azure portal](https://portal.azure.com). --Select **Create a resource** > **Containers** > **Azure Container Registry**. ---Configure your new registry with the following settings. In the **Basics** tab: --* **Registry name**: Create a registry name that's globally unique within Azure, and contains 5-50 alphanumeric characters -* **Resource Group**: **Create new** > `myResourceGroup` -* **Location**: `West US` -* **SKU**: `Premium` (required for geo-replication) --Select **Review + create** and then **Create** to create the registry instance. ---Throughout the rest of this tutorial, we use `<acrName>` as a placeholder for the container **Registry name** that you chose. 
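If you'd rather script the registry creation than use the portal, a roughly equivalent Azure CLI sketch (not the tutorial's own flow) looks like the following; replace `<acrName>` with your registry name:

```azurecli
# Sketch: create a resource group and a Premium registry (Premium is required for geo-replication)
az group create --name myResourceGroup --location westus
az acr create --resource-group myResourceGroup --name <acrName> --sku Premium
```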
--> [!TIP] -> Because Azure container registries are typically long-lived resources that are used across multiple container hosts, we recommend that you create your registry in its own resource group. As you configure geo-replicated registries and webhooks, these additional resources are placed in the same resource group. --## Configure geo-replication --Now that you have a Premium registry, you can configure geo-replication. Your web app, which you configure in the next tutorial to run in two regions, can then pull its container images from the nearest registry. --Navigate to your new container registry in the Azure portal and select **Replications** under **Services**: ---A map is displayed showing green hexagons representing Azure regions available for geo-replication: ---Replicate your registry to the East US region by selecting its green hexagon, then select **Create** under **Create replication**: ---When the replication is complete, the portal reflects *Ready* for both regions. Use the **Refresh** button to refresh the status of the replication; it can take a minute or so for the replicas to be created and synchronized. ----## Enable admin account --In subsequent tutorials, you deploy a container image from the registry directly to Web App for Containers. To enable this capability, you must also enable the registry's [admin account](container-registry-authentication.md#admin-account). --Navigate to your new container registry in the Azure portal and select **Access keys** under **Settings**. Under **Admin user**, select **Enable**. ----## Container registry login --Now that you've configured geo-replication, build a container image and push it to your registry. You must first log in to your registry before pushing images to it. --Use the [az acr login](/cli/azure/acr#az-acr-login) command to authenticate and cache the credentials for your registry. Replace `<acrName>` with the name of the registry you created earlier. --```azurecli -az acr login --name <acrName> -``` --The command returns `Login Succeeded` when complete. --## Get application code --The sample in this tutorial includes a small web application built with [ASP.NET Core][aspnet-core]. The app serves an HTML page that displays the region from which the image was deployed by Azure Container Registry. ---Use git to download the sample into a local directory, and `cd` into the directory: --```bash -git clone https://github.com/Azure-Samples/acr-helloworld.git -cd acr-helloworld -``` --If you don't have `git` installed, you can [download the ZIP archive][acr-helloworld-zip] directly from GitHub. --## Update Dockerfile --The Dockerfile included in the sample shows how the container is built. It starts from an official ASP.NET Core runtime image, copies the application files into the container, installs dependencies, compiles the output using the official .NET Core SDK image, and finally, builds an optimized aspnetcore image. --The [Dockerfile][dockerfile] is located at `./AcrHelloworld/Dockerfile` in the cloned source. --```Dockerfile -FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base -# Update <acrName> with the name of your registry -# Example: uniqueregistryname.azurecr.io -ENV DOCKER_REGISTRY <acrName>.azurecr.io -WORKDIR /app -EXPOSE 80 --FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build -WORKDIR /src -COPY *.sln ./ -COPY AcrHelloworld/AcrHelloworld.csproj AcrHelloworld/ -RUN dotnet restore -COPY . . 
-WORKDIR /src/AcrHelloworld -RUN dotnet build -c Release -o /app --FROM build AS publish -RUN dotnet publish -c Release -o /app --FROM base AS production -WORKDIR /app -COPY --from=publish /app . -ENTRYPOINT ["dotnet", "AcrHelloworld.dll"] -``` --The application in the *acr-helloworld* image tries to determine the region from which its container was deployed by querying DNS for information about the registry's login server. You must specify your registry login server's fully qualified domain name (FQDN) in the `DOCKER_REGISTRY` environment variable in the Dockerfile. --First, get the registry's login server with the `az acr show` command. Replace `<acrName>` with the name of the registry you created in previous steps. --```azurecli -az acr show --name <acrName> --query "{acrLoginServer:loginServer}" --output table -``` --Output: --```bash -AcrLoginServer -uniqueregistryname.azurecr.io -``` --Next, update the `ENV DOCKER_REGISTRY` line with the FQDN of your registry's login server. This example reflects the example registry name, *uniqueregistryname*: --```Dockerfile -ENV DOCKER_REGISTRY uniqueregistryname.azurecr.io -``` --## Build container image --Now that you've updated the Dockerfile with the FQDN of your registry login server, you can use `docker build` to create the container image. Run the following command to build the image and tag it with the URL of your private registry, again replacing `<acrName>` with the name of your registry: --```bash -docker build . -f ./AcrHelloworld/Dockerfile -t <acrName>.azurecr.io/acr-helloworld:v1 -``` --Several lines of output are displayed as the Docker image is built (shown here truncated): --```bash -Sending build context to Docker daemon 523.8kB -Step 1/18 : FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base -2.2: Pulling from mcr.microsoft.com/dotnet/core/aspnet -3e17c6eae66c: Pulling fs layer --[...] --Step 18/18 : ENTRYPOINT dotnet AcrHelloworld.dll - > Running in 6906d98c47a1 - > c9ca1763cfb1 -Removing intermediate container 6906d98c47a1 -Successfully built c9ca1763cfb1 -Successfully tagged uniqueregistryname.azurecr.io/acr-helloworld:v1 -``` --Use `docker images` to see the built and tagged image: --```console -$ docker images -REPOSITORY TAG IMAGE ID CREATED SIZE -uniqueregistryname.azurecr.io/acr-helloworld v1 01ac48d5c8cf About a minute ago 284MB -[...] -``` --## Push image to Azure Container Registry --Next, use the `docker push` command to push the *acr-helloworld* image to your registry. Replace `<acrName>` with the name of your registry. --```bash -docker push <acrName>.azurecr.io/acr-helloworld:v1 -``` --Because you've configured your registry for geo-replication, your image is automatically replicated to both the *West US* and *East US* regions with this single `docker push` command. --```console -$ docker push uniqueregistryname.azurecr.io/acr-helloworld:v1 -The push refers to a repository [uniqueregistryname.azurecr.io/acr-helloworld] -cd54739c444b: Pushed -d6803756744a: Pushed -b7b1f3a15779: Pushed -a89567dff12d: Pushed -59c7b561ff56: Pushed -9a2f9413d9e4: Pushed -a75caa09eb1f: Pushed -v1: digest: sha256:0799014f91384bda5b87591170b1242bcd719f07a03d1f9a1ddbae72b3543970 size: 1792 -``` --## Next steps --In this tutorial, you created a private, geo-replicated container registry, built a container image, and then pushed that image to your registry. --Advance to the next tutorial to deploy your container to multiple Web Apps for Containers instances, using geo-replication to serve the images locally. 
--> [!div class="nextstepaction"] -> [Deploy web app from Azure Container Registry](container-registry-tutorial-deploy-app.md) --<!-- LINKS - External --> -[acr-helloworld-zip]: https://github.com/Azure-Samples/acr-helloworld/archive/master.zip -[aspnet-core]: https://dot.net -[dockerfile]: https://github.com/Azure-Samples/acr-helloworld/blob/master/AcrHelloworld/Dockerfile |
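The replication and push steps in this tutorial can also be performed or verified from the CLI. A hedged sketch, reusing the tutorial's registry and repository names:

```azurecli
# Sketch: add the East US replica (equivalent to the portal map selection) and confirm both replicas
az acr replication create --registry <acrName> --location eastus
az acr replication list --registry <acrName> --output table

# Sketch: confirm the pushed acr-helloworld image and its tags are visible in the registry
az acr repository show-tags --name <acrName> --repository acr-helloworld --output table
```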
container-registry | Container Registry Tutorial Private Base Image Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-private-base-image-update.md | - Title: Tutorial - Trigger image build by private base image update -description: In this tutorial, you configure an Azure Container Registry Task to automatically trigger container image builds in the cloud when a base image in another private Azure container registry is updated. --- Previously updated : 10/31/2023-----# Tutorial: Automate container image builds when a base image is updated in another private container registry --[ACR Tasks](container-registry-tasks-overview.md) supports automated image builds when a container's [base image is updated](container-registry-tasks-base-images.md), such as when you patch the OS or application framework in one of your base images. --In this tutorial, you learn how to create an ACR task that triggers a build in the cloud when a container's base image is pushed to another Azure container registry. You can also try a tutorial to create an ACR task that triggers an image build when a base image is pushed to the [same Azure container registry](container-registry-tutorial-base-image-update.md). --In this tutorial: --> [!div class="checklist"] -> * Build the base image in a base registry -> * Create an application build task in another registry to track the base image -> * Update the base image to trigger an application image task -> * Display the triggered task -> * Verify updated application image --## Prerequisites --### Complete the previous tutorials --This tutorial assumes you've already configured your environment and completed the steps in the first two tutorials in the series, in which you: --* Create Azure container registry -* Fork sample repository -* Clone sample repository -* Create GitHub personal access token --If you haven't already done so, complete the following tutorials before proceeding: --[Build container images in the cloud with Azure Container Registry Tasks](container-registry-tutorial-quick-task.md) --[Automate container image builds with Azure Container Registry Tasks](container-registry-tutorial-build-task.md) --In addition to the container registry created for the previous tutorials, you need to create a registry to store the base images. If you want to, create the second registry in a different location than the original registry. --### Configure the environment --Populate these shell environment variables with values appropriate for your environment. This step isn't strictly required, but makes executing the multiline Azure CLI commands in this tutorial a bit easier. If you don't populate these environment variables, you must manually replace each value wherever it appears in the example commands. --```azurecli -BASE_ACR=<base-registry-name> # The name of your Azure container registry for base images -ACR_NAME=<registry-name> # The name of your Azure container registry for application images -GIT_USER=<github-username> # Your GitHub user account name -GIT_PAT=<personal-access-token> # The PAT you generated in the second tutorial -``` --### Base image update scenario --This tutorial walks you through a base image update scenario. This scenario reflects a development workflow to manage base images in a common, private container registry when creating application images in other registries. 
The base images could specify common operating systems and frameworks used by a team, or even common service components. --For example, developers who develop application images in their own registries can access a set of base images maintained in the common base registry. The base registry can be in another region or even geo-replicated. --The [code sample][code-sample] includes two Dockerfiles: an application image, and an image it specifies as its base. In the following sections, you create an ACR task that automatically triggers a build of the application image when a new version of the base image is pushed to a different Azure container registry. --* [Dockerfile-app][dockerfile-app]: A small Node.js web application that renders a static web page displaying the Node.js version on which it's based. The version string is simulated: it displays the contents of an environment variable, `NODE_VERSION`, that's defined in the base image. --* [Dockerfile-base][dockerfile-base]: The image that `Dockerfile-app` specifies as its base. It is itself based on a [Node][base-node] image, and includes the `NODE_VERSION` environment variable. --In the following sections, you create a task, update the `NODE_VERSION` value in the base image Dockerfile, then use ACR Tasks to build the base image. When the ACR task pushes the new base image to your registry, it automatically triggers a build of the application image. Optionally, you run the application container image locally to see the different version strings in the built images. --In this tutorial, your ACR task builds and pushes an application container image specified in a Dockerfile. ACR Tasks can also run [multi-step tasks](container-registry-tasks-multi-step.md), using a YAML file to define steps to build, push, and optionally test multiple containers. --## Build the base image --Start by building the base image with an ACR Tasks *quick task*, using [az acr build][az-acr-build]. As discussed in the [first tutorial](container-registry-tutorial-quick-task.md) in the series, this process not only builds the image, but pushes it to your container registry if the build is successful. In this example, the image is pushed to the base image registry. --```azurecli -az acr build --registry $BASE_ACR --image baseimages/node:15-alpine --file Dockerfile-base . -``` --## Create a task to track the private base image --Next, create a task in the application image registry with [az acr task create][az-acr-task-create], enabling a [managed identity](container-registry-tasks-authentication-managed-identity.md). The managed identity is used in later steps so that the task authenticates with the base image registry. --This example uses a system-assigned identity, but you could create and enable a user-assigned managed identity for certain scenarios. For details, see [Cross-registry authentication in an ACR task using an Azure-managed identity](container-registry-tasks-cross-registry-authentication.md). --```azurecli -az acr task create \ - --registry $ACR_NAME \ - --name baseexample2 \ - --image helloworld:{{.Run.ID}} \ - --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#main \ - --file Dockerfile-app \ - --git-access-token $GIT_PAT \ - --arg REGISTRY_NAME=$BASE_ACR.azurecr.io \ - --assign-identity -``` --This task is similar to the task created in the [previous tutorial](container-registry-tutorial-build-task.md). It instructs ACR Tasks to trigger an image build when commits are pushed to the repository specified by `--context`. 
While the Dockerfile used to build the image in the previous tutorial specifies a public base image (`FROM node:15-alpine`), the Dockerfile in this task, [Dockerfile-app][dockerfile-app], specifies a base image in the base image registry: --```Dockerfile -FROM ${REGISTRY_NAME}/baseimages/node:15-alpine -``` --This configuration makes it easy to simulate a framework patch in the base image later in this tutorial. --## Give identity pull permissions to base registry --To give the task's managed identity permissions to pull images from the base image registry, first run [az acr task show][az-acr-task-show] to get the service principal ID of the identity. Then run [az acr show][az-acr-show] to get the resource ID of the base registry: --```azurecli -# Get service principal ID of the task -principalID=$(az acr task show --name baseexample2 --registry $ACR_NAME --query identity.principalId --output tsv) --# Get resource ID of the base registry -baseregID=$(az acr show --name $BASE_ACR --query id --output tsv) -``` - -Assign the managed identity pull permissions to the registry by running [az role assignment create][az-role-assignment-create]: --```azurecli -az role assignment create \ - --assignee $principalID \ - --scope $baseregID --role acrpull -``` --## Add target registry credentials to the task --Run [az acr task credential add][az-acr-task-credential-add] to add credentials to the task. Pass the `--use-identity [system]` parameter to indicate that the task's system-assigned managed identity can access the credentials. --```azurecli -az acr task credential add \ - --name baseexample2 \ - --registry $ACR_NAME \ - --login-server $BASE_ACR.azurecr.io \ - --use-identity [system] -``` --## Manually run the task --Use [az acr task run][az-acr-task-run] to manually trigger the task and build the application image. This step is needed so that the task tracks the application image's dependency on the base image. --```azurecli -az acr task run --registry $ACR_NAME --name baseexample2 -``` --Once the task has completed, take note of the **Run ID** (for example, "da6") if you wish to complete the following optional step. --### Optional: Run application container locally --If you're working locally (not in the Cloud Shell), and you have Docker installed, run the container to see the application rendered in a web browser before you rebuild its base image. If you're using the Cloud Shell, skip this section (Cloud Shell does not support `az acr login` or `docker run`). --First, authenticate to your container registry with [az acr login][az-acr-login]: --```azurecli -az acr login --name $ACR_NAME -``` --Now, run the container locally with `docker run`. Replace **\<run-id\>** with the Run ID found in the output from the previous step (for example, "da6"). This example names the container `myapp` and includes the `--rm` parameter to remove the container when you stop it. --```bash -docker run -d -p 8080:80 --name myapp --rm $ACR_NAME.azurecr.io/helloworld:<run-id> -``` --Navigate to `http://localhost:8080` in your browser, and you should see the Node.js version number rendered in the web page, similar to the following. In a later step, you bump the version by adding an "a" to the version string. 
---To stop and remove the container, run the following command: --```bash -docker stop myapp -``` --## List the builds --Next, list the task runs that ACR Tasks has completed for your registry using the [az acr task list-runs][az-acr-task-list-runs] command: --```azurecli -az acr task list-runs --registry $ACR_NAME --output table -``` --If you completed the previous tutorial (and didn't delete the registry), you should see output similar to the following. Take note of the number of task runs, and the latest RUN ID, so you can compare the output after you update the base image in the next section. --```azurecli -az acr task list-runs --registry $ACR_NAME --output table -``` --```output -RUN ID TASK PLATFORM STATUS TRIGGER STARTED DURATION -- - -- --ca12 baseexample2 linux Succeeded Manual 2020-11-21T00:00:56Z 00:00:36 -ca11 baseexample1 linux Succeeded Image Update 2020-11-20T23:38:24Z 00:00:34 -ca10 taskhelloworld linux Succeeded Image Update 2020-11-20T23:38:24Z 00:00:24 -cay linux Succeeded Manual 2020-11-20T23:38:08Z 00:00:22 -cax baseexample1 linux Succeeded Manual 2020-11-20T23:33:12Z 00:00:30 -caw taskhelloworld linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:29 -``` --## Update the base image --Here you simulate a framework patch in the base image. Edit **Dockerfile-base**, and add an "a" after the version number defined in `NODE_VERSION`: --```Dockerfile -ENV NODE_VERSION 15.2.1a -``` --Run a quick task to build the modified base image. Take note of the **Run ID** in the output. --```azurecli -az acr build --registry $BASE_ACR --image baseimages/node:15-alpine --file Dockerfile-base . -``` --Once the build is complete and the ACR task has pushed the new base image to your registry, it triggers a build of the application image. It may take a few moments for the task you created earlier to trigger the application image build, as it must detect the newly built and pushed base image. --## List updated build --Now that you've updated the base image, list your task runs again to compare to the earlier list. If at first the output doesn't differ, periodically run the command to see the new task run appear in the list. --```azurecli -az acr task list-runs --registry $ACR_NAME --output table -``` --Output is similar to the following. The TRIGGER for the last-executed build should be "Image Update", indicating that the task was kicked off by the quick task that rebuilt the base image. --```azurecli -az acr task list-runs --registry $ACR_NAME --output table -``` --```output -RUN ID TASK PLATFORM STATUS TRIGGER STARTED DURATION -- - -- --ca13 baseexample2 linux Succeeded Image Update 2020-11-21T00:06:00Z 00:00:43 -ca12 baseexample2 linux Succeeded Manual 2020-11-21T00:00:56Z 00:00:36 -ca11 baseexample1 linux Succeeded Image Update 2020-11-20T23:38:24Z 00:00:34 -ca10 taskhelloworld linux Succeeded Image Update 2020-11-20T23:38:24Z 00:00:24 -cay linux Succeeded Manual 2020-11-20T23:38:08Z 00:00:22 -cax baseexample1 linux Succeeded Manual 2020-11-20T23:33:12Z 00:00:30 -caw taskhelloworld linux Succeeded Commit 2020-11-20T23:16:07Z 00:00:29 -``` --If you'd like to perform the following optional step of running the newly built container to see the updated version number, take note of the **RUN ID** value for the Image Update-triggered build (in the preceding output, it's "ca13"). --### Optional: Run newly built image --If you're working locally (not in the Cloud Shell), and you have Docker installed, run the new application image once its build has completed. 
Replace `<run-id>` with the RUN ID you obtained in the previous step. If you're using the Cloud Shell, skip this section (Cloud Shell does not support `docker run`). --```bash -docker run -d -p 8081:80 --name updatedapp --rm $ACR_NAME.azurecr.io/helloworld:<run-id> -``` --Navigate to http://localhost:8081 in your browser, and you should see the updated Node.js version number (with the "a") in the web page: ---What's important to note is that you updated your **base** image with a new version number, but the last-built **application** image displays the new version. ACR Tasks picked up your change to the base image, and rebuilt your application image automatically. --To stop and remove the container, run the following command: --```bash -docker stop updatedapp -``` --## Next steps --In this tutorial, you learned how to use a task to automatically trigger container image builds when the image's base image has been updated. Now, move on to the next tutorial to learn how to trigger tasks on a defined schedule. --> [!div class="nextstepaction"] -> [Run a task on a schedule](container-registry-tasks-scheduled.md) --<!-- LINKS - External --> -[base-alpine]: https://hub.docker.com/_/alpine/ -[base-dotnet]: https://hub.docker.com/r/microsoft/dotnet/ -[base-node]: https://hub.docker.com/_/node/ -[base-windows]: https://hub.docker.com/r/microsoft/nanoserver/ -[code-sample]: https://github.com/Azure-Samples/acr-build-helloworld-node -[dockerfile-app]: https://github.com/Azure-Samples/acr-build-helloworld-node/blob/master/Dockerfile-app -[dockerfile-base]: https://github.com/Azure-Samples/acr-build-helloworld-node/blob/master/Dockerfile-base --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-update]: /cli/azure/acr/task#az_acr_task_update -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[az-acr-task-show]: /cli/azure/acr/task#az_acr_task_show -[az-acr-task-credential-add]: /cli/azure/acr/task/credential#az_acr_task_credential_add -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-task-list-runs]: /cli/azure/acr/task#az_acr_task_list_runs -[az-acr-task]: /cli/azure/acr#az_acr_task -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create |
container-registry | Container Registry Tutorial Quick Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-quick-task.md | - Title: Tutorial - Quick container image build -description: In this tutorial, you learn how to build a Docker container image in Azure with Azure Container Registry Tasks (ACR Tasks), then deploy it to Azure Container Instances. --- Previously updated : 10/31/2023---# Customer intent: As a developer or devops engineer, I want to quickly build container images in Azure, without having to install dependencies like Docker Engine, so that I can simplify my inner-loop development pipeline. ---# Tutorial: Build and deploy container images in the cloud with Azure Container Registry Tasks --[ACR Tasks](container-registry-tasks-overview.md) is a suite of features within Azure Container Registry that provides streamlined and efficient Docker container image builds in Azure. In this article, you learn how to use the *quick task* feature of ACR Tasks. --The "inner-loop" development cycle is the iterative process of writing code, building, and testing your application before committing to source control. A quick task extends your inner-loop to the cloud, providing you with build success validation and automatic pushing of successfully built images to your container registry. Your images are built natively in the cloud, close to your registry, enabling faster deployment. --All your Dockerfile expertise is directly transferrable to ACR Tasks. You don't have to change your Dockerfiles to build in the cloud with ACR Tasks, just the command you run. --In this tutorial, part one of a series: --> [!div class="checklist"] -> * Get the sample application source code -> * Build a container image in Azure -> * Deploy a container to Azure Container Instances --In subsequent tutorials, you learn to use ACR Tasks for automated container image builds on code commit and base image update. ACR Tasks can also run [multi-step tasks](container-registry-tasks-multi-step.md), using a YAML file to define steps to build, push, and optionally test multiple containers. --## Prerequisites --### GitHub account --Create an account on https://github.com if you don't already have one. This tutorial series uses a GitHub repository to demonstrate automated image builds in ACR Tasks. --### Fork sample repository --Next, use the GitHub UI to fork the sample repository into your GitHub account. In this tutorial, you build a container image from the source in the repo, and in the next tutorial, you push a commit to your fork of the repo to kick off an automated task. --Fork this repository: https://github.com/Azure-Samples/acr-build-helloworld-node --![Screenshot of the Fork button (highlighted) in GitHub][quick-build-01-fork] --### Clone your fork --Once you've forked the repo, clone your fork and enter the directory containing your local clone. --Clone the repo with `git`, replace **\<your-github-username\>** with your GitHub username: --```console -git clone https://github.com/<your-github-username>/acr-build-helloworld-node -``` --Enter the directory containing the source code: --```console -cd acr-build-helloworld-node -``` --### Bash shell --The commands in this tutorial series are formatted for the Bash shell. If you prefer to use PowerShell, Command Prompt, or another shell, you may need to adjust the line continuation and environment variable format accordingly. 
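For example, the longer commands later in this series rely on a trailing backslash for line continuation and `$VARIABLE` syntax for environment variables. The following snippet only illustrates those Bash conventions; `myregistry` is a placeholder name:

```bash
# Bash conventions used throughout this series:
# - a trailing "\" continues a command onto the next line
# - $VARIABLE expands an environment variable
ACR_NAME=myregistry

az acr check-name \
    --name $ACR_NAME \
    --output table
```

In PowerShell, for example, line continuation uses the backtick character instead, and environment variables use the `$Env:ACR_NAME` form.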
---## Build in Azure with ACR Tasks --Now that you've pulled the source code down to your machine, follow these steps to create a container registry and build the container image with ACR Tasks. --To make executing the sample commands easier, the tutorials in this series use shell environment variables. Execute the following command to set the `ACR_NAME` variable. Replace **\<registry-name\>** with a unique name for your new container registry. The registry name must be unique within Azure, contain only lower case letters, and contain 5-50 alphanumeric characters. The other resources you create in the tutorial are based on this name, so you should need to modify only this first variable. --```console -ACR_NAME=<registry-name> -``` --With the container registry environment variable populated, you should now be able to copy and paste the remainder of the commands in the tutorial without editing any values. Execute the following commands to create a resource group and container registry. --```azurecli -RES_GROUP=$ACR_NAME # Resource Group name --az group create --resource-group $RES_GROUP --location eastus -az acr create --resource-group $RES_GROUP --name $ACR_NAME --sku Standard --location eastus -``` --Now that you have a registry, use ACR Tasks to build a container image from the sample code. Execute the [az acr build][az-acr-build] command to perform a *quick task*. ---```azurecli -az acr build --registry $ACR_NAME --image helloacrtasks:v1 --file /path/to/Dockerfile /path/to/build/context. -``` --Output from the [az acr build][az-acr-build] command is similar to the following. You can see the upload of the source code (the "context") to Azure, and the details of the `docker build` operation that the ACR task runs in the cloud. Because ACR tasks use `docker build` to build your images, no changes to your Dockerfiles are required to start using ACR Tasks immediately. --```output -Packing source code into tar file to upload... -Sending build context (4.813 KiB) to ACR... -Queued a build with build ID: da1 -Waiting for build agent... -2020/11/18 18:31:42 Using acb_vol_01185991-be5f-42f0-9403-a36bb997ff35 as the home volume -2020/11/18 18:31:42 Setting up Docker configuration... -2020/11/18 18:31:43 Successfully set up Docker configuration -2020/11/18 18:31:43 Logging in to registry: myregistry.azurecr.io -2020/11/18 18:31:55 Successfully logged in -Sending build context to Docker daemon 21.5kB -Step 1/5 : FROM node:15-alpine -15-alpine: Pulling from library/node -Digest: sha256:8dafc0968fb4d62834d9b826d85a8feecc69bd72cd51723c62c7db67c6dec6fa -Status: Image is up to date for node:15-alpine - > a56170f59699 -Step 2/5 : COPY . /src - > 88087d7e709a -Step 3/5 : RUN cd /src && npm install - > Running in e80e1263ce9a -npm notice created a lockfile as package-lock.json. You should commit this file. -npm WARN helloworld@1.0.0 No repository field. 
--up to date in 0.1s -Removing intermediate container e80e1263ce9a - > 26aac291c02e -Step 4/5 : EXPOSE 80 - > Running in 318fb4c124ac -Removing intermediate container 318fb4c124ac - > 113e157d0d5a -Step 5/5 : CMD ["node", "/src/server.js"] - > Running in fe7027a11787 -Removing intermediate container fe7027a11787 - > 20a27b90eb29 -Successfully built 20a27b90eb29 -Successfully tagged myregistry.azurecr.io/helloacrtasks:v1 -2020/11/18 18:32:11 Pushing image: myregistry.azurecr.io/helloacrtasks:v1, attempt 1 -The push refers to repository [myregistry.azurecr.io/helloacrtasks] -6428a18b7034: Preparing -c44b9827df52: Preparing -172ed8ca5e43: Preparing -8c9992f4e5dd: Preparing -8dfad2055603: Preparing -c44b9827df52: Pushed -172ed8ca5e43: Pushed -8dfad2055603: Pushed -6428a18b7034: Pushed -8c9992f4e5dd: Pushed -v1: digest: sha256:b038dcaa72b2889f56deaff7fa675f58c7c666041584f706c783a3958c4ac8d1 size: 1366 -2020/11/18 18:32:43 Successfully pushed image: myregistry.azurecr.io/helloacrtasks:v1 -2020/11/18 18:32:43 Step ID acb_step_0 marked as successful (elapsed time in seconds: 15.648945) -The following dependencies were found: -- image:- registry: myregistry.azurecr.io - repository: helloacrtasks - tag: v1 - digest: sha256:b038dcaa72b2889f56deaff7fa675f58c7c666041584f706c783a3958c4ac8d1 - runtime-dependency: - registry: registry.hub.docker.com - repository: library/node - tag: 15-alpine - digest: sha256:8dafc0968fb4d62834d9b826d85a8feecc69bd72cd51723c62c7db67c6dec6fa - git: {} --Run ID: da1 was successful after 1m9.970148252s -``` --Near the end of the output, ACR Tasks displays the dependencies it's discovered for your image. This enables ACR Tasks to automate image builds on base image updates, such as when a base image is updated with OS or framework patches. You learn about ACR Tasks support for base image updates later in this tutorial series. --## Deploy to Azure Container Instances --ACR tasks automatically push successfully built images to your registry by default, allowing you to deploy them from your registry immediately. --In this section, you create an Azure Key Vault and service principal, then deploy the container to Azure Container Instances (ACI) using the service principal's credentials. --### Configure registry authentication --All production scenarios should use [service principals][service-principal-auth] to access an Azure container registry. Service principals allow you to provide role-based access control to your container images. For example, you can configure a service principal with pull-only access to a registry. --#### Create a key vault --If you don't already have a vault in [Azure Key Vault](/azure/key-vault/), create one with the Azure CLI using the following commands. --```azurecli -AKV_NAME=$ACR_NAME-vault --az keyvault create --resource-group $RES_GROUP --name $AKV_NAME -``` --#### Create a service principal and store credentials --You now need to create a service principal and store its credentials in your key vault. --Use the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command to create the service principal, and [az keyvault secret set][az-keyvault-secret-set] to store the service principal's **password** in the vault. 
Use Azure CLI version **2.25.0** or later for these commands: --```azurecli -# Create service principal, store its password in AKV (the registry *password*) -az keyvault secret set \ - --vault-name $AKV_NAME \ - --name $ACR_NAME-pull-pwd \ - --value $(az ad sp create-for-rbac \ - --name $ACR_NAME-pull \ - --scopes $(az acr show --name $ACR_NAME --query id --output tsv) \ - --role acrpull \ - --query password \ - --output tsv) -``` --The `--role` argument in the preceding command configures the service principal with the *acrpull* role, which grants it pull-only access to the registry. To grant both push and pull access, change the `--role` argument to *acrpush*. --Next, store the service principal's *appId* in the vault, which is the **username** you pass to Azure Container Registry for authentication: --```azurecli -# Store service principal ID in AKV (the registry *username*) -az keyvault secret set \ - --vault-name $AKV_NAME \ - --name $ACR_NAME-pull-usr \ - --value $(az ad sp list --display-name $ACR_NAME-pull --query [].appId --output tsv) -``` --You've created an Azure Key Vault and stored two secrets in it: --* `$ACR_NAME-pull-usr`: The service principal ID, for use as the container registry **username**. -* `$ACR_NAME-pull-pwd`: The service principal password, for use as the container registry **password**. --You can now reference these secrets by name when you or your applications and services pull images from the registry. --### Deploy a container with Azure CLI --Now that the service principal credentials are stored as Azure Key Vault secrets, your applications and services can use them to access your private registry. --Execute the following [az container create][az-container-create] command to deploy a container instance. The command uses the service principal's credentials stored in Azure Key Vault to authenticate to your container registry. --```azurecli -az container create \ - --resource-group $RES_GROUP \ - --name acr-tasks \ - --image $ACR_NAME.azurecr.io/helloacrtasks:v1 \ - --registry-login-server $ACR_NAME.azurecr.io \ - --registry-username $(az keyvault secret show --vault-name $AKV_NAME --name $ACR_NAME-pull-usr --query value -o tsv) \ - --registry-password $(az keyvault secret show --vault-name $AKV_NAME --name $ACR_NAME-pull-pwd --query value -o tsv) \ - --dns-name-label acr-tasks-$ACR_NAME \ - --query "{FQDN:ipAddress.fqdn}" \ - --output table -``` --The `--dns-name-label` value must be unique within Azure, so the preceding command appends your container registry's name to the container's DNS name label. The output from the command displays the container's fully qualified domain name (FQDN), for example: --```output -FQDN ---acr-tasks-myregistry.eastus.azurecontainer.io -``` --Take note of the container's FQDN, you'll use it in the next section. --### Verify the deployment --To watch the startup process of the container, use the [az container attach][az-container-attach] command: --```azurecli -az container attach --resource-group $RES_GROUP --name acr-tasks -``` --The `az container attach` output first displays the container's status as it pulls the image and starts, then binds your local console's STDOUT and STDERR to that of the container. --```output -Container 'acr-tasks' is in state 'Running'... 
-(count: 1) (last timestamp: 2020-11-18 18:39:10+00:00) pulling image "myregistry.azurecr.io/helloacrtasks:v1" -(count: 1) (last timestamp: 2020-11-18 18:39:15+00:00) Successfully pulled image "myregistry.azurecr.io/helloacrtasks:v1" -(count: 1) (last timestamp: 2020-11-18 18:39:17+00:00) Created container -(count: 1) (last timestamp: 2020-11-18 18:39:17+00:00) Started container --Start streaming logs: -Server running at http://localhost:80 -``` --When `Server running at http://localhost:80` appears, navigate to the container's FQDN in your browser to see the running application. The FQDN should have been displayed in the output of the `az container create` command you executed in the previous section. ---To detach your console from the container, hit `Control+C`. --## Clean up resources --Stop the container instance with the [az container delete][az-container-delete] command: --```azurecli -az container delete --resource-group $RES_GROUP --name acr-tasks -``` --To remove *all* resources you've created in this tutorial, including the container registry, key vault, and service principal, issue the following commands. These resources are used in the [next tutorial](container-registry-tutorial-build-task.md) in the series, however, so you might want to keep them if you move on directly to the next tutorial. --```azurecli -az group delete --resource-group $RES_GROUP -az ad sp delete --id http://$ACR_NAME-pull -``` --## Next steps --Now that you've tested your inner loop with a quick task, configure a **build task** to trigger container images builds when you commit source code to a Git repository: --> [!div class="nextstepaction"] -> [Trigger automatic builds with tasks](container-registry-tutorial-build-task.md) --<!-- LINKS - External --> -[sample-archive]: https://github.com/Azure-Samples/acr-build-helloworld-node/archive/master.zip --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-build]: /cli/azure/acr#az-acr-build -[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az-ad-sp-create-for-rbac -[az-container-attach]: /cli/azure/container#az-container-attach -[az-container-create]: /cli/azure/container#az-container-create -[az-container-delete]: /cli/azure/container#az-container-delete -[az-keyvault-create]: /cli/azure/keyvault/secret#az-keyvault-create -[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az-keyvault-secret-set -[az-login]: /cli/azure/reference-index#az-login -[service-principal-auth]: container-registry-auth-service-principal.md --<!-- IMAGES --> -[quick-build-01-fork]: ~/reusable-content/ce-skilling/azure/media/container-registry/quick-build-01-fork.png -[quick-build-02-browser]: ./media/container-registry-tutorial-quick-build/quick-build-02-browser.png |
container-registry | Container Registry Tutorial Sign Build Push | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md | - Title: Sign container images with Notation and Azure Key Vault using a self-signed certificate -description: In this tutorial you'll learn to create a self-signed certificate in Azure Key Vault (AKV), build and sign a container image stored in Azure Container Registry (ACR) with notation and AKV, and then verify the container image with notation. ----- Previously updated : 9/3/2024---# Sign container images with Notation and Azure Key Vault using a self-signed certificate --Signing container images is a process that ensures their authenticity and integrity. This is achieved by adding a digital signature to the container image, which can be validated during deployment. The signature helps to verify that the image is from a trusted publisher and has not been modified. [Notation](https://github.com/notaryproject/notation) is an open source supply chain security tool developed by the [Notary Project community](https://notaryproject.dev/) and backed by Microsoft, which supports signing and verifying container images and other artifacts. The Azure Key Vault (AKV) is used to store certificates with signing keys that can be used by Notation with the Notation AKV plugin (azure-kv) to sign and verify container images and other artifacts. The Azure Container Registry (ACR) allows you to attach signatures to container images and other artifacts as well as view those signatures. --In this tutorial: --> [!div class="checklist"] -> * Install Notation CLI and AKV plugin -> * Create a self-signed certificate in AKV -> * Build and push a container image with [ACR Tasks](container-registry-tasks-overview.md) -> * Sign a container image with Notation CLI and AKV plugin -> * Validate a container image against the signature with Notation CLI -> * Timestamping --## Prerequisites --* Create or use an [Azure Container Registry](../container-registry/container-registry-get-started-azure-cli.md) for storing container images and signatures -* Create or use an [Azure Key Vault](/azure/key-vault/general/quick-create-cli) for managing certificates -* Install and configure the latest [Azure CLI](/cli/azure/install-azure-cli), or Run commands in the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) --## Install Notation CLI and AKV plugin --1. Install Notation v1.2.0 on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments. -- ```bash - # Download, extract and install - curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.2.0/notation_1.2.0_linux_amd64.tar.gz - tar xvzf notation.tar.gz - - # Copy the Notation binary to the desired bin directory in your $PATH, for example - cp ./notation /usr/local/bin - ``` --2. Install the Notation Azure Key Vault plugin `azure-kv` v1.2.0 on a Linux amd64 environment. -- > [!NOTE] - > The URL and SHA256 checksum for the Notation Azure Key Vault plugin can be found on the plugin's [release page](https://github.com/Azure/notation-azure-kv/releases). -- ```bash - notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.2.0/notation-azure-kv_1.2.0_linux_amd64.tar.gz --sha256sum 06bb5198af31ce11b08c4557ae4c2cbfb09878dfa6b637b7407ebc2d57b87b34 - ``` --3. 
List the available plugins and confirm that the `azure-kv` plugin with version `1.2.0` is included in the list. -- ```bash - notation plugin ls - ``` --## Configure environment variables --> [!NOTE] -> For easy execution of commands in the tutorial, provide values for the Azure resources to match the existing ACR and AKV resources. --1. Configure AKV resource names. -- ```bash - AKV_SUB_ID=myAkvSubscriptionId - AKV_RG=myAkvResourceGroup - # Name of the existing AKV used to store the signing keys - AKV_NAME=myakv - # Name of the certificate created in AKV - CERT_NAME=wabbit-networks-io - CERT_SUBJECT="CN=wabbit-networks.io,O=Notation,L=Seattle,ST=WA,C=US" - CERT_PATH=./${CERT_NAME}.pem - ``` --2. Configure ACR and image resource names. -- ```bash - ACR_SUB_ID=myAcrSubscriptionId - ACR_RG=myAcrResourceGroup - # Name of the existing registry example: myregistry.azurecr.io - ACR_NAME=myregistry - # Existing full domain of the ACR - REGISTRY=$ACR_NAME.azurecr.io - # Container name inside ACR where image will be stored - REPO=net-monitor - TAG=v1 - IMAGE=$REGISTRY/${REPO}:$TAG - # Source code directory containing Dockerfile to build - IMAGE_SOURCE=https://github.com/wabbit-networks/net-monitor.git#main - ``` --## Sign in with Azure CLI --```bash -az login -``` --To learn more about Azure CLI and how to sign in with it, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli). --## Secure access permissions to ACR and AKV --When working with ACR and AKV, itΓÇÖs essential to grant the appropriate permissions to ensure secure and controlled access. You can authorize access for different entities, such as user principals, service principals, or managed identities, depending on your specific scenarios. In this tutorial, the access is authorized to a signed-in Azure user. --### Authorize access to ACR --The `AcrPull` and `AcrPush` roles are required for signing container images in ACR. --1. Set the subscription that contains the ACR resource -- ```bash - az account set --subscription $ACR_SUB_ID - ``` --2. Assign the roles -- ```bash - USER_ID=$(az ad signed-in-user show --query id -o tsv) - az role assignment create --role "AcrPull" --role "AcrPush" --assignee $USER_ID --scope "/subscriptions/$ACR_SUB_ID/resourceGroups/$ACR_RG/providers/Microsoft.ContainerRegistry/registries/$ACR_NAME" - ``` --### Authorize access to AKV --In this section, weΓÇÖll explore two options for authorizing access to AKV. --#### Use Azure RBAC (Recommended) --The following roles are required for signing using self-signed certificates: -- `Key Vault Certificates Officer` for creating and reading certificates-- `Key Vault Certificates User`for reading existing certificates-- `Key Vault Crypto User` for signing operations--To learn more about Key Vault access with Azure RBAC, see [Use an Azure RBAC for managing access](/azure/key-vault/general/rbac-guide). --1. Set the subscription that contains the AKV resource -- ```bash - az account set --subscription $AKV_SUB_ID - ``` --2. 
Assign the roles -- ```bash - USER_ID=$(az ad signed-in-user show --query id -o tsv) - az role assignment create --role "Key Vault Certificates Officer" --role "Key Vault Crypto User" --assignee $USER_ID --scope "/subscriptions/$AKV_SUB_ID/resourceGroups/$AKV_RG/providers/Microsoft.KeyVault/vaults/$AKV_NAME" - ``` --#### Assign access policy in AKV (legacy) --The following permissions are required for an identity: -- `Create` permissions for creating a certificate-- `Get` permissions for reading existing certificates-- `Sign` permissions for signing operations--To learn more about assigning policy to a principal, see [Assign Access Policy](/azure/key-vault/general/assign-access-policy). --1. Set the subscription that contains the AKV resource: -- ```bash - az account set --subscription $AKV_SUB_ID - ``` --2. Set the access policy in AKV: -- ```bash - USER_ID=$(az ad signed-in-user show --query id -o tsv) - az keyvault set-policy -n $AKV_NAME --certificate-permissions create get --key-permissions sign --object-id $USER_ID - ``` --> [!IMPORTANT] -> This example shows the minimum permissions needed for creating a certificate and signing a container image. Depending on your requirements, you may need to grant additional permissions. --## Create a self-signed certificate in AKV (Azure CLI) --The following steps show how to create a self-signed certificate for testing purpose. --1. Create a certificate policy file. -- Once the certificate policy file is executed as below, it creates a valid certificate compatible with [Notary Project certificate requirement](https://github.com/notaryproject/specifications/blob/v1.0.0/specs/signature-specification.md#certificate-requirements) in AKV. The value for `ekus` is for code-signing, but isn't required for notation to sign artifacts. The subject is used later as trust identity that user trust during verification. -- ```bash - cat <<EOF > ./my_policy.json - { - "issuerParameters": { - "certificateTransparency": null, - "name": "Self" - }, - "keyProperties": { - "exportable": false, - "keySize": 2048, - "keyType": "RSA", - "reuseKey": true - }, - "secretProperties": { - "contentType": "application/x-pem-file" - }, - "x509CertificateProperties": { - "ekus": [ - "1.3.6.1.5.5.7.3.3" - ], - "keyUsage": [ - "digitalSignature" - ], - "subject": "$CERT_SUBJECT", - "validityInMonths": 12 - } - } - EOF - ``` --2. Create the certificate. -- ```bash - az keyvault certificate create -n $CERT_NAME --vault-name $AKV_NAME -p @my_policy.json - ``` --## Sign a container image with Notation CLI and AKV plugin --1. Authenticate to your ACR by using your individual Azure identity. -- ```bash - az acr login --name $ACR_NAME - ``` --> [!IMPORTANT] -> If you have Docker installed on your system and used `az acr login` or `docker login` to authenticate to your ACR, your credentials are already stored and available to notation. In this case, you donΓÇÖt need to run `notation login` again to authenticate to your ACR. To learn more about authentication options for notation, see [Authenticate with OCI-compliant registries](https://notaryproject.dev/docs/user-guides/how-to/registry-authentication/). --2. Build and push a new image with ACR Tasks. Always use the digest value to identify the image for signing since tags are mutable and can be overwritten. 
-- ```bash - DIGEST=$(az acr build -r $ACR_NAME -t $REGISTRY/${REPO}:$TAG $IMAGE_SOURCE --no-logs --query "outputImages[0].digest" -o tsv) - IMAGE=$REGISTRY/${REPO}@$DIGEST - ``` -- In this tutorial, if the image has already been built and is stored in the registry, the tag serves as an identifier for that image for convenience. -- ```bash - IMAGE=$REGISTRY/${REPO}:$TAG - ``` --3. Get the Key ID of the signing key. A certificate in AKV can have multiple versions, the following command gets the Key ID of the latest version. -- ```bash - KEY_ID=$(az keyvault certificate show -n $CERT_NAME --vault-name $AKV_NAME --query 'kid' -o tsv) - ``` --4. Sign the container image with the [COSE](https://datatracker.ietf.org/doc/html/rfc9052) signature format using the signing key ID. To sign with a self-signed certificate, you need to set the plugin configuration value `self_signed=true`. -- ```bash - notation sign --signature-format cose --id $KEY_ID --plugin azure-kv --plugin-config self_signed=true $IMAGE - ``` -- To authenticate with AKV, by default, the following credential types if enabled will be tried in order: - - - [Environment credential](/dotnet/api/azure.identity.environmentcredential) - - [Workload identity credential](/dotnet/api/azure.identity.workloadidentitycredential) - - [Managed identity credential](/dotnet/api/azure.identity.managedidentitycredential) - - [Azure CLI credential](/dotnet/api/azure.identity.azureclicredential) - - If you want to specify a credential type, use an additional plugin configuration called `credential_type`. For example, you can explicitly set `credential_type` to `azurecli` for using Azure CLI credential, as demonstrated below: - - ```bash - notation sign --signature-format cose --id $KEY_ID --plugin azure-kv --plugin-config self_signed=true --plugin-config credential_type=azurecli $IMAGE - ``` -- See below table for the values of `credential_type` for various credential types. -- | Credential type | Value for `credential_type` | - | - | -- | - | Environment credential | `environment` | - | Workload identity credential | `workloadid` | - | Managed identity credential | `managedid` | - | Azure CLI credential | `azurecli` | - -5. View the graph of signed images and associated signatures. -- ```bash - notation ls $IMAGE - ``` --## Verify a container image with Notation CLI --To verify the container image, add the root certificate that signs the leaf certificate to the trust store and create trust policies for verification. For the self-signed certificate used in this tutorial, the root certificate is the self-signed certificate itself. --1. Download public certificate. -- ```bash - az keyvault certificate download --name $CERT_NAME --vault-name $AKV_NAME --file $CERT_PATH - ``` --2. Add the downloaded public certificate to named trust store for signature verification. -- ```bash - STORE_TYPE="ca" - STORE_NAME="wabbit-networks.io" - notation cert add --type $STORE_TYPE --store $STORE_NAME $CERT_PATH - ``` - -3. List the certificate to confirm. -- ```bash - notation cert ls - ``` - -4. Configure trust policy before verification. -- Trust policies allow users to specify fine-tuned verification policies. The following example configures a trust policy named `wabbit-networks-images`, which applies to all artifacts in `$REGISTRY/$REPO` and uses the named trust store `$STORE_NAME` of type `$STORE_TYPE`. It also assumes that the user trusts a specific identity with the X.509 subject `$CERT_SUBJECT`. 
For more details, see [Trust store and trust policy specification](https://github.com/notaryproject/notaryproject/blob/v1.0.0/specs/trust-store-trust-policy.md). -- ```bash - cat <<EOF > ./trustpolicy.json - { - "version": "1.0", - "trustPolicies": [ - { - "name": "wabbit-networks-images", - "registryScopes": [ "$REGISTRY/$REPO" ], - "signatureVerification": { - "level" : "strict" - }, - "trustStores": [ "$STORE_TYPE:$STORE_NAME" ], - "trustedIdentities": [ - "x509.subject: $CERT_SUBJECT" - ] - } - ] - } - EOF - ``` --5. Use `notation policy` to import the trust policy configuration from a JSON file that we created previously. -- ```bash - notation policy import ./trustpolicy.json - notation policy show - ``` - -6. Use `notation verify` to verify the container image hasn't been altered since build time. -- ```bash - notation verify $IMAGE - ``` -- Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message. --## Timestamping --Since Notation v1.2.0 release, Notation supports [RFC 3161](https://www.rfc-editor.org/rfc/rfc3161) compliant timestamping. This enhancement extends the trust of signatures created within certificates validity, enabling successful signature verification even after certificates have expired. Timestamping reduces costs by eliminating the need to periodically re-sign images due to certificate expiry, which is especially critical when using short-lived certificates. For detailed instructions on how to sign and verify using timestamping, please refer to the [Notary Project timestamping guide](https://v1-2.notaryproject.dev/docs/user-guides/how-to/timestamping/). --## Next steps --Notation also provides CI/CD solutions on Azure Pipeline and GitHub Actions Workflow: --- [Sign and verify a container image with Notation in Azure Pipeline](/azure/security/container-secure-supply-chain/articles/notation-ado-task-sign)-- [Sign and verify a container image with Notation in GitHub Actions Workflow](https://github.com/marketplace/actions/notation-actions)--To validate signed image deployment in AKS or Kubernetes: --- [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli)-- [Use Ratify to validate and audit image deployment in any Kubernetes cluster](https://ratify.dev/)--[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ |
container-registry | Container Registry Tutorial Sign Trusted Ca | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-trusted-ca.md | - Title: Sign container images with Notation and Azure Key vault using a CA-issued certificate -description: In this tutorial learn to create a CA-issued certificate in Azure Key Vault, build and sign a container image stored in Azure Container Registry (ACR) with notation and AKV, and then verify the container image using notation. ----- Previously updated : 9/5/2024---# Sign container images with Notation and Azure Key Vault using a CA-issued certificate --Signing and verifying container images with a certificate issued by a trusted Certificate Authority (CA) is a valuable security practice. This security measure will help you to responsibly identify, authorize, and validate the identity of both the publisher of the container image and the container image itself. The Trusted Certificate Authorities (CAs) such as GlobalSign, DigiCert, and others play a crucial role in the validation of a user's or organization's identity, maintaining the security of digital certificates, and revoking the certificate immediately upon any risk or misuse. --Here are some essential components that help you to sign and verify container images with a certificate issued by a trusted CA: --* The [Notation](https://github.com/notaryproject/notation) is an open-source supply chain security tool developed by [Notary Project community](https://notaryproject.dev/) and backed by Microsoft, which supports signing and verifying container images and other artifacts. -* The Azure Key Vault (AKV), a cloud-based service for managing cryptographic keys, secrets, and certificates will help you ensure to securely store and manage a certificate with a signing key. -* The [Notation AKV plugin azure-kv](https://github.com/Azure/notation-azure-kv), the extension of Notation uses the keys stored in Azure Key Vault for signing and verifying the digital signatures of container images and artifacts. -* The Azure Container Registry (ACR) allows you to attach these signatures to the signed image and helps you to store and manage these container images. --When you verify the image, the signature is used to validate the integrity of the image and the identity of the signer. This helps to ensure that the container images are not tampered with and are from a trusted source. --In this article: --> [!div class="checklist"] -> * Install the notation CLI and AKV plugin -> * Create or import a certificate issued by a CA in AKV -> * Build and push a container image with ACR task -> * Sign a container image with Notation CLI and AKV plugin -> * Verify a container image signature with Notation CLI -> * Timestamping --## Prerequisites --* Create or use an [Azure Container Registry](../container-registry/container-registry-get-started-azure-cli.md) for storing container images and signatures -* Create or use an [Azure Key Vault.](/azure/key-vault/general/quick-create-cli) -* Install and configure the latest [Azure CLI](/cli/azure/install-azure-cli), or run commands in the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) --> [!NOTE] -> We recommend creating a new Azure Key Vault for storing certificates only. --## Install the notation CLI and AKV plugin --1. Install Notation v1.2.0 on a Linux amd64 environment. 
Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments. -- ```bash - # Download, extract and install - curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.2.0/notation_1.2.0_linux_amd64.tar.gz - tar xvzf notation.tar.gz -- # Copy the notation cli to the desired bin directory in your PATH, for example - cp ./notation /usr/local/bin - ``` --2. Install the Notation Azure Key Vault plugin `azure-kv` v1.2.0 on a Linux amd64 environment. -- > [!NOTE] - > The URL and SHA256 checksum for the Notation Azure Key Vault plugin can be found on the plugin's [release page](https://github.com/Azure/notation-azure-kv/releases). -- ```bash - notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.2.0/notation-azure-kv_1.2.0_linux_amd64.tar.gz --sha256sum 06bb5198af31ce11b08c4557ae4c2cbfb09878dfa6b637b7407ebc2d57b87b34 - ``` --3. List the available plugins and confirm that the `azure-kv` plugin with version `1.2.0` is included in the list. -- ```bash - notation plugin ls - ``` --## Configure environment variables --> [!NOTE] -> This guide uses environment variables for convenience when configuring the AKV and ACR. Update the values of these environment variables for your specific resources. --1. Configure environment variables for AKV and certificates -- ```bash - AKV_SUB_ID=myAkvSubscriptionId - AKV_RG=myAkvResourceGroup - AKV_NAME=myakv - - # Name of the certificate created or imported in AKV - CERT_NAME=wabbit-networks-io - - # X.509 certificate subject - CERT_SUBJECT="CN=wabbit-networks.io,O=Notation,L=Seattle,ST=WA,C=US" - ``` --2. Configure environment variables for ACR and images. - - ```bash - ACR_SUB_ID=myAcrSubscriptionId - ACR_RG=myAcrResourceGroup - # Name of the existing registry example: myregistry.azurecr.io - ACR_NAME=myregistry - # Existing full domain of the ACR - REGISTRY=$ACR_NAME.azurecr.io - # Container name inside ACR where image will be stored - REPO=net-monitor - TAG=v1 - # Source code directory containing Dockerfile to build - IMAGE_SOURCE=https://github.com/wabbit-networks/net-monitor.git#main - ``` --## Sign in with Azure CLI --```bash -az login -``` --To learn more about Azure CLI and how to sign in with it, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli). ---## Create or import a certificate issued by a CA in AKV --### Certificate requirements --When creating certificates for signing and verification, the certificates must meet the [Notary Project certificate requirement](https://github.com/notaryproject/specifications/blob/v1.0.0/specs/signature-specification.md#certificate-requirements). --Here are the requirements for root and intermediate certificates: -- The `basicConstraints` extension must be present and marked as critical. The `CA` field must be set `true`.-- The `keyUsage` extension must be present and marked `critical`. Bit positions for `keyCertSign` MUST be set. --Here are the requirements for certificates issued by a CA: -- X.509 certificate properties:- - Subject must contain common name (`CN`), country (`C`), state or province (`ST`), and organization (`O`). In this tutorial, `$CERT_SUBJECT` is used as the subject. - - X.509 key usage flag must be `DigitalSignature` only. - - Extended Key Usages (EKUs) must be empty or `1.3.6.1.5.5.7.3.3` (for Codesigning). -- Key properties:- - The `exportable` property must be set to `false`. 
- - Select a supported key type and size from the [Notary Project specification](https://github.com/notaryproject/specifications/blob/v1.0.0/specs/signature-specification.md#algorithm-selection). --> [!IMPORTANT] -> To ensure successful integration with [Image Integrity](/azure/aks/image-integrity), the content type of the certificate should be set to PEM. --> [!NOTE] -> This guide uses version 1.0.1 of the AKV plugin. Prior versions of the plugin had a limitation that required a specific certificate order in a certificate chain. Version 1.0.1 of the plugin does not have this limitation, so it is recommended that you use version 1.0.1 or later. --### Create a certificate issued by a CA --Create a certificate signing request (CSR) by following the instructions in [create certificate signing request](/azure/key-vault/certificates/create-certificate-signing-request). --> [!IMPORTANT] -> When merging the CSR, make sure you merge the entire chain that was returned by the CA vendor. --### Import the certificate in AKV --To import the certificate: --1. Get the certificate file from the CA vendor with the entire certificate chain. -2. Import the certificate into Azure Key Vault by following the instructions in [import a certificate](/azure/key-vault/certificates/tutorial-import-certificate). --> [!NOTE] -> If the certificate does not contain a certificate chain after creation or importing, you can obtain the intermediate and root certificates from your CA vendor. You can ask your vendor to provide you with a PEM file that contains the intermediate certificates (if any) and root certificate. This file can then be used at step 5 of [signing container images](#sign-a-container-image-with-notation-cli-and-akv-plugin). --## Sign a container image with Notation CLI and AKV plugin --When working with ACR and AKV, it's essential to grant the appropriate permissions to ensure secure and controlled access. You can authorize access for different entities, such as user principals, service principals, or managed identities, depending on your specific scenarios. In this tutorial, access is authorized for the signed-in Azure user. --### Authorize access to ACR --The `AcrPull` and `AcrPush` roles are required for building and signing container images in ACR. --1. Set the subscription that contains the ACR resource -- ```bash - az account set --subscription $ACR_SUB_ID - ``` --1. Assign the roles -- ```bash - USER_ID=$(az ad signed-in-user show --query id -o tsv) - az role assignment create --role "AcrPull" --role "AcrPush" --assignee $USER_ID --scope "/subscriptions/$ACR_SUB_ID/resourceGroups/$ACR_RG/providers/Microsoft.ContainerRegistry/registries/$ACR_NAME" - ``` --### Build and push container images to ACR --1. Authenticate to your ACR by using your individual Azure identity. -- ```bash - az acr login --name $ACR_NAME - ``` --> [!IMPORTANT] -> If you have Docker installed on your system and used `az acr login` or `docker login` to authenticate to your ACR, your credentials are already stored and available to notation. In this case, you don't need to run `notation login` again to authenticate to your ACR. To learn more about authentication options for notation, see [Authenticate with OCI-compliant registries](https://notaryproject.dev/docs/user-guides/how-to/registry-authentication/). --1. Build and push a new image with ACR Tasks. Always use `digest` to identify the image for signing, since tags are mutable and can be overwritten. 
-- ```bash - DIGEST=$(az acr build -r $ACR_NAME -t $REGISTRY/${REPO}:$TAG $IMAGE_SOURCE --no-logs --query "outputImages[0].digest" -o tsv) - IMAGE=$REGISTRY/${REPO}@$DIGEST - ``` --In this tutorial, if the image has already been built and is stored in the registry, you can use the tag as a convenient identifier for that image: --```bash -IMAGE=$REGISTRY/${REPO}:$TAG -``` --### Authorize access to AKV --#### Use Azure RBAC (Recommended) --1. Set the subscription that contains the AKV resource -- ```bash - az account set --subscription $AKV_SUB_ID - ``` --1. Assign the roles -- If the certificate contains the entire certificate chain, the principal must be assigned the following roles: - - `Key Vault Secrets User` for reading secrets - - `Key Vault Certificates User` for reading certificates - - `Key Vault Crypto User` for signing operations -- ```bash - USER_ID=$(az ad signed-in-user show --query id -o tsv) - az role assignment create --role "Key Vault Secrets User" --role "Key Vault Certificates User" --role "Key Vault Crypto User" --assignee $USER_ID --scope "/subscriptions/$AKV_SUB_ID/resourceGroups/$AKV_RG/providers/Microsoft.KeyVault/vaults/$AKV_NAME" - ``` -- If the certificate doesn't contain the chain, the principal must be assigned the following roles: - - `Key Vault Certificates User` for reading certificates - - `Key Vault Crypto User` for signing operations -- ```bash - USER_ID=$(az ad signed-in-user show --query id -o tsv) - az role assignment create --role "Key Vault Certificates User" --role "Key Vault Crypto User" --assignee $USER_ID --scope "/subscriptions/$AKV_SUB_ID/resourceGroups/$AKV_RG/providers/Microsoft.KeyVault/vaults/$AKV_NAME" - ``` --To learn more about Key Vault access with Azure RBAC, see [Use Azure RBAC for managing access](/azure/key-vault/general/rbac-guide). --#### Use access policy (Legacy) --To set the subscription that contains the AKV resources, run the following command: --```bash -az account set --subscription $AKV_SUB_ID -``` --If the certificate contains the entire certificate chain, the principal must be granted key permission `Sign`, secret permission `Get`, and certificate permission `Get`. To grant these permissions to the principal: --```bash -USER_ID=$(az ad signed-in-user show --query id -o tsv) -az keyvault set-policy -n $AKV_NAME --key-permissions sign --secret-permissions get --certificate-permissions get --object-id $USER_ID -``` --If the certificate doesn't contain the chain, the principal must be granted key permission `Sign` and certificate permission `Get`. To grant these permissions to the principal: --```bash -USER_ID=$(az ad signed-in-user show --query id -o tsv) -az keyvault set-policy -n $AKV_NAME --key-permissions sign --certificate-permissions get --object-id $USER_ID -``` --To learn more about assigning policy to a principal, see [Assign Access Policy](/azure/key-vault/general/assign-access-policy). --### Sign container images using the certificate in AKV --1. Get the Key ID for a certificate. A certificate in AKV can have multiple versions; the following command gets the Key ID for the latest version of the `$CERT_NAME` certificate. -- ```bash - KEY_ID=$(az keyvault certificate show -n $CERT_NAME --vault-name $AKV_NAME --query 'kid' -o tsv) - ``` --1. Sign the container image with the COSE signature format using the Key ID. 
-- If the certificate contains the entire certificate chain, run the following command: -- ```bash - notation sign --signature-format cose $IMAGE --id $KEY_ID --plugin azure-kv - ``` -- If the certificate does not contain the chain, use the `--plugin-config ca_certs=<ca_bundle_file>` parameter to pass the CA certificates in a PEM file to AKV plugin, run the following command: -- ```bash - notation sign --signature-format cose $IMAGE --id $KEY_ID --plugin azure-kv --plugin-config ca_certs=<ca_bundle_file> - ``` -- To authenticate with AKV, by default, the following credential types if enabled will be tried in order: - - [Environment credential](/dotnet/api/azure.identity.environmentcredential) - - [Workload identity credential](/dotnet/api/azure.identity.workloadidentitycredential) - - [Managed identity credential](/dotnet/api/azure.identity.managedidentitycredential) - - [Azure CLI credential](/dotnet/api/azure.identity.azureclicredential) - - If you want to specify a credential type, use an additional plugin configuration called `credential_type`. For example, you can explicitly set `credential_type` to `azurecli` for using Azure CLI credential, as demonstrated below: - - ```bash - notation sign --signature-format cose --id $KEY_ID --plugin azure-kv --plugin-config credential_type=azurecli $IMAGE - ``` -- See below table for the values of `credential_type` for various credential types. -- | Credential type | Value for `credential_type` | - | - | -- | - | Environment credential | `environment` | - | Workload identity credential | `workloadid` | - | Managed identity credential | `managedid` | - | Azure CLI credential | `azurecli` | --1. View the graph of signed images and associated signatures. -- ```bash - notation ls $IMAGE - ``` -- In the following example of output, a signature of type `application/vnd.cncf.notary.signature` identified by digest `sha256:d7258166ca820f5ab7190247663464f2dcb149df4d1b6c4943dcaac59157de8e` is associated to the `$IMAGE`. -- ``` - myregistry.azurecr.io/net-monitor@sha256:17cc5dd7dfb8739e19e33e43680e43071f07497ed716814f3ac80bd4aac1b58f - ΓööΓöÇΓöÇ application/vnd.cncf.notary.signature - ΓööΓöÇΓöÇ sha256:d7258166ca820f5ab7190247663464f2dcb149df4d1b6c4943dcaac59157de8e - ``` --## Verify a container image with Notation CLI --1. Add the root certificate to a named trust store for signature verification. If you do not have the root certificate, you can obtain it from your CA. The following example adds the root certificate `$ROOT_CERT` to the `$STORE_NAME` trust store. -- ```bash - STORE_TYPE="ca" - STORE_NAME="wabbit-networks.io" - notation cert add --type $STORE_TYPE --store $STORE_NAME $ROOT_CERT - ``` --2. List the root certificate to confirm the `$ROOT_CERT` is added successfully. -- ```bash - notation cert ls - ``` --3. Configure trust policy before verification. -- Trust policies allow users to specify fine-tuned verification policies. Use the following command to configure trust policy. -- ```bash - cat <<EOF > ./trustpolicy.json - { - "version": "1.0", - "trustPolicies": [ - { - "name": "wabbit-networks-images", - "registryScopes": [ "$REGISTRY/$REPO" ], - "signatureVerification": { - "level" : "strict" - }, - "trustStores": [ "$STORE_TYPE:$STORE_NAME" ], - "trustedIdentities": [ - "x509.subject: $CERT_SUBJECT" - ] - } - ] - } - EOF - ``` -- The above `trustpolicy.json` file defines one trust policy named `wabbit-networks-images`. This trust policy applies to all the artifacts stored in the `$REGISTRY/$REPO` repositories. 
The named trust store `$STORE_NAME` of type `$STORE_TYPE` contains the root certificates. It also assumes that the user trusts a specific identity with the X.509 subject `$CERT_SUBJECT`. For more details, see [Trust store and trust policy specification](https://github.com/notaryproject/notaryproject/blob/v1.0.0/specs/trust-store-trust-policy.md). --4. Use `notation policy` to import the trust policy configuration from `trustpolicy.json`. -- ```bash - notation policy import ./trustpolicy.json - ``` --5. Show the trust policy configuration to confirm its successful import. -- ```bash - notation policy show - ``` - -5. Use `notation verify` to verify the integrity of the image: -- ```bash - notation verify $IMAGE - ``` -- Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message. An example of output: -- `Successfully verified signature for myregistry.azurecr.io/net-monitor@sha256:17cc5dd7dfb8739e19e33e43680e43071f07497ed716814f3ac80bd4aac1b58f` --## FAQ --- What should I do if the certificate is expired? - - If the certificate has expired, it invalidates the signature. To resolve this issue, you should renew the certificate and sign container images again. Learn more about [Renew your Azure Key Vault certificates](/azure/key-vault/certificates/overview-renew-certificate). --- What should I do if the root certificate is expired? -- If the root certificate has expired, it invalidates the signature. To resolve this issue, you should obtain a new certificate from a trusted CA vendor and sign container images again. Replace the expired root certificate with the new one from the CA vendor. --- What should I do if the certificate is revoked?-- If the certificate is revoked, it invalidates the signature. The most common reason for revoking a certificate is when the certificateΓÇÖs private key has been compromised. To resolve this issue, you should obtain a new certificate from a trusted CA vendor and sign container images again. --## Timestamping --Since Notation v1.2.0 release, Notation supports [RFC 3161](https://www.rfc-editor.org/rfc/rfc3161) compliant timestamping. This enhancement extends the trust of signatures created within certificates validity, enabling successful signature verification even after certificates have expired. Timestamping reduces costs by eliminating the need to periodically re-sign images due to certificate expiry, which is especially critical when using short-lived certificates. For detailed instructions on how to sign and verify using timestamping, please refer to the [Notary Project timestamping guide](https://v1-2.notaryproject.dev/docs/user-guides/how-to/timestamping/). --## Next steps --Notation also provides CI/CD solutions on Azure Pipeline and GitHub Actions Workflow: --- [Sign and verify a container image with Notation in Azure Pipeline](/azure/security/container-secure-supply-chain/articles/notation-ado-task-sign)-- [Sign and verify a container image with Notation in GitHub Actions Workflow](https://github.com/marketplace/actions/notation-actions)--To validate signed image deployment in AKS or Kubernetes: --- [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli)-- [Use Ratify to validate and audit image deployment in any Kubernetes cluster](https://ratify.dev/)--[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ |
container-registry | Container Registry Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-vnet.md | - Title: Restrict access using a service endpoint -description: Restrict access to an Azure container registry using a service endpoint in an Azure virtual network. Service endpoint access is a feature of the Premium service tier. ---- Previously updated : 10/31/2023----# Restrict access to a container registry using a service endpoint in an Azure virtual network --[Azure Virtual Network](../virtual-network/virtual-networks-overview.md) provides secure, private networking for your Azure and on-premises resources. A [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) allows you to secure your container registry's public IP address to only your virtual network. This endpoint gives traffic an optimal route to the resource over the Azure backbone network. The identities of the virtual network and the subnet are also transmitted with each request. --This article shows how to configure a container registry service endpoint (preview) in a virtual network. --Each registry supports a maximum of 100 virtual network rules. --> [!IMPORTANT] -> Azure Container Registry now supports [Azure Private Link](container-registry-private-link.md), enabling private endpoints from a virtual network to be placed on a registry. Private endpoints are accessible from within the virtual network, using private IP addresses. We recommend using private endpoints instead of service endpoints in most network scenarios. -> The container registry does not support enabling both private link and service endpoint features configured from a virtual network. So, we recommend running the list and removing the [network rules](container-registry-vnet.md#remove-network-rules) as required. --Configuring a registry service endpoint is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md). --## Preview limitations --* Future development of service endpoints for Azure Container Registry isn't currently planned. We recommend using [private endpoints](container-registry-private-link.md) instead. -* You can't use the Azure portal to configure service endpoints on a registry. -* Only an [Azure Kubernetes Service](/azure/aks/intro-kubernetes) cluster or Azure [virtual machine](/azure/virtual-machines/linux/overview) can be used as a host to access a container registry using a service endpoint. *Other Azure services including Azure Container Instances aren't supported.* -* Service endpoints for Azure Container Registry aren't supported in the Azure US Government cloud or Microsoft Azure operated by 21Vianet cloud. ---## Prerequisites --* To use the Azure CLI steps in this article, Azure CLI version 2.0.58 or later is required. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. --* If you don't already have a container registry, create one (Premium tier required) and push a sample image such as `hello-world` from Docker Hub. For example, use the [Azure portal][quickstart-portal] or the [Azure CLI][quickstart-cli] to create a registry. --* If you want to restrict registry access using a service endpoint in a different Azure subscription, register the resource provider for Azure Container Registry in that subscription. 
For example: -- ```azurecli - az account set --subscription <Name or ID of subscription of virtual network> -- az provider register --namespace Microsoft.ContainerRegistry - ``` ---## Configure network access for registry --In this section, configure your container registry to allow access from a subnet in an Azure virtual network. Steps are provided using the Azure CLI. --### Add a service endpoint to a subnet --When you create a VM, Azure by default creates a virtual network in the same resource group. The name of the virtual network is based on the name of the virtual machine. For example, if you name your virtual machine *myDockerVM*, the default virtual network name is *myDockerVMVNET*, with a subnet named *myDockerVMSubnet*. Verify this by using the [az network vnet list][az-network-vnet-list] command: --```azurecli -az network vnet list \ - --resource-group myResourceGroup \ - --query "[].{Name: name, Subnet: subnets[0].name}" -``` --Output: --```output -[ - { - "Name": "myDockerVMVNET", - "Subnet": "myDockerVMSubnet" - } -] -``` --Use the [az network vnet subnet update][az-network-vnet-subnet-update] command to add a **Microsoft.ContainerRegistry** service endpoint to your subnet. Substitute the names of your virtual network and subnet in the following command: --```azurecli -az network vnet subnet update \ - --name myDockerVMSubnet \ - --vnet-name myDockerVMVNET \ - --resource-group myResourceGroup \ - --service-endpoints Microsoft.ContainerRegistry -``` --Use the [az network vnet subnet show][az-network-vnet-subnet-show] command to retrieve the resource ID of the subnet. You need this in a later step to configure a network access rule. --```azurecli -az network vnet subnet show \ - --name myDockerVMSubnet \ - --vnet-name myDockerVMVNET \ - --resource-group myResourceGroup \ - --query "id" - --output tsv -``` --Output: --```output -/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myDockerVMVNET/subnets/myDockerVMSubnet -``` --### Change default network access to registry --By default, an Azure container registry allows connections from hosts on any network. To limit access to a selected network, change the default action to deny access. Substitute the name of your registry in the following [az acr update][az-acr-update] command: --```azurecli -az acr update --name myContainerRegistry --default-action Deny -``` --### Add network rule to registry --Use the [az acr network-rule add][az-acr-network-rule-add] command to add a network rule to your registry that allows access from the VM's subnet. Substitute the container registry's name and the resource ID of the subnet in the following command: --```azurecli -az acr network-rule add \ - --name mycontainerregistry \ - --subnet <subnet-resource-id> -``` --## Verify access to the registry --After waiting a few minutes for the configuration to update, verify that the VM can access the container registry. Make an SSH connection to your VM, and run the [az acr login][az-acr-login] command to log in to your registry. --```azurecli -az acr login --name mycontainerregistry -``` --You can perform registry operations such as running `docker pull` to pull a sample image from the registry. Substitute an image and tag value appropriate for your registry, prefixed with the registry login server name (all lowercase): --```bash -docker pull mycontainerregistry.azurecr.io/hello-world:v1 -``` --Docker successfully pulls the image to the VM.
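If you also want to confirm push access through the service endpoint, you can retag the pulled image and push it back from the VM. This is a minimal sketch; the registry name and tags are placeholders, and it assumes you already ran `az acr login` on the VM as shown above.

```bash
# Retag the sample image that was pulled earlier, then push it through the service endpoint
docker tag mycontainerregistry.azurecr.io/hello-world:v1 mycontainerregistry.azurecr.io/hello-world:v2

docker push mycontainerregistry.azurecr.io/hello-world:v2
```

A successful push confirms that the subnet's network rule allows both read and write operations against the registry.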
--These examples demonstrate that you can access the private container registry through the network access rule. However, the registry can't be accessed from a host that doesn't have a network access rule configured. If you attempt to log in from another host using the `az acr login` command or `docker login` command, output is similar to the following: --```output -Error response from daemon: login attempt to https://xxxxxxx.azurecr.io/v2/ failed with status: 403 Forbidden -``` --## Restore default registry access --To restore the registry to allow access by default, remove any network rules that are configured. Then set the default action to allow access. --### Remove network rules --To see a list of network rules configured for your registry, run the following [az acr network-rule list][az-acr-network-rule-list] command: --```azurecli -az acr network-rule list --name mycontainerregistry -``` --For each rule that is configured, run the [az acr network-rule remove][az-acr-network-rule-remove] command to remove it. For example: --```azurecli -# Remove a rule that allows access for a subnet. Substitute the subnet resource ID. --az acr network-rule remove \ - --name mycontainerregistry \ - --subnet /subscriptions/ \ - xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myDockerVMVNET/subnets/myDockerVMSubnet -``` --### Allow access --Substitute the name of your registry in the following [az acr update][az-acr-update] command: -```azurecli -az acr update --name myContainerRegistry --default-action Allow -``` --## Clean up resources --If you created all the Azure resources in the same resource group and no longer need them, you can optionally delete the resources by using a single [az group delete](/cli/azure/group) command: --```azurecli -az group delete --name myResourceGroup -``` --## Next steps --* To restrict access to a registry using a private endpoint in a virtual network, see [Configure Azure Private Link for an Azure container registry](container-registry-private-link.md). -* If you need to set up registry access rules from behind a client firewall, see [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md).
---<!-- IMAGES --> --[acr-subnet-service-endpoint]: ./media/container-registry-vnet/acr-subnet-service-endpoint.png ---<!-- LINKS - External --> -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ -[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/ -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-login]: https://docs.docker.com/engine/reference/commandline/login/ -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-push]: https://docs.docker.com/engine/reference/commandline/push/ -[docker-tag]: https://docs.docker.com/engine/reference/commandline/tag/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ --<!-- LINKS - Internal --> -[azure-cli]: /cli/azure/install-azure-cli -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-acr-repository-show]: /cli/azure/acr/repository#az_acr_repository_show -[az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list -[az-acr-login]: /cli/azure/acr#az_acr_login -[az-acr-network-rule-add]: /cli/azure/acr/network-rule/#az_acr_network_rule_add -[az-acr-network-rule-remove]: /cli/azure/acr/network-rule/#az_acr_network_rule_remove -[az-acr-network-rule-list]: /cli/azure/acr/network-rule/#az_acr_network_rule_list -[az-acr-run]: /cli/azure/acr#az_acr_run -[az-acr-update]: /cli/azure/acr#az_acr_update -[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac -[az-group-create]: /cli/azure/group -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create -[az-vm-create]: /cli/azure/vm#az_vm_create -[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet/#az_network_vnet_subnet_show -[az-network-vnet-subnet-update]: /cli/azure/network/vnet/subnet/#az_network_vnet_subnet_update -[az-network-vnet-list]: /cli/azure/network/vnet/#az_network_vnet_list -[quickstart-portal]: container-registry-get-started-portal.md -[quickstart-cli]: container-registry-get-started-azure-cli.md |
container-registry | Container Registry Webhook Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-webhook-reference.md | - Title: Registry webhook schema reference -description: Reference for JSON payload for webhook requests in an Azure container registry, which are generated when webhooks are enabled for artifact push or delete events --- Previously updated : 10/31/2023----# Azure Container Registry webhook reference --You can [configure webhooks](container-registry-webhook.md) for your container registry that generate events when certain actions are performed against it. For example, enable webhooks that are triggered when a container image or Helm chart is pushed to a registry, or deleted. When a webhook is triggered, Azure Container Registry issues an HTTP or HTTPS request containing information about the event to an endpoint you specify. Your endpoint can then process the webhook and act accordingly. --The following sections detail the schema of webhook requests generated by supported events. The event sections contain the payload schema for the event type, an example request payload, and one or more example commands that would trigger the webhook. --For information about configuring webhooks for your Azure container registry, see [Using Azure Container Registry webhooks](container-registry-webhook.md). --## Webhook requests --### HTTP request --A triggered webhook makes an HTTP `POST` request to the URL endpoint you specified when you configured the webhook. --### HTTP headers --Webhook requests include a `Content-Type` of `application/json` if you have not specified a `Content-Type` custom header for your webhook. --No other headers are added to the request beyond those custom headers you might have specified for the webhook. --## Push event --Webhook triggered when a container image is pushed to a repository. --### Push event payload --|Element|Type|Description| -|-|-|--| -|`id`|String|The ID of the webhook event.| -|`timestamp`|DateTime|The time at which the webhook event was triggered.| -|`action`|String|The action that triggered the webhook event.| -|[target](#target)|Complex Type|The target of the event that triggered the webhook event.| -|[request](#request)|Complex Type|The request that generated the webhook event.| --### <a name="target"></a>target --|Element|Type|Description| -||-|--| -|`mediaType`|String|The MIME type of the referenced object.| -|`size`|Int32|The number of bytes of the content. Same as Length field.| -|`digest`|String|The digest of the content, as defined by the Registry V2 HTTP API Specification.| -|`length`|Int32|The number of bytes of the content. 
Same as Size field.| -|`repository`|String|The repository name.| -|`tag`|String|The image tag name.| --### <a name="request"></a>request --|Element|Type|Description| -||-|--| -|`id`|String|The ID of the request that initiated the event.| -|`host`|String|The externally accessible hostname of the registry instance, as specified by the HTTP host header on incoming requests.| -|`method`|String|The request method that generated the event.| -|`useragent`|String|The user agent header of the request.| --### Payload example: image push event --```JSON -{ - "id": "cb8c3971-9adc-488b-xxxx-43cbb4974ff5", - "timestamp": "2017-11-17T16:52:01.343145347Z", - "action": "push", - "target": { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 524, - "digest": "sha256:xxxxd5c8786bb9e621a45ece0dbxxxx1cdc624ad20da9fe62e9d25490f33xxxx", - "length": 524, - "repository": "hello-world", - "tag": "v1" - }, - "request": { - "id": "3cbb6949-7549-4fa1-xxxx-a6d5451dffc7", - "host": "myregistry.azurecr.io", - "method": "PUT", - "useragent": "docker/17.09.0-ce go/go1.8.3 git-commit/afdb6d4 kernel/4.10.0-27-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.09.0-ce \\(linux\\))" - } -} -``` --Example [Docker CLI](https://docs.docker.com/engine/reference/commandline/cli/) command that triggers the image **push** event webhook: --```bash -docker push myregistry.azurecr.io/hello-world:v1 -``` --## Chart push event --Webhook triggered when a Helm chart is pushed to a repository. --### Chart push event payload --|Element|Type|Description| -|-|-|--| -|`id`|String|The ID of the webhook event.| -|`timestamp`|DateTime|The time at which the webhook event was triggered.| -|`action`|String|The action that triggered the webhook event.| -|[target](#helm_target)|Complex Type|The target of the event that triggered the webhook event.| --### <a name="helm_target"></a>target --|Element|Type|Description| -||-|--| -|`mediaType`|String|The MIME type of the referenced object.| -|`size`|Int32|The number of bytes of the content.| -|`digest`|String|The digest of the content, as defined by the Registry V2 HTTP API Specification.| -|`repository`|String|The repository name.| -|`tag`|String|The chart tag name.| -|`name`|String|The chart name.| -|`version`|String|The chart version.| --### Payload example: chart push event --```JSON -{ - "id": "6356e9e0-627f-4fed-xxxx-d9059b5143ac", - "timestamp": "2019-03-05T23:45:31.2614267Z", - "action": "chart_push", - "target": { - "mediaType": "application/vnd.acr.helm.chart", - "size": 25265, - "digest": "sha256:xxxx8075264b5ba7c14c23672xxxx52ae6a3ebac1c47916e4efe19cd624dxxxx", - "repository": "repo", - "tag": "wordpress-5.4.0.tgz", - "name": "wordpress", - "version": "5.4.0.tgz" - } -} -``` --Example [Azure CLI](/cli/azure/acr) command that triggers the **chart_push** event webhook: --```azurecli -az acr helm push wordpress-5.4.0.tgz --name MyRegistry -``` --## Delete event --Webhook triggered when an image repository or manifest is deleted. Not triggered when a tag is deleted. 
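For example, removing only a tag leaves the manifest in place, so an untag operation alone shouldn't produce this event; deleting the manifest or the repository does. The following command is illustrative, and the registry, repository, and tag names are placeholders.

```azurecli
# Removes only the tag; the manifest remains, so no delete event webhook is expected
az acr repository untag --name MyRegistry --image MyRepository:MyTag
```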
--### Delete event payload --|Element|Type|Description| -|-|-|--| -|`id`|String|The ID of the webhook event.| -|`timestamp`|DateTime|The time at which the webhook event was triggered.| -|`action`|String|The action that triggered the webhook event.| -|[target](#delete_target)|Complex Type|The target of the event that triggered the webhook event.| -|[request](#delete_request)|Complex Type|The request that generated the webhook event.| --### <a name="delete_target"></a> target --|Element|Type|Description| -||-|--| -|`mediaType`|String|The MIME type of the referenced object.| -|`digest`|String|The digest of the content, as defined by the Registry V2 HTTP API Specification.| -|`repository`|String|The repository name.| --### <a name="delete_request"></a> request --|Element|Type|Description| -||-|--| -|`id`|String|The ID of the request that initiated the event.| -|`host`|String|The externally accessible hostname of the registry instance, as specified by the HTTP host header on incoming requests.| -|`method`|String|The request method that generated the event.| -|`useragent`|String|The user agent header of the request.| --### Payload example: image delete event --```JSON -{ - "id": "afc359ce-df7f-4e32-xxxx-1ff8aa80927b", - "timestamp": "2017-11-17T16:54:53.657764628Z", - "action": "delete", - "target": { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "digest": "sha256:xxxxd5c8786bb9e621a45ece0dbxxxx1cdc624ad20da9fe62e9d25490f33xxxx", - "repository": "hello-world" - }, - "request": { - "id": "3d78b540-ab61-4f75-xxxx-7ca9ecf559b3", - "host": "myregistry.azurecr.io", - "method": "DELETE", - "useragent": "python-requests/2.18.4" - } - } -``` --Example [Azure CLI](/cli/azure/acr) commands that trigger a **delete** event webhook: --```azurecli -# Delete repository -az acr repository delete --name MyRegistry --repository MyRepository --# Delete image -az acr repository delete --name MyRegistry --image MyRepository:MyTag -``` --## Chart delete event --Webhook triggered when a Helm chart or repository is deleted. 
--### Chart delete event payload --|Element|Type|Description| -|-|-|--| -|`id`|String|The ID of the webhook event.| -|`timestamp`|DateTime|The time at which the webhook event was triggered.| -|`action`|String|The action that triggered the webhook event.| -|[target](#chart_delete_target)|Complex Type|The target of the event that triggered the webhook event.| --### <a name="chart_delete_target"></a> target --|Element|Type|Description| -||-|--| -|`mediaType`|String|The MIME type of the referenced object.| -|`size`|Int32|The number of bytes of the content.| -|`digest`|String|The digest of the content, as defined by the Registry V2 HTTP API Specification.| -|`repository`|String|The repository name.| -|`tag`|String|The chart tag name.| -|`name`|String|The chart name.| -|`version`|String|The chart version.| --### Payload example: chart delete event --```JSON -{ - "id": "338a3ef7-ad68-4128-xxxx-fdd3af8e8f67", - "timestamp": "2019-03-06T00:10:48.1270754Z", - "action": "chart_delete", - "target": { - "mediaType": "application/vnd.acr.helm.chart", - "size": 25265, - "digest": "sha256:xxxx8075264b5ba7c14c23672xxxx52ae6a3ebac1c47916e4efe19cd624dxxxx", - "repository": "repo", - "tag": "wordpress-5.4.0.tgz", - "name": "wordpress", - "version": "5.4.0.tgz" - } -} -``` --Example [Azure CLI](/cli/azure/acr) command that triggers the **chart_delete** event webhook: --```azurecli -az acr helm delete wordpress --version 5.4.0 --name MyRegistry -``` --## Next steps --[Using Azure Container Registry webhooks](container-registry-webhook.md) |
container-registry | Container Registry Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-webhook.md | - Title: Webhooks to respond to registry actions -description: Learn how to use webhooks to trigger events when push or pull actions occur in your registry repositories. ---- Previously updated : 10/31/2023----# Using Azure Container Registry webhooks --An Azure container registry stores and manages private Docker container images, similar to the way Docker Hub stores public Docker images. It can also host repositories for [Helm charts](container-registry-helm-repos.md) (preview), a packaging format to deploy applications to Kubernetes. You can use webhooks to trigger events when certain actions take place in one of your registry repositories. Webhooks can respond to events at the registry level, or they can be scoped down to a specific repository tag. With a [geo-replicated](container-registry-geo-replication.md) registry, you configure each webhook to respond to events in a specific regional replica. --The endpoint for a webhook must be publicly accessible from the registry. You can configure registry webhook requests to authenticate to a secured endpoint. --For details on webhook requests, see [Azure Container Registry webhook schema reference](container-registry-webhook-reference.md). --## Prerequisites --* Azure container registry - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md). The [Azure Container Registry service tiers](container-registry-skus.md) have different webhooks quotas. -* Docker CLI - To set up your local computer as a Docker host and access the Docker CLI commands, install [Docker Engine](https://docs.docker.com/engine/installation/). --## Create webhook - Azure portal --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to the container registry in which you want to create a webhook. -1. Under **Services**, select **Webhooks**. -1. Select **Add** in the webhook toolbar. -1. Complete the *Create webhook* form with the following information: --| Value | Description | -||| -| Webhook name | The name you want to give to the webhook. It may contain only letters and numbers, and must be 5-50 characters in length. | -| Location | For a [geo-replicated](container-registry-geo-replication.md) registry, specify the Azure region of the registry replica. -| Service URI | The URI where the webhook should send POST notifications. | -| Custom headers | Headers you want to pass along with the POST request. They should be in "key: value" format. | -| Trigger actions | Actions that trigger the webhook. Actions include image push, image delete, Helm chart push, Helm chart delete, and image quarantine. You can choose one or more actions to trigger the webhook. | -| Status | The status for the webhook after it's created. It's enabled by default. | -| Scope | The scope at which the webhook works. If not specified, the scope is for all events in the registry. It can be specified for a repository or a tag by using the format "repository:tag", or "repository:*" for all tags under a repository. 
| --Example webhook form: --![Screenshot that shows the ACR webhook creation U I in the Azure portal.](./media/container-registry-webhook/webhook.png) --## Create webhook - Azure CLI --To create a webhook using the Azure CLI, use the [az acr webhook create](/cli/azure/acr/webhook#az-acr-webhook-create) command. The following command creates a webhook for all image delete events in the registry *mycontainerregistry*: --```azurecli-interactive -az acr webhook create --registry mycontainerregistry --name myacrwebhook01 --actions delete --uri http://webhookuri.com -``` --## Test webhook --### Azure portal --Prior to using the webhook, you can test it with the **Ping** button. Ping sends a generic POST request to the specified endpoint and logs the response. Using the ping feature can help you verify you've correctly configured the webhook. --1. Select the webhook you want to test. -2. In the top toolbar, select **Ping**. -3. Check the endpoint's response in the **HTTP STATUS** column. --![ACR webhook creation UI in the Azure portal](./media/container-registry-webhook/webhook-02.png) --### Azure CLI --To test an ACR webhook with the Azure CLI, use the [az acr webhook ping](/cli/azure/acr/webhook#az-acr-webhook-ping) command. --```azurecli-interactive -az acr webhook ping --registry mycontainerregistry --name myacrwebhook01 -``` --To see the results, use the [az acr webhook list-events](/cli/azure/acr/webhook) command. --```azurecli-interactive -az acr webhook list-events --registry mycontainerregistry08 --name myacrwebhook01 -``` --## Delete webhook --### Azure portal --Each webhook can be deleted by selecting the webhook and then the **Delete** button in the Azure portal. --### Azure CLI --```azurecli-interactive -az acr webhook delete --registry mycontainerregistry --name myacrwebhook01 -``` --## Next steps --### Webhook schema reference --For details on the format and properties of the JSON event payloads emitted by Azure Container Registry, see the webhook schema reference: --[Azure Container Registry webhook schema reference](container-registry-webhook-reference.md) --### Event Grid events --In addition to the native registry webhook events discussed in this article, Azure Container Registry can emit events to Event Grid: --[Quickstart: Send container registry events to Event Grid](container-registry-event-grid-quickstart.md) |
container-registry | Data Loss Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/data-loss-prevention.md | - Title: Disable export of artifacts -description: Set a registry property to prevent data exfiltration from a Premium Azure container registry. ---- Previously updated : 10/31/2023----# Disable export of artifacts from an Azure container registry --To prevent registry users in an organization from maliciously or accidentally leaking artifacts outside a virtual network, you can configure the registry's *export policy* to disable exports. --Export policy is a property introduced in API version **2021-06-01-preview** for Premium container registries. The `exportPolicy` property, when its status is set to `disabled`, blocks export of artifacts from a network-restricted registry when a user attempts to: --* [Import](container-registry-import-images.md) the registry's artifacts to another Azure container registry -* Create a registry [export pipeline](container-registry-transfer-images.md) to transfer artifacts to another container registry --> [!NOTE] -> Disabling export of artifacts does not prevent authorized users' access to the registry within the virtual network to pull artifacts or perform other data-plane operations. To audit this use, we recommend that you configure diagnostic settings to [monitor](monitor-service.md) registry operations. --## Prerequisites --* A Premium container registry configured with a [private endpoint](container-registry-private-link.md). ---## Other requirements to disable exports --* **Disable public network access** - To disable export of artifacts, public access to the registry must also be disabled (the registry's `publicNetworkAccess` property must be set to `disabled`). You can disable public network access to the registry before disabling export or disable it at the same time. -- By disabling access to the registry's public endpoint, you ensure that registry operations are permitted only within the virtual network. Public access to the registry to pull artifacts and perform other operations is prohibited. --* **Remove export pipelines** - Before setting the registry's `exportPolicy` status to `disabled`, delete any existing export pipelines configured on the registry. If a pipeline is configured, you can't change the `exportPolicy` status. --## Disable exportPolicy for an existing registry --When you create a registry, the `exportPolicy` status is set to `enabled` by default, which permits artifacts to be exported. You can update the status to `disabled` using an ARM template or the `az resource update` command. --### ARM template --Include the following JSON to update the `exportPolicy` status and set the `publicNetworkAccess` property to `disabled`. Learn more about [deploying resources with ARM templates](../azure-resource-manager/templates/deploy-cli.md). --```json -{ -[...] -"resources": [ - { - "type": "Microsoft.ContainerRegistry/registries", - "apiVersion": "2021-06-01-preview", - "name": "myregistry", - [...] - "properties": { - "publicNetworkAccess": "disabled", - "policies": { - "exportPolicy": { - "status": "disabled" - } - } - } - } -] -[...] -} -``` --### Azure CLI --Run [az resource update](/cli/azure/resource/#az-resource-update) to set the `exportPolicy` status in an existing registry to `disabled`. Substitute the names of your registry and resource group. --As shown in this example, when disabling the `exportPolicy` property, also set the `publicNetworkAccess` property to `disabled`. 
--```azurecli -az resource update --resource-group myResourceGroup \ - --name myregistry \ - --resource-type "Microsoft.ContainerRegistry/registries" \ - --api-version "2021-06-01-preview" \ - --set "properties.policies.exportPolicy.status=disabled" \ - --set "properties.publicNetworkAccess=disabled" -``` --The output shows that the export policy status is disabled. --```json -{ - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myregistry", - "identity": null, - "kind": null, - "location": "centralus", - "managedBy": null, - "name": "myregistry", - "plan": null, - "properties": { - [...] - "policies": { - "exportPolicy": { - "status": "disabled" - }, - "quarantinePolicy": { - "status": "disabled" - }, - "retentionPolicy": { - "days": 7, - "lastUpdatedTime": "2021-07-20T23:20:30.9985256+00:00", - "status": "disabled" - }, - "trustPolicy": { - "status": "disabled", - "type": "Notary" - }, - "privateEndpointConnections": [], - "provisioningState": "Succeeded", - "publicNetworkAccess": "Disabled", - "zoneRedundancy": "Disabled" -[...] -} -``` --## Enable exportPolicy --After disabling the `exportPolicy` status in a registry, you can re-enable it at any time using an ARM template or the `az resource update` command. --### ARM template --Include the following JSON to update the `exportPolicy` status to `enabled`. Learn more about [deploying resources with ARM templates](../azure-resource-manager/templates/deploy-cli.md) --```json -{ -[...] -"resources": [ - { - "type": "Microsoft.ContainerRegistry/registries", - "apiVersion": "2021-06-01-preview", - "name": "myregistry", - [...] - "properties": { - "policies": { - "exportPolicy": { - "status": "enabled" - } - } - } - } -] -[...] -} -``` --### Azure CLI --Run [az resource update](/cli/azure/resource/#az-resource-update) to set the `exportPolicy` status to `enabled`. Substitute the names of your registry and resource group. --```azurecli -az resource update --resource-group myResourceGroup \ - --name myregistry \ - --resource-type "Microsoft.ContainerRegistry/registries" \ - --api-version "2021-06-01-preview" \ - --set "properties.policies.exportPolicy.status=enabled" -``` - -## Next steps --* Learn about [Azure Container Registry roles and permissions](container-registry-roles.md). -* If you want to prevent accidental deletion of registry artifacts, see [Lock container images](container-registry-image-lock.md). -* Learn about built-in [Azure policies](container-registry-azure-policy.md) to secure your Azure container registry |
container-registry | Intro Connected Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md | - Title: What is a connected registry? -description: Overview and scenarios of the connected registry feature of Azure Container Registry, including its benefits and use cases. --- Previously updated : 10/31/2023--#customer intent: As a reader, I want to understand the overview and scenarios of the connected registry feature of Azure Container Registry so that I can utilize it effectively. ---# What is a connected registry? --In this article, you learn about the *connected registry* feature of [Azure Container Registry](container-registry-intro.md). A connected registry is an on-premises or remote replica that synchronizes container images with your cloud-based Azure container registry. Use a connected registry to help speed up access to registry artifacts on-premises or in remote environments. --## Billing and Support --The connected registry is a preview feature of the **Premium** container registry service tier, and subject to [limitations](#limitations). For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md). --> [!IMPORTANT] -> Please note that there are **important upcoming changes** to the connected registry deployment model support and billing starting from January 1st, 2025. For any inquiries or assistance with the transition, please reach out to the customer support team. --### Billing --- The connected registry incurs no charges until it reaches general availability (GA).-- Post-GA, a monthly price of $10 will apply for each connected registry deployed.-- This price represents Microsoft's commitment to deliver high-quality services and product support.-- The price is applied to the Azure subscription associated with the parent registry.--### Support --- Microsoft will end support for the connected registry deployment on IoT Edge devices on January 1st, 2025.-- After January 1st, 2025, the connected registry will solely support Arc-enabled Kubernetes clusters as the deployment model.-- Microsoft advises users to begin planning their transition to Arc-enabled Kubernetes clusters as the deployment model.--## Available regions --Connected registry is available in the following continents and regions: --| Continent | Available Regions | -||-| -| Australia | Australia East | -| Asia | East Asia | -| | Japan East | -| | Japan West | -| | Southeast Asia | -| Europe | North Europe | -| | Norway East | -| | West Europe | -| North America | Canada Central | -| | Central US | -| | East US | -| | South Central US | -| | West Central US | -| | West US 3 | -| South America | Brazil South | --## Scenarios --A cloud-based Azure container registry provides [features](container-registry-intro.md#key-features) including geo-replication, integrated security, Azure-managed storage, and integration with Azure development and deployment pipelines. At the same time, customers are extending their cloud investments to their on-premises and field solutions. --To run with the required performance and reliability in on-premises or remote environments, container workloads need container images and related artifacts to be available nearby. The connected registry provides a performant, on-premises registry solution that regularly synchronizes content with a cloud-based Azure container registry.
--Scenarios for a connected registry include: --* Connected factories -* Point-of-sale retail locations -* Shipping, oil-drilling, mining, and other occasionally connected environments --## How does the connected registry work? --The connected registry is deployed on a server or device on-premises, or in an environment that supports on-premises container workloads, such as Azure IoT Edge and Azure Arc-enabled Kubernetes. The connected registry synchronizes container images and other OCI artifacts with a cloud-based Azure container registry. --The following image shows a typical deployment model for the connected registry using IoT Edge. ---The following image shows a typical deployment model for the connected registry using Azure Arc-enabled Kubernetes. ---### Deployment --Each connected registry is a resource you manage within a cloud-based Azure container registry. The top parent in the connected registry hierarchy is an Azure container registry in the Azure cloud. The connected registry can be deployed either on Azure IoT Edge or Arc-enabled Kubernetes clusters. --To install the connected registry, use Azure tools on a server or device on your premises, or in an environment that supports on-premises container workloads, such as [Azure IoT Edge](../iot-edge/tutorial-nested-iot-edge.md). --Deploy the connected registry Arc extension to the Arc-enabled Kubernetes cluster. Secure the connection with TLS using default configurations for read-only access and a continuous sync window. This setup allows the connected registry to synchronize images from the Azure container registry (ACR) to the connected registry on-premises, enabling image pulls from the connected registry. --The connected registry's *activation status* indicates whether it's deployed on-premises. --* **Active** - The connected registry is currently deployed on-premises. It can't be deployed again until it's deactivated. -* **Inactive** - The connected registry is not deployed on-premises. It can be deployed at this time. - -### Content synchronization --The connected registry regularly accesses the cloud registry to synchronize container images and OCI artifacts. --It can also be configured to synchronize a subset of the repositories from the cloud registry or to synchronize only during certain intervals to reduce traffic between the cloud and the premises. --### Modes --A connected registry can work in one of two modes: *ReadWrite* or *ReadOnly*. --**ReadOnly mode** - The default mode. When the connected registry is in ReadOnly mode, clients can only pull (read) artifacts. This configuration is used in scenarios where clients need to pull a container image to operate. This default mode aligns with our secure-by-default approach and is effective starting with CLI version 2.60.0. --**ReadWrite mode** - This mode allows clients to pull and push artifacts (read and write) to the connected registry. Artifacts that are pushed to the connected registry will be synchronized with the cloud registry. The ReadWrite mode is useful when a local development environment is in place. The images are pushed to the local connected registry and from there synchronized to the cloud. --### Registry hierarchy --Each connected registry must be connected to a parent. The top parent is the cloud registry. For hierarchical scenarios such as [nested IoT Edge][overview-connected-registry-and-iot-edge], you can nest connected registries in either mode. The parent connected to the cloud registry can operate in either mode.
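As a rough sketch of how such a hierarchy is defined, the following commands create a connected registry attached directly to the cloud registry and then a child connected registry beneath it. The registry, connected registry, and repository names are placeholders, and the exact parameter names can vary by Azure CLI version.

```azurecli
# Create a connected registry whose parent is the cloud registry
az acr connected-registry create --registry mycloudregistry \
  --name myconnectedregistry --repository "app/hello-world"

# Create a child connected registry that synchronizes through the parent created above
az acr connected-registry create --registry mycloudregistry \
  --name mychildregistry --parent myconnectedregistry --repository "app/hello-world"
```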
--Child registries must be compatible with their parent capabilities. Thus, both ReadOnly and ReadWrite modes of the connected registries can be children of a connected registry operating in ReadWrite mode, but only a ReadOnly mode registry can be a child of a connected registry operating in ReadOnly mode. --## Client access --On-premises clients use standard tools such as the Docker CLI to push or pull content from a connected registry. To manage client access, you create Azure container registry [tokens][repository-scoped-permissions] for access to each connected registry. You can scope the client tokens for pull or push access to one or more repositories in the registry. --Each connected registry also needs to regularly communicate with its parent registry. For this purpose, the registry is issued a synchronization token (*sync token*) by the cloud registry. This token is used to authenticate with its parent registry for synchronization and management operations. --For more information, see [Manage access to a connected registry][overview-connected-registry-access]. --## Limitations --- Number of tokens and scope maps is [limited](container-registry-skus.md) to 20,000 each for a single container registry. This indirectly limits the number of connected registries for a cloud registry, because every connected registry needs a sync token and a client token.-- Number of repository permissions in a scope map is limited to 500.-- Number of clients for the connected registry is currently limited to 20.-- [Image locking](container-registry-image-lock.md) through repository/manifest/tag metadata isn't currently supported for connected registries.-- [Repository delete](container-registry-delete.md) isn't supported on the connected registry using ReadOnly mode.-- [Resource logs](monitor-service-reference.md#resource-logs) for connected registries are currently not supported.-- Connected registry is coupled with the registry's home region data endpoint. Automatic migration for [geo-replication](container-registry-geo-replication.md) isn't supported.-- Deletion of a connected registry needs manual removal of the containers on-premises and removal of the respective scope map or tokens in the cloud.-- Connected registry sync limitations are as follows:- - For continuous sync: - - `minMessageTtl` is one day - - `maxMessageTtl` is 90 days - - For occasionally connected scenarios, where you want to specify sync window: - - `minSyncWindow` is 1 hr - - `maxSyncWindow` is seven days --## Conclusion --In this overview, you learned about the connected registry and some basic concepts. Continue to one of the following articles to learn about specific scenarios where the connected registry can be utilized. --> [!div class="nextstepaction"] -> [Overview: Connected registry and IoT Edge][overview-connected-registry-and-iot-edge] --<!-- LINKS - internal --> -[overview-connected-registry-access]:overview-connected-registry-access.md -[overview-connected-registry-and-iot-edge]:overview-connected-registry-and-iot-edge.md -[repository-scoped-permissions]: container-registry-repository-scoped-permissions.md |
container-registry | Monitor Container Registry Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-container-registry-reference.md | - Title: Monitoring data reference for Azure Container Registry -description: This article contains important reference material you need when you monitor Azure Container Registry. Previously updated : 06/17/2024--------# Azure Container Registry monitoring data reference ---See [Monitor Azure Container Registry](monitor-container-registry.md) for details on the data you can collect for Azure Container Registry and how to use it. ---### Supported metrics for Microsoft.ContainerRegistry/registries --The following table lists the metrics available for the Microsoft.ContainerRegistry/registries resource type. ----> [!NOTE] -> Because of layer sharing, registry **Storage used** might be less than the sum of storage for individual repositories. When you [delete](container-registry-delete.md) a repository or tag, you recover only the storage used by manifest files and the unique layers referenced. ----- **Geolocation**. The Azure region for a registry or [geo-replica](container-registry-geo-replication.md).---### Supported resource logs for Microsoft.ContainerRegistry/registries ----For a reference of all Azure Monitor Logs and Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype). --### Container Registry Microsoft.ContainerRegistry/registries --- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns). Entries from the Azure Activity log that provide insight into any subscription-level or management group level events that occurred in Azure.-- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns). Metric data emitted by Azure services that measure their health and performance.-- [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/containerregistryloginevents#columns). Registry authentication events and status, including the incoming identity and IP address.-- [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/containerregistryrepositoryevents#columns). Operations on images and other artifacts in registry repositories. The following operations are logged: push, pull, untag, delete (including repository delete), purge tag, and purge manifest.-- Purge events are logged only if a registry [retention policy](container-registry-retention-policy.md) is configured. ---- [Microsoft.ContainerRegistry resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftcontainerregistry)--The following table lists operations related to Azure Container Registry that can be created in the Activity log. This list isn't exhaustive. 
--| Operation | Description | -|:|:| -| Create or Update Container Registry | Create a container registry or update a registry property | -| Delete Container Registry | Delete a container registry | -| List Container Registry Login Credentials | Show credentials for registry's admin account | -| Import Image | Import an image or other artifact to a registry | -| Create Role Assignment | Assign an identity a Role-based access control (RBAC) role to access a resource | --## Related content --- See [Monitor Azure Container Registry](monitor-container-registry.md) for a description of monitoring Container Registry.-- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
container-registry | Monitor Container Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-container-registry.md | - Title: Monitor Azure Container Registry -description: Start here to learn how you can use the features of Azure Monitor to analyze and alert data in Azure Container Registry. Previously updated : 06/17/2024--------# Monitor Azure Container Registry ---This article describes the monitoring data generated by Azure Container Registry and how you can use the features of Azure Monitor to analyze and alert on this data. --## Monitor overview --The **Overview** page in the Azure portal for each registry includes a brief view of recent resource usage and activity, such as push and pull operations. This high-level information is useful, but only a small amount of data is shown there. ---For more information about the resource types for Container Registry, see [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md). --## Monitoring data --Azure Container Registry collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data). --See [Monitoring Azure Container Registry data reference](monitor-service-reference.md) for detailed information on the metrics and logs created by Azure Container Registry. --## Collection and routing --Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. --Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. --See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/essentials/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Container Registry are listed in [Azure Container Registry monitoring data reference](monitor-service-reference.md#resource-logs). --> [!TIP] -> You can also create registry diagnostic settings by navigating to your registry in the portal. In the menu, select **Diagnostic settings** under **Monitoring**. --The metrics and logs you can collect are discussed in the following sections. ----For a list of available metrics for Container Registry, see [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md#metrics). --## Analyzing metrics --You can analyze metrics for an Azure container registry with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics) for details on using this tool. --> [!TIP] -> You can also go to the metrics explorer by navigating to your registry in the portal. In the menu, select **Metrics** under **Monitoring**. --For a list of the platform metrics collected for Azure Container Registry, see [Monitoring Azure Container Registry data reference metrics](monitor-service-reference.md#metrics). --### Azure CLI --The following Azure CLI commands can be used to get information about the Azure Container Registry metrics. 
--- [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) - List metric definitions and dimensions-- [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) - Retrieve metric values--### REST API --You can use the Azure Monitor REST API to get information programmatically about the Azure Container Registry metrics. --- [List metric definitions and dimensions](/rest/api/monitor/metricdefinitions/list)-- [Retrieve metric values](/rest/api/monitor/metrics/list)---## Analyzing logs --Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. --All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema#top-level-common-schema). The schema for Azure Container Registry resource logs is found in the [Azure Container Registry Data Reference](monitor-service-reference.md#schemas). --The [Activity log](/azure/azure-monitor/essentials/activity-log) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. --For a list of the types of resource logs collected for Azure Container Registry, see [Monitoring Azure Container Registry data reference](monitor-service-reference.md#resource-logs). --For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Container Registry data reference](monitor-service-reference.md#azure-monitor-logs-tables). --For the available resource log categories, their associated Log Analytics tables, and the log schemas for Container Registry, see [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md#resource-logs). ------For example, the following query retrieves the most recent 24 hours of data from the **ContainerRegistryRepositoryEvents** table: --```Kusto -ContainerRegistryRepositoryEvents -| where TimeGenerated > ago(1d) -``` --The following image shows sample output: ---Following are queries that you can use to help you monitor your registry resource.
--Error events from the last hour: --```Kusto -union Event, Syslog // Event table stores Windows event records, Syslog stores Linux records -| where TimeGenerated > ago(1h) -| where EventLevelName == "Error" // EventLevelName is used in the Event (Windows) records - or SeverityLevel== "err" // SeverityLevel is used in Syslog (Linux) records -``` --100 most recent registry events: --```Kusto -ContainerRegistryRepositoryEvents -| union ContainerRegistryLoginEvents -| top 100 by TimeGenerated -| project TimeGenerated, LoginServer, OperationName, Identity, Repository, DurationMs, Region , ResultType -``` --Identity of user or object that deleted repository: --```Kusto -ContainerRegistryRepositoryEvents -| where OperationName contains "Delete" -| project LoginServer, OperationName, Repository, Identity, CallerIpAddress -``` --Identity of user or object that deleted tag: --```Kusto -ContainerRegistryRepositoryEvents -| where OperationName contains "Untag" -| project LoginServer, OperationName, Repository, Tag, Identity, CallerIpAddress -``` --Repository-level operation failures: --```kusto -ContainerRegistryRepositoryEvents -| where ResultDescription contains "40" -| project TimeGenerated, OperationName, Repository, Tag, ResultDescription -``` --Registry authentication failures: --```kusto -ContainerRegistryLoginEvents -| where ResultDescription != "200" -| project TimeGenerated, Identity, CallerIpAddress, ResultDescription -``` ---### Azure Container Registry alert rules --The following table lists some suggested alert rules for Container Registry. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md). --| Alert type | Condition | Description | -|:|:|:| -| metric | Signal: Storage used<br/>Operator: Greater than<br/>Aggregation type: Average<br/>Threshold value: 5 GB| Alerts if the registry storage used exceeds a specified value.| --### Example: Send email alert when registry storage used exceeds a value --1. In the Azure portal, navigate to your registry. -1. Select **Metrics** under **Monitoring**. -1. In the metrics explorer, in **Metric**, select **Storage used**. -1. Select **New alert rule**. -1. In **Scope**, confirm the registry resource for which you want to create an alert rule. -1. In **Condition**, select **Add condition**. - 1. In **Signal name**, select **Storage used**. - 1. In **Chart period**, select **Over the last 24 hours**. - 1. In **Alert logic**, in **Threshold value**, select a value such as *5*. In **Unit**, select a value such as *GB*. - 1. Accept default values for the remaining settings, and select **Done**. -1. In **Actions**, select **Add action groups** > **+ Create action group**. - 1. Enter details of the action group. - 1. On the **Notifications** tab, select **Email/SMS message/Push/Voice** and enter a recipient such as *admin@contoso.com*. Select **Review + create**. -1. Enter a name and description of the alert rule, and select the severity level. -1. Select **Create alert rule**. ---## Related content --- See [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md) for a reference of the metrics, logs, and other important values created for Container Registry.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources. |
container-registry | Overview Connected Registry Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/overview-connected-registry-access.md | - Title: Understand access to a connected registry -description: Introduction to token-based authentication and authorization for connected registries in Azure Container Registry ---- Previously updated : 10/31/2023---# Understand access to a connected registry --Currently, only ACR [token-based authentication](container-registry-repository-scoped-permissions.md) is supported for accessing and managing a [connected registry](intro-connected-registry.md). As shown in the following image, two different types of tokens are used by each connected registry: --* [**Client tokens**](#client-tokens) - One or more tokens that on-premises clients use to authenticate with a connected registry and push or pull images and artifacts to or from it. -* [**Sync token**](#sync-token) - A token used by each connected registry to access its parent and synchronize content. --![Connected registry authentication overview](media/overview-connected-registry-access/connected-registry-authentication-overview.svg) --> [!IMPORTANT] -> Store token passwords for each connected registry in a safe location. After they are created, token passwords can't be retrieved. You can regenerate token passwords at any time. --## Client tokens --To manage client access to a connected registry, you create tokens scoped for actions on one or more repositories. After creating a token, configure the connected registry to accept the token by using the [az acr connected-registry update](/cli/azure/acr/connected-registry#az-acr-connected-registry-update) command. A client can then use the token credentials to access a connected registry endpoint - for example, to use Docker CLI commands to pull or push images to the connected registry. --Your options for configuring client token actions depend on whether the connected registry allows both push and pull operations or functions as a pull-only mirror. -* A connected registry in the default [ReadWrite mode](intro-connected-registry.md#modes) allows both pull and push operations, so you can create a token that allows actions to both *read* and *write* repository content in that registry. -* For a connected registry in [ReadOnly mode](intro-connected-registry.md#modes), client tokens can only allow actions to *read* repository content. --### Manage client tokens --Update client tokens, passwords, or scope maps as needed by using [az acr token](/cli/azure/acr#az-acr-token) and [az acr scope-map](/cli/azure/acr#az-acr-scope-map) commands. Client token updates are propagated automatically to the connected registries that accept the token. --## Sync token --Each connected registry uses a sync token to authenticate with its immediate parent - which could be another connected registry or the cloud registry. The connected registry automatically uses this token when synchronizing content with the parent or performing other updates. --* The sync token and passwords are generated automatically when you create the connected registry resource. Run the [az acr connected-registry install renew-credentials][az-acr-connected-registry-install-renew-credentials] command to regenerate the passwords. -* Include sync token credentials in the configuration used to deploy the connected registry on-premises. -* By default, the sync token is granted permission to synchronize selected repositories with its parent.
You must provide an existing sync token or one or more repositories to sync when you create the connected registry resource. -* It also has permissions to read and write synchronization messages on a gateway used to communicate with the connected registry's parent. These messages control the synchronization schedule and manage other updates between the connected registry and its parent. --### Manage sync token --Update sync tokens, passwords, or scope maps as needed by using [az acr token](/cli/azure/acr#az-acr-token) and [az acr scope-map](/cli/azure/acr#az-acr-scope-map) commands. Sync token updates are propagated automatically to the connected registry. Follow the standard practices of rotating passwords when updating the sync token. --> [!NOTE] -> The sync token cannot be deleted until the connected registry associated with the token is deleted. You can disable a connected registry by setting the status of the sync token to `disabled`. --## Registry endpoints --Token credentials for connected registries are scoped to access specific registry endpoints: --* A client token accesses the connected registry's endpoint. The connected registry endpoint is the login server URI, which is typically the IP address of the server or device that hosts it. --* A sync token accesses the endpoint of the parent registry, which is either another connected registry endpoint or the cloud registry itself. When scoped to access the cloud registry, the sync token needs to reach two registry endpoints: -- - The fully qualified login server name, for example, `contoso.azurecr.io`. This endpoint is used for authentication. - - A fully qualified regional [data endpoint](container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints) for the cloud registry, for example, `contoso.westus2.data.azurecr.io`. This endpoint is used to exchange messages with the connected registry for synchronization purposes. --## Next steps --Continue to the following article to learn about specific scenarios where connected registry can be utilized. --> [!div class="nextstepaction"] -> [Overview: Connected registry and IoT Edge][overview-connected-registry-and-iot-edge] --<!-- LINKS - internal --> -[az-acr-connected-registry-update]: /cli/azure/acr/connected-registry#az_acr_connected_registry_update -[az-acr-connected-registry-install-renew-credentials]: /cli/azure/acr/connected-registry/install#az_acr_connected_registry_install_renew_credentials -[overview-connected-registry-and-iot-edge]:overview-connected-registry-and-iot-edge.md -[repository-scoped-permissions]: container-registry-repository-scoped-permissions.md |
container-registry | Overview Connected Registry And Iot Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/overview-connected-registry-and-iot-edge.md | - Title: Using connected registry with Azure IoT Edge -description: Overview of the connected Azure container registry in hierarchical IoT Edge scenarios ---- Previously updated : 10/31/2023---# Using connected registry with Azure IoT Edge --In this article, you learn about using an Azure [connected registry](intro-connected-registry.md) in hierarchical [IoT Edge](../iot-edge/about-iot-edge.md) scenarios. The connected container registry can be deployed as an IoT Edge module and play an essential role in serving container images required by the devices in the hierarchy. --## What is a hierarchical IoT Edge deployment? --Azure IoT Edge allows you to deploy IoT Edge devices across networks organized in hierarchical layers. Each layer in a hierarchy is a [gateway device](../iot-edge/iot-edge-as-gateway.md) that handles messages and requests from devices in the layer beneath it. You can structure a hierarchy of devices so that only the top layer has connectivity to the cloud, and the lower layers can only communicate with adjacent north and south layers. This network layering is the foundation of most industrial networks, which follow the [ISA-95 standard](https://en.wikipedia.org/wiki/ANSI/ISA-95). --To learn how to create a hierarchy of IoT Edge devices, see [Tutorial: Create a hierarchy of IoT Edge devices][tutorial-nested-iot-edge] --## How do I use connected registry in hierarchical IoT Edge scenarios? --The following image shows how the connected registry can be used to support the hierarchical deployment of IoT Edge. Solid gray lines show the actual network flow, while the dashed lines show the logical communication between components and the connected registries. --![Connected Registry and hierarchical IoT Edge deployments](media/overview-connected-registry-and-iot-edge/connected-registry-iot-edge-overview.svg) --### Top layer --The top layer of the example architecture, *Layer 5: Enterprise Network*, is managed by IT and can access the container registry for Contoso in the Azure cloud. The connected registry is deployed as an IoT Edge module on the IoT Edge VM and can directly communicate with the cloud registry to pull and push images and artifacts. --The connected registry is shown as working in the default [ReadWrite mode](intro-connected-registry.md#modes). Clients of this connected registry can pull and push images and artifacts to it. Pushed images will be synchronized with the cloud registry. If pushes are not required in that layer, the connected registry can be changed to operate in [ReadOnly mode](intro-connected-registry.md#modes). --For steps to deploy the connected registry as an IoT Edge module at this level, see [Quickstart - Deploy a connected registry to an IoT Edge device][quickstart-deploy-connected-registry-iot-edge-cli]. --### Nested layers --The next lower layer, *Layer 4: Site Business Planning and Logistics*, is configured to communicate only with Layer 5. Thus, when deploying the IoT Edge VM on Layer 4, it needs to pull the module images from the connected registry on Layer 5 instead. --You can also deploy a connected registry working in ReadOnly mode to serve the layers below. This is illustrated with the IoT Edge VM on *Layer 3: Industrial Security Zone*. That VM must pull the module images from the connected registry on *Layer 4*. 
If clients on lower layers need to be served, a connected registry in ReadOnly mode can be deployed on Layer 3, and so on. --In this architecture, the connected registries deployed on each layer are configured to synchronize the images with the connected registry on the layer above. The connected registries are deployed as IoT Edge modules and leverage the IoT Edge mechanisms for deployment and network routing. --For steps to deploy the connected registry on nested IoT Edge devices, see [Quickstart: Deploy connected registry on nested IoT Edge devices][tutorial-deploy-connected-registry-nested-iot-edge-cli]. --## Next steps --In this overview, you learned about the use of the connected registry in hierarchical IoT Edge scenarios. Continue to the following articles to learn how to configure and deploy a connected registry to your IoT Edge device. --> [!div class="nextstepaction"] -> [Quickstart - Create connected registry using the CLI][quickstart-connected-registry-cli] --> [!div class="nextstepaction"] -> [Quickstart - Create connected registry using the portal][quickstart-connected-registry-portal] --> [!div class="nextstepaction"] -> [Quickstart - Deploy a connected registry to an IoT Edge device][quickstart-deploy-connected-registry-iot-edge-cli] --> [!div class="nextstepaction"] -> [Tutorial: Deploy connected registry on nested IoT Edge devices][tutorial-deploy-connected-registry-nested-iot-edge-cli] --<!-- LINKS - internal --> -[quickstart-connected-registry-cli]:quickstart-connected-registry-cli.md -[quickstart-connected-registry-portal]:quickstart-connected-registry-portal.md -[quickstart-deploy-connected-registry-iot-edge-cli]:quickstart-deploy-connected-registry-iot-edge-cli.md -[tutorial-nested-iot-edge]:../iot-edge/tutorial-nested-iot-edge.md -[tutorial-deploy-connected-registry-nested-iot-edge-cli]: tutorial-deploy-connected-registry-nested-iot-edge-cli.md |
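As a rough illustration of the nesting described above, a child connected registry for a lower layer can be created so that it synchronizes from the connected registry one layer up rather than from the cloud registry directly. The names below (`myacrregistry`, `connectedregistry-layer5`, `connectedregistry-layer4`) are hypothetical; see the linked quickstarts and tutorial for the full deployment flow:

```azurecli
# Create a ReadOnly connected registry for Layer 4 whose parent is the Layer 5 connected registry.
az acr connected-registry create \
  --registry myacrregistry \
  --parent connectedregistry-layer5 \
  --name connectedregistry-layer4 \
  --repository "hello-world" \
  --mode ReadOnly
```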
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | - Title: Built-in policy definitions for Azure Container Registry -description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. -- Previously updated : 02/06/2024-----# Azure Policy built-in definitions for Azure Container Registry --This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy -definitions for Azure Container Registry. For additional Azure Policy built-ins for other services, -see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). --The name of each built-in policy definition links to the policy definition in the Azure portal. Use -the link in the **Version** column to view the source on the -[Azure Policy GitHub repo](https://github.com/Azure/azure-policy). --## Azure Container Registry ---## Next steps --- See guidance to [assign policies and review compliance](container-registry-azure-policy.md).-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).-- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../governance/policy/concepts/effects.md). |
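As a hedged example of the "assign policies" next step, a built-in definition from the index above could be assigned at resource group scope with the Azure CLI; the assignment name, definition identifier, and scope below are placeholders:

```azurecli
# Assign a built-in Azure Container Registry policy definition to a resource group (placeholder values).
az policy assignment create \
  --name acr-policy-assignment \
  --policy "<built-in-policy-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```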
container-registry | Pull Images From Connected Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/pull-images-from-connected-registry.md | - Title: Pull images from a connected registry -description: Use Azure Container Registry CLI commands to configure a client token and pull images from a connected registry on an IoT Edge device. --- Previously updated : 10/31/2023-----# Pull images from a connected registry on IoT Edge device (To be deprecated) --To pull images from a [connected registry](intro-connected-registry.md), configure a [client token](overview-connected-registry-access.md#client-tokens) and pass the token credentials to access registry content. --* Connected registry resource in Azure. For deployment steps, see [Quickstart: Create a connected registry using the Azure CLI][quickstart-connected-registry-cli]. -* Connected registry instance deployed on an IoT Edge device. For deployment steps, see [Quickstart: Deploy a connected registry to an IoT Edge device](quickstart-deploy-connected-registry-iot-edge-cli.md) or [Tutorial: Deploy a connected registry to nested IoT Edge devices](tutorial-deploy-connected-registry-nested-iot-edge-cli.md). In the commands in this article, the connected registry name is stored in the environment variable *$CONNECTED_REGISTRY_RW*. --## Create a scope map --Use the [az acr scope-map create][az-acr-scope-map-create] command to create a scope map for read access to the `hello-world` repository: --```azurecli -# Use the REGISTRY_NAME variable in the following Azure CLI commands to identify the registry -REGISTRY_NAME=<container-registry-name> --az acr scope-map create \ - --name hello-world-scopemap \ - --registry $REGISTRY_NAME \ - --repository hello-world content/read \ - --description "Scope map for the connected registry." -``` --## Create a client token --Use the [az acr token create][az-acr-token-create] command to create a client token and associate it with the newly created scope map: --```azurecli -az acr token create \ - --name myconnectedregistry-client-token \ - --registry $REGISTRY_NAME \ - --scope-map hello-world-scopemap -``` --The command will return details about the newly generated token including passwords. -- > [!IMPORTANT] - > Make sure that you save the generated passwords. Those are one-time passwords and cannot be retrieved. You can generate new passwords using the [az acr token credential generate][az-acr-token-credential-generate] command. --## Update the connected registry with the client token --Use the az acr connected-registry update command to update the connected registry with the newly created client token. --```azurecli -az acr connected-registry update \ - --name $CONNECTED_REGISTRY_RW \ - --registry $REGISTRY_NAME \ - --add-client-token myconnectedregistry-client-token -``` --## Pull an image from the connected registry --From a machine with access to the IoT Edge device, use the following example command to sign into the connected registry, using the client token credentials. For best practices to manage login credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference. --> [!CAUTION] -> If you set up your connected registry as an insecure registry, update the insecure registries list in the Docker daemon configuration to include the IP address (or FQDN) and port of your connected registry on the IoT Edge device. This configuration should only be used for testing purposes. 
For more information, see [Test an insecure registry](https://docs.docker.com/registry/insecure/). --``` -docker login --username myconnectedregistry-client-token \ - --password <token_password> <IP_address_or_FQDN_of_connected_registry>:<port> -``` --For IoT Edge scenarios, be sure to include the port used to reach the connected registry on the device. Example: --``` -docker login --username myconnectedregistry-client-token \ - --password xxxxxxxxxxx 192.0.2.13:8000 -``` --Then, use the following command to pull the `hello-world` image: --``` -docker pull <IP_address_or_FQDN_of_connected_registry>:<port>/hello-world -``` --## Next steps --* Learn more about [repository-scoped tokens](container-registry-repository-scoped-permissions.md). -* Learn more about [accessing a connected registry](overview-connected-registry-access.md). --<!-- LINKS - internal --> -[az-acr-scope-map-create]: /cli/azure/acr/scope-map#az_acr_scope_map_create -[az-acr-token-create]: /cli/azure/acr/token/#az_acr_token_create -[az-acr-token-credential-generate]: /cli/azure/acr/token/credential#az_acr_token_credential_generate -[az-acr-connected-registry-update]: /cli/azure/acr/connected-registry#az_acr_connected_registry_update -[container-registry-intro]: container-registry-intro.md -[quickstart-connected-registry-cli]: quickstart-connected-registry-cli.md |
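The caution above mentions updating the Docker daemon's insecure registries list without showing the configuration; a minimal sketch for a Linux host follows, reusing the example address `192.0.2.13:8000`. This is for testing only, and if `/etc/docker/daemon.json` already exists, merge the key into it instead of overwriting the file:

```bash
# Testing only: trust the connected registry address without TLS, then restart the daemon.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "insecure-registries": ["192.0.2.13:8000"]
}
EOF
sudo systemctl restart docker
```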
container-registry | Push Multi Architecture Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/push-multi-architecture-images.md | - Title: Multi-architecture images in your registry -description: Use your Azure container registry to build, import, store, and deploy multi-architecture (multi-arch) images --- Previously updated : 10/31/2023----# Multi-architecture images in your Azure container registry --This article introduces *multi-architecture* (*multi-arch*) images and how you can use Azure Container Registry features to help create, store, and use them. --A multi-arch image is a type of container image that may combine variants for different architectures, and sometimes for different operating systems. When running an image with multi-architecture support, container clients will automatically select an image variant that matches your OS and architecture. --## Manifests and manifest lists --Multi-arch images are based on image manifests and manifest lists. --### Manifest --Each container image is represented by a [manifest](container-registry-concepts.md#manifest). A manifest is a JSON file that uniquely identifies the image, referencing its layers and their corresponding sizes. --A basic manifest for a Linux `hello-world` image looks similar to the following: -- ```json - { - "schemaVersion": 2, - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "config": { - "mediaType": "application/vnd.docker.container.image.v1+json", - "size": 1510, - "digest": "sha256:fbf289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" - }, - "layers": [ - { - "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", - "size": 977, - "digest": "sha256:2c930d010525941c1d56ec53b97bd057a67ae1865eebf042686d2a2d18271ced" - } - ] - } - ``` - -You can view a manifest in Azure Container Registry using the Azure portal or tools such as the [az acr manifest list-metadata](/cli/azure/acr/manifest#az-acr-manifest-list-metadata) command in the Azure CLI. --### Manifest list --A *manifest list* for a multi-arch image (known more generally as an [image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md) for OCI images) is a collection (index) of images, and you create one by specifying one or more image names. It includes details about each of the images such as the supported OS and architecture, size, and manifest digest. The manifest list can be used in the same way as an image name in `docker pull` and `docker run` commands. --The `docker` CLI manages manifests and manifest lists using the [docker manifest](https://docs.docker.com/engine/reference/commandline/manifest/) command. --> [!NOTE] -> Currently, the `docker manifest` command and subcommands are experimental. See the Docker documentation for details about using experimental commands. --You can view a manifest list using the `docker manifest inspect` command. The following is the output for the multi-arch image `mcr.microsoft.com/mcr/hello-world:latest`, which has three manifests: two for Linux OS architectures and one for a Windows architecture. 
-```json -{ - "schemaVersion": 2, - "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json", - "manifests": [ - { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 524, - "digest": "sha256:83c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a", - "platform": { - "architecture": "amd64", - "os": "linux" - } - }, - { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 525, - "digest": "sha256:873612c5503f3f1674f315c67089dee577d8cc6afc18565e0b4183ae355fb343", - "platform": { - "architecture": "arm64", - "os": "linux" - } - }, - { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 1124, - "digest": "sha256:b791ad98d505abb8c9618868fc43c74aa94d08f1d7afe37d19647c0030905cae", - "platform": { - "architecture": "amd64", - "os": "windows", - "os.version": "10.0.17763.1697" - } - } - ] -} -``` --When a multi-arch manifest list is stored in Azure Container Registry, you can also view the manifest list using the Azure portal or with tools such as the [az acr manifest list-metadata](/cli/azure/acr/manifest#az-acr-manifest-list-metadata) command. --## Import a multi-arch image --An existing multi-arch image can be imported to an Azure container registry using the [az acr import](/cli/azure/acr#az-acr-import) command. The image import syntax is the same as with a single-architecture image. Like import of a single-architecture image, import of a multi-arch image doesn't use Docker commands. --For details, see [Import container images to a container registry](container-registry-import-images.md). --## Push a multi-arch image --When you have build workflows to create container images for different architectures, follow these steps to push a multi-arch image to your Azure container registry. --1. Tag and push each architecture-specific image to your container registry. The following example assumes two Linux architectures: arm64 and amd64. -- ```console - docker tag myimage:arm64 \ - myregistry.azurecr.io/multi-arch-samples/myimage:arm64 -- docker push myregistry.azurecr.io/multi-arch-samples/myimage:arm64 - - docker tag myimage:amd64 \ - myregistry.azurecr.io/multi-arch-samples/myimage:amd64 -- docker push myregistry.azurecr.io/multi-arch-samples/myimage:amd64 - ``` --1. Run `docker manifest create` to create a manifest list to combine the preceding images into a multi-arch image. -- ```console - docker manifest create myregistry.azurecr.io/multi-arch-samples/myimage:multi \ - myregistry.azurecr.io/multi-arch-samples/myimage:arm64 \ - myregistry.azurecr.io/multi-arch-samples/myimage:amd64 - ``` --1. Push the manifest to your container registry using `docker manifest push`: -- ```console - docker manifest push myregistry.azurecr.io/multi-arch-samples/myimage:multi - ``` --1. Use the `docker manifest inspect` command to view the manifest list. An example of command output is shown in a preceding section. --After you push the multi-arch manifest to your registry, work with the multi-arch image the same way that you do with a single-architecture image. For example, pull the image using `docker pull`, and use [az acr repository](/cli/azure/acr/repository#az-acr-repository) commands to view tags, manifests, and other properties of the image. --## Build and push a multi-arch image --Using features of [ACR Tasks](container-registry-tasks-overview.md), you can build and push a multi-arch image to your Azure container registry. 
For example, define a [multi-step task](container-registry-tasks-multi-step.md) in a YAML file that builds a Linux multi-arch image. --The following example assumes that you have separate Dockerfiles for two architectures, arm64 and amd64. It builds and pushes the architecture-specific images, then creates and pushes a multi-arch manifest that has the `latest` tag: --```yml -version: v1.1.0 --steps: -- build: -t {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-amd64 -f dockerfile.amd64 . -- build: -t {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-arm64 -f dockerfile.arm64 . -- push: - - {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-arm64 - - {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-amd64 -- cmd: >- docker manifest create - {{.Run.Registry}}/multi-arch-samples/myimage:latest - {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-arm64 - {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-amd64 -- cmd: docker manifest push --purge {{.Run.Registry}}/multi-arch-samples/myimage:latest-- cmd: docker manifest inspect {{.Run.Registry}}/multi-arch-samples/myimage:latest-``` --## Next steps --* Use [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) to build container images for different architectures. -* Learn about building multi-platform images using the experimental Docker [buildx](https://docs.docker.com/buildx/working-with-buildx/) plug-in. --<!-- LINKS - external --> -[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-mac]: https://docs.docker.com/docker-for-mac/ -[docker-windows]: https://docs.docker.com/docker-for-windows/ |
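As an alternative to the ACR Tasks YAML above, the experimental `docker buildx` plug-in mentioned in the next steps can build and push both variants from a single Dockerfile in one command. This is a minimal sketch assuming the registry name `myregistry` and a Dockerfile in the current directory:

```bash
# Sign in to the registry, then build amd64 and arm64 variants and push a single multi-arch tag.
az acr login --name myregistry

docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag myregistry.azurecr.io/multi-arch-samples/myimage:multi \
  --push .
```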
container-registry | Quickstart Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-client-libraries.md | - Title: Quickstart - Manage container registry content with client libraries -description: Use this quickstart to manage repositories, images, and artifacts using the Azure Container Registry client libraries --- Previously updated : 10/31/2023-zone_pivot_groups: programming-languages-set-fivedevlangs -----# Quickstart: Use the Azure Container Registry client libraries ---Use this article to get started with the client library for Azure Container Registry. Follow these steps to try out example code for data-plane operations on images and artifacts. --Use the client library for Azure Container Registry to: --* List images or artifacts in a registry -* Obtain metadata for images and artifacts, repositories, and tags -* Set read/write/delete properties on registry items -* Delete images and artifacts, repositories, and tags --Azure Container Registry also has a management library for control-plane operations including registry creation and updates. --## Prerequisites --* You need an [Azure subscription](https://azure.microsoft.com/free/) and an Azure container registry to use this library. -- To create a new Azure container registry, you can use the [Azure portal](container-registry-get-started-portal.md), [Azure PowerShell](container-registry-get-started-powershell.md), or the [Azure CLI](container-registry-get-started-azure-cli.md). Here's an example using the Azure CLI: -- ```azurecli - az acr create --name MyContainerRegistry --resource-group MyResourceGroup \ - --location westus --sku Basic - ``` --* Push one or more container images to your registry. For steps, see [Push your first image to your Azure container registry using the Docker CLI](container-registry-get-started-docker-cli.md). --## Key concepts --* An Azure container registry stores *container images* and [OCI artifacts](container-registry-manage-artifact.md). -* An image or artifact consists of a *manifest* and *layers*. -* A manifest describes the layers that make up the image or artifact. It is uniquely identified by its *digest*. -* An image or artifact can also be *tagged* to give it a human-readable alias. An image or artifact can have zero or more tags associated with it, and each tag uniquely identifies the image. -* A collection of images or artifacts that share the same name, but have different tags, is a *repository*. --For more information, see [About registries, repositories, and artifacts](container-registry-concepts.md). ----## Get started --[Source code][dotnet_source] | [Package (NuGet)][dotnet_package] | [API reference][dotnet_docs] | [Samples][dotnet_samples] --To develop .NET application code that can connect to an Azure Container Registry instance, you will need the `Azure.Containers.ContainerRegistry` library. --### Install the package --Install the Azure Container Registry client library for .NET with [NuGet][nuget]: --```Powershell -dotnet add package Azure.Containers.ContainerRegistry --prerelease -``` --## Authenticate the client --For your application to connect to your registry, you'll need to create a `ContainerRegistryClient` that can authenticate with it. Use the [Azure Identity library][dotnet_identity] to add Microsoft Entra ID support for authenticating Azure SDK clients with their corresponding Azure services. 
--When you're developing and debugging your application locally, you can use your own user to authenticate with your registry. One way to accomplish this is to [authenticate your user with the Azure CLI](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#authenticating-via-the-azure-cli) and run your application from this environment. If your application is using a client that has been constructed to authenticate with `DefaultAzureCredential`, it will correctly authenticate with the registry at the specified endpoint. --```C# -// Create a ContainerRegistryClient that will authenticate to your registry through Azure Active Directory -Uri endpoint = new Uri("https://myregistry.azurecr.io"); -ContainerRegistryClient client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential(), - new ContainerRegistryClientOptions() - { - Audience = ContainerRegistryAudience.AzureResourceManagerPublicCloud - }); -``` --See the [Azure Identity README][dotnet_identity] for more approaches to authenticating with `DefaultAzureCredential`, both locally and in deployment environments. To connect to registries in non-public Azure clouds, see the [API reference][dotnet_docs]. --For more information on using Microsoft Entra ID with Azure Container Registry, see the [authentication overview](container-registry-authentication.md). --## Examples --Each sample assumes there is a `REGISTRY_ENDPOINT` environment variable set to a string containing the `https://` prefix and the name of the login server, for example "https://myregistry.azurecr.io". --The following samples use asynchronous APIs that return a task. Synchronous APIs are also available. --### List repositories asynchronously --Iterate through the collection of repositories in the registry. 
--```C# Snippet:ContainerRegistry_Tests_Samples_CreateClientAsync -// Get the service endpoint from the environment -Uri endpoint = new Uri(Environment.GetEnvironmentVariable("REGISTRY_ENDPOINT")); --// Create a new ContainerRegistryClient -ContainerRegistryClient client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential(), - new ContainerRegistryClientOptions() - { - Audience = ContainerRegistryAudience.AzureResourceManagerPublicCloud - }); --// Get the collection of repository names from the registry -AsyncPageable<string> repositories = client.GetRepositoryNamesAsync(); -await foreach (string repository in repositories) -{ - Console.WriteLine(repository); -} -``` --### Set artifact properties asynchronously --```C# Snippet:ContainerRegistry_Tests_Samples_CreateClientAsync -// Get the service endpoint from the environment -Uri endpoint = new Uri(Environment.GetEnvironmentVariable("REGISTRY_ENDPOINT")); --// Create a new ContainerRegistryClient -ContainerRegistryClient client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential(), - new ContainerRegistryClientOptions() - { - Audience = ContainerRegistryAudience.AzureResourceManagerPublicCloud - }); --// Get the collection of repository names from the registry -AsyncPageable<string> repositories = client.GetRepositoryNamesAsync(); -await foreach (string repository in repositories) -{ - Console.WriteLine(repository); -} -``` --### Delete images asynchronously --```C# Snippet:ContainerRegistry_Tests_Samples_DeleteImageAsync -using System.Linq; -using Azure.Containers.ContainerRegistry; -using Azure.Identity; --// Get the service endpoint from the environment -Uri endpoint = new Uri(Environment.GetEnvironmentVariable("REGISTRY_ENDPOINT")); --// Create a new ContainerRegistryClient -ContainerRegistryClient client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential(), - new ContainerRegistryClientOptions() - { - Audience = ContainerRegistryAudience.AzureResourceManagerPublicCloud - }); --// Iterate through repositories -AsyncPageable<string> repositoryNames = client.GetRepositoryNamesAsync(); -await foreach (string repositoryName in repositoryNames) -{ - ContainerRepository repository = client.GetRepository(repositoryName); -- // Obtain the images ordered from newest to oldest - AsyncPageable<ArtifactManifestProperties> imageManifests = - repository.GetManifestPropertiesCollectionAsync(orderBy: ArtifactManifestOrderBy.LastUpdatedOnDescending); -- // Delete images older than the first three. - await foreach (ArtifactManifestProperties imageManifest in imageManifests.Skip(3)) - { - RegistryArtifact image = repository.GetArtifact(imageManifest.Digest); - Console.WriteLine($"Deleting image with digest {imageManifest.Digest}."); - Console.WriteLine($" Deleting the following tags from the image: "); - foreach (var tagName in imageManifest.Tags) - { - Console.WriteLine($" {imageManifest.RepositoryName}:{tagName}"); - await image.DeleteTagAsync(tagName); - } - await image.DeleteAsync(); - } -} -``` ----## Get started --[Source code][java_source] | [Package (Maven)][java_package] | [API reference][java_docs] | [Samples][java_samples] --### Currently supported environments --* [Java Development Kit (JDK)][jdk_link], version 8 or later. 
--### Include the package --[//]: # ({x-version-update-start;com.azure:azure-containers-containerregistry;current}) -```xml -<dependency> - <groupId>com.azure</groupId> - <artifactId>azure-containers-containerregistry</artifactId> - <version>1.0.0-beta.3</version> -</dependency> -``` -[//]: # ({x-version-update-end}) --## Authenticate the client --The [Azure Identity library][java_identity] provides Microsoft Entra ID support for authentication. --The following samples assume you have a registry endpoint string containing the `https://` prefix and the name of the login server, for example "https://myregistry.azurecr.io". --<!-- embedme ./src/samples/java/com/azure/containers/containerregistry/ReadmeSamples.java#L31-L35 --> -```Java -DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build(); -ContainerRegistryClient client = new ContainerRegistryClientBuilder() - .endpoint(endpoint) - .credential(credential) - .buildClient(); -``` --<!-- embedme ./src/samples/java/com/azure/containers/containerregistry/ReadmeSamples.java#L39-L43 --> -```Java -DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build(); -ContainerRegistryAsyncClient client = new ContainerRegistryClientBuilder() - .endpoint(endpoint) - .credential(credential) - .buildAsyncClient(); -``` --For more information on using Microsoft Entra ID with Azure Container Registry, see the [authentication overview](container-registry-authentication.md). --## Examples --Each sample assumes there is a registry endpoint string containing the `https://` prefix and the name of the login server, for example "https://myregistry.azurecr.io". --### List repository names --Iterate through the collection of repositories in the registry. --<!-- embedme ./src/samples/java/com/azure/containers/containerregistry/ReadmeSamples.java#L47-L53 --> -```Java -DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build(); -ContainerRegistryClient client = new ContainerRegistryClientBuilder() - .endpoint(endpoint) - .credential(credential) - .buildClient(); --client.listRepositoryNames().forEach(repository -> System.out.println(repository)); -``` --### Set artifact properties --<!-- embedme ./src/samples/java/com/azure/containers/containerregistry/ReadmeSamples.java#L119-L132 --> -```Java -TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build(); --ContainerRegistryClient client = new ContainerRegistryClientBuilder() - .endpoint(endpoint) - .credential(defaultCredential) - .buildClient(); --RegistryArtifact image = client.getArtifact(repositoryName, digest); --image.updateTagProperties( - tag, - new ArtifactTagProperties() - .setWriteEnabled(false) - .setDeleteEnabled(false)); -``` --### Delete images --<!-- embedme ./src/samples/java/com/azure/containers/containerregistry/ReadmeSamples.java#L85-L113 --> -```Java -TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build(); --ContainerRegistryClient client = new ContainerRegistryClientBuilder() - .endpoint(endpoint) - .credential(defaultCredential) - .buildClient(); --final int imagesCountToKeep = 3; -for (String repositoryName : client.listRepositoryNames()) { - final ContainerRepository repository = client.getRepository(repositoryName); -- // Obtain the images ordered from newest to oldest - PagedIterable<ArtifactManifestProperties> imageManifests = - repository.listManifestProperties( - ArtifactManifestOrderBy.LAST_UPDATED_ON_DESCENDING, - Context.NONE); -- imageManifests.stream().skip(imagesCountToKeep) - 
.forEach(imageManifest -> { - System.out.printf(String.format("Deleting image with digest %s.%n", imageManifest.getDigest())); - System.out.printf(" This image has the following tags: "); -- for (String tagName : imageManifest.getTags()) { - System.out.printf(" %s:%s", imageManifest.getRepositoryName(), tagName); - } -- repository.getArtifact(imageManifest.getDigest()).delete(); - }); -} -``` ----## Get started --[Source code][javascript_source] | [Package (npm)][javascript_package] | [API reference][javascript_docs] | [Samples][javascript_samples] --### Currently supported environments --- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)--See our [support policy](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md) for more details. --### Install the `@azure/container-registry` package --Install the Container Registry client library for JavaScript with `npm`: --```bash -npm install @azure/container-registry -``` --## Authenticate the client --The [Azure Identity library][javascript_identity] provides Microsoft Entra ID support for authentication. --```javascript -const { ContainerRegistryClient } = require("@azure/container-registry"); -const { DefaultAzureCredential } = require("@azure/identity"); --const endpoint = process.env.CONTAINER_REGISTRY_ENDPOINT; -// Create a ContainerRegistryClient that will authenticate through Active Directory -const client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential()); -``` --For more information on using Microsoft Entra ID with Azure Container Registry, see the [authentication overview](container-registry-authentication.md). --## Examples --Each sample assumes there is a `CONTAINER_REGISTRY_ENDPOINT` environment variable set to a string containing the `https://` prefix and the name of the login server, for example "https://myregistry.azurecr.io". --### List repositories asynchronously --Iterate through the collection of repositories in the registry. 
--```javascript -const { ContainerRegistryClient } = require("@azure/container-registry"); -const { DefaultAzureCredential } = require("@azure/identity"); --async function main() { - // endpoint should be in the form of "https://myregistryname.azurecr.io" - // where "myregistryname" is the actual name of your registry - const endpoint = process.env.CONTAINER_REGISTRY_ENDPOINT || "<endpoint>"; - const client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential()); -- console.log("Listing repositories"); - const iterator = client.listRepositoryNames(); - for await (const repository of iterator) { - console.log(` repository: ${repository}`); - } -} --main().catch((err) => { - console.error("The sample encountered an error:", err); -}); -``` --### Set artifact properties asynchronously --```javascript -const { ContainerRegistryClient } = require("@azure/container-registry"); -const { DefaultAzureCredential } = require("@azure/identity"); --async function main() { - // Get the service endpoint from the environment - const endpoint = process.env.CONTAINER_REGISTRY_ENDPOINT || "<endpoint>"; -- // Create a new ContainerRegistryClient and RegistryArtifact to access image operations - const client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential()); - const image = client.getArtifact("library/hello-world", "v1"); -- // Set permissions on the image's "latest" tag - await image.updateTagProperties("latest", { canWrite: false, canDelete: false }); -} --main().catch((err) => { - console.error("The sample encountered an error:", err); -}); -``` --### Delete images asynchronously --```javascript -const { ContainerRegistryClient } = require("@azure/container-registry"); -const { DefaultAzureCredential } = require("@azure/identity"); --async function main() { - // Get the service endpoint from the environment - const endpoint = process.env.CONTAINER_REGISTRY_ENDPOINT || "<endpoint>"; - // Create a new ContainerRegistryClient - const client = new ContainerRegistryClient(endpoint, new DefaultAzureCredential()); -- // Iterate through repositories - const repositoryNames = client.listRepositoryNames(); - for await (const repositoryName of repositoryNames) { - const repository = client.getRepository(repositoryName); - // Obtain the images ordered from newest to oldest by passing the `orderBy` option - const imageManifests = repository.listManifestProperties({ - orderBy: "LastUpdatedOnDescending" - }); - const imagesToKeep = 3; - let imageCount = 0; - // Delete images older than the first three. - for await (const manifest of imageManifests) { - imageCount++; - if (imageCount > imagesToKeep) { - const image = repository.getArtifact(manifest.digest); - console.log(`Deleting image with digest ${manifest.digest}`); - console.log(` Deleting the following tags from the image:`); - for (const tagName of manifest.tags) { - console.log(` ${manifest.repositoryName}:${tagName}`); - image.deleteTag(tagName); - } - await image.delete(); - } - } - } -} --main().catch((err) => { - console.error("The sample encountered an error:", err); -}); -``` ----## Get started --[Source code][python_source] | [Package (Pypi)][python_package] | [API reference][python_docs] | [Samples][python_samples] --### Install the package --Install the Azure Container Registry client library for Python with [pip][pip_link]: --```bash -pip install --pre azure-containerregistry -``` --## Authenticate the client --The [Azure Identity library][python_identity] provides Microsoft Entra ID support for authentication. 
The `DefaultAzureCredential` assumes the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` environment variables are set. For more information, see [Azure Identity environment variables](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#environment-variables). --```python -# Create a ContainerRegistryClient that will authenticate through Active Directory -from azure.containerregistry import ContainerRegistryClient -from azure.identity import DefaultAzureCredential --account_url = "https://mycontainerregistry.azurecr.io" -client = ContainerRegistryClient(account_url, DefaultAzureCredential()) -``` --## Examples --Each sample assumes there is a `CONTAINERREGISTRY_ENDPOINT` environment variable set to a string containing the `https://` prefix and the name of the login server, for example "https://myregistry.azurecr.io". --### List tags asynchronously --This sample assumes the registry has a repository `hello-world`. --```python -import asyncio -from dotenv import find_dotenv, load_dotenv -import os --from azure.containerregistry.aio import ContainerRegistryClient -from azure.identity.aio import DefaultAzureCredential ---class ListTagsAsync(object): - def __init__(self): - load_dotenv(find_dotenv()) -- async def list_tags(self): - # Create a new ContainerRegistryClient - audience = "https://management.azure.com" - account_url = os.environ["CONTAINERREGISTRY_ENDPOINT"] - credential = DefaultAzureCredential() - client = ContainerRegistryClient(account_url, credential, audience=audience) -- manifest = await client.get_manifest_properties("library/hello-world", "latest") - print(manifest.repository_name + ": ") - for tag in manifest.tags: - print(tag + "\n") -``` --### Set artifact properties asynchronously --This sample assumes the registry has a repository `hello-world` with image tagged `v1`. --```python -import asyncio -from dotenv import find_dotenv, load_dotenv -import os --from azure.containerregistry.aio import ContainerRegistryClient -from azure.identity.aio import DefaultAzureCredential ---class SetImagePropertiesAsync(object): - def __init__(self): - load_dotenv(find_dotenv()) -- async def set_image_properties(self): - # Create a new ContainerRegistryClient - account_url = os.environ["CONTAINERREGISTRY_ENDPOINT"] - audience = "https://management.azure.com" - credential = DefaultAzureCredential() - client = ContainerRegistryClient(account_url, credential, audience=audience) -- # [START update_manifest_properties] - # Set permissions on the v1 image's "latest" tag - await client.update_manifest_properties( - "library/hello-world", - "latest", - can_write=False, - can_delete=False - ) - # [END update_manifest_properties] - # After this update, if someone were to push an update to "myacr.azurecr.io\hello-world:v1", it would fail. - # It's worth noting that if this image also had another tag, such as "latest", and that tag did not have - # permissions set to prevent reads or deletes, the image could still be overwritten. For example, - # if someone were to push an update to "myacr.azurecr.io\hello-world:latest" - # (which references the same image), it would succeed. 
-``` --### Delete images asynchronously --```python -import asyncio -from dotenv import find_dotenv, load_dotenv -import os --from azure.containerregistry import ManifestOrder -from azure.containerregistry.aio import ContainerRegistryClient -from azure.identity.aio import DefaultAzureCredential ---class DeleteImagesAsync(object): - def __init__(self): - load_dotenv(find_dotenv()) -- async def delete_images(self): - # [START list_repository_names] - audience = "https://management.azure.com" - account_url = os.environ["CONTAINERREGISTRY_ENDPOINT"] - credential = DefaultAzureCredential() - client = ContainerRegistryClient(account_url, credential, audience=audience) -- async with client: - async for repository in client.list_repository_names(): - print(repository) - # [END list_repository_names] -- # [START list_manifest_properties] - # Keep the three most recent images, delete everything else - manifest_count = 0 - async for manifest in client.list_manifest_properties(repository, order_by=ManifestOrder.LAST_UPDATE_TIME_DESCENDING): - manifest_count += 1 - if manifest_count > 3: - await client.delete_manifest(repository, manifest.digest) - # [END list_manifest_properties] -``` ----## Get started --[Source code][go_source] | [Package (pkg.go.dev)][go_package] | [REST API reference][go_docs] --### Install the package --Install the Azure Container Registry client library for Go with `go get`: --```bash -go get github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry -``` --## Authenticate the client --When you're developing and debugging your application locally, you can use [azidentity.NewDefaultAzureCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#NewDefaultAzureCredential) to authenticate. We recommend using a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) in a production environment. --```go -import ( - "github.com/Azure/azure-sdk-for-go/sdk/azidentity" - "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry" - "log" -) --func main() { - cred, err := azidentity.NewDefaultAzureCredential(nil) - if err != nil { - log.Fatalf("failed to obtain a credential: %v", err) - } -- client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", cred, nil) - if err != nil { - log.Fatalf("failed to create client: %v", err) - } -} -``` -See the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) documentation for more information about other authentication approaches. --## Examples --Each sample assumes the container registry endpoint URL is "https://myregistry.azurecr.io". --### List tags --This sample assumes the registry has a repository `hello-world`. --```go -import ( - "context" - "fmt" - "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry" - "log" -) --func Example_listTagsWithAnonymousAccess() { - client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", nil, nil) - if err != nil { - log.Fatalf("failed to create client: %v", err) - } - ctx := context.Background() - pager := client.NewListTagsPager("library/hello-world", nil) - for pager.More() { - page, err := pager.NextPage(ctx) - if err != nil { - log.Fatalf("failed to advance page: %v", err) - } - for _, v := range page.Tags { - fmt.Printf("tag: %s\n", *v.Name) - } - } -} -``` --### Set artifact properties --This sample assumes the registry has a repository `hello-world` with image tagged `latest`. 
--```go -package azcontainerregistry_test --import ( - "context" - "fmt" - "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" - "github.com/Azure/azure-sdk-for-go/sdk/azidentity" - "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry" - "log" -) --func Example_setArtifactProperties() { - cred, err := azidentity.NewDefaultAzureCredential(nil) - if err != nil { - log.Fatalf("failed to obtain a credential: %v", err) - } - client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", cred, nil) - if err != nil { - log.Fatalf("failed to create client: %v", err) - } - ctx := context.Background() - res, err := client.UpdateTagProperties(ctx, "library/hello-world", "latest", &azcontainerregistry.ClientUpdateTagPropertiesOptions{ - Value: &azcontainerregistry.TagWriteableProperties{ - CanWrite: to.Ptr(false), - CanDelete: to.Ptr(false), - }}) - if err != nil { - log.Fatalf("failed to finish the request: %v", err) - } - fmt.Printf("repository library/hello-world - tag latest: 'CanWrite' property: %t, 'CanDelete' property: %t\n", *res.Tag.ChangeableAttributes.CanWrite, *res.Tag.ChangeableAttributes.CanDelete) -} -``` --### Delete images --```go -package azcontainerregistry_test --import ( - "context" - "fmt" - "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" - "github.com/Azure/azure-sdk-for-go/sdk/azidentity" - "github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry" - "log" -) --func Example_deleteImages() { - cred, err := azidentity.NewDefaultAzureCredential(nil) - if err != nil { - log.Fatalf("failed to obtain a credential: %v", err) - } - client, err := azcontainerregistry.NewClient("https://myregistry.azurecr.io", cred, nil) - if err != nil { - log.Fatalf("failed to create client: %v", err) - } - ctx := context.Background() - repositoryPager := client.NewListRepositoriesPager(nil) - for repositoryPager.More() { - repositoryPage, err := repositoryPager.NextPage(ctx) - if err != nil { - log.Fatalf("failed to advance repository page: %v", err) - } - for _, r := range repositoryPage.Repositories.Names { - manifestPager := client.NewListManifestsPager(*r, &azcontainerregistry.ClientListManifestsOptions{ - OrderBy: to.Ptr(azcontainerregistry.ArtifactManifestOrderByLastUpdatedOnDescending), - }) - for manifestPager.More() { - manifestPage, err := manifestPager.NextPage(ctx) - if err != nil { - log.Fatalf("failed to advance manifest page: %v", err) - } - imagesToKeep := 3 - for i, m := range manifestPage.Manifests.Attributes { - if i >= imagesToKeep { - for _, t := range m.Tags { - fmt.Printf("delete tag from image: %s", *t) - _, err := client.DeleteTag(ctx, *r, *t, nil) - if err != nil { - log.Fatalf("failed to delete tag: %v", err) - } - } - _, err := client.DeleteManifest(ctx, *r, *m.Digest, nil) - if err != nil { - log.Fatalf("failed to delete manifest: %v", err) - } - fmt.Printf("delete image with digest: %s", *m.Digest) - } - } - } - } - } -} -``` ---## Clean up resources --If you want to clean up and remove an Azure container registry, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. --* [Portal](container-registry-get-started-portal.md#clean-up-resources) -* [Azure CLI](container-registry-get-started-azure-cli.md#clean-up-resources) --## Next steps --In this quickstart, you learned about using the Azure Container Registry client library to perform operations on images and artifacts in your container registry. 
--* For more information, see the API reference documentation: -- * [.NET][dotnet_docs] - * [Java][java_docs] - * [JavaScript][javascript_docs] - * [Python][python_docs] --* Learn about the Azure Container Registry [REST API][rest_docs]. --[dotnet_source]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/containerregistry/Azure.Containers.ContainerRegistry/src -[dotnet_package]: https://www.nuget.org/packages/Azure.Containers.ContainerRegistry/ -[dotnet_samples]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/containerregistry/Azure.Containers.ContainerRegistry/samples -[dotnet_docs]: /dotnet/api/azure.containers.containerregistry -[rest_docs]: /rest/api/containerregistry/ -[product_docs]: container-registry-intro.md -[nuget]: https://www.nuget.org/ -[dotnet_identity]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity/README.md -[javascript_identity]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md -[javascript_source]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/containerregistry/container-registry/src -[javascript_package]: https://www.npmjs.com/package/@azure/container-registry -[javascript_docs]: /javascript/api/overview/azure/container-registry-readme -[jdk_link]: /java/azure/jdk/ -[javascript_samples]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/containerregistry/container-registry/samples -[java_source]: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/containerregistry/azure-containers-containerregistry/src -[java_package]: https://search.maven.org/artifact/com.azure/azure-containers-containerregistry -[java_docs]: /java/api/overview/azure/containers-containerregistry-readme -[java_identity]: https://github.com/Azure/azure-sdk-for-jav -[java_samples]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/containerregistry/azure-containers-containerregistry/src/samples/ -[python_package]: https://pypi.org/project/azure-containerregistry/ -[python_docs]: /python/api/overview/azure/containerregistry-readme -[python_samples]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/containerregistry/azure-containerregistry/samples -[pip_link]: https://pypi.org -[python_identity]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity -[python_source]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/containerregistry/azure-containerregistry -[go_source]: https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/containers/azcontainerregistry -[go_package]: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/containers/azcontainerregistry -[go_docs]: /rest/api/containerregistry/ |
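The listing, tagging, and deletion samples above assume the registry already contains at least one image; several reference a `hello-world` or `library/hello-world` repository. A quick, hedged way to seed a test registry without Docker is `az acr import`; the registry and repository names here are placeholders chosen to match the samples:

```azurecli
# Import a public hello-world image so the sample repository exists in the registry.
az acr import \
  --name myregistry \
  --source mcr.microsoft.com/mcr/hello-world:latest \
  --image library/hello-world:latest
```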
container-registry | Quickstart Connected Registry Arc Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-arc-cli.md | - Title: "Quickstart: Deploying the connected registry Arc extension" -description: "Learn how to deploy the Connected Registry Arc Extension CLI UX with secure-by-default settings for efficient and secure container workload operations." ---- Previously updated : 05/09/2024-ai-usage: ai-assisted --#customer intent: As a user, I want to learn how to deploy the connected registry Arc extension using the CLI UX with secure-by-default settings, such as using HTTPS, Read Only, Trust Distribution, and Cert Manager service, so that I can ensure the secure and efficient operation of my container workloads." ---# Quickstart: Deploy the connected registry Arc extension (preview) --In this quickstart, you learn how to deploy the connected registry Arc extension using the CLI UX with secure-by-default settings to ensure robust security and operational integrity. - -The connected registry is a pivotal tool for edge customers, enabling efficient management and access to containerized workloads, whether on-premises or at remote sites. By integrating with Azure Arc, the service ensures a seamless and unified lifecycle management experience for Kubernetes-based containerized workloads. Deploying the connected registry Arc extension on Arc-enabled Kubernetes clusters simplifies the management and access of these workloads. --## Prerequisites --* Set up the [Azure CLI][Install Azure CLI] to connect to Azure and Kubernetes. --* Create or use an existing Azure Container Registry (ACR). See the [quickstart][create-acr]. --* Set up firewall access and communication between the ACR and the connected registry by enabling the [dedicated data endpoints][dedicated data endpoints]. --* Create or use an existing Azure Kubernetes Service (AKS) cluster. See the [tutorial][tutorial-aks-cluster]. --* Set up the connection between the Kubernetes cluster and Azure Arc by following the [quickstart][quickstart-connect-cluster]. --* Use the [k8s-extension][k8s-extension] command to manage Kubernetes extensions. -- ```azurecli - az extension add --name k8s-extension - ``` -* Register the required [Azure resource providers][azure-resource-provider-requirements] in your subscription to use Azure Arc-enabled Kubernetes: -- ```azurecli - az provider register --namespace Microsoft.Kubernetes - az provider register --namespace Microsoft.KubernetesConfiguration - az provider register --namespace Microsoft.ExtendedLocation - ``` - An Azure resource provider is a set of REST operations that enable functionality for a specific Azure service. --* Repository in the ACR registry to synchronize with the connected registry. -- ```azurecli - az acr import --name myacrregistry --source mcr.microsoft.com/mcr/hello-world:latest --image hello-world:latest - ``` -- The `hello-world` repository is created in the ACR registry `myacrregistry` to synchronize with the connected registry. ---## Deploy the connected registry Arc extension with secure-by-default settings --Once the prerequisites are in place, follow this streamlined approach to securely deploy a connected registry extension on an Arc-enabled Kubernetes cluster. These settings configure HTTPS, ReadOnly mode, trust distribution, and the cert-manager service. Follow these steps for a successful deployment: --1.
[Create the connected registry.](#create-the-connected-registry-and-synchronize-with-acr) -2. [Deploy the connected registry Arc extension.](#deploy-the-connected-registry-arc-extension-on-the-arc-enabled-kubernetes-cluster) -3. [Verify the connected registry extension deployment.](#verify-the-connected-registry-extension-deployment) -4. [Deploy a pod that uses image from connected registry.](#deploy-a-pod-that-uses-an-image-from-connected-registry) ---### Create the connected registry and synchronize with ACR --Creating the connected registry to synchronize with ACR is the foundational step for deploying the connected registry Arc extension. --1. Create the connected registry, which synchronizes with the ACR registry: -- To create a connected registry `myconnectedregistry` that synchronizes with the ACR registry `myacrregistry` in the resource group `myresourcegroup` and the repository `hello-world`, you can run the [az acr connected-registry create][az-acr-connected-registry-create] command: - - ```azurecli - az acr connected-registry create --registry myacrregistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --repository "hello-world" - ``` --- The [az acr connected-registry create][az-acr-connected-registry-create] command creates the connected registry with the specified repository. -- The [az acr connected-registry create][az-acr-connected-registry-create] command overwrites actions if the sync scope map named `myscopemap` exists and overwrites properties if the sync token named `mysynctoken` exists. -- The [az acr connected-registry create][az-acr-connected-registry-create] command validates a dedicated data endpoint during the creation of the connected registry and provides a command to enable the dedicated data endpoint on the ACR registry.--### Deploy the connected registry Arc extension on the Arc-enabled Kubernetes cluster --By deploying the connected Registry Arc extension, you can synchronize container images and other Open Container Initiative (OCI) artifacts with your ACR registry. The deployment helps speed-up access to registry artifacts and enables the building of advanced scenarios. The extension deployment ensures secure trust distribution between the connected registry and all client nodes within the cluster, and installs the cert-manager service for Transport Layer Security (TLS) encryption. --1. 
Generate the Connection String and Protected Settings JSON File -- For secure deployment of the connected registry extension, generate the connection string, including a new password, transport protocol, and create the `protected-settings-extension.json` file required for the extension deployment with [az acr connected-registry get-settings][az-acr-connected-registry-get-settings] command: --```bash - cat << EOF > protected-settings-extension.json - { - "connectionString": "$(az acr connected-registry get-settings \ - --name myconnectedregistry \ - --registry myacrregistry \ - --parent-protocol https \ - --generate-password 1 \ - --query ACR_REGISTRY_CONNECTION_STRING --output tsv --yes)" - } - EOF -``` --```bash - cat << EOF > protected-settings-extension.json - { - "connectionString": "$(az acr connected-registry get-settings \ - --name myconnectedregistry \ - --registry myacrregistry \ - --parent-protocol https \ - --generate-password 1 \ - --query ACR_REGISTRY_CONNECTION_STRING --output tsv --yes)" - } - EOF -``` --```azurepowershell - echo "{\"connectionString\":\"$(az acr connected-registry get-settings \ - --name myconnectedregistry \ - --registry myacrregistry \ - --parent-protocol https \ - --generate-password 1 \ - --query ACR_REGISTRY_CONNECTION_STRING \ - --output tsv \ - --yes | tr -d '\r')\" }" > settings.json -``` -->[!NOTE] -> The cat and echo commands create the `protected-settings-extension.json` file with the connection string details, injecting the contents of the connection string into the `protected-settings-extension.json` file, a necessary step for the extension deployment. The [az acr connected-registry get-settings][az-acr-connected-registry-get-settings] command generates the connection string, including the creation of a new password and the specification of the transport protocol. --2. Deploy the connected registry extension -- Deploy the connected registry extension with the specified configuration details using the [az k8s-extension create][az-k8s-extension-create] command: -- ```azurecli - az k8s-extension create --cluster-name myarck8scluster \ - --cluster-type connectedClusters \ - --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --config service.clusterIP=192.100.100.1 \ - --config-protected-file protected-settings-extension.json - ``` --- The [az k8s-extension create][az-k8s-extension-create] command deploys the connected registry extension on the Kubernetescluster with the provided configuration parameters and protected settings file. -- It ensures secure trust distribution between the connected registry and all client nodes within the cluster, and installs the cert-manager service for Transport Layer Security (TLS) encryption.-- The clusterIP must be from the AKS cluster subnet IP range. The `service.clusterIP` parameter specifies the IP address of the connected registry service within the cluster. It is essential to set the `service.clusterIP` within the range of valid service IPs for the Kubernetescluster. Ensure that the IP address specified for `service.clusterIP` falls within the designated service IP range defined during the cluster's initial configuration, typically found in the cluster's networking settings. 
If the `service.clusterIP` is not within this range, it must be updated to an IP address that is both within the valid range and not currently in use by another service.---### Verify the connected registry extension deployment --To verify the deployment of the connected registry extension on the Arc-enabled Kubernetes cluster, follow these steps: --1. Verify the deployment status -- Run the [az k8s-extension show][az-k8s-extension-show] command to check the deployment status of the connected registry extension: -- ```azurecli - az k8s-extension show --name myconnectedregistry \ - --cluster-name myarck8scluster \ - --resource-group myresourcegroup \ - --cluster-type connectedClusters - ``` -- **Example Output** -- ```output - { - "aksAssignedIdentity": null, - "autoUpgradeMinorVersion": true, - "configurationProtectedSettings": { - "connectionString": "" - }, - "configurationSettings": { - "pvc.storageClassName": "standard", - "pvc.storageRequest": "250Gi", - "service.clusterIP": "[your service cluster ip]" - }, - "currentVersion": "0.11.0", - "customLocationSettings": null, - "errorInfo": null, - "extensionType": "microsoft.containerregistry.connectedregistry", - "id": "/subscriptions/[your subscription id]/resourceGroups/[your resource group name]/providers/Microsoft.Kubernetes/connectedClusters/[your arc cluster name]/providers/Microsoft.KubernetesConfiguration/extensions/[your extension name]", - "identity": { - "principalId": "[identity principal id]", - "tenantId": null, - "type": "SystemAssigned" - }, - "isSystemExtension": false, - "name": "[your extension name]", - "packageUri": null, - "plan": null, - "provisioningState": "Succeeded", - "releaseTrain": "preview", - "resourceGroup": "[your resource group]", - "scope": { - "cluster": { - "releaseNamespace": "connected-registry" - }, - "namespace": null - }, - "statuses": [], - "systemData": { - "createdAt": "2024-07-12T18:17:51.364427+00:00", - "createdBy": null, - "createdByType": null, - "lastModifiedAt": "2024-07-12T18:22:42.156799+00:00", - "lastModifiedBy": null, - "lastModifiedByType": null - }, - "type": "Microsoft.KubernetesConfiguration/extensions", - "version": null - } - ``` --2. Verify the connected registry status and state -- For each connected registry, you can view the status and state of the connected registry using the [az acr connected-registry list][az-acr-connected-registry-list] command: - - ```azurecli - az acr connected-registry list --registry myacrregistry \ - --output table - ``` --**Example Output** --```console - | NAME | MODE | CONNECTION STATE | PARENT | LOGIN SERVER | LAST SYNC(UTC) | - |--|--|--|--|--|--| - | myconnectedregistry | ReadWrite | online | myacrregistry | myacrregistry.azurecr.io | 2024-05-09 12:00:00 | - | myreadonlyacr | ReadOnly | offline | myacrregistry | myacrregistry.azurecr.io | 2024-05-09 12:00:00 | -``` --3. 
Verify the specific connected registry details -- For details on a specific connected registry, use the [az acr connected-registry show][az-acr-connected-registry-show] command: -- ```azurecli - az acr connected-registry show --registry myacrregistry \ - --name myreadonlyacr \ - --output table - ``` --**Example Output** --```console - | NAME | MODE | CONNECTION STATE | PARENT | LOGIN SERVER | LAST SYNC(UTC) | SYNC SCHEDULE | SYNC WINDOW | - |--|--|--|--|--|--|--|--| - | myconnectedregistry | ReadWrite | online | myacrregistry | myacrregistry.azurecr.io | 2024-05-09 12:00:00 | 0 0 * * * | 00:00:00-23:59:59 | -``` --- The [az k8s-extension show][az-k8s-extension-show] command verifies the state of the extension deployment.-- The [az acr connected-registry show][az-acr-connected-registry-show] command also provides details on the connected registry's connection status, last sync, sync window, sync schedule, and more.--### Deploy a pod that uses an image from connected registry --To deploy a pod that uses an image from the connected registry within the cluster, the operation must be performed from within the cluster node itself. Follow these steps: --1. Create a secret in the cluster to authenticate with the connected registry: --Run the [kubectl create secret docker-registry][kubectl-create-secret-docker-registry] command to create a secret in the cluster to authenticate with the connected registry: --```bash -kubectl create secret docker-registry regcred --docker-server=192.100.100.1 --docker-username=mytoken --docker-password=mypassword - ``` --2. Deploy the pod that uses the desired image from the connected registry using the `service.clusterIP` address `192.100.100.1` of the connected registry, and the image name `hello-world` with tag `latest`: -- ```bash - kubectl apply -f - <<EOF - apiVersion: apps/v1 - kind: Deployment - metadata: - name: hello-world-deployment - labels: - app: hello-world - spec: - selector: - matchLabels: - app: hello-world - replicas: 1 - template: - metadata: - labels: - app: hello-world - spec: - imagePullSecrets: - - name: regcred - containers: - - name: hello-world - image: 192.100.100.1/hello-world:latest - EOF - ``` --## Clean up resources --By deleting the deployed connected registry extension, you remove the corresponding connected registry pods and configuration settings. --1. Delete the connected registry extension -- Run the [az k8s-extension delete][az-k8s-extension-delete] command to delete the connected registry extension: -- ```azurecli - az k8s-extension delete --name myconnectedregistry \ - --cluster-name myarck8scluster \ - --resource-group myresourcegroup \ - --cluster-type connectedClusters - ``` - -By deleting the deployed connected registry, you remove the connected registry cloud instance and its configuration details. --2. 
Delete the connected registry -- Run the [az acr connected-registry delete][az-acr-connected-registry-delete] command to delete the Connected registry: -- ```azurecli - az acr connected-registry delete --registry myacrregistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup - ``` --## Next steps --- [Known issues: Connected registry Arc Extension](troubleshoot-connected-registry-arc.md)---<!-- LINKS - internal --> -[create-acr]: container-registry-get-started-azure-cli.md -[dedicated data endpoints]: container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints -[Install Azure CLI]: /cli/azure/install-azure-cli -[k8s-extension]: /cli/azure/k8s-extension -[azure-resource-provider-requirements]: /azure/azure-arc/kubernetes/system-requirements#azure-resource-provider-requirements -[quickstart-connect-cluster]: /azure/azure-arc/kubernetes/quickstart-connect-cluster -[tutorial-aks-cluster]: /azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli -[az-acr-connected-registry-create]: /cli/azure/acr/connected-registry#az-acr-connected-registry-create -[az-acr-connected-registry-get-settings]: /cli/azure/acr/connected-registry#az-acr-connected-registry-get-settings -[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create -[az-k8s-extension-show]: /cli/azure/k8s-extension#az-k8s-extension-show -[az-acr-connected-registry-list]: /cli/azure/acr/connected-registry#az-acr-connected-registry-list -[az-acr-connected-registry-show]: /cli/azure/acr/connected-registry#az-acr-connected-registry-show -[az-k8s-extension-delete]: /cli/azure/k8s-extension#az-k8s-extension-delete -[az-acr-connected-registry-delete]: /cli/azure/acr/connected-registry#az-acr-connected-registry-delete -[kubectl-create-secret-docker-registry]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/ |
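As an optional check on the walkthrough above, you can confirm from within the cluster that the deployment actually pulled its image from the connected registry. This is a minimal sketch, assuming the `hello-world-deployment` name, the `app: hello-world` label, and the `192.100.100.1` cluster IP used in the example; adjust the names to your environment.

```bash
# Confirm the deployment rolled out and the pod is running
kubectl get deployment hello-world-deployment
kubectl get pods -l app=hello-world

# Inspect the pod events and image reference; the image should point at the
# connected registry service IP (192.100.100.1/hello-world:latest in the example)
kubectl describe pods -l app=hello-world | grep -i -E 'image|pulled'
```

If the pod reports `ImagePullBackOff`, re-check the `regcred` secret and that the sync token credentials are still valid.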
container-registry | Quickstart Connected Registry Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-cli.md | - Title: Quickstart - Create connected registry using the CLI -description: Use Azure CLI commands to create a connected Azure container registry resource that can synchronize images and other artifacts with the cloud registry. - Previously updated : 10/31/2023-----# customer intent: To create a connected registry resource in Azure using the Azure CLI. ---# Quickstart: Create a connected registry using the Azure CLI (To be deprecated) --In this quickstart, you use the Azure CLI to create a [connected registry](intro-connected-registry.md) resource in Azure. The connected registry feature of Azure Container Registry allows you to deploy a registry remotely or on your premises and synchronize images and other artifacts with the cloud registry. --Here you create two connected registry resources for a cloud registry: one connected registry allows read and write (artifact pull and push) functionality and one allows read-only functionality. --After creating a connected registry, you can follow other guides to deploy and use it on your on-premises or remote infrastructure. ---* Azure Container registry - If you don't already have a container registry, [create one](container-registry-get-started-azure-cli.md) (Premium tier required) in a [region](intro-connected-registry.md#available-regions) that supports connected registries. --## Enable the dedicated data endpoint for the cloud registry --Enable the dedicated data endpoint for the Azure container registry in the cloud by using the [az acr update][az-acr-update] command. This step is needed for a connected registry to communicate with the cloud registry. --```azurecli -# Set the REGISTRY_NAME environment variable to identify the existing cloud registry -REGISTRY_NAME=<container-registry-name> --az acr update --name $REGISTRY_NAME \ - --data-endpoint-enabled -``` ---## Create a connected registry resource for read and write functionality --Create a connected registry using the [az acr connected-registry create][az-acr-connected-registry-create] command. The connected registry name must start with a letter and contain only alphanumeric characters. It must be 5 to 40 characters long and unique in the hierarchy for this Azure container registry. --```azurecli -# Set the CONNECTED_REGISTRY_RW environment variable to provide a name for the connected registry with read/write functionality -CONNECTED_REGISTRY_RW=<connnected-registry-name> --az acr connected-registry create --registry $REGISTRY_NAME \ - --name $CONNECTED_REGISTRY_RW \ - --repository "hello-world" "acr/connected-registry" "azureiotedge-agent" "azureiotedge-hub" "azureiotedge-api-proxy" -``` --This command creates a connected registry resource whose name is the value of *$CONNECTED_REGISTRY_RW* and links it to the cloud registry whose name is the value of *$REGISTRY_NAME*. In later quickstart guides, you learn about options to deploy the connected registry. -* The specified repositories will be synchronized between the cloud registry and the connected registry once it is deployed. -* Because no `--mode` option is specified for the connected registry, it is created in the default [ReadWrite mode](intro-connected-registry.md#modes). 
-* Because there is no synchronization schedule defined for this connected registry, the repositories will be synchronized between the cloud registry and the connected registry without interruptions. -- > [!IMPORTANT] - > To support nested scenarios where lower layers have no Internet access, you must always allow synchronization of the `acr/connected-registry` repository. This repository contains the image for the connected registry runtime. --## Create a connected registry resource for read-only functionality --You can also use the [az acr connected-registry create][az-acr-connected-registry-create] command to create a connected registry with read-only functionality. --```azurecli -# Set the CONNECTED_REGISTRY_READ environment variable to provide a name for the connected registry with read-only functionality -CONNECTED_REGISTRY_RO=<connnected-registry-name> -az acr connected-registry create --registry $REGISTRY_NAME \ - --parent $CONNECTED_REGISTRY_RW \ - --name $CONNECTED_REGISTRY_RO \ - --repository "hello-world" "acr/connected-registry" "azureiotedge-agent" "azureiotedge-hub" "azureiotedge-api-proxy" \ - --mode ReadOnly -``` --This command creates a connected registry resource whose name is the value of *$CONNECTED_REGISTRY_RO* and links it to the cloud registry named with the value of *$REGISTRY_NAME*. -* The specified repositories will be synchronized between the parent registry named with the value of *$CONNECTED_REGISTRY_RW* and the connected registry once deployed. -* This resource is created in the [ReadOnly mode](intro-connected-registry.md#modes), which enables read-only (artifact pull) functionality once deployed. -* Because there is no synchronization schedule defined for this connected registry, the repositories will be synchronized between the parent registry and the connected registry without interruptions. --## Verify that the resources are created --You can use the connected registry [az acr connected-registry list][az-acr-connected-registry-list] command to verify that the resources are created. --```azurecli -az acr connected-registry list \ - --registry $REGISTRY_NAME \ - --output table -``` --You should see a response as follows. Because the connected registries are not yet deployed, the connection state of "Offline" indicates that they are currently disconnected from the cloud. --``` -NAME MODE CONNECTION STATE PARENT LOGIN SERVER LAST SYNC (UTC) -- -- - -- ---myconnectedregrw ReadWrite Offline -myconnectedregro ReadOnly Offline myconnectedregrw -``` --## Next steps --In this quickstart, you used the Azure CLI to create two connected registry resources in Azure. Those new connected registry resources are tied to your cloud registry and allow synchronization of artifacts with the cloud registry. --Continue to the connected registry deployment guides to learn how to deploy and use a connected registry on your IoT Edge infrastructure. 
--> [!div class="nextstepaction"] -> [Quickstart: Deploy connected registry on IoT Edge][quickstart-deploy-connected-registry-iot-edge-cli] --<!-- LINKS - internal --> -[az-acr-connected-registry-create]: /cli/azure/acr/connected-registry#az_acr_connected_registry_create -[az-acr-connected-registry-list]: /cli/azure/acr/connected-registry#az_acr_connected_registry_list -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-update]: /cli/azure/acr#az_acr_update -[az-acr-import]: /cli/azure/acr#az_acr_import -[az-group-create]: /cli/azure/group#az_group_create -[container-registry-intro]: container-registry-intro.md -[container-registry-skus]: container-registry-skus.md -[quickstart-deploy-connected-registry-iot-edge-cli]: quickstart-deploy-connected-registry-iot-edge-cli.md |
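Beyond listing the connected registries, it can be useful to inspect a single resource, for example to confirm its mode and parent before deployment. A minimal sketch, assuming the `$REGISTRY_NAME` and `$CONNECTED_REGISTRY_RW` environment variables from the steps above are still set:

```azurecli
# Show the details of one connected registry resource
az acr connected-registry show \
  --registry $REGISTRY_NAME \
  --name $CONNECTED_REGISTRY_RW \
  --output table
```

The same command with `--output json` returns the full resource, including the repository list and sync settings.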
container-registry | Quickstart Connected Registry Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-portal.md | - Title: Quickstart - Create connected registry using the portal -description: Use Azure portal to create a connected Azure container registry resource that can synchronize images and other artifacts with the cloud registry. - Previously updated : 10/31/2023-------# Quickstart: Create a connected registry using the Azure portal (To be deprecated) --In this quickstart, you use the Azure portal to create a [connected registry](intro-connected-registry.md) resource in Azure. The connected registry feature of Azure Container Registry allows you to deploy a registry remotely or on your premises and synchronize images and other artifacts with the cloud registry. --Here you create two connected registry resources for a cloud registry: one connected registry allows read and write (artifact pull and push) functionality and one allows read-only functionality. --After creating a connected registry, you can follow other guides to deploy and use it on your on-premises or remote infrastructure. --## Prerequisites --* Azure Container registry - If you don't already have a container registry, [create one](container-registry-get-started-portal.md) (Premium tier required) in a [region](intro-connected-registry.md#available-regions) that supports connected registries. --To import images to the container registry, use the Azure CLI: --## Enable the dedicated data endpoint for the cloud registry --Enable the [dedicated data endpoint](container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints) for the Azure container registry in the cloud. This step is needed for a connected registry to communicate with the cloud registry. --1. In the [Azure portal](https://portal.azure.com), navigate to your container registry. -1. Select **Networking > Public access**. -Select the **Enable dedicated data endpoint** checkbox. -1. Select **Save**. ---## Create a connected registry resource for read and write functionality --The following steps create a connected registry in [ReadWrite mode](intro-connected-registry.md#modes) that is linked to the cloud registry. --1. In the [Azure portal](https://portal.azure.com), navigate to your container registry. -1. Select **Connected registries (Preview) > + Create**. -1. Enter or select the values in the following table, and select **Save**. ---|Item |Description | -||| -|Parent | Select **No parent** for a connected registry linked to the cloud registry. | -|Mode | Select **ReadWrite**. | -|Name | The connected registry name must start with a letter and contain only alphanumeric characters. It must be 5 to 40 characters long and unique in the hierarchy for this Azure container registry. | -|Logging properties | Accept the default settings. | -|Sync properties | Accept the default settings. Because there is no synchronization schedule defined by default, the repositories will be synchronized between the cloud registry and the connected registry without interruptions. | -|Repositories | Select or enter the names of the repositories you imported in the previous step. The specified repositories will be synchronized between the cloud registry and the connected registry once it is deployed. | ----> [!IMPORTANT] -> To support nested scenarios where lower layers have no Internet access, you must always allow synchronization of the `acr/connected-registry` repository. 
This repository contains the image for the connected registry runtime. --## Create a connected registry resource for read-only functionality --The following steps create a connected registry in [ReadOnly mode](intro-connected-registry.md#modes) whose parent is the connected registry you created in the previous section. This connected registry enables read-only (artifact pull) functionality once deployed. --1. In the [Azure portal](https://portal.azure.com), navigate to your container registry. -1. Select **Connected registries (Preview) > + Create**. -1. Enter or select the values in the following table, and select **Save**. ---|Item |Description | -||| -|Parent | Select the connected registry you created previously. | -|Mode | Select **ReadOnly**. | -|Name | The connected registry name must start with a letter and contain only alphanumeric characters. It must be 5 to 40 characters long and unique in the hierarchy for this Azure container registry. | -|Logging properties | Accept the default settings. | -|Sync properties | Accept the default settings. Because there is no synchronization schedule defined by default, the repositories will be synchronized between the cloud registry and the connected registry without interruptions. | -|Repositories | Select or enter the names of the repositories you imported in the previous step. The specified repositories will be synchronized between the parent registry and the connected registry once it is deployed. | ---## View connected registry properties --Select a connected registry in the portal to view its properties, such as its connection status (Offline, Online, or Unhealthy) and whether it has been activated (deployed on-premises). In the following example, the connected registry is not deployed. Its connection state of "Offline" indicates that it is currently disconnected from the cloud. ---From this view, you can also generate a connection string and optionally generate passwords for the [sync token](overview-connected-registry-access.md#sync-token). A connection string contains configuration settings used for deploying a connected registry and synchronizing content with a parent registry. --## Next steps --In this quickstart, you used the Azure portal to create two connected registry resources in Azure. Those new connected registry resources are tied to your cloud registry and allow synchronization of artifacts with the cloud registry. --Continue to the connected registry deployment guides to learn how to deploy and use a connected registry on your IoT Edge infrastructure. --> [!div class="nextstepaction"] -> [Quickstart: Deploy connected registry on IoT Edge][quickstart-deploy-connected-registry-iot-edge-cli] --<!-- LINKS - internal --> -[az-acr-connected-registry-create]: /cli/azure/acr/connected-registry#az_acr_connected_registry_create -[az-acr-connected-registry-list]: /cli/azure/acr/connected-registry#az_acr_connected_registry_list -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-update]: /cli/azure/acr#az_acr_update -[az-acr-import]: /cli/azure/acr#az_acr_import -[az-group-create]: /cli/azure/group#az_group_create -[container-registry-intro]: container-registry-intro.md -[container-registry-skus]: container-registry-skus.md -[quickstart-deploy-connected-registry-iot-edge-cli]: quickstart-deploy-connected-registry-iot-edge-cli.md |
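If you prefer to script connection string generation instead of using the portal view described above, the Azure CLI offers an equivalent. A hedged sketch, assuming a cloud registry named `myacrregistry` and a connected registry named `myconnectedregistry`; note that generating a password rotates the sync token credential:

```azurecli
# Generate the connection string (and sync token password 1) for a connected registry
az acr connected-registry get-settings \
  --registry myacrregistry \
  --name myconnectedregistry \
  --parent-protocol https \
  --generate-password 1
```

Save the returned password securely; it can't be retrieved again later.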
container-registry | Quickstart Deploy Connected Registry Iot Edge Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-deploy-connected-registry-iot-edge-cli.md | - Title: Quickstart - Deploy a connected registry to an IoT Edge device -description: Use Azure CLI commands and Azure portal to deploy a connected Azure container registry to an Azure IoT Edge device. - Previously updated : 10/31/2023-----#customer intent: To deploy a connected registry resource to an Azure IoT Edge device using the Azure CLI. ---# Quickstart: Deploy a connected registry to an IoT Edge device (To be deprecated) --In this quickstart, you use the Azure CLI to deploy a [connected registry](intro-connected-registry.md) as a module on an Azure IoT Edge device. The IoT Edge device can access the parent Azure container registry in the cloud. --For an overview of using a connected registry with IoT Edge, see [Using connected registry with Azure IoT Edge](overview-connected-registry-and-iot-edge.md). This scenario corresponds to a device at the [top layer](overview-connected-registry-and-iot-edge.md#top-layer) of an IoT Edge hierarchy. ---* Azure IoT Hub and IoT Edge device. For deployment steps, see [Quickstart: Deploy your first IoT Edge module to a virtual Linux device](../iot-edge/quickstart-linux.md). - > [!IMPORTANT] - > For later access to the modules deployed on the IoT Edge device, make sure that you open the ports 8000, 5671, and 8883 on the device. For configuration steps, see [How to open ports to a virtual machine with the Azure portal](/azure/virtual-machines/windows/nsg-quickstart-portal). --* Connected registry resource in Azure. For deployment steps, see quickstarts using the [Azure CLI][quickstart-connected-registry-cli] or [Azure portal][quickstart-connected-registry-portal]. -- * A connected registry in either `ReadWrite` or `ReadOnly` mode can be used in this scenario. - * In the commands in this article, the connected registry name is stored in the environment variable *CONNECTED_REGISTRY_RW*. ---## Retrieve connected registry configuration --Before deploying the connected registry to the IoT Edge device, you need to retrieve configuration settings from the connected registry resource in Azure. --Use the [az acr connected-registry get-settings][az-acr-connected-registry-get-settings] command to get the settings information required to install a connected registry. The following example specifies HTTPS as the parent protocol. This protocol is required when the parent registry is a cloud registry. --```azurecli -az acr connected-registry get-settings \ - --registry $REGISTRY_NAME \ - --name $CONNECTED_REGISTRY_RW \ - --parent-protocol https -``` --By default, the settings information doesn't include the [sync token](overview-connected-registry-access.md#sync-token) password, which is also needed to deploy the connected registry. Optionally, generate one of the passwords by passing the `--generate-password 1` or `--generate-password 2` parameter. Save the generated password to a safe location. It cannot be retrieved again. --> [!WARNING] -> Regenerating a password rotates the sync token credentials. If you configured a device using the previous password, you need to update the configuration. ---## Configure a deployment manifest for IoT Edge --A deployment manifest is a JSON document that describes which modules to deploy to the IoT Edge device. 
For more information, see [Understand how IoT Edge modules can be used, configured, and reused](../iot-edge/module-composition.md). --To deploy the connected registry and API proxy modules using the Azure CLI, save the following deployment manifest locally as a `manifest.json` file. You will use the file path in the next section when you run the command to apply the configuration to your device. ---## Deploy the connected registry and API proxy modules on IoT Edge --Use the following command to deploy the connected registry and API proxy modules on the IoT Edge device, using the deployment manifest created in the previous section. Provide the ID of the IoT Edge top layer device and the name of the IoT Hub where indicated. --```azurecli -# Set the IOT_EDGE_TOP_LAYER_DEVICE_ID and IOT_HUB_NAME environment variables for use in the following Azure CLI command -IOT_EDGE_TOP_LAYER_DEVICE_ID=<device-id> -IOT_HUB_NAME=<hub-name> --az iot edge set-modules \ - --device-id $IOT_EDGE_TOP_LAYER_DEVICE_ID \ - --hub-name $IOT_HUB_NAME \ - --content manifest.json -``` --For details, see [Deploy Azure IoT Edge modules with Azure CLI](../iot-edge/how-to-deploy-modules-cli.md). --To check the status of the connected registry, use the following [az acr connected-registry show][az-acr-connected-registry-show] command. The name of the connected registry is the value of *$CONNECTED_REGISTRY_RW*. --```azurecli -az acr connected-registry show \ - --registry $REGISTRY_NAME \ - --name $CONNECTED_REGISTRY_RW \ - --output table -``` --After successful deployment, the connected registry shows a status of `Online`. --## Next steps --In this quickstart, you learned how to deploy a connected registry to an IoT Edge device. Continue to the next guides to learn how to pull images from the newly deployed connected registry or to deploy the connected registry on nested IoT Edge devices. ---> [!div class="nextstepaction"] -> [Pull images from a connected registry][pull-images-from-connected-registry] --> [!div class="nextstepaction"] -> [Tutorial: Deploy connected registry on nested IoT Edge devices][tutorial-connected-registry-nested] --<!-- LINKS - internal --> -[az-acr-connected-registry-get-settings]: /cli/azure/acr/connected-registry/install#az_acr_connected_registry_get_settings -[az-acr-connected-registry-show]: /cli/azure/acr/connected-registry#az_acr_connected_registry_show -[az-acr-import]:/cli/azure/acr#az_acr_import -[az-acr-token-credential-generate]: /cli/azure/acr/token/credential?#az_acr_token_credential_generate -[container-registry-intro]: container-registry-intro.md -[pull-images-from-connected-registry]: pull-images-from-connected-registry.md -[quickstart-connected-registry-cli]: quickstart-connected-registry-cli.md -[quickstart-connected-registry-portal]: quickstart-connected-registry-portal.md -[tutorial-connected-registry-nested]: tutorial-deploy-connected-registry-nested-iot-edge-cli.md |
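If your IoT Edge device runs as an Azure virtual machine, one way to open the ports called out in the prerequisites (8000, 5671, and 8883) is with the Azure CLI rather than the portal. A rough sketch with hypothetical resource names:

```azurecli
# Open the ports required by the connected registry and API proxy modules
# (adjust the resource group, VM name, and NSG rule priorities to your environment)
az vm open-port --resource-group myresourcegroup --name myEdgeVM --port 8000 --priority 900
az vm open-port --resource-group myresourcegroup --name myEdgeVM --port 5671 --priority 901
az vm open-port --resource-group myresourcegroup --name myEdgeVM --port 8883 --priority 902
```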
container-registry | Resource Graph Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/resource-graph-samples.md | - Title: Azure Resource Graph sample queries for Azure Container Registry -description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. --- Previously updated : 10/31/2023----# Azure Resource Graph sample queries for Azure Container Registry --This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Azure Container Registry. --## Sample queries ---## Next steps --- Learn more about the [query language](../governance/resource-graph/concepts/query-language.md).-- Learn more about how to [explore resources](../governance/resource-graph/concepts/explore-resources.md). |
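As an illustration of the kind of query the samples above cover, the following sketch lists the container registries visible to you. It assumes the `resource-graph` Azure CLI extension is installed (`az extension add --name resource-graph`):

```azurecli
# List container registries with their resource group and location
az graph query -q "Resources | where type =~ 'microsoft.containerregistry/registries' | project name, resourceGroup, location"
```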
container-registry | Scan Images Defender | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/scan-images-defender.md | - Title: Scan registry images with Microsoft Defender for Cloud -description: Learn about using Microsoft Defender for container registries to scan images in your Azure container registries --- Previously updated : 10/31/2023----# Scan registry images with Microsoft Defender for Cloud --To scan images in your Azure container registries for vulnerabilities, you can integrate one of the available Azure Marketplace solutions or, if you want to use Microsoft Defender for Cloud, optionally enable **Microsoft Defender for container registries** at the subscription level. --* Learn more about [Microsoft Defender for container registries](/azure/defender-for-cloud/defender-for-containers-va-acr) -* Learn more about [container security in Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-containers-introduction) --## Registry operations by Microsoft Defender for Cloud --Microsoft Defender for Cloud scans images that are pushed to a registry, imported into a registry, or any images pulled within the last 30 days. If vulnerabilities are detected, [recommended remediations](/azure/defender-for-cloud/defender-for-containers-va-acr#view-and-remediate-findings) appear in Microsoft Defender for Cloud. -- After you've taken the recommended steps to remediate the security issue, replace the image in your registry. Microsoft Defender for Cloud rescans the image to confirm that the vulnerabilities are remediated. --For details, see [Use Microsoft Defender for container registries](/azure/defender-for-cloud/defender-for-containers-va-acr). --> [!TIP] -> Microsoft Defender for Cloud authenticates with the registry to pull images for vulnerability scanning. If [resource logs](monitor-service-reference.md#resource-logs) are collected for your registry, you'll see registry login events and image pull events generated by Microsoft Defender for Cloud. These events are associated with an alphanumeric ID such as `b21cb118-5a59-4628-bab0-3c3f0e434cg6`. --## Scanning a network-restricted registry --Microsoft Defender for Cloud can scan images in a publicly accessible container registry or one that's protected with network access rules. If network rules are configured (that is, you disable public registry access, configure IP access rules, or create private endpoints), be sure to enable the network setting to [**allow trusted Microsoft services**](allow-access-trusted-services.md) to access the registry. By default, this setting is enabled in a new container registry. --## Next steps --* Learn more about registry access by [trusted services](allow-access-trusted-services.md). -* To restrict access to a registry using a private endpoint in a virtual network, see [Configure Azure Private Link for an Azure container registry](container-registry-private-link.md). -* To set up registry firewall rules, see [Configure public IP network rules](container-registry-access-selected-networks.md). |
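If you've applied network restrictions to your registry, the trusted services setting mentioned above can also be checked or re-enabled from the Azure CLI. A hedged sketch, assuming a registry named `myregistry`:

```azurecli
# Check whether trusted Microsoft services (such as Defender for Cloud) may bypass the network rules
az acr show --name myregistry --query networkRuleBypassOptions

# Re-enable the bypass for trusted services if it was turned off
az acr update --name myregistry --allow-trusted-services true
```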
container-registry | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md | - Title: Azure Policy Regulatory Compliance controls for Azure Container Registry -description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. --- Previously updated : 02/06/2024----# Azure Policy Regulatory Compliance controls for Azure Container Registry --[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md) -provides Microsoft created and managed initiative definitions, known as _built-ins_, for the -**compliance domains** and **security controls** related to different compliance standards. This -page lists the **compliance domains** and **security controls** for Azure Container Registry. You -can assign the built-ins for a **security control** individually to help make your Azure resources -compliant with the specific standard. ----## Next steps --- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). |
container-registry | Tasks Agent Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md | - Title: Use dedicated pool to run task - Tasks -description: Set up a dedicated compute pool (agent pool) in your registry to run an Azure Container Registry task. --- Previously updated : 10/31/2023-----# Run an ACR task on a dedicated agent pool --Set up an Azure-managed VM pool (*agent pool*) to enable running your [Azure Container Registry tasks][acr-tasks] in a dedicated compute environment. After you've configured one or more pools in your registry, you can choose a pool to run a task in place of the service's default compute environment. --An agent pool provides: --- **Virtual network support** - Assign an agent pool to an Azure VNet, providing access to resources in the VNet such as a container registry, key vault, or storage.-- **Scale as needed** - Increase the number of instances in an agent pool for compute-intensive tasks, or scale to zero. Billing is based on pool allocation. For details, see [Pricing](https://azure.microsoft.com/pricing/details/container-registry/).-- **Flexible options** - Choose from different [pool tiers](#pool-tiers) and scale options to meet your task workload needs.-- **Azure management** - Task pools are patched and maintained by Azure, providing reserved allocation without the need to maintain the individual VMs.--This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry SKUs][acr-tiers]. --> [!IMPORTANT] -> This feature is currently in preview, and some [limitations apply](#preview-limitations). Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA). -> --## Preview limitations --- Task agent pools currently support Linux nodes. Windows nodes aren't currently supported.-- Task agent pools are available in preview in the following regions: West US 2, South Central US, East US 2, East US, Central US, West Europe, North Europe, Canada Central, East Asia, Switzerland North, USGov Arizona, USGov Texas, and USGov Virginia.-- For each registry, the default total vCPU (core) quota is 16 for all standard agent pools and is 0 for isolated agent pools. Open a [support request][open-support-ticket] for additional allocation.--## Prerequisites --* To use the Azure CLI steps in this article, Azure CLI version 2.3.1 or later is required. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. Or run in [Azure Cloud Shell](../cloud-shell/quickstart.md). -* If you don't already have a container registry, [create one][create-reg-cli] (Premium tier required) in a preview region. --## Pool tiers --Agent pool tiers provide the following resources per instance in the pool. --| Tier | Type | CPU | Memory (GB) | -| - | -- | | -- | -| S1 | standard | 2 | 3 | -| S2 | standard | 4 | 8 | -| S3 | standard | 8 | 16 | -| I6 | isolated | 64 | 216 | ---## Create and manage a task agent pool --### Set default registry (optional) --To simplify Azure CLI commands that follow, set the default registry by running the [az config][az-config] command: --```azurecli -az config set defaults.acr=<registryName> -``` --The following examples assume that you've set the default registry. If not, pass a `--registry <registryName>` parameter in each `az acr` command. 
--### Create agent pool --Create an agent pool by using the [az acr agentpool create][az-acr-agentpool-create] command. The following example creates a tier S2 pool (4 CPU/instance). By default, the pool contains 1 instance. --```azurecli -az acr agentpool create \ - --registry MyRegistry \ - --name myagentpool \ - --tier S2 -``` --> [!NOTE] -> Creating an agent pool and other pool management operations take several minutes to complete. --### Scale pool --Scale the pool size up or down with the [az acr agentpool update][az-acr-agentpool-update] command. The following example scales the pool to 2 instances. You can scale to 0 instances. --```azurecli -az acr agentpool update \ - --registry MyRegistry \ - --name myagentpool \ - --count 2 -``` --## Create pool in a virtual network --### Add firewall rules --Task agent pools require access to the following Azure services. The following firewall rules must be added to any existing network security groups or user-defined routes. --| Direction | Protocol | Source | Source Port | Destination | Dest Port | Used | Remarks | -| | -- | -- | -- | -- | | - | - | -| Outbound | TCP | VirtualNetwork | Any | AzureKeyVault | 443 | Default | | -| Outbound | TCP | VirtualNetwork | Any | Storage | 443 | Default | | -| Outbound | TCP | VirtualNetwork | Any | EventHub | 443 | Default | | -| Outbound | TCP | VirtualNetwork | Any | AzureActiveDirectory | 443 | Default | | -| Outbound | TCP | VirtualNetwork | Any | AzureMonitor | 443,12000 | Default | Port 12000 is a unique port used for diagnostics | --> [!NOTE] -> If your tasks require additional resources from the public internet, add the corresponding rules. For example, additional rules are needed to run a docker build task that pulls the base images from Docker Hub, or restores a NuGet package. --Customers basing their deployments with MCR can refer to [MCR/MAR firewall rules.](https://github.com/microsoft/containerregistry/blob/main/docs/client-firewall-rules.md) --#### Advanced network configuration --If the standard Firewall/NSG (Network Security Group) rules are deemed too permissive, and more fine-grained control is required for outbound connections, consider the following approach: --- Enable service endpoints on the agent pool subnet. This grants the agent pool access to its service dependencies while maintaining a secure network posture.-- It's important to note that outbound Firewall/NSG rules are still necessary. These rules facilitate the Virtual Network's ability to switch the source IP from public to private, which is an additional step beyond enabling service endpoints.- -More information on service endpoints is documented [here][az-vnet-svc-ep]. - -At minimum, the following service endpoints will be required - -- Microsoft.AzureActiveDirectory-- Microsoft.ContainerRegistry-- Microsoft.EventHub-- Microsoft.KeyVault-- Microsoft.Storage (or the corresponding storage regions taking geo-replication into account)- -> [!NOTE] -> Currently a service endpoint for Azure Monitor does not exist. If outbound traffic for Azure Monitor is not configured, the agent pool will be unable to emit diagnostic logs but may appear to still operate normally. In this case ACR will be unable to help fully troubleshoot any issues encountered so it is important that the network administrator take this into account when planning the network configuration. - -Also, it is important to note that all of ACR Tasks have pre-cached images for some of the more common use cases. 
Tasks will only cache a single version at a time, meaning that if the full tagged image reference is used, then the build agent will attempt to pull the image. For example, a common use case is `cmd: mcr.microsoft.com/acr/acr-cli:<tag>`. However, the pre-cached version is frequently updated, which means the actual version on the machine will likely be higher. In this case, the network configuration must configure a route for outbound traffic to the target registry host which in the example above would be mcr.microsoft.com. The same rules would apply to any other external public registry (docker.io, quay.io, ghcr.io, etc.). --### Create pool in VNet --The following example creates an agent pool in the *mysubnet* subnet of network *myvnet*: --```azurecli -# Get the subnet ID -subnetId=$(az network vnet subnet show \ - --resource-group myresourcegroup \ - --vnet-name myvnet \ - --name mysubnetname \ - --query id --output tsv) --az acr agentpool create \ - --registry MyRegistry \ - --name myagentpool \ - --tier S2 \ - --subnet-id $subnetId -``` --## Run task on agent pool --The following examples show how to specify an agent pool when queuing a task. --> [!NOTE] -> To use an agent pool in an ACR task, ensure that the pool contains at least 1 instance. -> --### Quick task --Queue a quick task on the agent pool by using the [az acr build][az-acr-build] command and pass the `--agent-pool` parameter: --```azurecli -az acr build \ - --registry MyRegistry \ - --agent-pool myagentpool \ - --image myimage:mytag \ - --file Dockerfile \ - https://github.com/Azure-Samples/acr-build-helloworld-node.git#main -``` --### Automatically triggered task --For example, create a scheduled task on the agent pool with [az acr task create][az-acr-task-create], passing the `--agent-pool` parameter. --```azurecli -az acr task create \ - --registry MyRegistry \ - --name mytask \ - --agent-pool myagentpool \ - --image myimage:mytag \ - --schedule "0 21 * * *" \ - --file Dockerfile \ - --context https://github.com/Azure-Samples/acr-build-helloworld-node.git#main \ - --commit-trigger-enabled false -``` --To verify task setup, run [az acr task run][az-acr-task-run]: --```azurecli -az acr task run \ - --registry MyRegistry \ - --name mytask -``` --### Query pool status --To find the number of runs currently scheduled on the agent pool, run [az acr agentpool show][az-acr-agentpool-show]. --```azurecli -az acr agentpool show \ - --registry MyRegistry \ - --name myagentpool \ - --queue-count -``` --## Next steps --For more examples of container image builds and maintenance in the cloud, check out the [ACR Tasks tutorial series](container-registry-tutorial-quick-task.md). 
----[acr-tasks]: container-registry-tasks-overview.md -[acr-tiers]: container-registry-skus.md -[azure-cli]: /cli/azure/install-azure-cli -[open-support-ticket]: https://aka.ms/acr/support/create-ticket -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ -[az-config]: /cli/azure#az_config -[az-acr-agentpool-create]: /cli/azure/acr/agentpool#az_acr_agentpool_create -[az-acr-agentpool-update]: /cli/azure/acr/agentpool#az_acr_agentpool_update -[az-acr-agentpool-show]: /cli/azure/acr/agentpool#az_acr_agentpool_show -[az-acr-build]: /cli/azure/acr#az_acr_build -[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create -[az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run -[create-reg-cli]: container-registry-get-started-azure-cli.md -[az-vnet-svc-ep]: ../virtual-network/virtual-network-service-endpoints-overview.md#secure-azure-services-to-virtual-networks |
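When you're finished experimenting with a dedicated pool, you can stop paying for the pool allocation by scaling it down or removing it. A minimal sketch, assuming the `MyRegistry` and `myagentpool` names used above:

```azurecli
# Scale the pool to zero instances to keep it without paying for capacity
az acr agentpool update --registry MyRegistry --name myagentpool --count 0

# Or delete the agent pool entirely
az acr agentpool delete --registry MyRegistry --name myagentpool
```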
container-registry | Tasks Consume Public Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-consume-public-content.md | - Title: Task workflow to manage public registry content -description: Create an automated Azure Container Registry Tasks workflow to track, manage, and consume public image content in a private Azure container registry. --- Previously updated : 10/31/2023-----# How to consume and maintain public content with Azure Container Registry Tasks --This article provides a sample workflow in Azure Container Registry to help you manage consuming and maintaining public content: --1. Import local copies of dependent public images. -1. Validate public images through security scanning and functional testing. -1. Promote the images to private registries for internal usage. -1. Trigger base image updates for applications dependent upon public content. -1. Use [Azure Container Registry Tasks](container-registry-tasks-overview.md) to automate this workflow. --The workflow is summarized in the following image: --![Consuming public content Workflow](./media/tasks-consume-public-content/consuming-public-content-workflow.png) --The gated import workflow helps manage your organization's dependencies on externally managed artifacts - for example, images sourced from public registries including [Docker Hub][docker-hub], [GCR][gcr], [Quay][quay], [GitHub Container Registry][ghcr], [Microsoft Container Registry][mcr], or even other [Azure container registries][acr]. --For background about the risks introduced by dependencies on public content and how to use Azure Container Registry to mitigate them, see the [OCI Consuming Public Content Blog post][oci-consuming-public-content] and [Manage public content with Azure Container Registry](buffer-gate-public-content.md). --You can use the Azure Cloud Shell or a local installation of the Azure CLI to complete this walkthrough. Azure CLI version 2.10 or later is recommended. If you need to install or upgrade, see [Install Azure CLI][install-cli]. --## Scenario overview --![import workflow components](./media/tasks-consume-public-content/consuming-public-content-objects.png) --This walkthrough sets up: --1. Three **container registries**, representing: - * A simulated [Docker Hub][docker-hub] (`publicregistry`) to support changing the base image - * Team registry (`contoso`) to share private images - * Company/team shared registry (`baseartifacts`) for imported public content -1. An **ACR task** in each registry. The tasks: - 1. Build a simulated public `node` image - 1. Import and validate the `node` image to the company/team shared registry - 1. Build and deploy the `hello-world` image -1. **ACR task definitions**, including configurations for: - * A collection of **registry credentials**, which are pointers to a key vault - * A collection of **secrets**, available within an `acr-task.yaml`, which are pointers to a key vault - * A collection of **configured values** used within an `acr-task.yaml` -1. An **Azure key vault** to secure all secrets -1. An **Azure container instance**, which hosts the `hello-world` build application --## Prerequisites --The following steps configure values for resources created and used in the walkthrough. --### Set environment variables --Configure variables unique to your environment. We follow best practices for placing resources with durable content in their own resource group to minimize accidental deletion. 
However, you can place these variables in a single resource group if desired. --The examples in this article are formatted for the bash shell. --```bash -# Set the three registry names, must be globally unique: -REGISTRY_PUBLIC=publicregistry -REGISTRY_BASE_ARTIFACTS=contosobaseartifacts -REGISTRY=contoso --# set the location all resources will be created in: -RESOURCE_GROUP_LOCATION=eastus --# default resource groups -REGISTRY_PUBLIC_RG=${REGISTRY_PUBLIC}-rg -REGISTRY_BASE_ARTIFACTS_RG=${REGISTRY_BASE_ARTIFACTS}-rg -REGISTRY_RG=${REGISTRY}-rg --# fully qualified registry urls -REGISTRY_DOCKERHUB_URL=docker.io -REGISTRY_PUBLIC_URL=${REGISTRY_PUBLIC}.azurecr.io -REGISTRY_BASE_ARTIFACTS_URL=${REGISTRY_BASE_ARTIFACTS}.azurecr.io -REGISTRY_URL=${REGISTRY}.azurecr.io --# Azure key vault for storing secrets, name must be globally unique -AKV=acr-task-credentials -AKV_RG=${AKV}-rg --# ACI for hosting the deployed application -ACI=hello-world-aci -ACI_RG=${ACI}-rg -``` --### Git repositories and tokens --To simulate your environment, fork each of the following Git repos into repositories you can manage. --* https://github.com/importing-public-content/base-image-node.git -* https://github.com/importing-public-content/import-baseimage-node.git -* https://github.com/importing-public-content/hello-world.git --Then, update the following variables for your forked repositories. --The `:main` appended to the end of the git URLs represents the default repository branch. --```bash -GIT_BASE_IMAGE_NODE=https://github.com/<your-fork>/base-image-node.git#main -GIT_NODE_IMPORT=https://github.com/<your-fork>/import-baseimage-node.git#main -GIT_HELLO_WORLD=https://github.com/<your-fork>/hello-world.git#main -``` --You need a [GitHub access token (PAT)][git-token] for ACR Tasks to clone and establish Git webhooks. For steps to create a token with the required permissions to a private repo, see [Create a GitHub access token](container-registry-tutorial-build-task.md#create-a-github-personal-access-token). --```bash -GIT_TOKEN=<set-git-token-here> -``` --### Docker Hub credentials -To avoid throttling and identity requests when pulling images from Docker Hub, create a [Docker Hub token][docker-hub-tokens]. 
Then, set the following environment variables: --```bash -REGISTRY_DOCKERHUB_USER=<yourusername> -REGISTRY_DOCKERHUB_PASSWORD=<yourtoken> -``` --### Create registries --Using Azure CLI commands, create three Premium tier container registries, each in its own resource group: --```azurecli-interactive -az group create --name $REGISTRY_PUBLIC_RG --location $RESOURCE_GROUP_LOCATION -az acr create --resource-group $REGISTRY_PUBLIC_RG --name $REGISTRY_PUBLIC --sku Premium --az group create --name $REGISTRY_BASE_ARTIFACTS_RG --location $RESOURCE_GROUP_LOCATION -az acr create --resource-group $REGISTRY_BASE_ARTIFACTS_RG --name $REGISTRY_BASE_ARTIFACTS --sku Premium --az group create --name $REGISTRY_RG --location $RESOURCE_GROUP_LOCATION -az acr create --resource-group $REGISTRY_RG --name $REGISTRY --sku Premium -``` --### Create key vault and set secrets --Create a key vault: --```azurecli-interactive -az group create --name $AKV_RG --location $RESOURCE_GROUP_LOCATION -az keyvault create --resource-group $AKV_RG --name $AKV -``` --Set Docker Hub username and token in the key vault: --```azurecli-interactive -az keyvault secret set \ - --vault-name $AKV \ - --name registry-dockerhub-user \ - --value $REGISTRY_DOCKERHUB_USER --az keyvault secret set \ - --vault-name $AKV \ - --name registry-dockerhub-password \ - --value $REGISTRY_DOCKERHUB_PASSWORD -``` --Set and verify a Git PAT in the key vault: --```azurecli-interactive -az keyvault secret set --vault-name $AKV --name github-token --value $GIT_TOKEN --az keyvault secret show --vault-name $AKV --name github-token --query value -o tsv -``` --### Create resource group for an Azure container instance --This resource group is used in a later task when deploying the `hello-world` image. --```azurecli-interactive -az group create --name $ACI_RG --location $RESOURCE_GROUP_LOCATION -``` --## Create public `node` base image --To simulate the `node` image on Docker Hub, create an [ACR task][acr-task] to build and maintain the public image. This setup allows simulating changes by the `node` image maintainers. --```azurecli-interactive -az acr task create \ - --name node-public \ - -r $REGISTRY_PUBLIC \ - -f acr-task.yaml \ - --context $GIT_BASE_IMAGE_NODE \ - --git-access-token $(az keyvault secret show \ - --vault-name $AKV \ - --name github-token \ - --query value -o tsv) \ - --set REGISTRY_FROM_URL=${REGISTRY_DOCKERHUB_URL}/ \ - --assign-identity -``` --To avoid Docker throttling, add [Docker Hub credentials][docker-hub-tokens] to the task. The [acr task credentials][acr-task-credentials] command may be used to pass Docker credentials to any registry, including Docker Hub. --```azurecli-interactive -az acr task credential add \ - -n node-public \ - -r $REGISTRY_PUBLIC \ - --login-server $REGISTRY_DOCKERHUB_URL \ - -u https://${AKV}.vault.azure.net/secrets/registry-dockerhub-user \ - -p https://${AKV}.vault.azure.net/secrets/registry-dockerhub-password \ - --use-identity [system] -``` --Grant the task access to read values from the key vault: --```azurecli-interactive -az keyvault set-policy \ - --name $AKV \ - --resource-group $AKV_RG \ - --object-id $(az acr task show \ - --name node-public \ - --registry $REGISTRY_PUBLIC \ - --query identity.principalId --output tsv) \ - --secret-permissions get -``` --[Tasks can be triggered][acr-task-triggers] by Git commits, base image updates, timers, or manual runs. 
--Run the task manually to generate the `node` image: --```azurecli-interactive -az acr task run -r $REGISTRY_PUBLIC -n node-public -``` --List the image in the simulated public registry: --```azurecli-interactive -az acr repository show-tags -n $REGISTRY_PUBLIC --repository node -``` --## Create the `hello-world` image --Based on the simulated public `node` image, build a `hello-world` image. --### Create token for pull access to simulated public registry --Create an [access token][acr-tokens] to the simulated public registry, scoped to `pull`. Then, set it in the key vault: --```azurecli-interactive -az keyvault secret set \ - --vault-name $AKV \ - --name "registry-${REGISTRY_PUBLIC}-user" \ - --value "registry-${REGISTRY_PUBLIC}-user" --az keyvault secret set \ - --vault-name $AKV \ - --name "registry-${REGISTRY_PUBLIC}-password" \ - --value $(az acr token create \ - --name "registry-${REGISTRY_PUBLIC}-user" \ - --registry $REGISTRY_PUBLIC \ - --scope-map _repositories_pull \ - -o tsv \ - --query credentials.passwords[0].value) -``` --### Create token for pull access by Azure Container Instances --Create an [access token][acr-tokens] to the registry hosting the `hello-world` image, scoped to pull. Then, set it in the key vault: --```azurecli-interactive -az keyvault secret set \ - --vault-name $AKV \ - --name "registry-${REGISTRY}-user" \ - --value "registry-${REGISTRY}-user" --az keyvault secret set \ - --vault-name $AKV \ - --name "registry-${REGISTRY}-password" \ - --value $(az acr token create \ - --name "registry-${REGISTRY}-user" \ - --registry $REGISTRY \ - --repository hello-world content/read \ - -o tsv \ - --query credentials.passwords[0].value) -``` --### Create task to build and maintain `hello-world` image --The following command creates a task from the definition in `acr-tasks.yaml` in the `hello-world` repo. The task steps build the `hello-world` image and then deploy it to Azure Container Instances. The resource group for Azure Container Instances was created in a previous section. By calling `az container create` in the task with only a difference in the `image:tag`, the task deploys to same instance throughout this walkthrough. 
--```azurecli-interactive -az acr task create \ - -n hello-world \ - -r $REGISTRY \ - -f acr-task.yaml \ - --context $GIT_HELLO_WORLD \ - --git-access-token $(az keyvault secret show \ - --vault-name $AKV \ - --name github-token \ - --query value -o tsv) \ - --set REGISTRY_FROM_URL=${REGISTRY_PUBLIC_URL}/ \ - --set KEYVAULT=$AKV \ - --set ACI=$ACI \ - --set ACI_RG=$ACI_RG \ - --assign-identity -``` --Add credentials to the task for the simulated public registry: --```azurecli-interactive -az acr task credential add \ - -n hello-world \ - -r $REGISTRY \ - --login-server $REGISTRY_PUBLIC_URL \ - -u https://${AKV}.vault.azure.net/secrets/registry-${REGISTRY_PUBLIC}-user \ - -p https://${AKV}.vault.azure.net/secrets/registry-${REGISTRY_PUBLIC}-password \ - --use-identity [system] -``` --Grant the task access to read values from the key vault: --```azurecli-interactive -az keyvault set-policy \ - --name $AKV \ - --resource-group $AKV_RG \ - --object-id $(az acr task show \ - --name hello-world \ - --registry $REGISTRY \ - --query identity.principalId --output tsv) \ - --secret-permissions get -``` --Grant the task access to create and manage Azure Container Instances by granting access to the resource group: --```azurecli-interactive -az role assignment create \ - --assignee $(az acr task show \ - --name hello-world \ - --registry $REGISTRY \ - --query identity.principalId --output tsv) \ - --scope $(az group show -n $ACI_RG --query id -o tsv) \ - --role owner -``` --With the task created and configured, run the task to build and deploy the `hello-world` image: --```azurecli-interactive -az acr task run -r $REGISTRY -n hello-world -``` --Once created, get the IP address of the container hosting the `hello-world` image. --```azurecli-interactive -az container show \ - --resource-group $ACI_RG \ - --name ${ACI} \ - --query ipAddress.ip \ - --out tsv -``` --In your browser, go to the IP address to see the running application. --## Update the base image with a "questionable" change --This section simulates a change to the base image that could cause problems in the environment. --1. Open `Dockerfile` in the forked `base-image-node` repo. -1. Change the `BACKGROUND_COLOR` to `Orange` to simulate the change. --```Dockerfile -ARG REGISTRY_NAME= -FROM ${REGISTRY_NAME}node:15-alpine -ENV NODE_VERSION 15-alpine -ENV BACKGROUND_COLOR Orange -``` --Commit the change and watch for ACR Tasks to automatically start building. 
Watch for the task to start executing:

```azurecli-interactive
watch -n1 az acr task list-runs -r $REGISTRY_PUBLIC -o table
```

You should eventually see STATUS `Succeeded` based on a TRIGGER of `Commit`:

```console
RUN ID    TASK      PLATFORM    STATUS     TRIGGER    STARTED               DURATION
--------  --------  ----------  ---------  ---------  --------------------  --------
ca4       hub-node  linux       Succeeded  Commit     2020-10-24T05:02:29Z  00:00:22
```

Type **Ctrl+C** to exit the watch command, then view the logs for the most recent run:

```azurecli-interactive
az acr task logs -r $REGISTRY_PUBLIC
```

Once the `node` image build completes, `watch` for ACR Tasks to automatically start building the `hello-world` image:

```azurecli-interactive
watch -n1 az acr task list-runs -r $REGISTRY -o table
```

You should eventually see STATUS `Succeeded` based on a TRIGGER of `Image Update`:

```console
RUN ID    TASK         PLATFORM    STATUS     TRIGGER       STARTED               DURATION
--------  -----------  ----------  ---------  ------------  --------------------  --------
dau       hello-world  linux       Succeeded  Image Update  2020-10-24T05:08:45Z  00:00:31
```

Type **Ctrl+C** to exit the watch command, then view the logs for the most recent run:

```azurecli-interactive
az acr task logs -r $REGISTRY
```

Once the run completes, get the IP address of the site hosting the updated `hello-world` image:

```azurecli-interactive
az container show \
  --resource-group $ACI_RG \
  --name ${ACI} \
  --query ipAddress.ip \
  --out tsv
```

In your browser, go to the site, which should now have an orange (questionable) background.

### Checking in

At this point, you've created a `hello-world` image that is automatically built on Git commits and on changes to the base `node` image. In this example, the task builds against a base image in Azure Container Registry, but any supported registry could be used.

When the `node` image is updated, the base image update trigger automatically starts a new run of the `hello-world` task. As seen here, not all updates are wanted.

## Gated imports of public content

To prevent upstream changes from breaking critical workloads, security scanning and functional tests can be added.

In this section, you create an ACR task to:

* Build a test image
* Run a functional test script `./test.sh` against the test image
* If the image tests successfully, import the public image to the **baseimages** registry

### Add automation testing

To gate any upstream content, automated testing is implemented. In this example, a `test.sh` script is provided that checks the `$BACKGROUND_COLOR`. If the test fails, an `EXIT_CODE` of `1` is returned, which causes the ACR task step to fail and ends the task run. The tests can be expanded with any tools you choose, including logging of results. The gate is managed by a pass/fail response in the script, reproduced here:

```bash
if [ "$(echo $BACKGROUND_COLOR | tr '[:lower:]' '[:upper:]')" = 'RED' ]; then
  echo -e "\e[31mERROR: Invalid Color:\e[0m" ${BACKGROUND_COLOR}
  EXIT_CODE=1
else
  echo -e "\e[32mValidation Complete - No Known Errors\e[0m"
fi
exit ${EXIT_CODE}
```

### Task YAML

Review the `acr-task.yaml` in the `import-baseimage-node` repo, which performs the following steps:

1. Build the test base image using the following Dockerfile:
   ```dockerfile
   ARG REGISTRY_FROM_URL=
   FROM ${REGISTRY_FROM_URL}node:15-alpine
   WORKDIR /test
   COPY ./test.sh .
   CMD ./test.sh
   ```
1. When completed, validate the image by running the container, which runs `./test.sh`
1. 
Only if successfully completed, run the import steps, which are gated with `when: ['validate-base-image']` --```yaml -version: v1.1.0 -steps: - - id: build-test-base-image - # Build off the base image we'll track - # Add a test script to do unit test validations - # Note: the test validation image isn't saved to the registry - # but the task logs captures log validation results - build: > - --build-arg REGISTRY_FROM_URL={{.Values.REGISTRY_FROM_URL}} - -f ./Dockerfile - -t {{.Run.Registry}}/node-import:test - . - - id: validate-base-image - # only continues if node-import:test returns a non-zero code - when: ['build-test-base-image'] - cmd: "{{.Run.Registry}}/node-import:test" - - id: pull-base-image - # import the public image to base-artifacts - # Override the stable tag, - # and create a unique tag to enable rollback - # to a previously working image - when: ['validate-base-image'] - cmd: > - docker pull {{.Values.REGISTRY_FROM_URL}}node:15-alpine - - id: retag-base-image - when: ['pull-base-image'] - cmd: docker tag {{.Values.REGISTRY_FROM_URL}}node:15-alpine {{.Run.Registry}}/node:15-alpine - - id: retag-base-image-unique-tag - when: ['pull-base-image'] - cmd: docker tag {{.Values.REGISTRY_FROM_URL}}node:15-alpine {{.Run.Registry}}/node:15-alpine-{{.Run.ID}} - - id: push-base-image - when: ['retag-base-image', 'retag-base-image-unique-tag'] - push: - - "{{.Run.Registry}}/node:15-alpine" - - "{{.Run.Registry}}/node:15-alpine-{{.Run.ID}}" -``` --### Create task to import and test base image --```azurecli-interactive - az acr task create \ - --name base-import-node \ - -f acr-task.yaml \ - -r $REGISTRY_BASE_ARTIFACTS \ - --context $GIT_NODE_IMPORT \ - --git-access-token $(az keyvault secret show \ - --vault-name $AKV \ - --name github-token \ - --query value -o tsv) \ - --set REGISTRY_FROM_URL=${REGISTRY_PUBLIC_URL}/ \ - --assign-identity -``` --Add credentials to the task for the simulated public registry: --```azurecli-interactive -az acr task credential add \ - -n base-import-node \ - -r $REGISTRY_BASE_ARTIFACTS \ - --login-server $REGISTRY_PUBLIC_URL \ - -u https://${AKV}.vault.azure.net/secrets/registry-${REGISTRY_PUBLIC}-user \ - -p https://${AKV}.vault.azure.net/secrets/registry-${REGISTRY_PUBLIC}-password \ - --use-identity [system] -``` --Grant the task access to read values from the key vault: --```azurecli-interactive -az keyvault set-policy \ - --name $AKV \ - --resource-group $AKV_RG \ - --object-id $(az acr task show \ - --name base-import-node \ - --registry $REGISTRY_BASE_ARTIFACTS \ - --query identity.principalId --output tsv) \ - --secret-permissions get -``` --Run the import task: --```azurecli-interactive -az acr task run -n base-import-node -r $REGISTRY_BASE_ARTIFACTS -``` --> [!NOTE] -> If the task fails due to `./test.sh: Permission denied`, ensure that the script has execution permissions, and commit back to the Git repo: ->```bash ->chmod +x ./test.sh ->``` --## Update `hello-world` image to build from gated `node` image --Create an [access token][acr-tokens] to access the base-artifacts registry, scoped to `read` from the `node` repository. 
Then, set in the key vault: --```azurecli-interactive -az keyvault secret set \ - --vault-name $AKV \ - --name "registry-${REGISTRY_BASE_ARTIFACTS}-user" \ - --value "registry-${REGISTRY_BASE_ARTIFACTS}-user" --az keyvault secret set \ - --vault-name $AKV \ - --name "registry-${REGISTRY_BASE_ARTIFACTS}-password" \ - --value $(az acr token create \ - --name "registry-${REGISTRY_BASE_ARTIFACTS}-user" \ - --registry $REGISTRY_BASE_ARTIFACTS \ - --repository node content/read \ - -o tsv \ - --query credentials.passwords[0].value) -``` --Add credentials to the **hello-world** task for the base artifacts registry: --```azurecli-interactive -az acr task credential add \ - -n hello-world \ - -r $REGISTRY \ - --login-server $REGISTRY_BASE_ARTIFACTS_URL \ - -u https://${AKV}.vault.azure.net/secrets/registry-${REGISTRY_BASE_ARTIFACTS}-user \ - -p https://${AKV}.vault.azure.net/secrets/registry-${REGISTRY_BASE_ARTIFACTS}-password \ - --use-identity [system] -``` --Update the task to change the `REGISTRY_FROM_URL` to use the `BASE_ARTIFACTS` registry --```azurecli-interactive -az acr task update \ - -n hello-world \ - -r $REGISTRY \ - --set KEYVAULT=$AKV \ - --set REGISTRY_FROM_URL=${REGISTRY_BASE_ARTIFACTS_URL}/ \ - --set ACI=$ACI \ - --set ACI_RG=$ACI_RG -``` --Run the **hello-world** task to change its base image dependency: --```azurecli-interactive -az acr task run -r $REGISTRY -n hello-world -``` --## Update the base image with a "valid" change --1. Open the `Dockerfile` in `base-image-node` repo. -1. Change the `BACKGROUND_COLOR` to `Green` to simulate a valid change. --```Dockerfile -ARG REGISTRY_NAME= -FROM ${REGISTRY_NAME}node:15-alpine -ENV NODE_VERSION 15-alpine -ENV BACKGROUND_COLOR Green -``` --Commit the change and monitor the sequence of updates: --```azurecli-interactive -watch -n1 az acr task list-runs -r $REGISTRY_PUBLIC -o table -``` --Once running, type **Ctrl+C** and monitor the logs: --```azurecli-interactive -az acr task logs -r $REGISTRY_PUBLIC -``` --Once complete, monitor the **base-image-import** task: --```azurecli-interactive -watch -n1 az acr task list-runs -r $REGISTRY_BASE_ARTIFACTS -o table -``` --Once running, type **Ctrl+C** and monitor the logs: --```azurecli-interactive -az acr task logs -r $REGISTRY_BASE_ARTIFACTS -``` --Once complete, monitor the **hello-world** task: --```azurecli-interactive -watch -n1 az acr task list-runs -r $REGISTRY -o table -``` --Once running, type **Ctrl+C** and monitor the logs: --```azurecli-interactive -az acr task logs -r $REGISTRY -``` --Once completed, get the IP address of the site hosting the updated `hello-world` image: --```azurecli-interactive -az container show \ - --resource-group $ACI_RG \ - --name ${ACI} \ - --query ipAddress.ip \ - --out tsv -``` --In your browser, go to the site, which should have a green (valid) background. --### View the gated workflow --Perform the steps in the preceding section again, with a background color of red. --1. Open the `Dockerfile` in the `base-image-node` repo -1. Change the `BACKGROUND_COLOR` to `Red` to simulate an invalid change. 
--```Dockerfile -ARG REGISTRY_NAME= -FROM ${REGISTRY_NAME}node:15-alpine -ENV NODE_VERSION 15-alpine -ENV BACKGROUND_COLOR Red -``` --Commit the change and monitor the sequence of updates: --```azurecli-interactive -watch -n1 az acr task list-runs -r $REGISTRY_PUBLIC -o table -``` --Once running, type **Ctrl+C** and monitor the logs: --```azurecli-interactive -az acr task logs -r $REGISTRY_PUBLIC -``` --Once complete, monitor the **base-image-import** task: --```azurecli-interactive -watch -n1 az acr task list-runs -r $REGISTRY_BASE_ARTIFACTS -o table -``` --Once running, type **Ctrl+C** and monitor the logs: --```azurecli-interactive -az acr task logs -r $REGISTRY_BASE_ARTIFACTS -``` --At this point, you should see the **base-import-node** task fail validation and stop the sequence to publish a `hello-world` update. Output is similar to: --```console -[...] -2020/10/30 03:57:39 Launching container with name: validate-base-image -Validating Image -NODE_VERSION: 15-alpine -BACKGROUND_COLOR: Red -ERROR: Invalid Color: Red -2020/10/30 03:57:40 Container failed during run: validate-base-image. No retries remaining. -failed to run step ID: validate-base-image: exit status 1 -``` --### Publish an update to `hello-world` --Changes to the `hello-world` image will continue using the last validated `node` image. --Any additional changes to the base `node` image that pass the gated validations will trigger base image updates to the `hello-world` image. --## Cleaning up --When no longer needed, delete the resources used in this article. --```azurecli-interactive -az group delete -n $REGISTRY_RG --no-wait -y -az group delete -n $REGISTRY_PUBLIC_RG --no-wait -y -az group delete -n $REGISTRY_BASE_ARTIFACTS_RG --no-wait -y -az group delete -n $AKV_RG --no-wait -y -az group delete -n $ACI_RG --no-wait -y -``` --## Next steps --In this article, you used ACR tasks to create an automated gating workflow to introduce updated base images to your environment. See related information to manage images in Azure Container Registry. ---* [Recommendations for tagging and versioning container images](container-registry-image-tag-version.md) -* [Lock a container image in an Azure container registry](container-registry-image-lock.md) --[install-cli]: /cli/azure/install-azure-cli -[acr]: https://aka.ms/acr -[acr-repo-permissions]: ./container-registry-repository-scoped-permissions.md -[acr-task]: ./container-registry-tasks-overview.md -[acr-task-triggers]: container-registry-tasks-overview.md#task-scenarios -[acr-task-credentials]: container-registry-tasks-authentication-managed-identity.md#4-optional-add-credentials-to-the-task -[acr-tokens]: ./container-registry-repository-scoped-permissions.md -[aci]: https://aka.ms/aci -[alpine-public-image]: https://hub.docker.com/_/alpine -[docker-hub]: https://hub.docker.com -[docker-hub-tokens]: https://hub.docker.com/settings/security -[git-token]: https://github.com/settings/tokens -[gcr]: https://cloud.google.com/container-registry -[ghcr]: https://docs.github.com/en/free-pro-team@latest/packages/getting-started-with-github-container-registry/about-github-container-registry -[helm-charts]: https://helm.sh -[mcr]: https://aka.ms/mcr -[nginx-public-image]: https://hub.docker.com/_/nginx -[oci-artifacts]: ./container-registry-oci-artifacts.md -[oci-consuming-public-content]: https://opencontainers.org/posts/blog/2020-10-30-consuming-public-content/ -[opa]: https://www.openpolicyagent.org/ -[quay]: https://quay.io |
container-registry | Troubleshoot Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-cache.md | - Title: Troubleshoot Artifact cache
description: Learn how to troubleshoot the most common problems for a registry enabled with the Artifact cache feature.
Previously updated: 10/31/2023
# customer intent: As a user, I want to troubleshoot the most common problems for a registry enabled with the Artifact cache feature so that I can effectively use the feature.

# Troubleshoot guide for Artifact cache

In this tutorial, you troubleshoot the most common problems for a registry enabled with the Artifact cache feature by identifying the symptoms, causes, and potential solutions, so that you can use the feature effectively.

## Symptoms and causes

Symptoms may include one or more of the following issues:

- Cached images don't appear in a live repository
  - [Cached images don't appear in a live repository](troubleshoot-artifact-cache.md#cached-images-dont-appear-in-a-live-repository)
- Credentials have an unhealthy status
  - [Unhealthy credentials](troubleshoot-artifact-cache.md#unhealthy-credentials)
- Unable to create a cache rule
  - [Cache rule limit](troubleshoot-artifact-cache.md#cache-rule-limit)
- Unable to create cache rule using a wildcard
  - [Unable to create cache rule using a wildcard](troubleshoot-artifact-cache.md#unable-to-create-cache-rule-using-a-wildcard)

## Potential solutions

### Cached images don't appear in a live repository

If cached images aren't showing up in your repository in Azure Container Registry (ACR), we recommend verifying the repository path. An incorrect repository path prevents the cached images from showing up in your repository in ACR.

- The login server for Docker Hub is `docker.io`.
- The login server for Microsoft Artifact Registry is `mcr.microsoft.com`.

The Azure portal autofills these fields for you. However, many Docker repositories begin with `library/` in their path. For example, in order to cache the `hello-world` repository, the correct repository path is `docker.io/library/hello-world`.

### Unhealthy credentials

Credentials are a set of Key Vault secrets that operate as a username and password for private repositories. Unhealthy credentials are often the result of these secrets no longer being valid. In the Azure portal, you can select the credentials to edit and apply changes.

- Verify that the secrets in Azure Key Vault haven't expired.
- Verify that the secrets in Azure Key Vault are valid.
- Verify that access to the Azure key vault is assigned.

To assign access to the Azure key vault:

```azurecli-interactive
az keyvault set-policy --name myKeyVaultName --object-id myObjID --secret-permissions get
```

Learn more about [Key Vaults][create-and-store-keyvault-credentials].
Learn more about [assigning access to Azure Key Vault][az-keyvault-set-policy].

### Unable to create a cache rule

#### Cache rule limit

If you have trouble creating a cache rule, verify whether you've already reached the limit of 1,000 cache rules.

We recommend deleting any unwanted cache rules to avoid hitting the limit.

Learn more about the [cache terminology](container-registry-artifact-cache.md#terminology).

### Unable to create cache rule using a wildcard

You might be unable to create a cache rule because it conflicts with an existing rule.
The error message indicates that there's already a cache rule with a wildcard for the specified target repository.

To resolve this issue, follow these steps:

1. Identify the existing cache rule causing the conflict. Look for an existing rule that uses a wildcard (*) for the target repository.
1. Delete the conflicting cache rule whose source repository and wildcard overlap with the rule you're creating.
1. Create a new cache rule with the desired wildcard and target repository.
1. Double-check your cache configuration to ensure that the new rule is correctly applied and that there are no other conflicting rules.

## Upstream support

Artifact cache currently supports the following upstream registries:

>[!WARNING]
> Customers must generate a [credential set](container-registry-artifact-cache.md#create-new-credentials) to source content from Docker Hub.

| Upstream registries | Support | Availability |
|---|---|---|
| Docker Hub | Supports authenticated pulls only. | Azure CLI, Azure portal |
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
| AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
| GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
| Quay | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
| registry.k8s.io | Supports both authenticated and unauthenticated pulls. | Azure CLI |
| Google Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI |

<!-- LINKS - External -->
[create-and-store-keyvault-credentials]: /azure/key-vault/secrets/quick-create-portal
[az-keyvault-set-policy]: /azure/key-vault/general/assign-access-policy#assign-an-access-policy
|
container-registry | Troubleshoot Artifact Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-streaming.md | - Title: "Troubleshoot Artifact streaming" -description: "Troubleshoot Artifact streaming in Azure Container Registry to diagnose and resolve with managing, scaling, and deploying artifacts through containerized platforms." --- Previously updated : 10/31/2023----# Troubleshoot Artifact streaming --The troubleshooting steps in this article can help you resolve common issues that you might encounter when using artifact streaming in Azure Container Registry (ACR). These steps and recommendations can help diagnose and resolve issues related to artifact streaming as well as provide insights into the underlying processes and logs for debugging purposes. --## Symptoms --* Conversion operation failed due to an unknown error. -* Troubleshooting Failed AKS Pod Deployments. -* Pod conditions indicate "UpgradeIfStreamableDisabled." -* Digest usage instead of Tag for Streaming Artifact. --## Causes --* Issues with authentication, network latency, image retrieval, streaming operations, or other issues. -* Issues with image pull or streaming, streaming artifacts configurations, image sources, and resource constraints. -* Issues with ACR configurations or permissions. --## Conversion operation failed --| Error Code | Error Message | Troubleshooting Info | -| | - | | -| UNKNOWN_ERROR | Conversion operation failed due to an unknown error. | Caused by an internal error. A retry helps here. If retry is unsuccessful, contact support. | -| RESOURCE_NOT_FOUND | Conversion operation failed because target resource isn't found. | If the target image isn't found in the registry, verify typos in the image digest. If the image is deleted, or missing in the target region (replication consistency isn't immediate for example) | -| UNSUPPORTED_PLATFORM | Conversion isn't currently supported for image platform. | Only linux/amd64 images are initially supported. | -| NO_SUPPORTED_PLATFORM_FOUND | Conversion isn't currently supported for any of the image platforms in the index. | Only linux/amd64 images are initially supported. No image with this platform is found in the target index. | -| UNSUPPORTED_MEDIATYPE | Conversion isn't supported for the image MediaType. | Conversion can only target images with media type: application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json, or application/vnd.docker.distribution.manifest.list.v2+json | -| UNSUPPORTED_ARTIFACT_TYPE | Conversion isn't supported for the image ArtifactType. | Streaming Artifacts (Artifact type: application/vnd.azure.artifact.streaming.v1) can't be converted again. | -| IMAGE_NOT_RUNNABLE | Conversion isn't supported for nonrunnable images. | Only linux/amd64 runnable images are initially supported. | --## Troubleshooting Failed AKS Pod Deployments --If AKS pod deployment fails with an error related to image pulling, like the following example. 
```bash
Failed to pull image "mystreamingtest.azurecr.io/jupyter/all-spark-notebook:latest":
rpc error: code = Unknown desc = failed to pull and unpack image
"mystreamingtest.azurecr.io/latestobd/jupyter/all-spark-notebook:latest":
failed to resolve reference "mystreamingtest.azurecr.io/jupyter/all-spark-notebook:latest":
unexpected status from HEAD request to http://localhost:8578/v2/jupyter/all-spark-notebook/manifests/latest?ns=mystreamingtest.azurecr.io:503 Service Unavailable
```

To troubleshoot this issue, check the following:

1. Verify that AKS has permissions to access the container registry `mystreamingtest.azurecr.io`.
1. Ensure that the container registry `mystreamingtest.azurecr.io` is accessible and properly attached to AKS.

## Checking for the "UpgradeIfStreamableDisabled" pod condition

If the AKS pod condition shows "UpgradeIfStreamableDisabled," check whether the image is from an Azure Container Registry.

## Using a digest instead of a tag for a streaming artifact

If you deploy the streaming artifact by digest instead of by tag (for example, `mystreamingtest.azurecr.io/jupyter/all-spark-notebook@sha256:4ef83ea6b0f7763c230e696709d8d8c398e21f65542db36e82961908bcf58d18`), the AKS pod events and condition messages don't include streaming-related information. However, you still see fast container startup, because the underlying container engine streams the image content to AKS when it detects that the image has been converted for streaming.

## Related content

> [!div class="nextstepaction"]
> [Artifact streaming](./container-registry-artifact-streaming.md)
|
container-registry | Troubleshoot Connected Registry Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-connected-registry-arc.md | - Title: "Known issues: Connected Registry Arc Extension"
description: "Learn how to troubleshoot the most common problems for a Connected Registry Arc Extension and resolve issues with ease."
Previously updated: 05/09/2024
#customer intent: As a customer, I want to understand the common issues with the connected registry Arc extension and how to troubleshoot them.

# Troubleshoot connected registry extension

This article discusses some common error messages that you may receive when you install or update the connected registry extension for Arc-enabled Kubernetes clusters.

## How the connected registry extension is installed

The connected registry extension is released as a Helm chart and installed by Helm v3. All components of the connected registry extension are installed in the _connected-registry_ namespace. You can use the following commands to check the extension status:

```bash
# get the extension status
az k8s-extension show --name <extension-name> \
  --cluster-name <my-cluster-name> \
  --cluster-type connectedClusters \
  --resource-group <my-resource-group-name>

# check status of all pods of the connected registry extension
kubectl get pod -n connected-registry

# get events of the extension
kubectl get events -n connected-registry --sort-by='.lastTimestamp'
```

## Common errors

### Error: can't reuse a name that is still in use

This error means the extension name you specified already exists. If the name is already in use, you need to use another name.

### Error: unable to create new content in namespace _connected-registry_ because it's being terminated

This error happens when an uninstallation operation isn't finished and another installation operation is triggered. You can run the `az k8s-extension show` command to check the provisioning status of the extension and make sure the extension has been uninstalled before taking other actions.

### Error: failed in download the Chart path not found

This error happens when you specify the wrong extension version. You need to make sure the specified version exists. If you want to use the latest version, you don't need to specify `--version`.

## Common scenarios

### Scenario 1: Installation fails but doesn't show an error message

If extension creation or update fails without showing a clear error message, you can inspect where the creation failed by running the `az k8s-extension list` command:

```bash
az k8s-extension list \
  --resource-group <my-resource-group-name> \
  --cluster-name <my-cluster-name> \
  --cluster-type connectedClusters
```

**Solution:** Restart the cluster, register the service provider, or delete and reinstall the connected registry

To fix this issue, try the following methods:

- Restart your Arc Kubernetes cluster.
- Register the KubernetesConfiguration service provider.
- Force delete and reinstall the connected registry extension.

### Scenario 2: Targeted connected registry version doesn't exist

When you try to install the connected registry extension to target a specific version, you receive an error message that states that the connected registry version doesn't exist.

**Solution:** Install again with a supported connected registry version

Try again to install the extension. Make sure that you use a supported version of the connected registry.
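For example, when you retry the installation, either pin `--version` to a version you know is supported or omit it so that the latest available version is installed. The following sketch reuses the placeholder names and settings from the quickstart:

```azurecli
az k8s-extension create \
  --cluster-name <my-cluster-name> \
  --cluster-type connectedClusters \
  --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
  --name <extension-name> \
  --resource-group <my-resource-group-name> \
  --config service.clusterIP=<service-cluster-ip> \
  --config-protected-file protected-settings-extension.json
```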
## Common issues

### Issue: Extension creation stuck in running state

**Possibility 1:** Issue with Persistent Volume Claim (PVC)

- Check the status of the connected registry PVC:

```bash
kubectl get pvc -n connected-registry -o yaml connected-registry-pvc
```

The value of _phase_ under _status_ should be _Bound_. If it doesn't change from _Pending_, delete the extension.

- Check whether the desired storage class is in your list of storage classes:

```bash
kubectl get storageclass --all-namespaces
```

- If not, recreate the extension and add the following parameter:

```bash
--config pvc.storageClassName="standard"
```

- Alternatively, it could be an issue with not having enough space for the PVC. Recreate the extension with the following parameter:

```bash
--config pvc.storageRequest="250Gi"
```

**Possibility 2:** Connection string is bad

- Check the logs for the connected registry pod:

```bash
kubectl get pod -n connected-registry
```

- Copy the name of the connected registry pod (for example, "connected-registry-8d886cf7f-w4prp") and paste it into the following command:

```bash
kubectl logs -n connected-registry connected-registry-8d886cf7f-w4prp
```

- If you see the following error message, the connected registry's connection string is bad:

```bash
Response: '{"errors":[{"code":"UNAUTHORIZED","message":"Incorrect Password","detail":"Please visit https://aka.ms/acr#UNAUTHORIZED for more information."}]}'
```

- Ensure that a _protected-settings-extension.json_ file has been created:

```bash
cat protected-settings-extension.json
```

- If needed, regenerate _protected-settings-extension.json_:

```bash
cat << EOF > protected-settings-extension.json
{
"connectionString": "$(az acr connected-registry get-settings \
  --name myconnectedregistry \
  --registry myacrregistry \
  --parent-protocol https \
  --generate-password 1 \
  --query ACR_REGISTRY_CONNECTION_STRING --output tsv --yes)"
}
EOF
```

- Update the extension to include the new connection string:

```bash
az k8s-extension update \
  --cluster-name <myarck8scluster> \
  --cluster-type connectedClusters \
  --name <myconnectedregistry> \
  -g <myresourcegroup> \
  --config-protected-file protected-settings-extension.json
```

### Issue: Extension created, but connected registry is not in an 'Online' state

**Possibility 1:** Previous connected registry hasn't been deactivated

This scenario commonly happens when a previous connected registry extension has been deleted and a new one has been created for the same connected registry.

- Check the logs for the connected registry pod:

```bash
kubectl get pod -n connected-registry
```

- Copy the name of the connected registry pod (for example, "connected-registry-xxxxxxxxx-xxxxx") and paste it into the following command:

```bash
kubectl logs -n connected-registry connected-registry-xxxxxxxxx-xxxxx
```

- If you see the following error message, the connected registry needs to be deactivated:

`Response: '{"errors":[{"code":"ALREADY_ACTIVATED","message":"Failed to activate the connected registry as it is already activated by another instance. Only one instance is supported at any time.","detail":"Please visit https://aka.ms/acr#ALREADY_ACTIVATED for more information."}]}'`

- Run the following command to deactivate:

```azurecli
az acr connected-registry deactivate -n <myconnectedregistry> -r <mycontainerregistry>
```

After a few minutes, the connected registry pod should be recreated, and the error should disappear.
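To confirm that the connected registry returns to an *Online* state after deactivation, you can also check it from the cloud side and inspect the `connectionState` field in the output. This is a sketch using the placeholder names from the preceding commands:

```azurecli
az acr connected-registry show \
  --registry <mycontainerregistry> \
  --name <myconnectedregistry>
```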
## Enable logging

- Run the `az acr connected-registry update` command to update the connected registry extension with the debug log level:

```azurecli
az acr connected-registry update --registry mycloudregistry --name myacrregistry --log-level debug
```

- The following log levels can be applied to aid in troubleshooting:

  - **Debug** provides detailed information for debugging purposes.
  - **Information** provides general information for debugging purposes.
  - **Warning** indicates potential problems that aren't yet errors but might become errors if no action is taken.
  - **Error** logs errors that prevent an operation from completing.
  - **None** turns off logging, so no log messages are written.

- Adjust the log level as needed to troubleshoot the issue.

Two settings control the verbosity of logs when you debug issues with a connected registry:

The connected registry log level is specific to the connected registry's operations and determines the severity of messages that the connected registry handles. This setting is used to manage the logging behavior of the connected registry itself.

**--log-level** sets the log level on the instance. The log level determines the severity of messages that the logger handles. By setting the log level, you can filter out messages that are below a certain severity. For example, if you set the log level to "warning", the logger handles warning, error, and critical messages, but ignores information and debug messages.

The Azure CLI log level controls the verbosity of the output messages during the operation of the Azure CLI. The Azure CLI (az) provides several verbosity options, which can be adjusted to control the amount of output information during its operation:

**--verbose** increases the verbosity of the logs. It provides more detailed information than the default setting, which can be useful for identifying issues.

**--debug** enables full debug logs. Debug logs provide the most detailed information, including all the information provided at the "verbose" level plus more details intended for diagnosing problems.

## Next steps

> [!div class="nextstepaction"]
> [Quickstart: Deploying the Connected Registry Arc Extension](quickstart-connected-registry-arc-cli.md)
> [Glossary of terms](connected-registry-glossary.md)
|
container-registry | Tutorial Connected Registry Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-connected-registry-arc.md | - Title: "Secure and deploy connected registry Arc extension" -description: "Learn to secure the connected registry Arc extension deployment with HTTPS, TLS, optional no TLS, BYOC certificate, and trust distribution." ---- Previously updated : 06/17/2024--#customer intent: Learn how to secure and deploy the connected registry extension with HTTPS, TLS encryption, and upgrades/rollbacks. ----# Tutorial: Secure deployment methods for the connected registry extension --These tutorials cover various deployment scenarios for the connected registry extension in an Arc-enabled Kubernetes cluster. Once the connected registry extension is installed, you can synchronize images from your cloud registry to on-premises or remote locations. --Before you dive in, take a moment to learn how [Arc-enabled Kubernetes][Arc-enabled Kubernetes] works conceptually. --The connected registry can be securely deployed using various encryption methods. To ensure a successful deployment, follow the quickstart guide to review prerequisites and other pertinent information. By default, the connected registry is configured with HTTPS, ReadOnly mode, Trust Distribution, and the Cert Manager service. You can add more customizations and dependencies as needed, depending on your scenario. --### What is Cert Manager service? --The connected registry cert manager is a service that manages TLS certificates for the connected registry extension in an Azure Arc-enabled Kubernetes cluster. It ensures secure communication between the connected registry and other components by handling the creation, renewal, and distribution of certificates. This service can be installed as part of the connected registry deployment, or you can use an existing cert manager if it's already installed on your cluster. --[Cert-Manager][cert-manager] is an open-source Kubernetes add-on that automates the management and issuance of TLS certificates from various sources. It manages the lifecycle of certificates issued by CA pools created using CA Service, ensuring they are valid and renewed before they expire. --### What is trust distribution? --Connected registry trust distribution refers to the process of securely distributing trust between the connected registry service and Kubernetes clients within a cluster. This is achieved by using a Certificate Authority (CA), such as cert-manager, to sign TLS certificates, which are then distributed to both the registry service and the clients. This ensures that all entities can securely authenticate each other, maintaining a secure and trusted environment within the Kubernetes cluster. --## Prerequisites --To complete this tutorial, you need: --* Follow the [quickstart][quickstart] to securely deploy the connected registry extension. --## Deploy connected registry extension using your preinstalled cert-manager --In this tutorial, we demonstrate how to use a preinstalled cert-manager service on the cluster. 
This setup gives you control over certificate management, enabling you to deploy the connected registry extension with encryption by following the steps provided: --Run the [az-k8s-extension-create][az-k8s-extension-create] command in the [quickstart][quickstart] and set the `cert-manager.enabled=true` and `cert-manager.install=false` parameters to determine the cert-manager service is installed and enabled: --```azurecli - az k8s-extension create --cluster-name myarck8scluster \ - --cluster-type connectedClusters \ - --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --config service.clusterIP=192.100.100.1 \ - --config cert-manager.install=false \ - --config-protected-file protected-settings-extension.json -``` --## Deploy connected registry extension using bring your own certificate (BYOC) --In this tutorial, we demonstrate how to use your own certificate (BYOC) on the cluster. BYOC allows you to use your own public certificate and private key pair, giving you control over certificate management. This setup enables you to deploy the connected registry extension with encryption by following the provided steps: -->[!NOTE] ->BYOC is applicable for customers who bring their own certificate that is already trusted by their Kubernetes nodes. It is not recommended to manually update the nodes to trust the certificates. --Follow the [quickstart][quickstart] and add the public certificate and private key string variable + value pair. --1. Create self-signed SSL cert with connected-registry service IP as the SAN --```bash - mkdir /certs -``` --```bash -openssl req -newkey rsa:4096 -nodes -sha256 -keyout /certs/mycert.key -x509 -days 365 -out /certs/mycert.crt -addext "subjectAltName = IP:<service IP>" -``` --2. Get base64 encoded strings of these cert files --```bash -export TLS_CRT=$(cat mycert.crt | base64 -w0) -export TLS_KEY=$(cat mycert.key | base64 -w0) -``` --3. Protected settings file example with secret in JSON format: --> [!NOTE] -> The public certificate and private key pair must be encoded in base64 format and added to the protected settings file. - -```json - { - "connectionString": "[connection string here]", - "tls.crt": $TLS_CRT, - "tls.key": $TLS_KEY, - "tls.cacrt": $TLS_CRT - } -``` --4. Now, you can deploy the Connected registry extension with HTTPS (TLS encryption) using the public certificate and private key pair management by configuring variables set to `cert-manager.enabled=false` and `cert-manager.install=false`. With these parameters, the cert-manager isn't installed or enabled since the public certificate and private key pair is used instead for encryption. --5. Run the [az-k8s-extension-create][az-k8s-extension-create] command for deployment after protected settings file is edited: -- ```azurecli - az k8s-extension create --cluster-name myarck8scluster \ - --cluster-type connectedClusters \ - --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --config service.clusterIP=192.100.100.1 \ - --config cert-manager.enabled=false \ - --config cert-manager.install=false \ - --config-protected-file protected-settings-extension.json - ``` --## Deploy connected registry with Kubernetes secret management --In this tutorial, we demonstrate how to use a [Kubernetes secret][Kubernetes secret] on your cluster. Kubernetes secret allows you to securely manage authorized access between pods within the cluster. 
This setup enables you to deploy the connected registry extension with encryption by following these steps:

Follow the [quickstart][quickstart] and add the Kubernetes TLS secret string variable + value pair.

1. Create a self-signed SSL certificate with the connected registry service IP as the SAN:

```bash
mkdir /certs
```

```bash
openssl req -newkey rsa:4096 -nodes -sha256 -keyout /certs/mycert.key -x509 -days 365 -out /certs/mycert.crt -addext "subjectAltName = IP:<service IP>"
```

2. Get base64-encoded strings of the certificate files:

```bash
export TLS_CRT=$(cat /certs/mycert.crt | base64 -w0)
export TLS_KEY=$(cat /certs/mycert.key | base64 -w0)
```

3. Create the Kubernetes secret:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: k8secret
type: kubernetes.io/tls
data:
  ca.crt: $TLS_CRT
  tls.crt: $TLS_CRT
  tls.key: $TLS_KEY
EOF
```

4. Protected settings file example with the secret in JSON format:

   ```json
   {
     "connectionString": "[connection string here]",
     "tls.secret": "k8secret"
   }
   ```

Now, you can deploy the connected registry extension with HTTPS (TLS encryption) using Kubernetes secret management by setting `cert-manager.enabled=false` and `cert-manager.install=false`. With these parameters, cert-manager isn't installed or enabled, because the Kubernetes secret is used for encryption instead.

5. Run the [az-k8s-extension-create][az-k8s-extension-create] command for deployment after the protected settings file is edited:

   ```azurecli
   az k8s-extension create --cluster-name myarck8scluster \
   --cluster-type connectedClusters \
   --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
   --name myconnectedregistry \
   --resource-group myresourcegroup \
   --config service.clusterIP=192.100.100.1 \
   --config cert-manager.enabled=false \
   --config cert-manager.install=false \
   --config-protected-file protected-settings-extension.json
   ```

## Deploy the connected registry using your own trust distribution and disable the connected registry's default trust distribution

In this tutorial, we demonstrate how to configure trust distribution on the cluster. By using your own Kubernetes secret or public certificate and private key pair, you can deploy the connected registry extension with TLS encryption and your own trust distribution, and reject the connected registry's default trust distribution. This setup enables you to deploy the connected registry extension with encryption by following these steps:

1. Follow the [quickstart][quickstart] to add either the Kubernetes secret or the public certificate and private key variable + value pairs to the protected settings file in JSON format.

2. 
Run the [az-k8s-extension-create][az-k8s-extension-create] command in [quickstart][quickstart] and set the `trustDistribution.enabled=false`, `trustDistribution.skipNodeSelector=false` parameters to reject Connected registry trust distribution: - - ```azurecli - az k8s-extension create --cluster-name myarck8scluster \ - --cluster-type connectedClusters \ - --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --config service.clusterIP=192.100.100.1 \ - --config trustDistribution.enabled=false \ - --config cert-manager.enabled=false \ - --config cert-manager.install=false \ - --config-protected-file <JSON file path> - ``` --With these parameters, cert-manager isn't installed or enabled, additionally, the Connected registry trust distribution isn't enforced. Instead you're using the cluster provided trust distribution for establishing trust between the Connected registry and the client nodes. --## Clean up resources --By deleting the deployed Connected registry extension, you remove the corresponding Connected registry pods and configuration settings. --1. Run the [az-k8s-extension-delete][az-k8s-extension-delete] command to delete the Connected registry extension: -- ```azurecli - az k8s-extension delete --name myconnectedregistry - --cluster-name myarcakscluster \ - --resource-group myresourcegroup \ - --cluster-type connectedClusters - ``` --2. Run the [az acr connected-registry delete][az-acr-connected-registry-delete] command to delete the Connected registry: -- ```azurecli - az acr connected-registry delete --registry myacrregistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup - ``` --By deleting the Connected registry extension and the Connected registry, you remove all the associated resources and configurations. --## Next steps --- [Enable Connected registry with Azure arc CLI][quickstart]-- [Upgrade Connected registry with Azure arc](tutorial-connected-registry-upgrade.md)-- [Sync Connected registry with Azure arc in Scheduled window](tutorial-connected-registry-sync.md)-- [Troubleshoot Connected registry with Azure arc](troubleshoot-connected-registry-arc.md)-- [Glossary of terms](connected-registry-glossary.md)--<!-- LINKS - internal --> -[create-acr]: container-registry-get-started-azure-cli.md -[dedicated data endpoints]: container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints -[Install Azure CLI]: /cli/azure/install-azure-cli -[k8s-extension]: /cli/azure/k8s-extension -[azure-resource-provider-requirements]: /azure/azure-arc/kubernetes/system-requirements#azure-resource-provider-requirements -[quickstart-connect-cluster]: /azure/azure-arc/kubernetes/quickstart-connect-cluster -[tutorial-aks-cluster]: /azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli -[quickstart]: quickstart-connected-registry-arc-cli.md -[Arc-enabled Kubernetes]: /azure/azure-arc/kubernetes/overview -[cert-manager]: https://cert-manager.io/ -[Kubernetes secret]: https://kubernetes.io/docs/concepts/configuration/secret/ -<!-- LINKS - external --> -[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create -[az-k8s-extension-delete]: /cli/azure/k8s-extension#az-k8s-extension-delete -[az-acr-connected-registry-delete]: /cli/azure/acr/connected-registry#az-acr-connected-registry-delete |
container-registry | Tutorial Connected Registry Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-connected-registry-sync.md | - Title: "Connected registry synchronization scheduling"
description: "Sync the connected registry extension with Azure Arc using a synchronization schedule and window."
Previously updated: 06/17/2024
#customer intent: Learn how to sync the connected registry extension using a synchronization schedule and window.

# Configuring the connected registry sync schedule and window

In this tutorial, you learn how to configure synchronization for a connected registry. The process includes updating the connected registry extension with a synchronization schedule and window.

You're guided through updating the synchronization schedule using Azure CLI commands. This tutorial covers setting up the connected registry to sync continuously every minute, or to sync once a day within a defined sync window.

The commands use CRON expressions to define the sync schedule and the ISO 8601 duration format for the sync window. Remember to replace the placeholders with your actual registry names when executing the commands.

## Prerequisites

To complete this tutorial, you need the following resources:

* Follow the [quickstart][quickstart] as needed.

## Update the connected registry to sync once a day

Run the [az acr connected-registry update][az-acr-connected-registry-update] command to configure an occasionally connected registry that connects and syncs once a day within a 4-hour sync window.

For example, the following command configures the connected registry `myconnectedregistry` to sync daily at 12:00 UTC (the CRON expression `"0 12 * * *"`) and sets the synchronization window to 4 hours (`PT4H`). The sync window is the duration for which the connected registry syncs with the parent registry `myacrregistry` after a sync starts.

```azurecli
az acr connected-registry update --registry myacrregistry \
  --name myconnectedregistry \
  --sync-schedule "0 12 * * *" \
  --sync-window PT4H
```

The configuration syncs the connected registry daily at noon UTC for 4 hours.

## Update the connected registry to sync continuously every minute

Run the [az acr connected-registry update][az-acr-connected-registry-update] command to update the connected registry to connect and sync continuously every minute.

For example, the following command configures the connected registry `myconnectedregistry` to sync every minute with the cloud registry.

```azurecli
az acr connected-registry update --registry myacrregistry \
  --name myconnectedregistry \
  --sync-schedule "* * * * *"
```

The configuration syncs the connected registry with the cloud registry every minute.

## Next steps

- [Enable Connected registry with Azure arc CLI][quickstart]
- [Deploy the Connected registry Arc extension](tutorial-connected-registry-arc.md)
- [Upgrade Connected registry with Azure arc](tutorial-connected-registry-upgrade.md)
- [Troubleshoot Connected registry with Azure arc](troubleshoot-connected-registry-arc.md)
- [Glossary of terms](connected-registry-glossary.md)

<!-- LINKS - internal -->
[az-acr-connected-registry-update]: /cli/azure/acr/connected-registry#az-acr-connected-registry-update
[quickstart]: quickstart-connected-registry-arc-cli.md
|
container-registry | Tutorial Connected Registry Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-connected-registry-upgrade.md | - Title: "Upgrade and roll back connected registry Arc extension version" -description: "Upgrade and roll back the connected registry Arc extension version. Learn how to upgrade and roll back the connected registry extension version in this tutorial." ---- Previously updated : 06/17/2024--#customer intent: Learn how to upgrade and roll back the connected registry Arc extension. ---# Upgrade and roll back the connected registry extension version --In this tutorial, you learn how to upgrade and roll back the connected registry extension version. --## Prerequisites --To complete this tutorial, you need the following resources: --* Follow the [quickstart][quickstart] as needed. --## Deploy the connected registry extension with auto upgrade enabled --Follow the [quickstart][quickstart] to edit the [az-k8s-extension-create][az-k8s-extension-create] command and include the `--auto-upgrade-minor-version true` parameter. This parameter automatically upgrades the extension to the latest version whenever a new version is available. --```azurecli - az k8s-extension create --cluster-name myarck8scluster \ - --cluster-type connectedClusters \ - --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --config service.clusterIP=192.100.100.1 \ - --config-protected-file protected-settings-extension.json \ - --auto-upgrade-minor-version true -``` --## Deploy the connected registry extension with auto roll back enabled --> [!IMPORTANT] -> When a customer pins to a specific version, the extension does not auto-rollback. Auto-rollback will only occur if the--auto-upgrade-minor-version flag is set to true. --Follow the [quickstart][quickstart] to edit the [az k8s-extension update] command and add --version with your desired version. This example uses version 0.6.0. This parameter updates the extension version to the desired pinned version. --```azurecli - az k8s-extension update --cluster-name myarck8scluster \ - --cluster-type connectedClusters \ - --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --config service.clusterIP=192.100.100.1 \ - --config-protected-file <JSON file path> \ - --auto-upgrade-minor-version true \ - --version 0.6.0 -``` --## Deploy the connected registry extension using manual upgrade steps --Follow the [quickstart][quickstart] to edit the [az-k8s-extension-update][az-k8s-extension-update] command and add--version with your desired version. This example uses version 0.6.1. This parameter upgrades the extension version to 0.6.1. --```azurecli - az k8s-extension update --cluster-name myarck8scluster \ - --cluster-type connectedClusters \ - --name myconnectedregistry \ - --resource-group myresourcegroup \ - --config service.clusterIP=192.100.100.1 \ - --auto-upgrade-minor-version false \ - --version 0.6.1 -``` --## Next steps --In this tutorial, you learned how to upgrade the Connected registry extension with Azure Arc. 
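After an upgrade or rollback, you can verify which extension version is actually installed on the cluster. This is a minimal sketch that reuses the names from the earlier examples:

```azurecli
az k8s-extension show \
  --cluster-name myarck8scluster \
  --cluster-type connectedClusters \
  --resource-group myresourcegroup \
  --name myconnectedregistry \
  --query version \
  --output tsv
```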
--- [Enable Connected registry with Azure arc CLI][quickstart]-- [Deploy the Connected registry Arc extension](tutorial-connected-registry-arc.md)-- [Sync Connected registry with Azure arc](tutorial-connected-registry-sync.md)-- [Troubleshoot Connected registry with Azure arc](troubleshoot-connected-registry-arc.md)-- [Glossary of terms](connected-registry-glossary.md)--[quickstart]: quickstart-connected-registry-arc-cli.md -[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create -[az-k8s-extension-update]: /cli/azure/k8s-extension#az-k8s-extension-update |
container-registry | Tutorial Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-customer-managed-keys.md | - Title: Overview of customer-managed keys -description: Learn how to encrypt your Premium container registry by using a customer-managed key stored in Azure Key Vault. - Previously updated : 10/31/2023-----# Overview of customer-managed keys --Azure Container Registry automatically encrypts images and other artifacts that you store. By default, Azure automatically encrypts the registry content at rest by using [service-managed keys](../security/fundamentals/encryption-models.md). By using a customer-managed key, you can supplement default encryption with an additional encryption layer. - -This article is part one in a four-part tutorial series. The tutorial covers: --> [!div class="checklist"] -> * Overview of customer-managed keys -> * Enable a customer-managed key -> * Rotate and revoke a customer-managed key -> * Troubleshoot a customer-managed key --## About customer-managed keys --A customer-managed key gives you the ownership to bring your own key in [Azure Key Vault](/azure/key-vault/general/overview). When you enable a customer-managed key, you can manage its rotations, control the access and permissions to use it, and audit its use. --Key features include: --* **Regulatory compliance**: Azure automatically encrypts registry content at rest with [service-managed keys](../security/fundamentals/encryption-models.md), but customer-managed key encryption helps you meet guidelines for regulatory compliance. --* **Integration with Azure Key Vault**: Customer-managed keys support server-side encryption through integration with [Azure Key Vault](/azure/key-vault/general/overview). With customer-managed keys, you can create your own encryption keys and store them in a key vault. Or you can use Azure Key Vault APIs to generate keys. --* **Key lifecycle management**: Integrating customer-managed keys with [Azure Key Vault](/azure/key-vault/general/overview) gives you full control and responsibility for the key lifecycle, including rotation and management. --## Before you enable a customer-managed key --Before you configure Azure Container Registry with a customer-managed key, consider the following information: --* This feature is available in the Premium service tier for a container registry. For more information, see [Azure Container Registry service tiers](container-registry-skus.md). -* You can currently enable a customer-managed key only while creating a registry. -* You can't disable the encryption after you enable a customer-managed key on a registry. -* You have to configure a *user-assigned* managed identity to access the key vault. Later, if required, you can enable the registry's *system-assigned* managed identity for key vault access. -* Azure Container Registry supports only RSA or RSA-HSM keys. Elliptic-curve keys aren't currently supported. -* In a registry that's encrypted with a customer-managed key, you can retain logs for [Azure Container Registry tasks](container-registry-tasks-overview.md) for only 24 hours. To retain logs for a longer period, see [View and manage task run logs](container-registry-tasks-logs.md#alternative-log-storage). -* [Content trust](container-registry-content-trust.md) is currently not supported in a registry that's encrypted with a customer-managed key. 
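As noted in the preceding considerations, only RSA or RSA-HSM keys are supported. For illustration, a compatible key could be created in Azure Key Vault as follows; this is a sketch, the vault and key names are placeholders, and your organization's key-size and protection requirements may differ:

```azurecli
az keyvault key create \
  --vault-name <key-vault-name> \
  --name <key-name> \
  --kty RSA \
  --size 2048
```

The next article in this series covers enabling the registry with such a key and a user-assigned managed identity.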
--## Update the customer-managed key version --Azure Container Registry supports both automatic and manual rotation of registry encryption keys when a new key version is available in Azure Key Vault. -->[!IMPORTANT] ->It's an important security consideration for a registry with customer-managed key encryption to frequently update (rotate) the key versions. Follow your organization's compliance policies to regularly update [key versions](/azure/key-vault/general/about-keys-secrets-certificates#objects-identifiers-and-versioning) while storing a customer-managed key in Azure Key Vault. --* **Automatically update the key version**: When a registry is encrypted with a non-versioned key, Azure Container Registry regularly checks the key vault for a new key version and updates the customer-managed key within one hour. We suggest that you omit the key version when you enable registry encryption with a customer-managed key. Azure Container Registry will then automatically use and update the latest key version. --* **Manually update the key version**: When a registry is encrypted with a specific key version, Azure Container Registry uses that version for encryption until you manually rotate the customer-managed key. We suggest that you specify the key version when you enable registry encryption with a customer-managed key. Azure Container Registry will then use a specific version of a key for registry encryption. --For details, see [Key rotation](tutorial-enable-customer-managed-keys.md#key-rotation) and [Update key version](tutorial-rotate-revoke-customer-managed-keys.md#create-or-update-the-key-version-by-using-the-azure-cli). --## Next steps --* To enable your container registry with a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template, advance to the next article: [Enable a customer-managed key](tutorial-enable-customer-managed-keys.md). -* Learn more about [encryption at rest in Azure](../security/fundamentals/encryption-atrest.md). -* Learn more about access policies and how to [secure access to a key vault](/azure/key-vault/general/security-features). ---<!-- LINKS - external --> --<!-- LINKS - internal --> --[az-feature-register]: /cli/azure/feature#az_feature_register -[az-feature-show]: /cli/azure/feature#az_feature_show -[az-group-create]: /cli/azure/group#az_group_create -[az-identity-create]: /cli/azure/identity#az_identity_create -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create -[az-keyvault-create]: /cli/azure/keyvault#az_keyvault_create -[az-keyvault-key-create]: /cli/azure/keyvault/key#az_keyvault_key_create -[az-keyvault-key]: /cli/azure/keyvault/key -[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy -[az-keyvault-delete-policy]: /cli/azure/keyvault#az_keyvault_delete_policy -[az-resource-show]: /cli/azure/resource#az_resource_show -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-acr-encryption-rotate-key]: /cli/azure/acr/encryption#az_acr_encryption_rotate_key -[az-acr-encryption-show]: /cli/azure/acr/encryption#az_acr_encryption_show |
container-registry | Tutorial Deploy Connected Registry Nested Iot Edge Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-deploy-connected-registry-nested-iot-edge-cli.md | - Title: 'Tutorial: Deploy a connected registry to an IoT Edge hierarchy' -description: In this tutorial, use Azure CLI commands to create a two-layer hierarchy of Azure IoT Edge devices and deploy a connected registry as a module at each layer. - Previously updated : 10/31/2023-------# Tutorial: Deploy a connected registry to a nested IoT Edge hierarchy --In this tutorial, you use Azure CLI commands to create a two-layer hierarchy of Azure IoT Edge devices and deploy a [connected registry](intro-connected-registry.md) as a module at each layer. In this scenario, a device in the [top layer](overview-connected-registry-and-iot-edge.md#top-layer) communicates with a cloud registry. A device in the [lower layer](overview-connected-registry-and-iot-edge.md#nested-layers) communicates with its connected registry parent in the top layer. --For an overview of using a connected registry with IoT Edge, see [Using connected registry with Azure IoT Edge](overview-connected-registry-and-iot-edge.md). ---* Azure IoT Hub. For deployment steps, see [Create an IoT hub using the Azure portal](../iot-hub/iot-hub-create-through-portal.md). -* Two connected registry resources in Azure. For deployment steps, see quickstarts using the [Azure CLI][quickstart-connected-registry-cli] or [Azure portal][quickstart-connected-registry-portal]. -- * For the top layer, the connected registry can be in either ReadWrite or ReadOnly mode. This article assumes ReadWrite mode, and the connected registry name is stored in the environment variable `$CONNECTED_REGISTRY_RW`. - * For the lower layer, the connected registry must be in ReadOnly mode. This article assumes the connected registry name is stored in the environment variable `$CONNECTED_REGISTRY_RO`. ---## Retrieve connected registry configuration --To deploy each connected registry to the IoT Edge device in the hierarchy, you need to retrieve configuration settings from the connected registry resource in Azure. If needed, run the [az acr connected-registry get-settings][az-acr-connected-registry-get-settings] command for each connected registry to retrieve the configuration. --By default, the settings information doesn't include the [sync token](overview-connected-registry-access.md#sync-token) password, which is also needed to deploy the connected registry. Optionally, generate one of the passwords by passing the `--generate-password 1` or `--generate-password 2` parameter. Save the generated password to a safe location. It can't be retrieved again. --> [!WARNING] -> Regenerating a password rotates the sync token credentials. If you configured a device using the previous password, you need to update the configuration. --```azurecli -# Use the REGISTRY_NAME variable in the following Azure CLI commands to identify the registry -REGISTRY_NAME=<container-registry-name> --# Run the command for each registry resource in the hierarchy --az acr connected-registry get-settings \ - --registry $REGISTRY_NAME \ - --name $CONNECTED_REGISTRY_RW \ - --parent-protocol https --az acr connected-registry get-settings \ - --registry $REGISTRY_NAME \ - --name $CONNECTED_REGISTRY_RO \ - --parent-protocol https -``` ---## Configure deployment manifests --A deployment manifest is a JSON document that describes which modules to deploy to an IoT Edge device. 
For more information, see [Understand how IoT Edge modules can be used, configured, and reused](../iot-edge/module-composition.md). --To deploy the connected registry module on each IoT Edge device using the Azure CLI, save the following deployment manifests locally as JSON files. Use the information from the previous sections to update the relevant JSON values in each manifest. Use the file paths in the next section when you run the command to apply the configuration to your device. --### Deployment manifest for the top layer --For the device at the top layer, create a deployment manifest file `deploymentTopLayer.json` with the following content. This manifest is similar to the one used in [Quickstart: Deploy a connected registry to an IoT Edge device](quickstart-deploy-connected-registry-iot-edge-cli.md). --> [!NOTE] -> If you already deployed a connected registry to a top layer IoT Edge device using the [quickstart](quickstart-deploy-connected-registry-iot-edge-cli.md), you can use it at the top layer of a nested hierarchy. Modify the deployment steps in this tutorial to configure it in the hierarchy (not shown). ---### Deployment manifest for the lower layer --For the device at the lower layer, create a deployment manifest file *deploymentLowerLayer.json* with the following content. --Overall, the lower layer deployment file is similar to the top layer deployment file. The differences are: --* It pulls the required images from the top layer connected registry instead of from the cloud registry. -- When you set up the top layer connected registry, make sure that it syncs all the required images locally, including `azureiotedge-agent`, `azureiotedge-hub`, `azureiotedge-api-proxy`, and `acr/connected-registry`. The lower layer IoT device needs to pull these images from the top layer connected registry. -* It uses the sync token configured at the lower layer to authenticate with the top layer connected registry. -* It configures the parent gateway endpoint with the top layer connected registry's IP address or FQDN instead of with the cloud registry's FQDN. --> [!IMPORTANT] -> In the following deployment manifest, `$upstream` is used as the IP address or FQDN of the device hosting the parent connected registry. However, `$upstream` is not supported in an environment variable. The connected registry needs to read the environment variable `ACR_PARENT_GATEWAY_ENDPOINT` to get the parent gateway endpoint. Instead of using `$upstream`, the connected registry supports dynamically resolving the IP address or FQDN from another environment variable. -> -> On the nested IoT Edge, there's an environment variable `$IOTEDGE_PARENTHOSTNAME` on the lower layer that is equal to the IP address or FQDN of the parent device. Manually replace the environment variable as the value of `ParentGatewayEndpoint` in the connection string to avoid hard-coding the parent IP address or FQDN. Because the parent device in this example is running `nginx` on port 8000, pass `$IOTEDGE_PARENTHOSTNAME:8000`. You also need to select the proper protocol in `ParentEndpointProtocol`. 
--```json -{ - "modulesContent": { - "$edgeAgent": { - "properties.desired": { - "modules": { - "connected-registry": { - "settings": { - "image": "$upstream:8000/acr/connected-registry:0.8.0", - "createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/azureuser/connected-registry:/var/acr/data\"]}}" - }, - "type": "docker", - "env": { - "ACR_REGISTRY_CONNECTION_STRING": { - "value": "ConnectedRegistryName=<REPLACE_WITH_CONNECTED_REGISTRY_NAME>;SyncTokenName=<REPLACE_WITH_SYNC_TOKEN_NAME>;SyncTokenPassword=<REPLACE_WITH_SYNC_TOKEN_PASSWORD>;ParentGatewayEndpoint=$IOTEDGE_PARENTHOSTNAME:8000;ParentEndpointProtocol=https" - } - }, - "status": "running", - "restartPolicy": "always", - "version": "1.0" - }, - "IoTEdgeApiProxy": { - "settings": { - "image": "$upstream:8000/azureiotedge-api-proxy:1.1.2", - "createOptions": "{\"HostConfig\": {\"PortBindings\": {\"8000/tcp\": [{\"HostPort\": \"8000\"}]}}}" - }, - "type": "docker", - "version": "1.0", - "env": { - "NGINX_DEFAULT_PORT": { - "value": "8000" - }, - "CONNECTED_ACR_ROUTE_ADDRESS": { - "value": "connected-registry:8080" - }, - "NGINX_CONFIG_ENV_VAR_LIST": { - "value": "NGINX_DEFAULT_PORT,BLOB_UPLOAD_ROUTE_ADDRESS,CONNECTED_ACR_ROUTE_ADDRESS,IOTEDGE_PARENTHOSTNAME,DOCKER_REQUEST_ROUTE_ADDRESS" - }, - "BLOB_UPLOAD_ROUTE_ADDRESS": { - "value": "AzureBlobStorageonIoTEdge:11002" - } - }, - "status": "running", - "restartPolicy": "always", - "startupOrder": 3 - } - }, - "runtime": { - "settings": { - "minDockerVersion": "v1.25", - "registryCredentials": { - "connectedregistry": { - "address": "$upstream:8000", - "password": "<REPLACE_WITH_SYNC_TOKEN_PASSWORD>", - "username": "<REPLACE_WITH_SYNC_TOKEN_NAME>" - } - } - }, - "type": "docker" - }, - "schemaVersion": "1.1", - "systemModules": { - "edgeAgent": { - "settings": { - "image": "$upstream:8000/azureiotedge-agent:1.2.4", - "createOptions": "" - }, - "type": "docker", - "env": { - "SendRuntimeQualityTelemetry": { - "value": "false" - } - } - }, - "edgeHub": { - "settings": { - "image": "$upstream:8000/azureiotedge-hub:1.2.4", - "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"443/tcp\":[{\"HostPort\":\"443\"}],\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}" - }, - "type": "docker", - "status": "running", - "restartPolicy": "always" - } - } - } - }, - "$edgeHub": { - "properties.desired": { - "routes": { - "route": "FROM /messages/* INTO $upstream" - }, - "schemaVersion": "1.1", - "storeAndForwardConfiguration": { - "timeToLiveSecs": 7200 - } - } - } - } -} -``` --## Set up and deploy connected registry modules --The following steps are adapted from [Tutorial: Create a hierarchy of IoT Edge devices](../iot-edge/tutorial-nested-iot-edge.md) and are specific to deploying connected registry modules in the IoT Edge hierarchy. See that tutorial for details about individual steps. --### Create top layer and lower layer devices --Create top layer and lower layer virtual machines using an existing [ARM template](https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0/edgeDeploy.json). The template also installs the IoT Edge agent. If you want to deploy from your own devices instead, see [Tutorial: Install or uninstall Azure IoT Edge for Linux](../iot-edge/how-to-install-iot-edge.md) to learn how to manually set up the device. --> [!IMPORTANT] -> For later access to the modules deployed on the top layer device, make sure that you open the following ports inbound: 8000, 443, 5671, 8883. 
For configuration steps, see [How to open ports to a virtual machine with the Azure portal](/azure/virtual-machines/windows/nsg-quickstart-portal). --### Create and configure the hierarchy --Use the `iotedge-config` tool to create and configure your hierarchy by following these steps in the Azure CLI or Azure Cloud Shell: --1. Download the configuration tool. -- ```bash - mkdir nested_iot_edge_tutorial - cd ~/nested_iot_edge_tutorial - wget -O iotedge_config.tar "https://github.com/Azure-Samples/iotedge_config_cli/releases/download/latest/iotedge_config_cli.tar.gz" - tar -xvf iotedge_config.tar - ``` -- This step creates the `iotedge_config_cli_release` folder in your tutorial directory. The template file used to create your device hierarchy is the `iotedge_config.yaml` file found in `~/nested_iot_edge_tutorial/iotedge_config_cli_release/templates/tutorial`. In the same directory, there are two deployment manifests for top and lower layers: `deploymentTopLayer.json` and `deploymentLowerLayer.json` files. --1. Edit `iotedge_config.yaml` with your information. Edit the `iothub_hostname`, `iot_name`, deployment manifest filenames for the top layer and lower layer, and the client token credentials you created to pull images from upstream from each layer. The following example is a sample configuration file: -- ```yaml - config_version: "1.0" -- iothub: - iothub_hostname: <REPLACE_WITH_HUB_NAME>.azure-devices.net - iothub_name: <REPLACE_WITH_HUB_NAME> - ## Authentication method used by IoT Edge devices: symmetric_key or x509_certificate - authentication_method: symmetric_key -- ## Root certificate used to generate device CA certificates. Optional. If not provided a self-signed CA will be generated - # certificates: - # root_ca_cert_path: "" - # root_ca_cert_key_path: "" -- ## IoT Edge configuration template to use - configuration: - template_config_path: "./templates/tutorial/device_config.toml" - default_edge_agent: "$upstream:8000/azureiotedge-agent:1.2.4" -- ## Hierarchy of IoT Edge devices to create - edgedevices: - device_id: top-layer - edge_agent: "<REPLACE_WITH_REGISTRY_NAME>.azurecr.io/azureiotedge-agent:1.2.4" ## Optional. If not provided, default_edge_agent will be used - deployment: "./templates/tutorial/deploymentTopLayer.json" ## Optional. If provided, the given deployment file will be applied to the newly created device - # hostname: "FQDN or IP" ## Optional. If provided, install.sh will not prompt user for this value nor the parent_hostname value - container_auth: ## The token used to pull the image from cloud registry - serveraddress: "<REPLACE_WITH_REGISTRY_NAME>.azurecr.io" - username: "<REPLACE_WITH_SYNC_TOKEN_NAME_FOR_TOP_LAYER>" - password: "<REPLACE_WITH_SYNC_TOKEN_PASSWORD_FOR_TOP_LAYER>" - child: - - device_id: lower-layer - deployment: "./templates/tutorial/deploymentLowerLayer.json" ## Optional. If provided, the given deployment file will be applied to the newly created device - # hostname: "FQDN or IP" ## Optional. If provided, install.sh will not prompt user for this value nor the parent_hostname value - container_auth: ## The token used to pull the image from parent connected registry - serveraddress: "$upstream:8000" - username: "<REPLACE_WITH_SYNC_TOKEN_NAME_FOR_LOWER_LAYER>" - password: "<REPLACE_WITH_SYNC_TOKEN_PASSWORD_FOR_LOWER_LAYER>" - ``` --1. Prepare the top layer and lower layer deployment files: *deploymentTopLayer.json* and *deploymentLowerLayer.json*. 
Copy the [deployment manifest files](#configure-deployment-manifests) you created earlier in this article to the following folder: `~/nested_iot_edge_tutorial/iotedge_config_cli_release/templates/tutorial`. --1. Navigate to your *iotedge_config_cli_release* directory and run the tool to create your hierarchy of IoT Edge devices. -- ```bash - cd ~/nested_iot_edge_tutorial/iotedge_config_cli_release - ./iotedge_config --config ~/nested_iot_edge_tutorial/iotedge_config_cli_release/templates/tutorial/iotedge_config.yaml --output ~/nested_iot_edge_tutorial/iotedge_config_cli_release/outputs -f - ``` -- With the `--output` parameter, the tool creates the device certificates, certificate bundles, and a log file in a directory of your choice. With the `-f` parameter, the tool automatically looks for existing IoT Edge devices in your IoT Hub and removes them, to avoid errors and keep your hub clean. -- The tool could run for several minutes. --1. Copy the *top-layer.zip* and *lower-layer.zip* files generated in the previous step to the corresponding top and lower layer virtual machines using `scp`: -- ```bash - scp <PATH_TO_CONFIGURATION_BUNDLE> <USER>@<VM_IP_OR_FQDN>:~ - ``` --1. Connect to the top layer device to install the configuration bundle. -- 1. Unzip the configuration bundle. You'll need to install zip first. -- ```bash - sudo apt install zip - unzip ~/<PATH_TO_CONFIGURATION_BUNDLE>/<CONFIGURATION_BUNDLE>.zip #unzip top-layer.zip - ``` -- 1. Run `sudo ./install.sh`. Input the IP address or hostname. We recommend using the IP address. - 1. Run `sudo iotedge list` to confirm that all modules are running. --1. Connect to the lower layer device to install the configuration bundle. - 1. Unzip the configuration bundle. You'll need to install zip first. -- ```bash - sudo apt install zip - unzip ~/<PATH_TO_CONFIGURATION_BUNDLE>/<CONFIGURATION_BUNDLE>.zip #unzip lower-layer.zip - ``` -- 1. Run `sudo ./install.sh`. Input the device and parent IP addresses or hostnames. We recommend using the IP addresses. - 1. Run `sudo iotedge list` to confirm that all modules are running. --If you didn't specify a deployment file for device configuration, or if deployment problems occur, such as an invalid deployment manifest on the top or lower layer device, manually deploy the modules. See the following section. --## Manually deploy the connected registry module --Use the following command to deploy the connected registry module manually on an IoT Edge device: --```azurecli -az iot edge set-modules \ - --device-id <device-id> \ - --hub-name <hub-name> \ - --content <deployment-manifest-filename> -``` --For details, see [Deploy Azure IoT Edge modules with Azure CLI](../iot-edge/how-to-deploy-modules-cli.md). --To check the status of the connected registry, use the following [az acr connected-registry show][az-acr-connected-registry-show] command: --```azurecli -az acr connected-registry show \ - --registry $REGISTRY_NAME \ - --name $CONNECTED_REGISTRY_RO \ - --output table -``` --You might need to wait a few minutes until the deployment of the connected registry completes. After successful deployment, the connected registry shows a status of `Online`. --To troubleshoot a deployment, run `iotedge check` on the affected device. For more information, see [Troubleshooting](../iot-edge/tutorial-nested-iot-edge.md#troubleshooting). 
--## Next steps --In this tutorial, you learned how to deploy a connected registry to a nested IoT Edge device. Continue to the next guide to learn how to pull images from the newly deployed connected registry. --> [!div class="nextstepaction"] -> [Pull images from a connected registry][pull-images-from-connected-registry] --<!-- LINKS - internal --> -[az-acr-connected-registry-get-settings]: /cli/azure/acr/connected-registry/install#az_acr_connected_registry_get_settings -[az-acr-connected-registry-show]: /cli/azure/acr/connected-registry#az_acr_connected_registry_show -[az-acr-import]: /cli/azure/acr#az-acr-import -[az-acr-token-credential-generate]: /cli/azure/acr/credential#az-acr-token-credential-generate -[container-registry-intro]: container-registry-intro.md -[pull-images-from-connected-registry]: pull-images-from-connected-registry.md -[quickstart-connected-registry-cli]: quickstart-connected-registry-cli.md -[quickstart-connected-registry-portal]: quickstart-connected-registry-portal.md |
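If the connected registry doesn't reach the `Online` state, it can help to look at the module on the device and the Azure resource side by side. The following is a minimal sketch; it assumes the module name `connected-registry` from the deployment manifests above and the same environment variables, and the `connectionState` query path is an assumption based on the status reported by the resource.

```bash
# On the IoT Edge device: confirm the module is running and review its recent logs.
sudo iotedge list
sudo iotedge logs connected-registry --tail 50

# From the Azure CLI: check the state reported by the connected registry resource.
az acr connected-registry show \
    --registry $REGISTRY_NAME \
    --name $CONNECTED_REGISTRY_RO \
    --query "connectionState" \
    --output tsv
```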
container-registry | Tutorial Enable Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-customer-managed-keys.md | - Title: Enable a customer-managed key -description: In this tutorial, learn how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault. -- Previously updated : 10/31/2023-----# Enable a customer-managed key --This article is part two in a four-part tutorial series. [Part one](tutorial-customer-managed-keys.md) provides an overview of customer-managed keys, their features, and considerations before you enable one on your registry. This article walks you through the steps of enabling a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template. --## Prerequisites --* [Install the Azure CLI][azure-cli] or prepare to use [Azure Cloud Shell](../cloud-shell/quickstart.md). -* Sign in to the [Azure portal](https://portal.azure.com/). --## Enable a customer-managed key by using the Azure CLI --### Create a resource group --Run the [az group create][az-group-create] command to create a resource group that will hold your key vault, container registry, and other required resources: --```azurecli -az group create --name <resource-group-name> --location <location> -``` --### Create a user-assigned managed identity --Configure a user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the registry so that you can access the key vault: --1. Run the [az identity create][az-identity-create] command to create the managed identity: -- ```azurecli - az identity create \ - --resource-group <resource-group-name> \ - --name <managed-identity-name> - ``` --2. In the command output, take note of the `id` and `principalId` values to configure registry access with the key vault: -- ```JSON - { - "clientId": "xxxx2bac-xxxx-xxxx-xxxx-192cxxxx6273", - "clientSecretUrl": "https://control-eastus.identity.azure.net/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentityname/credentials?tid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&oid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&aid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myresourcegroup", - "location": "eastus", - "name": "myidentityname", - "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "resourceGroup": "myresourcegroup", - "tags": {}, - "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "type": "Microsoft.ManagedIdentity/userAssignedIdentities" - } - ``` --3. For convenience, store the `id` and `principalId` values in environment variables: -- ```azurecli - identityID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'id' --output tsv) -- identityPrincipalID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'principalId' --output tsv) - ``` --### Create a key vault --1. Run the [az keyvault create][az-keyvault-create] command to create a key vault where you can store a customer-managed key for registry encryption. --2. By default, the new key vault automatically enables the *soft delete* setting. 
To prevent data loss from accidental deletion of keys or key vaults, we recommend enabling the *purge protection* setting: -- ```azurecli - az keyvault create --name <key-vault-name> \ - --resource-group <resource-group-name> \ - --enable-purge-protection - ``` --3. For convenience, take a note of the key vault's resource ID and store the value in environment variables: -- ```azurecli - keyvaultID=$(az keyvault show --resource-group <resource-group-name> --name <key-vault-name> --query 'id' --output tsv) - ``` --#### Enable trusted services to access the key vault --If the key vault is in protection with a firewall or virtual network (private endpoint), you must enable the network settings to allow access by [trusted Azure services](/azure/key-vault/general/overview-vnet-service-endpoints#trusted-services). For more information, see [Configure Azure Key Vault networking settings](/azure/key-vault/general/how-to-azure-key-vault-network-security?tabs=azure-cli). --#### Enable managed identities to access the key vault --There are two ways to enable managed identities to access your key vault. --The first option is to configure the access policy for the key vault and set key permissions for access with a user-assigned managed identity: --1. Run the [az keyvault set policy][az-keyvault-set-policy] command. Pass the previously created and stored environment variable value of `principalID`. - -2. Set key permissions to `get`, `unwrapKey`, and `wrapKey`: -- ```azurecli - az keyvault set-policy \ - --resource-group <resource-group-name> \ - --name <key-vault-name> \ - --object-id $identityPrincipalID \ - --key-permissions get unwrapKey wrapKey -- ``` --The second option is to use [Azure role-based access control (RBAC)](/azure/key-vault/general/rbac-guide) to assign permissions to the user-assigned managed identity and access the key vault. Run the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command and assign the `Key Vault Crypto Service Encryption User` role to a user-assigned managed identity: --```azurecli -az role assignment create --assignee $identityPrincipalID \ - --role "Key Vault Crypto Service Encryption User" \ - --scope $keyvaultID -``` --### Create a key and get the key ID --1. Run the [az keyvault key create][az-keyvault-key-create] command to create a key in the key vault: -- ```azurecli - az keyvault key create \ - --name <key-name> \ - --vault-name <key-vault-name> - ``` --2. In the command output, take note of the key ID (`kid`): -- ```output - [...] - "key": { - "crv": null, - "d": null, - "dp": null, - "dq": null, - "e": "AQAB", - "k": null, - "keyOps": [ - "encrypt", - "decrypt", - "sign", - "verify", - "wrapKey", - "unwrapKey" - ], - "kid": "https://mykeyvault.vault.azure.net/keys/mykey/<version>", - "kty": "RSA", - [...] - ``` --3. For convenience, store the format that you choose for the key ID in the `$keyID` environment variable. You can use a key ID with or without a version. --#### Key rotation --You can choose manual or automatic key rotation. --Encrypting a registry with a customer-managed key that has a key version will allow only manual key rotation in Azure Container Registry. This example stores the key's `kid` property: --```azurecli -keyID=$(az keyvault key show \ - --name <keyname> \ - --vault-name <key-vault-name> \ - --query 'key.kid' --output tsv) -``` --Encrypting a registry with a customer-managed key by omitting a key version will enable automatic key rotation to detect a new key version in Azure Key Vault. 
This example removes the version from the key's `kid` property: --```azurecli -keyID=$(az keyvault key show \ - --name <keyname> \ - --vault-name <key-vault-name> \ - --query 'key.kid' --output tsv) --keyID=$(echo $keyID | sed -e "s/\/[^/]*$//") -``` --### Create a registry with a customer-managed key --1. Run the [az acr create][az-acr-create] command to create a registry in the *Premium* service tier and enable the customer-managed key. --2. Pass the managed identity ID (`id`) and key ID (`kid`) values stored in the environment variables in previous steps: -- ```azurecli - az acr create \ - --resource-group <resource-group-name> \ - --name <container-registry-name> \ - --identity $identityID \ - --key-encryption-key $keyID \ - --sku Premium - ``` --### Show encryption status --Run the [az acr encryption show][az-acr-encryption-show] command to show the status of the registry encryption with a customer-managed key: --```azurecli -az acr encryption show --name <container-registry-name> -``` --Depending on the key that's used to encrypt the registry, the output is similar to: --```console -{ - "keyVaultProperties": { - "identity": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "keyIdentifier": "https://myvault.vault.azure.net/keys/myresourcegroup/abcdefg123456789...", - "keyRotationEnabled": true, - "lastKeyRotationTimestamp": xxxxxxxx - "versionedKeyIdentifier": "https://myvault.vault.azure.net/keys/myresourcegroup/abcdefg123456789...", - }, - "status": "enabled" -} -``` --## Enable a customer-managed key by using the Azure portal --### Create a user-assigned managed identity --To create a user-assigned [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) in the Azure portal: --1. Follow the steps to [create a user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity). --2. Save the identity's name to use it in later steps. ---### Create a key vault --1. Follow the steps in [Quickstart: Create a key vault using the Azure portal](/azure/key-vault/general/quick-create-portal). --2. When you're creating a key vault for a customer-managed key, on the **Basics** tab, enable the **Purge protection** setting. This setting helps prevent data loss from accidental deletion of keys or key vaults. -- :::image type="content" source="media/container-registry-customer-managed-keys/create-key-vault.png" alt-text="Screenshot of the options for creating a key vault in the Azure portal."::: --#### Enable trusted services to access the key vault --If the key vault is in protection with a firewall or virtual network (private endpoint), enable the network setting to allow access by [trusted Azure services](/azure/key-vault/general/overview-vnet-service-endpoints#trusted-services). For more information, see [Configure Azure Key Vault networking settings](/azure/key-vault/general/how-to-azure-key-vault-network-security?tabs=azure-portal). --#### Enable managed identities to access the key vault --There are two ways to enable managed identities to access your key vault. --The first option is to configure the access policy for the key vault and set key permissions for access with a user-assigned managed identity: --1. Go to your key vault. -2. Select **Settings** > **Access policies > +Add Access Policy**. -3. Select **Key permissions**, and then select **Get**, **Unwrap Key**, and **Wrap Key**. -4. 
In **Select principal**, select the resource name of your user-assigned managed identity. -5. Select **Add**, and then select **Save**. ---The other option is to assign the `Key Vault Crypto Service Encryption User` RBAC role to the user-assigned managed identity at the key vault scope. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). --### Create a key --Create a key in the key vault and use it to encrypt the registry. Follow these steps if you want to select a specific key version as a customer-managed key. You might also need to create a key before creating the registry if key vault access is restricted to a private endpoint or selected networks. --1. Go to your key vault. -1. Select **Settings** > **Keys**. -1. Select **+Generate/Import** and enter a unique name for the key. -1. Accept the remaining default values, and then select **Create**. -1. After creation, select the key and then select the current version. Copy the **Key identifier** for the key version. --### Create a container registry --1. Select **Create a resource** > **Containers** > **Container Registry**. -1. On the **Basics** tab, select or create a resource group, and then enter a registry name. In **SKU**, select **Premium**. -1. On the **Encryption** tab, for **Customer-managed key**, select **Enabled**. -1. For **Identity**, select the managed identity that you created. -1. For **Encryption**, choose one of the following options: - * Choose **Select from Key Vault**, and then either select an existing key vault and key or select **Create new**. The key that you select is unversioned and enables automatic key rotation. - * Select **Enter key URI**, and provide the identifier of an existing key. You can provide either a versioned key URI (for a key that must be rotated manually) or an unversioned key URI (which enables automatic key rotation). See the previous section for steps to create a key. -1. Select **Review + create**. -1. Select **Create** to deploy the registry instance. ---### Show the encryption status --To see the encryption status of your registry in the portal, go to your registry. Under **Settings**, select **Encryption**. --## Enable a customer-managed key by using a Resource Manager template --You can use a Resource Manager template to create a container registry and enable encryption with a customer-managed key: --1. 
Copy the following content of a Resource Manager template to a new file and save it as *CMKtemplate.json*: -- ```json - { - "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vault_name": { - "defaultValue": "", - "type": "String" - }, - "registry_name": { - "defaultValue": "", - "type": "String" - }, - "identity_name": { - "defaultValue": "", - "type": "String" - }, - "kek_id": { - "type": "String" - } - }, - "variables": {}, - "resources": [ - { - "type": "Microsoft.ContainerRegistry/registries", - "apiVersion": "2019-12-01-preview", - "name": "[parameters('registry_name')]", - "location": "[resourceGroup().location]", - "sku": { - "name": "Premium", - "tier": "Premium" - }, - "identity": { - "type": "UserAssigned", - "userAssignedIdentities": { - "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]": {} - } - }, - "dependsOn": [ - "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]" - ], - "properties": { - "adminUserEnabled": false, - "encryption": { - "status": "enabled", - "keyVaultProperties": { - "identity": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').clientId]", - "KeyIdentifier": "[parameters('kek_id')]" - } - }, - "networkRuleSet": { - "defaultAction": "Allow", - "virtualNetworkRules": [], - "ipRules": [] - }, - "policies": { - "quarantinePolicy": { - "status": "disabled" - }, - "trustPolicy": { - "type": "Notary", - "status": "disabled" - }, - "retentionPolicy": { - "days": 7, - "status": "disabled" - } - } - } - }, - { - "type": "Microsoft.KeyVault/vaults/accessPolicies", - "apiVersion": "2018-02-14", - "name": "[concat(parameters('vault_name'), '/add')]", - "dependsOn": [ - "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]" - ], - "properties": { - "accessPolicies": [ - { - "tenantId": "[subscription().tenantId]", - "objectId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').principalId]", - "permissions": { - "keys": [ - "get", - "unwrapKey", - "wrapKey" - ] - } - } - ] - } - }, - { - "type": "Microsoft.ManagedIdentity/userAssignedIdentities", - "apiVersion": "2018-11-30", - "name": "[parameters('identity_name')]", - "location": "[resourceGroup().location]" - } - ] - } - ``` --2. Follow the steps in the previous sections to create the following resources: -- * Key vault, identified by name - * Key vault key, identified by key ID --3. Run the [az deployment group create][az-deployment-group-create] command to create the registry by using the preceding template file. When indicated, provide a new registry name and a user-assigned managed identity name, along with the key vault name and key ID that you created. -- ```azurecli - az deployment group create \ - --resource-group <resource-group-name> \ - --template-file CMKtemplate.json \ - --parameters \ - registry_name=<registry-name> \ - identity_name=<managed-identity> \ - vault_name=<key-vault-name> \ - key_id=<key-vault-key-id> - ``` --4. 
Run the [az acr encryption show][az-acr-encryption-show] command to show the status of registry encryption: -- ```azurecli - az acr encryption show --name <registry-name> - ``` --## Next steps --Advance to the [next article](tutorial-rotate-revoke-customer-managed-keys.md) to walk through rotating customer-managed keys, updating key versions, and revoking a customer-managed key. ---<!-- LINKS - external --> --<!-- LINKS - internal --> --[azure-cli]: /cli/azure/install-azure-cli -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-feature-show]: /cli/azure/feature#az_feature_show -[az-group-create]: /cli/azure/group#az_group_create -[az-identity-create]: /cli/azure/identity#az_identity_create -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create -[az-keyvault-create]: /cli/azure/keyvault#az_keyvault_create -[az-keyvault-key-create]: /cli/azure/keyvault/key#az_keyvault_key_create -[az-keyvault-key]: /cli/azure/keyvault/key -[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy -[az-keyvault-delete-policy]: /cli/azure/keyvault#az_keyvault_delete_policy -[az-resource-show]: /cli/azure/resource#az_resource_show -[az-acr-create]: /cli/azure/acr#az_acr_create -[az-acr-show]: /cli/azure/acr#az_acr_show -[az-acr-encryption-rotate-key]: /cli/azure/acr/encryption#az_acr_encryption_rotate_key -[az-acr-encryption-show]: /cli/azure/acr/encryption#az_acr_encryption_show |
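For test environments, the CLI steps in this article can be condensed into a single script. This is a sketch only: the names are placeholders, it uses a key vault access policy (not Azure RBAC), and it omits any firewall or private endpoint configuration.

```azurecli
RG=<resource-group-name>
LOC=<location>
IDENTITY=<managed-identity-name>
KV=<key-vault-name>
KEY=<key-name>
ACR=<container-registry-name>

az group create --name $RG --location $LOC

# User-assigned identity that the registry uses to reach the key vault
az identity create --resource-group $RG --name $IDENTITY
identityID=$(az identity show --resource-group $RG --name $IDENTITY --query 'id' --output tsv)
identityPrincipalID=$(az identity show --resource-group $RG --name $IDENTITY --query 'principalId' --output tsv)

# Key vault with purge protection, a key, and an access policy for the identity
az keyvault create --name $KV --resource-group $RG --enable-purge-protection
az keyvault key create --name $KEY --vault-name $KV
az keyvault set-policy --name $KV --object-id $identityPrincipalID --key-permissions get unwrapKey wrapKey

# Omit the key version so the registry picks up new key versions automatically
keyID=$(az keyvault key show --name $KEY --vault-name $KV --query 'key.kid' --output tsv)
keyID=$(echo $keyID | sed -e "s/\/[^/]*$//")

# Premium registry encrypted with the customer-managed key
az acr create --resource-group $RG --name $ACR --sku Premium \
    --identity $identityID --key-encryption-key $keyID

az acr encryption show --name $ACR
```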
container-registry | Tutorial Rotate Revoke Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-rotate-revoke-customer-managed-keys.md | - Title: Rotate and revoke a customer-managed key -description: Learn how to rotate, update, and revoke a customer-managed key on Azure Container Registry. - Previously updated : 10/31/2023------# Rotate and revoke a customer-managed key --This article is part three in a four-part tutorial series. [Part one](tutorial-customer-managed-keys.md) provides an overview of customer-managed keys, their features, and considerations before you enable one on your registry. In [part two](tutorial-enable-customer-managed-keys.md), you learn how to enable a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template. This article walks you through rotating, updating, and revoking a customer-managed key. --## Rotate a customer-managed key --To rotate a key, you can either update the key version in Azure Key Vault or create a new key. While rotating the key, you can specify the same identity that you used to create the registry. --Optionally, you can: --- Configure a new user-assigned identity to access the key.-- Enable and specify the registry's system-assigned identity.--> [!NOTE] -> To enable the registry's system-assigned identity in the portal, select **Settings** > **Identity** and set the system-assigned identity's status to **On**. -> -> Ensure that the required [key vault access](tutorial-enable-customer-managed-keys.md#enable-managed-identities-to-access-the-key-vault) is set for the identity that you configure for key access. --### Create or update the key version by using the Azure CLI --To create a new key version, run the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create) command: --```azurecli -# Create new version of existing key -az keyvault key create \ - --name <key-name> \ - --vault-name <key-vault-name> -``` --If you configure the registry to detect key version updates, the customer-managed key is automatically updated within one hour. --If you configure the registry for manual updating to a new key version, run the [az acr encryption rotate-key](/cli/azure/acr/encryption#az-acr-encryption-rotate-key) command. Pass the new key ID and the identity that you want to configure. --> [!TIP] -> When you run `az acr encryption rotate-key`, you can pass either a versioned key ID or an unversioned key ID. If you use an unversioned key ID, the registry is then configured to automatically detect later key version updates. --To update a customer-managed key version manually, you have three options: --- Rotate the key and use a client ID of a managed identity.--If you're using the key from a different key vault, verify that the identity has the `get`, `wrapKey`, and `unwrapKey` permissions on that key vault. -- ```azurecli - az acr encryption rotate-key \ - --name <registry-name> \ - --key-encryption-key <new-key-id> \ - --identity <client ID of a managed identity> - ``` --- Rotate the key and use a user-assigned identity.--Before you use the user-assigned identity, verify that the `get`, `wrapKey`, and `unwrapKey` permissions are assigned to it. 
-- ```azurecli - az acr encryption rotate-key \ - --name <registry-name> \ - --key-encryption-key <new-key-id> \ - --identity <id of user assigned identity> - ``` - -- Rotate the key and use a system-assigned identity.--Before you use the system-assigned identity, verify that the `get`, `wrapKey`, and `unwrapKey` permissions are assigned to it. -- ```azurecli - az acr encryption rotate-key \ - --name <registry-name> \ - --key-encryption-key <new-key-id> \ - --identity [system] - ``` --### Create or update the key version by using the Azure portal --Use the registry's **Encryption** settings to update the key vault, key, or identity settings for a customer-managed key. --For example, to configure a new key: --1. In the portal, go to your registry. -1. Under **Settings**, select **Encryption** > **Change key**. -- :::image type="content" source="media/container-registry-customer-managed-keys/rotate-key.png" alt-text="Screenshot of encryption key options in the Azure portal."::: -1. In **Encryption**, choose one of the following options: - * Choose **Select from Key Vault**, and then either select an existing key vault and key or select **Create new**. The key that you select is unversioned and enables automatic key rotation. - * Select **Enter key URI**, and provide a key identifier directly. You can provide either a versioned key URI (for a key that must be rotated manually) or an unversioned key URI (which enables automatic key rotation). -1. Complete the key selection, and then select **Save**. --## Revoke a customer-managed key --You can revoke a customer-managed encryption key by changing the access policy, by changing the permissions on the key vault, or by deleting the key. --To remove the access policy of the managed identity that your registry uses, run the [az keyvault delete-policy](/cli/azure/keyvault#az-keyvault-delete-policy) command, passing the principal ID of the managed identity: --```azurecli -az keyvault delete-policy \ - --resource-group <resource-group-name> \ - --name <key-vault-name> \ - --object-id $identityPrincipalID -``` --To delete the individual versions of a key, run the [az keyvault key delete](/cli/azure/keyvault/key#az-keyvault-key-delete) command. This operation requires the *keys/delete* permission. --```azurecli -az keyvault key delete \ - --name <key-name> \ - --vault-name <key-vault-name> -``` --> [!NOTE] -> Revoking a customer-managed key will block access to all registry data. If you enable access to the key or restore a deleted key, the registry will pick up the key, and you can regain control of access to the encrypted registry data. --## Next steps --Advance to the [next article](tutorial-troubleshoot-customer-managed-keys.md) to troubleshoot common problems like errors when you're removing a managed identity, 403 errors, and accidental key deletions. - |
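As a worked example of the manual rotation path above, the following sketch creates a new key version and points the registry at it. It assumes the registry was originally configured with a versioned key ID and that the identity already has access to the key vault; all names are placeholders.

```azurecli
# Create a new version of the existing key and capture its versioned ID
az keyvault key create --name <key-name> --vault-name <key-vault-name>

newKeyID=$(az keyvault key show \
    --name <key-name> \
    --vault-name <key-vault-name> \
    --query 'key.kid' --output tsv)

# Point the registry at the new key version (manual rotation)
az acr encryption rotate-key \
    --name <registry-name> \
    --key-encryption-key $newKeyID \
    --identity <client-id-of-managed-identity>

# Confirm the registry reports the new key
az acr encryption show --name <registry-name>
```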
container-registry | Tutorial Troubleshoot Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-customer-managed-keys.md | - Title: Troubleshoot a customer-managed key -description: Learn how to troubleshoot the most common problems for a registry that's enabled with a customer-managed key. -- Previously updated : 10/31/2023-----# Troubleshoot a customer-managed key --This article is part four in a four-part tutorial series. [Part one](tutorial-customer-managed-keys.md) provides an overview of customer-managed keys, their features, and considerations before you enable one on your registry. In [part two](tutorial-enable-customer-managed-keys.md), you learn how to enable a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template. In [part three](tutorial-rotate-revoke-customer-managed-keys.md), you learn how to rotate, update, and revoke a customer-managed key. This article helps you troubleshoot and resolve common problems with customer-managed keys. --## Error when you're removing a managed identity --If you try to remove a user-assigned or system-assigned managed identity that you used to configure encryption for your registry, you might see an error: - -``` -Azure resource '/subscriptions/xxxx/resourcegroups/myGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry' does not have access to identity 'xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx' Try forcibly adding the identity to the registry <registry name>. For more information on bring your own key, please visit 'https://aka.ms/acr/cmk'. -``` - -You're unable to change (rotate) the encryption key. The resolution steps depend on the type of identity that you used for encryption. --### Removing a user-assigned identity --If you get the error when you try to remove a user-assigned identity, follow these steps: - -1. Reassign the user-assigned identity by using the [az acr identity assign](/cli/azure/acr/identity/#az-acr-identity-assign) command. -2. Pass the user-assigned identity's resource ID, or use the identity's name when it's in the same resource group as the registry. -- For example: -- ```azurecli - az acr identity assign -n myRegistry \ - --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity" - ``` - -3. Change the key and assign a different identity. -4. Now, you can remove the original user-assigned identity. --### Removing a system-assigned identity --If you get the error when you try to remove a system-assigned identity, [create an Azure support ticket](https://azure.microsoft.com/support/create-ticket/) for assistance in restoring the identity. --## Error after you enable a key vault firewall --If you enable a key vault firewall or virtual network after creating an encrypted registry, you might see HTTP 403 or other errors with image import or automated key rotation. To correct this problem, reconfigure the managed identity and key that you initially used for encryption. See the steps in [Rotate a customer-managed key](tutorial-rotate-revoke-customer-managed-keys.md#rotate-a-customer-managed-key). --If the problem persists, contact Azure Support. --## Identity expiry error --The identity attached to a registry is set for autorenewal to avoid expiry. If you disassociate an identity from a registry, an error message explains that you can't remove an identity that's in use for a customer-managed key. 
Attempting to remove the identity interrupts this autorenewal. Artifact pull and push operations continue to work until the identity expires (usually after three months). After the identity expires, you'll see an HTTP 403 error with the message "The identity associated with the registry is inactive. This could be due to attempted removal of the identity. Reassign the identity manually". --You have to explicitly reassign the identity to the registry: --1. Run the [az acr identity assign](/cli/azure/acr/identity/#az-acr-identity-assign) command to reassign the identity manually. For example: -- ```azurecli-interactive - az acr identity assign -n myRegistry \ - --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity" - ``` --## Accidental deletion of a key vault or key --Deletion of the key vault, or the key, that's used to encrypt a registry with a customer-managed key makes the registry's content inaccessible. If [soft delete](/azure/key-vault/general/soft-delete-overview) is enabled in the key vault (the default option), you can recover a deleted vault or key vault object and resume registry operations. --## Next steps --For key vault deletion and recovery scenarios, see [Azure Key Vault recovery management with soft delete and purge protection](/azure/key-vault/general/key-vault-recovery). |
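When you're diagnosing any of the errors above, it's useful to capture the registry's identity and encryption configuration before changing anything. A minimal sketch with placeholder names; the query paths follow the output formats shown earlier in this series.

```azurecli
# Which identities are attached to the registry?
az acr show --name <registry-name> --query 'identity'

# Which key and identity does the registry use for encryption?
az acr encryption show --name <registry-name>

# Is the key itself still reachable in the key vault?
az keyvault key show --id <key-identifier>
```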
container-registry | Zone Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/zone-redundancy.md | - Title: Zone-redundant registry for high availability -description: Learn about enabling zone redundancy in Azure Container Registry. Create a container registry or replication in an Azure availability zone. Zone redundancy is a feature of the Premium service tier. --- Previously updated : 10/31/2023------# Enable zone redundancy in Azure Container Registry for resiliency and high availability --In addition to [geo-replication](container-registry-geo-replication.md), which replicates registry data across one or more Azure regions to provide availability and reduce latency for regional operations, Azure Container Registry supports optional *zone redundancy*. [Zone redundancy](../availability-zones/az-overview.md#availability-zones) provides resiliency and high availability to a registry or replication resource (replica) in a specific region. --This article shows how to set up a zone-redundant container registry or replica by using the Azure CLI, Azure portal, or Azure Resource Manager template. --Zone redundancy is a feature of the Premium container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md). --## Regional Support --* ACR Availability Zones are supported in the following regions: - - |Americas |Europe |Africa |Asia Pacific | - ||||| - |Brazil South<br/>Canada Central<br/>Central US<br/>East US<br/>East US 2<br/>East US 2 EUAP<br/>South Central US<br/>US Government Virginia<br/>West US 2<br/>West US 3 |France Central<br/>Germany West Central<br/>Italy North<br/>North Europe<br/>Norway East<br/>Sweden Central<br/>Switzerland North<br/>UK South<br/>West Europe |South Africa North<br/> |Australia East<br/>Central India<br/>China North 3<br/>East Asia<br/>Japan East<br/>Korea Central<br/>Qatar Central<br/>Southeast Asia<br/>UAE North | --* Region conversions to availability zones aren't currently supported. -* To enable availability zone support in a region, create the registry in the desired region with availability zone support enabled, or add a replicated region with availability zone support enabled. -* A registry with an AZ-enabled stamp creates a home region replication with an AZ-enabled stamp by default. The AZ stamp can't be disabled once it's enabled. -* The home region replication represents the home region registry. It helps to view and manage the availability zone properties and can't be deleted. -* The availability zone is per region, once the replications are created, their states can't be changed, except by deleting and re-creating the replications. -* Zone redundancy can't be disabled in a region. -* [ACR Tasks](container-registry-tasks-overview.md) doesn't yet support availability zones. ---## About zone redundancy --Use Azure [availability zones](../availability-zones/az-overview.md) to create a resilient and high availability Azure container registry within an Azure region. For example, organizations can set up a zone-redundant Azure container registry with other [supported Azure resources](../availability-zones/az-region.md) to meet data residency or other compliance requirements, while providing high availability within a region. 
--Azure Container Registry also supports [geo-replication](container-registry-geo-replication.md), which replicates the service across multiple regions, enabling redundancy and locality to compute resources in other locations. The combination of availability zones for redundancy within a region, and geo-replication across multiple regions, enhances both the reliability and performance of a registry. --Availability zones are unique physical locations within an Azure region. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. Each zone has one or more datacenters equipped with independent power, cooling, and networking. When configured for zone redundancy, a registry (or a registry replica in a different region) is replicated across all availability zones in the region, keeping it available if there are datacenter failures. --## Create a zone-redundant registry - CLI --To use the Azure CLI to enable zone redundancy, you need Azure CLI version 2.17.0 or later, or Azure Cloud Shell. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). --### Create a resource group --If needed, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group for the registry. --```azurecli -az group create --name <resource-group-name> --location <location> -``` --### Create zone-enabled registry --Run the [az acr create](/cli/azure/acr#az-acr-create) command to create a zone-redundant registry in the Premium service tier. Choose a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry. In the following example, zone redundancy is enabled in the *eastus* region. See the `az acr create` command help for more registry options. --```azurecli -az acr create \ - --resource-group <resource-group-name> \ - --name <container-registry-name> \ - --location eastus \ - --zone-redundancy enabled \ - --sku Premium -``` --In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant: --```JSON -{ - [...] -"zoneRedundancy": "Enabled", -} -``` --### Create zone-redundant replication --Run the [az acr replication create](/cli/azure/acr/replication#az-acr-replication-create) command to create a zone-redundant registry replica in a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry, such as *westus2*. --```azurecli -az acr replication create \ - --location westus2 \ - --resource-group <resource-group-name> \ - --registry <container-registry-name> \ - --zone-redundancy enabled -``` - -In the command output, note the `zoneRedundancy` property for the replica. When enabled, the replica is zone redundant: --```JSON -{ - [...] -"zoneRedundancy": "Enabled", -} -``` --## Create a zone-redundant registry - portal --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Select **Create a resource** > **Containers** > **Container Registry**. -1. In the **Basics** tab, select or create a resource group, and enter a unique registry name. -1. In **Location**, select a region that supports zone redundancy for Azure Container Registry, such as *East US*. -1. In **SKU**, select **Premium**. -1. In **Availability zones**, select **Enabled**. -1. Optionally, configure more registry settings, and then select **Review + create**. -1. Select **Create** to deploy the registry instance. 
-- :::image type="content" source="media/zone-redundancy/enable-availability-zones-portal.png" alt-text="Enable zone redundancy in Azure portal"::: --To create a zone-redundant replication: --1. Navigate to your Premium tier container registry, and select **Replications**. -1. On the map that appears, select a green hexagon in a region that supports zone redundancy for Azure Container Registry, such as **West US 2**. Or select **+ Add**. -1. In the **Create replication** window, confirm the **Location**. In **Availability zones**, select **Enabled**, and then select **Create**. -- :::image type="content" source="media/zone-redundancy/enable-availability-zones-replication-portal.png" alt-text="Enable zone-redundant replication in Azure portal"::: --## Create a zone-redundant registry - template --### Create a resource group --If needed, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group for the registry in a region that [supports availability zones](../availability-zones/az-region.md) for Azure Container Registry, such as *eastus*. This region is used by the template to set the registry location. --```azurecli -az group create --name <resource-group-name> --location eastus -``` --### Deploy the template --You can use the following Resource Manager template to create a zone-redundant, geo-replicated registry. The template by default enables zone redundancy in the registry and a regional replica. --Copy the following contents to a new file and save it using a filename such as `registryZone.json`. --```JSON -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "acrName": { - "type": "string", - "defaultValue": "[concat('acr', uniqueString(resourceGroup().id))]", - "minLength": 5, - "maxLength": 50, - "metadata": { - "description": "Globally unique name of your Azure Container Registry" - } - }, - "location": { - "type": "string", - "defaultValue": "[resourceGroup().location]", - "metadata": { - "description": "Location for registry home replica." - } - }, - "acrSku": { - "type": "string", - "defaultValue": "Premium", - "allowedValues": [ - "Premium" - ], - "metadata": { - "description": "Tier of your Azure Container Registry. Geo-replication and zone redundancy require Premium SKU." - } - }, - "acrZoneRedundancy": { - "type": "string", - "defaultValue": "Enabled", - "metadata": { - "description": "Enable zone redundancy of registry's home replica. Requires registry location to support availability zones." - } - }, - "acrReplicaLocation": { - "type": "string", - "metadata": { - "description": "Short name for registry replica location." - } - }, - "acrReplicaZoneRedundancy": { - "type": "string", - "defaultValue": "Enabled", - "metadata": { - "description": "Enable zone redundancy of registry replica. Requires replica location to support availability zones." 
- } - } - }, - "resources": [ - { - "comments": "Container registry for storing docker images", - "type": "Microsoft.ContainerRegistry/registries", - "apiVersion": "2020-11-01", - "name": "[parameters('acrName')]", - "location": "[parameters('location')]", - "sku": { - "name": "[parameters('acrSku')]", - "tier": "[parameters('acrSku')]" - }, - "tags": { - "displayName": "Container Registry", - "container.registry": "[parameters('acrName')]" - }, - "properties": { - "adminUserEnabled": "[parameters('acrAdminUserEnabled')]", - "zoneRedundancy": "[parameters('acrZoneRedundancy')]" - } - }, - { - "type": "Microsoft.ContainerRegistry/registries/replications", - "apiVersion": "2020-11-01", - "name": "[concat(parameters('acrName'), '/', parameters('acrReplicaLocation'))]", - "location": "[parameters('acrReplicaLocation')]", - "dependsOn": [ - "[resourceId('Microsoft.ContainerRegistry/registries/', parameters('acrName'))]" - ], - "properties": { - "zoneRedundancy": "[parameters('acrReplicaZoneRedundancy')]" - } - } - ], - "outputs": { - "acrLoginServer": { - "value": "[reference(resourceId('Microsoft.ContainerRegistry/registries',parameters('acrName')),'2019-12-01').loginServer]", - "type": "string" - } - } - } -``` --Run the following [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create) command to create the registry using the preceding template file. Where indicated, provide: --* a unique registry name, or deploy the template without parameters and it will create a unique name for you -* a location for the replica that supports availability zones, such as *westus2* --```azurecli -az deployment group create \ - --resource-group <resource-group-name> \ - --template-file registryZone.json \ - --parameters acrName=<registry-name> acrReplicaLocation=<replica-location> -``` --In the command output, note the `zoneRedundancy` property for the registry and the replica. When enabled, each resource is zone redundant: --```JSON -{ - [...] -"zoneRedundancy": "Enabled", -} -``` --## Next steps --* Learn more about [regions that support availability zones](../availability-zones/az-region.md). -* Learn more about building for [reliability](/azure/architecture/framework/resiliency/app-design) in Azure. |
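After any of the deployments above, you can confirm that the home registry and each replica report zone redundancy. A small sketch with placeholder names; the `zoneRedundancy` property matches the command output shown earlier.

```azurecli
# Zone redundancy of the home registry
az acr show --name <container-registry-name> --query 'zoneRedundancy' --output tsv

# Zone redundancy of each replication
az acr replication list \
    --registry <container-registry-name> \
    --query "[].{region:location, zoneRedundancy:zoneRedundancy}" \
    --output table
```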
copilot | Analyze Cost Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/analyze-cost-management.md | - Title: Analyze, estimate and optimize cloud costs using Microsoft Copilot in Azure -description: Learn about scenarios where Microsoft Copilot in Azure can use Microsoft Cost Management to help you manage your costs. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Analyze, estimate and optimize cloud costs using Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can help you analyze, estimate and optimize cloud costs. Ask questions using natural language to get information and recommendations based on [Microsoft Cost Management](/azure/cost-management-billing/costs/overview-cost-management). --When you ask Microsoft Copilot in Azure questions about your costs, it automatically pulls context based on the last scope that you accessed using Cost Management. If the context isn't clear, you'll be prompted to select a scope or provide more context. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use for Cost Management. Modify these prompts based on your real-life scenarios, or try additional prompts to meet your needs. --- "Summarize my costs for the last 6 months."-- "Why did my cost spike on July 8th?"-- "Can you provide an estimate of our expected expenses for the next 6 months?"-- "Show me the resource group with the highest spending in the last 6 months."-- "How can we reduce our costs?"-- "Which resources are covered by savings plans?"--## Examples --When you prompt Microsoft Copilot in Azure, "**Summarize my costs for the last 6 months**," a summary of costs for the selected scope is provided. You can follow up with questions to get more granular details, such as "What was our virtual machine spending last month?" ----Next, you can ask "**How can we reduce our costs?**" Microsoft Copilot in Azure provides a list of recommendations you can take, including an estimate of the potential savings you might see. ----## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Microsoft Cost Management](/azure/cost-management-billing/costs/overview-cost-management). |
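If you want to cross-check the figures that Copilot in Azure summarizes, you can pull the underlying usage records yourself. The following is a minimal sketch using the `az consumption usage list` command against the currently selected subscription; the date range and row count are placeholder values, and this command only reflects that subscription's scope.

```azurecli
# List raw usage records for a billing period (dates are example values)
az consumption usage list \
  --start-date 2024-01-01 \
  --end-date 2024-01-31 \
  --top 10 \
  --output table
```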
copilot | Author Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/author-api-management-policies.md | - Title: Author API Management policies using Microsoft Copilot in Azure
-description: Learn about how Microsoft Copilot in Azure can generate Azure API Management policies based on your requirements. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Author API Management policies using Microsoft Copilot in Azure
--Microsoft Copilot in Azure (preview) can author [Azure API Management policies](/azure/api-management/api-management-howto-policies) based on your requirements. By using Microsoft Copilot in Azure, you can create policies quickly, even if you're not sure what code you need. This can be especially helpful when creating complex policies with many requirements.
--To get help authoring API Management policies, start from the **Design** tab of an API you previously imported to your API Management instance. Be sure to use the [code editor view](/azure/api-management/set-edit-policies?tabs=editor#configure-policy-in-the-portal). Ask Microsoft Copilot in Azure to generate policy definitions for you, then copy the results right into the editor, making any desired changes. You can also ask questions to understand the different options or change the provided policy.
--When you're working with API Management policies, you can also select a portion of the policy, right-click, and then select **Explain**. This opens Microsoft Copilot in Azure and pastes your selection with a prompt to explain how that part of the policy works.
----## Sample prompts
--Here are a few examples of the kinds of prompts you can use to get help authoring API Management policies. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of policies.
--- "Generate a policy to configure rate limiting with 5 requests per second"-- "Generate a policy to remove a 'X-AspNet-Version' header from the response"-- "Explain (selected policy or element) to me"--## Examples
--When creating an API Management policy, you can say "**Generate a policy to configure rate limiting with 5 requests per second.**" Microsoft Copilot in Azure provides an example and explains how you might want to modify the provided policy based on your requirements.
---In this example, a policy is generated based on the prompt "Generate a policy to remove a 'X-AspNet-Version' header from the response."
---When you have questions about a certain policy element, you can get more information by selecting a section of the policy, right-clicking, and selecting **Explain**.
---Microsoft Copilot in Azure explains how the code works, breaking down each specific section.
---## Next steps
--- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure API Management](/azure/api-management/api-management-key-concepts). |
copilot | Build Infrastructure Deploy Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/build-infrastructure-deploy-workloads.md | - Title: Build infrastructure and deploy workloads using Microsoft Copilot in Azure -description: Learn how Microsoft Copilot in Azure can help you build custom infrastructure for your workloads and provide templates and scripts to help you deploy. Previously updated : 02/26/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Build infrastructure and deploy workloads using Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can help you quickly build custom infrastructure for your workloads and provide templates and scripts to help you deploy. By using Microsoft Copilot in Azure, you can often reduce your deployment time dramatically. Microsoft Copilot in Azure also helps you align to security and compliance standards and other best practices. --Throughout a conversation, Microsoft Copilot in Azure asks you questions to better understand your requirements and applications. Based on the provided information, it then provides several architecture options suitable for deploying that infrastructure. After you select an option, Microsoft Copilot in Azure provides detailed descriptions of the infrastructure, including how it can be configured. Finally, Microsoft Copilot in Azure provides templates and scripts using the language of your choice to deploy your infrastructure. --To get help building infrastructure and deploying workloads, start on the [More virtual machines and related solutions](https://portal.azure.com/#view/Microsoft_Azure_SolutionCenter/SolutionGroup.ReactView/groupid/defaultLandingVmBrowse) page in the Azure portal. You can reach this page from **Virtual machines** by selecting the arrow next to **Create**, then selecting **More VMs and related solutions**. ---Once you're there, start the conversation by letting Microsoft Copilot in Azure know what you want to build and deploy. ----## Sample prompts --The prompts you use can vary depending on the type of workload you want to deploy, and the stage of the conversation that you're in. Here are a few examples of the kinds of prompts you might use. Modify these prompts based on your real-life scenarios, or try additional prompts as the conversation continues. --- Starting the conversation:- - "Help me deploy a website on Azure" - - "Give me infrastructure for my new application." -- Requirement gathering stage:- - "Give me examples of these requirements." - - "What do you mean by security requirements?" - - (or provide your requirements based on the questions) -- High level architecture stage:- - "Let's go with option 1." - - "Give me more details about option 1." - - "Are there more options available?" - - "Instead of SQL, use Cosmos DB." - - "Can you give me comparison table for these options? Also include approximate cost." -- Detailed infrastructure building stage:- - "I like this infrastructure. Give me an ARM template to deploy this." - - "Can you include rolling upgrade mode Manual instead of Automatic for the VMSS?" - - "Can you explain this design in more detail?" - - "Are there any alternatives to private link?" -- Code building stage:- - "Can you give me PS instead of ARM template?" - - "Change VMSS instance count to 100 instead of 10." - - "Explain this code in English." 
--## Examples
--From the **[More virtual machines and related solutions](https://portal.azure.com/#view/Microsoft_Azure_SolutionCenter/SolutionGroup.ReactView/groupid/defaultLandingVmBrowse)** page, you can tell Microsoft Copilot in Azure "**I want to deploy a website on Azure**." Microsoft Copilot in Azure responds with a series of questions to better understand your scenario.
---After you provide answers, Microsoft Copilot in Azure provides several options that might be a good fit. You can choose one of these or ask more questions.
---After you specify which option you'd like to use, Microsoft Copilot in Azure provides a step-by-step plan to walk you through the deployment. It gives you the option to change parts of the plan and also asks you to choose a development tool. In this example, Azure App Service is selected.
---Since the requested output in this example is an ARM template, Microsoft Copilot in Azure creates a basic ARM template, then provides instructions for how to deploy it.
----## Next steps
--- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [virtual machines in Azure](/azure/virtual-machines/overview). |
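Once Copilot in Azure hands you a template, deploying it is an ordinary Resource Manager deployment. The following is a minimal sketch, assuming you saved the generated template as `main.json`; the resource group, location, and parameter name are placeholders that depend on what the generated template actually defines.

```azurecli
# Create a resource group and deploy the Copilot-generated template (placeholder names)
az group create --name myWebAppRg --location eastus

az deployment group create \
  --resource-group myWebAppRg \
  --template-file main.json \
  --parameters webAppName=<globally-unique-app-name>
```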
copilot | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/capabilities.md | - Title: Microsoft Copilot in Azure capabilities
-description: Learn about the things you can do with Microsoft Copilot in Azure. Previously updated : 08/29/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Microsoft Copilot in Azure capabilities
--Microsoft Copilot in Azure (preview) amplifies your impact with AI-enhanced operations. You can ask Copilot in Azure for help with designing, operating, optimizing, and troubleshooting your Azure apps and infrastructure. Copilot in Azure can help you gain new insights, discover more benefits of the cloud, and orchestrate data across both the cloud and the edge.
--This article describes some of the ways that you can use Copilot in Azure.
---## Perform tasks
--Use Microsoft Copilot in Azure to perform many basic tasks in the Azure portal or [in the Azure mobile app](../azure-portal/mobile-app/microsoft-copilot-in-azure.md). There are many things you can do! Take a look at these articles to learn about some of the scenarios in which Microsoft Copilot in Azure can be especially helpful.
--- Understand your Azure environment:
- - [Get resource information through Azure Resource Graph queries](get-information-resource-graph.md)
- - [Understand service health events and status](understand-service-health.md)
- - [Analyze, estimate, and optimize costs](analyze-cost-management.md)
- - [Query your attack surface](query-attack-surface.md)
-- Work smarter with Azure:
- - [Execute commands](execute-commands.md)
- - [Deploy virtual machines effectively](deploy-vms-effectively.md)
- - [Build infrastructure and deploy workloads](build-infrastructure-deploy-workloads.md)
- - [Create resources using interactive deployments](use-guided-deployments.md)
- - [Work with AKS clusters efficiently](work-aks-clusters.md)
- - [Get information about Azure Monitor metrics and logs](get-monitoring-information.md)
- - [Work smarter with Azure Stack HCI](work-smarter-edge.md)
- - [Secure and protect storage accounts](improve-storage-accounts.md)
- - [Improve Azure SQL Database-driven applications](/azure/azure-sql/copilot/copilot-azure-sql-overview#microsoft-copilot-for-azure-enhanced-scenarios)
-- Write and optimize code:
- - [Generate Azure CLI scripts](generate-cli-scripts.md)
- - [Generate PowerShell scripts](generate-powershell-scripts.md)
- - [Generate Terraform configurations](generate-terraform-configurations.md)
- - [Discover performance recommendations with Code Optimizations](optimize-code-application-insights.md)
- - [Author API Management policies](author-api-management-policies.md)
- - [Create Kubernetes YAML files](generate-kubernetes-yaml.md)
- - [Troubleshoot apps faster with App Service](troubleshoot-app-service.md)
--> [!NOTE]
-> Microsoft Copilot in Azure (preview) includes access to Copilot in Azure SQL Database (preview). This offering can help you streamline the design, operation, optimization, and health of Azure SQL Database-driven applications. It improves productivity in the Azure portal by offering natural language to SQL conversion and self-help for database administration. For more information, see [Copilot in Azure SQL Database (preview)](https://aka.ms/sqlcopilot).
--## Get information
--From anywhere in the Azure portal, you can ask Microsoft Copilot in Azure to explain more about Azure concepts, services, or offerings.
You can ask questions to learn how a feature works, or which configurations best meet your budgets, security, and scale requirements. Copilot can guide you to the right user experience or even author scripts and other artifacts that you can use to deploy your solutions. Answers are grounded in the latest Azure documentation, so you can get up-to-date guidance just by asking a question. --Asking questions to understand more can be especially helpful when you're troubleshooting problems. Describe the problem, and Microsoft Copilot in Azure will provide some suggestions on how you might be able to resolve the issue. For example, you can say things like "Cluster stuck in upgrading state while performing update operation" or "Azure database unable to connect from Power BI". You'll see information about the problem and possible resolution options. --Microsoft Copilot in Azure can also help you understand more about information presented in Azure. This can be especially helpful when looking at diagnostic details. For example, when viewing diagnostics for a resource, you can say "Give me a summary of this page" or "What's the issue with my app?" You can ask what an error means, or ask what the next steps would be to implement a recommended solution. --## Find recommended services --Ask questions to learn which services are best suited for your workloads, or get ideas about additional services that might help support your objectives. For instance, you can ask "What service would you recommend to implement distributed caching?" or "What are popular services used with Azure Container Apps?" Where applicable, Microsoft Copilot in Azure provides links to start working with the service or learn more. In some cases, you'll also see metrics about how often a service is used. You can also ask additional questions to find out more about the service and whether it's right for your needs. --## Navigation --Rather than searching for a service to open, simply ask Microsoft Copilot in Azure to open the service for you. If you can't remember the exact name, you'll see some suggestions and can choose the right one, or ask Microsoft Copilot in Azure to explain more. --## Manage portal settings --Use Microsoft Copilot in Azure to confirm your settings selection or change options, without having to open the **Settings** pane. For example, you can ask Copilot which Azure themes are available, then have it apply the one you choose. --## Current limitations --While Microsoft Copilot in Azure can perform many types of tasks, it's important to understand what not to expect. In some cases, Microsoft Copilot in Azure might not be able to complete your request. In these cases, you'll generally see an explanation along with more information about how you can carry out the intended action. 
--Keep in mind these current limitations: --- Any action taken on more than 10 resources must be performed outside of Microsoft Copilot in Azure.--- You can only make 15 requests during any given chat, and you only have 10 chats in a 24 hour period.--- Some responses that display lists will be limited to the top five items.-- For some tasks and queries, using a resource's name will not work, and the Azure resource ID must be provided.-- Microsoft Copilot in Azure is currently available in English only.--## Next steps --- [Get tips for writing effective prompts](write-effective-prompts.md) to use with Microsoft Copilot in Azure.-- Learn about [managing access to Copilot in Azure](manage-access.md) in your organization.-- Explore the [Microsoft Copilot in Azure video series](/shows/microsoft-copilot-in-azure/). |
copilot | Deploy Vms Effectively | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/deploy-vms-effectively.md | - Title: Deploy virtual machines effectively using Microsoft Copilot in Azure
-description: Learn how Microsoft Copilot in Azure can help you deploy cost-efficient VMs. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Deploy virtual machines effectively using Microsoft Copilot in Azure
--Microsoft Copilot in Azure (preview) can help you deploy [virtual machines in Azure](/azure/virtual-machines/overview) that are efficient and effective. You can get suggestions for different options to save costs and choose the right type and size for your VMs.
--For best results, start on the **Virtual machines** page in Azure. When you ask Microsoft Copilot in Azure for help with a VM, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify the VM for which you want assistance.
--While familiarity with different VM configuration options such as pricing, scalability, availability, and size can be beneficial, Microsoft Copilot in Azure is designed to help you regardless of your level of expertise. In all cases, we recommend that you closely review the suggestions to confirm that they meet your needs.
----## Create cost-efficient VMs
--Microsoft Copilot in Azure can suggest different options to save costs as you deploy a virtual machine. If you're new to creating VMs, Microsoft Copilot in Azure can help you understand the best ways to reduce costs. More experienced users can confirm the best ways to make sure VMs align with both use cases and budget needs, or find ways to make a specific VM size more cost-effective by enabling certain features that might help lower overall cost.
--### Sample prompts
--- How do I reduce the cost of my virtual machine?-- Help me create a cost-efficient virtual machine-- Help me create a low cost VM--### Examples
--During the VM creation process, you can ask "**How do I reduce the cost of my virtual machine?**" Microsoft Copilot in Azure guides you through options to make your VM more cost-effective, providing options that you can enable.
---Once you complete the options that Microsoft Copilot in Azure suggests, you can review and create the VM with the provided recommendations, or continue to make other changes.
---## Create highly available and scalable VMs
--Microsoft Copilot in Azure can provide additional context to help you create high-availability VMs. It can help you create VMs in availability zones, decide whether a Virtual Machine Scale Set is the right option for your needs, or assess which networking resources will help manage traffic effectively across your compute resources.
--### Sample prompts
--- How do I create a resilient virtual machine-- Help me create a high availability virtual machine--### Examples
--During the VM creation process, you can ask "**How do I create a resilient and high availability virtual machine?**" Microsoft Copilot in Azure guides you through options to configure your VM for high availability, providing options that you can enable.
---## Choose the right size for your VMs
--Azure offers different size options for VMs based on your workload needs.
Microsoft Copilot in Azure can help you identify the best size for your VM, keeping in mind the context of your other configuration requirements, and guide you through the selection process. --### Sample prompts --- Help me choose a size for my Virtual Machine-- Which Virtual Machine size will best suit my requirements?--### Examples --Ask "**Help me choose the right VM size for my workload?**" Microsoft Copilot in Azure asks for some more information to help it determine the best options. After that, it presents some options and lets you choose which recommended size to use for your VM. ---## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [virtual machines in Azure](/azure/virtual-machines/overview). |
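If you prefer to script the result once Copilot has helped you settle on a configuration, the same choices map to ordinary CLI flags. This is a minimal sketch, assuming a burstable size and Spot pricing are acceptable for the workload; the resource group, VM name, and size are placeholder values, and Spot is not suitable for every workload.

```azurecli
# Create a low-cost VM: burstable size plus Spot pricing (placeholder names)
az vm create \
  --resource-group myResourceGroup \
  --name myLowCostVM \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --priority Spot \
  --eviction-policy Deallocate \
  --max-price -1
```

Setting `--max-price -1` means the VM is evicted only for capacity reasons, never because of price.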
copilot | Example Prompts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/example-prompts.md | - Title: Example prompts for Microsoft Copilot in Azure -description: View example prompts that you can try out with Microsoft Copilot in Azure. Previously updated : 07/30/2024-------# Example prompts for Microsoft Copilot in Azure --Prompts are how you can interact with Microsoft Copilot in Azure (preview) to get help working with your Azure environment. This article shows a list of skills and example prompts that you can try out to see how Copilot in Azure can help with different scenarios and Azure services. ---To learn more about the ways that you can use Copilot for Azure, see [Capabilities of Microsoft Copilot for Azure](capabilities.md). For tips on creating your own prompts, see [Write effective prompts for Microsoft Copilot in Azure](write-effective-prompts.md). ---## Prompt library --To get started with Copilot for Azure, try a few prompts from this list. Feel free to experiment and make changes, or create your own prompts based on your current tasks and interests. --| Scenario | Outcome | Example prompt to try | -|-|-|--| -| [Azure portal](capabilities.md#manage-portal-settings) | Changes Azure portal theme. | "Change my theme to dark mode." | -| [Learn](capabilities.md#get-information) | Explains documentation and purposes of Azure services. | "What are the benefits and applications of Azure API Management?" | -| [Learn](capabilities.md#get-information) | Explains how to implement Azure services. | "How can I process real-time events in my application with Azure?" | -| [Learn](capabilities.md#get-information) | Outlines steps for performing tasks. | "Outline steps to secure Azure Blob Storage with private endpoints and Azure Private Link." | -| [Learn](capabilities.md#get-information) | Generates code from documentation. | "How to upload a storage container with JavaScript." | -| [Learn](capabilities.md#get-information) | Creates guides from multiple documentation sources. | "I want to use Azure functions to build an OpenAI application." | -| [Azure Resource Graph](get-information-resource-graph.md) | Lists the number of critical alerts. | "How many critical alerts do I have?" | -| [Azure Resource Graph](get-information-resource-graph.md) | Retrieves live resource information. | "Which VMs are running right now? Please restart them." | -| [Azure Resource Graph](get-information-resource-graph.md) | Identifies states of resources. | "Which resources are non-compliant?" | -| [Azure Resource Graph](get-information-resource-graph.md) | Lists resources created or modified in the last 24 hours. | "List resources that have been created or modified in the last 24 hours." | -| [Cost Management](analyze-cost-management.md) | Compares the current month's cost to the previous month's cost. | "How does our cost this month compare to last month's." | -| [Cost Management](analyze-cost-management.md) | Forecasts cost for the next 3 months. | "Forecast my cost for the next 3 months." | -| [Cost Management](analyze-cost-management.md) | Shows Azure credits balance. | "What's our Azure credits balance?" | -| [Azure CLI](generate-cli-scripts.md) | Generates a cheatsheet for managing resources with CLI. | "Generate a cheatsheet for managing VMs with CLI." | -| [Azure CLI](generate-cli-scripts.md) | Lists all resources of a certain kind using Azure CLI. | "How do I list all my VMs using Azure CLI?" | -| [Azure CLI](generate-cli-scripts.md) | Creates resources with CLI. 
| "Create a virtual network with two subnets using the address space of 10.0.0.0/16 using az cli." | -| [Azure CLI](generate-cli-scripts.md) | Deploys resources with CLI. | "I want to use Azure CLI to deploy and manage AKS using a private service endpoint." | -| [PowerShell](generate-powershell-scripts.md) | Create resources with PowerShell. | "How can I create a new resource group using PowerShell?" | -| [PowerShell](generate-powershell-scripts.md) | Deploy multiple resources with PowerShell. | "Help me write a PS script where after creating VM, deploy an AKS cluster on it." | -| [PowerShell](generate-powershell-scripts.md) | Manage resources with PowerShell. | "How to back up an Azure SQL single database to an Azure storage container using Azure PowerShell?" | -| [Interactive deployments](use-guided-deployments.md) | Provides a detailed guide on deploying an AKS cluster on Azure. | "Provide me with a detailed guide on deploying an AKS cluster on Azure." | -| [Interactive deployments](use-guided-deployments.md) | Explains steps to create a Linux VM on Azure and SSH into it. | "What are the steps to create a Linux VM on Azure and how do I SSH into it?" | -| [Azure Monitor](get-monitoring-information.md) | Detects anomalies in a specific resource. | "Is there any anomaly in my AKS resource?" | -| [Azure Monitor](get-monitoring-information.md) | Performs root cause analysis. | "Why is this resource not working properly?" | -| [Azure Monitor](get-monitoring-information.md) | Provides charts on platform metrics for a specific resource. | "Give me a chart of OS disk latency statistics for the last week." | -| [Azure Monitor](get-monitoring-information.md) | Queries logs using natural language | "Show me container logs that include word 'error' for the last day for namespace 'xyz'." | -| [Azure Monitor](get-monitoring-information.md) | Runs an investigation on a specific resource. | "Had an alert in my HCI at 8 am this morning, run an anomaly investigation for me." | -| [Azure Monitor](get-monitoring-information.md) | Lists alerts using natural language. | "Show me all alerts triggered during the last 24 hours." | -| [Azure Monitor](get-monitoring-information.md) | Provides a summary of alerts, including the number of critical alerts. | "Tell me more about these alerts. How many critical alerts are there?" | -| [Azure App Service](troubleshoot-app-service.md) | Analyzes performance issues with an app. | "Troubleshoot performance issues with my app." | -| [Azure App Service](troubleshoot-app-service.md) | Diagnoses high CPU usage issues. | "It seems like there's a high CPU issue with my web app." | -| [Azure App Service](troubleshoot-app-service.md) | Enables auto-heal for web apps. | "Enable auto heal on my web app." | -| [Azure App Service](troubleshoot-app-service.md) | Explains an error related to deployed web apps. | "What does this error mean for my Azure web app?" | -| [Azure App Service](troubleshoot-app-service.md) | Provides assistance with slow-running app code. | "Why is my web app slow?" | -| [Azure App Service](troubleshoot-app-service.md) | Summarizes diagnostics. | "Give me a summary of these diagnostics." | -| [Azure App Service](troubleshoot-app-service.md) | Takes a memory dump of the app. | "Take a memory dump." | -| [Azure App Service](troubleshoot-app-service.md) | Tracks uptime and downtime of a web app. | "Can I track uptime and downtime of my web app over a specific time period?" 
| -| [Azure Kubernetes Service](work-aks-clusters.md) | Adds the user's IP address to the allowlist. | "Add my IP address to the allowlist of my AKS cluster's network policies." | -| [Azure Kubernetes Service](work-aks-clusters.md) | Configures AKS backups. | "Configure AKS backup." | -| [Azure Kubernetes Service](work-aks-clusters.md) | Scales the number of replicas of a deployment. | "Scale the number of replicas of my deployment my-deployment to 5." | -| [Azure Kubernetes Service](work-aks-clusters.md) | Updates the authorized IP ranges. | "Update my AKS cluster's authorized IP ranges." | -| [Azure Kubernetes Service](work-aks-clusters.md) | Shows existing backups. | "I want to view the backups on my AKS cluster." | -| [Azure Kubernetes Service](work-aks-clusters.md) | Manages the backup extension. | "Manage backup extension on my AKS cluster." | -| [Azure Kubernetes Service](work-aks-clusters.md) | Upgrades the AKS pricing tier. | "Upgrade AKS cluster pricing tier to Standard." | -| [Azure SQL Databases](https://aka.ms/sqlcopilot) | Use natural language to manage Azure SQL Databases | "I want to automate Azure SQL Database scaling based on performance metrics using Azure Functions." | -| [Azure Storage](improve-storage-accounts.md) | Checks if a storage account follows security best practices. | "Does this storage account follow security best practices?" | -| [Azure Storage](improve-storage-accounts.md) | Provides recommendations to make a storage account more secure. | "How can I make this storage account more secure?" | -| [Azure Storage](improve-storage-accounts.md) | Checks for vulnerabilities in a storage account. | "Is this storage account vulnerable?" | -| [Azure Storage](improve-storage-accounts.md) | Prevents deletion of a storage account. | "How can I prevent this storage account from being deleted?" | -| [Azure Storage](improve-storage-accounts.md) | Protects data from loss or theft. | "How do I protect this storage account's data from data loss or theft?" | -| [Azure Virtual Machines](deploy-vms-effectively.md) | Creates a cost-efficient virtual machine configuration. | "Help me create a cost-efficient virtual machine." | -| [Execute commands](capabilities.md)| Restarts VMs with the tag 'env' | "Restart my VMs that have the tag 'env'" | -| [Service health](understand-service-health.md) | Checks for any outage impacting the user. | "Is there any outage impacting me?" | --## Next steps --- Learn more about [things you can do with Copilot in Azure](capabilities.md).-- Get tips on [writing effective prompts](write-effective-prompts.md) for Copilot in Azure.-- Review our [Responsible AI FAQ for Microsoft Copilot in Azure](responsible-ai-faq.md). |
copilot | Execute Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/execute-commands.md | - Title: Execute commands using Microsoft Copilot in Azure (preview)
-description: Learn about scenarios where Microsoft Copilot in Azure (preview) can help you perform tasks. Previously updated : 08/28/2024-------# Execute commands using Microsoft Copilot in Azure (preview)
--Microsoft Copilot in Azure (preview) can help you execute individual or bulk commands on your resources. With Copilot in Azure, you can save time by prompting Copilot in Azure with natural language, rather than manually navigating to a resource and selecting a button in a resource's command bar.
--For example, you can restart your virtual machines by using prompts like **"Restart my VM named ContosoDemo"** or **"Stop my VMs in West US 2."** Copilot in Azure infers the relevant resources through an Azure Resource Graph query and determines the relevant command. Next, it asks you to confirm the action. Commands are never executed without your explicit confirmation. After the command is executed, you can track progress in the notification pane, just as if you manually ran the command from within the Azure portal. For faster responses, specify the resource ID of the resources that you want to run the command on.
--Copilot in Azure can execute many common commands on your behalf, as long as you have the permissions to perform them yourself. If Copilot in Azure is unable to run a command for you, it generally provides instructions to help you perform the task yourself. To learn more about the commands you can execute with natural language for a resource or service, you can ask Copilot in Azure directly. For instance, you can say **"Which commands can you help me perform on virtual machines?"**
----## Sample prompts
--Here are a few examples of the kinds of prompts you can use to execute commands. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries.
--- "Restart my VM named ContosoDemo"-- "Stop VMs in Europe regions"-- "Restore my deleted storage account"-- "Enable backup on VM named ContosoDemo"-- "Restart my web app named ContosoWebApp"-- "Start my AKS cluster"--## Examples
--When you say **"Restore my deleted storage account"**, Copilot in Azure launches the **Restored deleted account** experience. From here, you can select the subscription and the storage account that you want to recover.
---If you say **"Find the VMs running right now and stop them"**, Copilot in Azure first queries to find all VMs running in your selected subscriptions. It then shows you the results and asks you to confirm that the selected VMs should be stopped. You can uncheck a box to exclude a resource from the command. After you confirm, the command is run, with progress shown in your notifications.
---Similarly, if you say **"Delete my VMs in West US 2"**, Copilot in Azure runs a query and then asks you to confirm before running the delete command.
---You can also specify the resource name in your prompt. When you say things like **"Restart my VM named ContosoDemo"**, Copilot in Azure looks for that resource, then prompts you to confirm the operation.
---## Next steps
--- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- [Get tips for writing effective prompts](write-effective-prompts.md) to use with Microsoft Copilot in Azure. |
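For comparison, here is what the manual equivalents of two of these prompts look like in the CLI. This is a minimal sketch; the resource group is a placeholder and `ContosoDemo` is the VM name used in the article's sample prompts.

```azurecli
# Restart a single VM by name (resource group is a placeholder)
az vm restart --resource-group <resource-group> --name ContosoDemo

# Find the VMs that are running right now and stop them
az vm stop --ids $(az vm list --show-details \
  --query "[?powerState=='VM running'].id" --output tsv)
```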
copilot | Generate Cli Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-cli-scripts.md | - Title: Generate Azure CLI scripts using Microsoft Copilot in Azure
-description: Learn about scenarios where Microsoft Copilot in Azure can generate Azure CLI scripts for you to customize and use. Previously updated : 04/25/2024--------# Generate Azure CLI scripts using Microsoft Copilot in Azure
--Microsoft Copilot in Azure (preview) can generate [Azure CLI](/cli/azure/) scripts that you can use to create or manage resources.
--When you tell Microsoft Copilot in Azure about a task you want to perform by using Azure CLI, it provides a script with the necessary commands. You'll see which placeholder values you need to update with the actual values based on your environment.
----## Sample prompts
--Here are a few examples of the kinds of prompts you can use to generate Azure CLI scripts. Some prompts will return a single command, while others provide multiple steps walking through the full scenario. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries.
--- "Give me a CLI script to create a new storage account"-- "How do I list all my VMs using Azure CLI?"-- "Create a virtual network with two subnets using the address space of 10.0.0.0/16 using az cli"-- "I need to assign a dns name to a vm using a script"-- "How to attach a disk to a VM using az cli ?"-- "How to create and manage a Linux pool in Azure Batch using cli?"-- "Show me how to backup and restore a web app from a backup using cli"-- "Create VNet service endpoints for Azure Database for PostgreSQL using CLI"-- "I want to create a function app with a named storage account connection using Azure CLI"-- "How to create an App Service app and deploy code to a staging environment using CLI?"-- "I want to use Azure CLI to deploy and manage AKS using a private service endpoint."--## Examples
--In this example, the prompt "**I want to use Azure CLI to create a web application**" provides a list of steps, along with the necessary Azure CLI commands.
---When you follow that request with "**Provide full script**", the commands are shown together in one script.
---You can also start off by letting Microsoft Copilot in Azure know that you want the commands all together. For example, you could say "**I want a script to create a low cost VM (all in one codeblock for me to copy and paste)**".
---## Next steps
--- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure CLI](/azure/cli). |
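To give a sense of the output, here is the kind of script Copilot in Azure typically produces for the first sample prompt above. This is a minimal sketch, not Copilot's exact response; the resource group, location, and SKU are placeholder choices.

```azurecli
# Create a resource group, then a general-purpose v2 storage account in it (placeholder names)
az group create --name myResourceGroup --location eastus

az storage account create \
  --name <globally-unique-storage-name> \
  --resource-group myResourceGroup \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2
```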
copilot | Generate Kubernetes Yaml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-kubernetes-yaml.md | - Title: Create Kubernetes YAML files for AKS clusters using Microsoft Copilot in Azure
-description: Learn how Microsoft Copilot in Azure can help you create Kubernetes YAML files for you to customize and use. Previously updated : 07/29/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Create Kubernetes YAML files using Microsoft Copilot in Azure
--Microsoft Copilot in Azure (preview) can help you create [Kubernetes YAML files](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests) to apply to [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) clusters. Generated YAML files adhere to best practices so that you can focus more on your applications and less on the underlying infrastructure. You can also get help when authoring your own YAML files by asking Microsoft Copilot to make changes, fix problems, or explain elements in the context of your specific scenario.
--When you ask Copilot in Azure for help with Kubernetes YAML files, it prompts you to open the YAML deployment editor. From there, you can use Copilot in Azure to help you create, edit, and format the desired YAML file for your cluster.
--This video shows how Copilot in Azure can assist in writing, formatting, and troubleshooting Kubernetes YAML files.
--> [!VIDEO https://learn-video.azurefd.net/vod/player?show=microsoft-copilot-in-azure&ep=microsoft-copilot-in-azure-series-inline-yaml-editing]
----## Generate Kubernetes YAML files using Microsoft Copilot in Azure
--Microsoft Copilot in Azure can help you generate Kubernetes YAML files to apply to your AKS cluster or create a new deployment. You provide your application specifications, such as container images, resource requirements, and networking preferences. Microsoft Copilot in Azure uses your input to generate comprehensive YAML files that define the desired Kubernetes deployments, services, and other resources, effectively encapsulating the infrastructure as code.
--When you ask Microsoft Copilot in Azure for help with Kubernetes YAML files, it asks if you'd like to open the YAML deployment editor.
-- :::image type="content" source="media/generate-kubernetes-yaml/aks-yaml-question.png" alt-text="Screenshot of a prompt for help generating an AKS YAML file in Microsoft Copilot in Azure.":::
--After you confirm, the YAML deployment editor appears. From here, you can enter **ALT + I** to open an inline Copilot prompt. Enter prompts here to see generated YAML based on your requirements.
---## Get help working with Kubernetes files in the YAML editor
--Once Microsoft Copilot in Azure has generated a YAML file for you, you can continue to work in the YAML editor to make changes. You can also start from scratch and enter your own YAML directly into the editor. In the YAML editor, Microsoft Copilot in Azure offers several features that help you quickly create valid YAML files.
--When working in the AKS YAML editor, enter **ALT + I** to open an inline Copilot prompt.
--### Autocomplete
--Microsoft Copilot in Azure automatically provides autocomplete suggestions based on your input.
---### Natural language questions
--You can use the inline Copilot control (**ALT + I**) to request specific changes using natural language. For example, you can say **Update to use the latest nginx**.
---Based on your request, Microsoft Copilot in Azure makes changes to your YAML, with differences highlighted.
---Select **Accept** to save these changes, or select the **X** to reject them. To make further changes before accepting, you can enter a different query and then select the **Refresh** button to see the new changes. --You can also select the **Diff** button to toggle the diff view between inline and side-by-side. ---### Built-in commands --When working with YAML files, Microsoft Copilot in Azure provides built-in commands to help you work more efficiently. To access these commands, type **/** into the inline Copilot control. -- :::image type="content" source="media/generate-kubernetes-yaml/aks-yaml-commands.png" alt-text="Screenshot showing the commands available in the inline Microsoft Copilot in Azure control in an AKS YAML file."::: --The following commands are currently available: --- **/explain**: Get more information about a section or element of your YAML file.-- **/format**: Apply standard indentation or fix other formatting issues.-- **/fix**: Resolve problems with invalid YAML.-- **/discard**: Discard previously-made changes.-- **/chat**: Open a full Microsoft Copilot in Azure pane.-- **/close**: Closes the inline Copilot control.-- **/retry**: Tries the previous prompt again.--## Next steps --- Learn about more ways that Microsoft Copilot in Azure can [help you work with AKS](work-aks-clusters.md).-- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes). |
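Once the YAML looks right in the editor, you can also save it locally and apply it to the cluster from the command line. This is a minimal sketch, assuming placeholder names for the resource group, cluster, and manifest file.

```azurecli
# Get cluster credentials, then apply the manifest saved from the YAML editor (placeholder names)
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl apply -f deployment.yaml
```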
copilot | Generate Powershell Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-powershell-scripts.md | - Title: Generate PowerShell scripts using Microsoft Copilot in Azure
-description: Learn about scenarios where Microsoft Copilot in Azure can generate PowerShell scripts for you to customize and use. Previously updated : 05/28/2024---- - build-2024 -----# Generate PowerShell scripts using Microsoft Copilot in Azure
--Microsoft Copilot in Azure (preview) can generate [PowerShell](/powershell/azure/) scripts that you can use to create or manage resources.
--When you tell Microsoft Copilot in Azure about a task you want to perform by using PowerShell, it provides a script with the necessary cmdlets. You'll see which placeholder values you need to update with the actual values based on your environment.
----## Sample prompts
--Here are a few examples of the kinds of prompts you can use to generate PowerShell scripts. Some prompts return a single cmdlet, while others provide multiple steps walking through the full scenario. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries.
--- "How do I list the VMs I have running in Azure using PowerShell?"-- "Create a storage account using PowerShell."-- "How do I get all quota limits for a subscription using Azure PowerShell?"-- "Can you show me how to stop all virtual machines in a specific resource group using PowerShell?"--## Examples
--In this example, the prompt "**How do I list all my resource groups using PowerShell?**" provides the cmdlet along with information on other ways to use it.
---Similarly, if you ask "**How can I create a new resource group using PowerShell?**", you see an example cmdlet that you can customize as needed.
---You can also ask Microsoft Copilot in Azure for a script with multiple cmdlets. For example, you could say "**Can you help me write a script for Azure PowerShell that can be run directly, and after creating a VM, deploy an AKS cluster on it.**" Copilot in Azure provides a code block that you can copy, letting you know which values to replace.
---## Next steps
--- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure PowerShell](/powershell/azure/). |
copilot | Generate Terraform Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-terraform-configurations.md | - Title: Generate Terraform configurations using Microsoft Copilot in Azure -description: Learn about scenarios where Microsoft Copilot in Azure can generate Terraform configurations for you to use. Previously updated : 08/13/2024-------# Generate Terraform configurations using Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can generate Terraform configurations that you can use to create and manage your Azure infrastructure. --When you tell Microsoft Copilot in Azure about some Azure infrastructure that you want to manage through Terraform, it provides a configuration using resources from the [AzureRM provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs). In addition to the primary resources, any dependent resources required to accomplish a successful deployment are included in the configuration. You can ask follow-up questions to further customize the configuration. Once you've reviewed the configuration and are happy with it, copy the configuration contents and deploy the configuration using your Terraform deployment method of choice. --The requested Azure infrastructure should be limited to fewer than eight primary resource types. For example, you should see good results when asking for a configuration to manage a resource group that contains Azure Container App, Azure Functions, and Azure Cosmos DB resources. However, requesting configurations to fully address complex architectures may result in inaccurate results and truncated configurations. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use to generate Terraform configurations. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries. --- "Create a Terraform config for a Cognitive Services instance with name 'mycognitiveservice' and S0 pricing tier."-- "Show me a Terraform configuration for a linux virtual machine with 8GB ram and an image of 'UbuntuServer 18.04-LTS'. The resource should be placed in the West US location and have a public IP address. Additionally, it should be part of a virtual network with a network security group."-- "Create Terraform configuration for a container app resource with name 'myApp' with quick start image. Add a log analytic space with PerGB2018 sku and set the retention days to 31. Enable single revision mode in the container app and set the CPU and memory limits to 2 and 4GB respectively. Also, set the name of the container app environment to 'awesomeAzureEnv' and set the name of the container to 'myQuickStartContainer'."-- "What is the Terraform code for a Databricks workspace in Azure with name 'myworkspace' and a premium SKU. The workspace should be created in the West US region."-- "Create an OpenAI deployment with gpt-3.5-turbo model using Terraform template. Set the version of the model to 0613."---## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Terraform on Azure](/azure/developer/terraform/overview). |
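Once you have a configuration you're satisfied with, deploying it follows the standard Terraform workflow regardless of how the file was authored. A minimal sketch, assuming the generated configuration is saved as `main.tf` in the current directory and you're already authenticated to Azure (for example, via `az login`):

```bash
# Standard Terraform workflow for a Copilot-generated configuration
terraform init       # downloads the AzureRM provider referenced in the configuration
terraform plan       # previews the resources Terraform would create
terraform apply      # creates the resources after you approve the plan
```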
copilot | Get Information Resource Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-information-resource-graph.md | - Title: Get resource information using Microsoft Copilot in Azure (preview) -description: Learn about scenarios where Microsoft Copilot in Azure (preview) can help with Azure Resource Graph. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Get resource information using Microsoft Copilot in Azure (preview) --You can ask Microsoft Copilot in Azure (preview) questions about your Azure resources and cloud environment. Using the combined power of large language models (LLMs) and [Azure Resource Graph](/azure/governance/resource-graph/overview), Microsoft Copilot in Azure (preview) helps you author Azure Resource Graph queries. You provide input using natural language from anywhere in the Azure portal, and Microsoft Copilot in Azure (preview) returns a working query that you can use with Azure Resource Graph. Azure Resource Graph also acts as an underpinning mechanism for other scenarios that require real-time access to your resource inventory. --Azure Resource Graph's query language is based on the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) used by Azure Data Explorer. However, you don't need to be familiar with KQL in order to use Microsoft Copilot in Azure (preview) to retrieve information about your Azure resources and environment. Experienced query authors can also use Microsoft Copilot in Azure to help streamline their query generation process. --While a high level of accuracy is typical, we strongly advise you to review the generated queries to ensure they meet your expectations. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use to generate Azure Resource Graph queries. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries. --- "Show me all resources that are noncompliant"-- "List all virtual machines lacking enabled replication resources"-- "List all the updates applied to my Linux virtual machines"-- "List all storage accounts that are accessible from the internet"-- "List all virtual machines that are not running now"-- "Write a query that finds all changes for last 7 days."-- "Help me write an ARG query that looks up all virtual machines scale sets, sorted by creation date descending"-- "What are the public IPs of my VMs?"-- "Show me all my storage accounts in East US?"-- "List all my Resource Groups and its subscription."-- "Write a query that finds all resources that were created yesterday."--## Examples --You can ask Microsoft Copilot in Azure (preview) to write queries with prompts like "**Write a query to list my virtual machines with their public interface and public IP.**" ---If the generated query isn't exactly what you want, you can ask Microsoft Copilot in Azure (preview) to make changes. In this example, the first prompt is "**Write a KQL query to list my VMs by OS.**" After the query is shown, the additional prompt "Sorted alphabetically" results in a revised query that lists the OS alphabetically by name. ---You can view the generated query in Azure Resource Graph Explorer by selecting **Run**. For example, you can ask "**What resources were created in the last 24 hours?**" After Microsoft Copilot in Azure (preview) generates the query, select **Run** to see the query and results in Azure Resource Graph Explorer. 
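Generated queries aren't limited to Azure Resource Graph Explorer; you can also run them from the command line. This is a minimal sketch using the `az graph query` command, which requires the `resource-graph` CLI extension; the query shown is an example, not Copilot's exact output.

```azurecli
# Add the extension once, then run an Azure Resource Graph query from the CLI
az extension add --name resource-graph

az graph query -q "Resources | where type =~ 'microsoft.compute/virtualmachines' | project name, location, resourceGroup" --output table
```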
---## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure Resource Graph](/azure/governance/resource-graph/overview). |
copilot | Get Monitoring Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-monitoring-information.md | - Title: Get information about Azure Monitor metrics and logs using Microsoft Copilot in Azure -description: Learn about scenarios where Microsoft Copilot in Azure can provide information about Azure Monitor metrics and logs. Previously updated : 07/03/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 ------# Get information about Azure Monitor metrics, logs, and alerts using Microsoft Copilot in Azure (preview) --You can ask Microsoft Copilot in Azure (preview) questions about metrics and logs collected by [Azure Monitor](/azure/azure-monitor/), and about Azure Monitor alerts. --When you ask Microsoft Copilot in Azure for this information, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context of a query isn't clear, you'll be prompted to specify the resource for which you want information. ----## Answer questions about Azure Monitor platform metrics --Use Microsoft Copilot in Azure to ask questions about your Azure Monitor metrics. When asked about metrics for a particular resource, Microsoft Copilot in Azure generates a graph, summarizes the results, and allows you to further explore the data in Metrics Explorer. When asked about what metrics are available, Microsoft Copilot in Azure describes the platform metrics available for the given resource type. --### Platform metrics sample prompts --Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor platform metrics. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. --- "What platform metrics are available for my VM?"-- "Show me the memory usage trend for my VM over the last 4 hours"-- "Show trends for network bytes in over the last day"-- "Give me a chart of os disk latency statistics for the last week"--## Answer questions about Azure Monitor logs --When asked about logs for a particular resource, Microsoft Copilot in Azure generates an example KQL expression and allows you to further explore the data in Azure Monitor logs. This capability is available for all customers using Log Analytics, and can be used in the context of a particular Azure Kubernetes Service (AKS) cluster that uses Azure Monitor logs. --To get details about your container logs, start on the **Logs** page for your AKS cluster. --### Logs sample prompts --Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor logs for an AKS cluster. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. --- "Are there any errors in container logs?"-- "Show logs for the last day of pod <provide_pod_name> under namespace <provide_namespace>"-- "Show me container logs that include word 'error' for the last day for namespace 'xyz'"-- "Check in logs which containers keep restarting"-- "Show me all Kubernetes events"--## Answer questions about Azure Monitor alerts --Use Microsoft Copilot in Azure (preview) to ask questions about your Azure Monitor alerts. When asked about alerts, Microsoft Copilot in Azure (preview) summarizes the list of alerts, their severity, and allows you to further explore the data in the alerts page. 
--### Sample prompts --Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor alerts. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. --- "Are there any alerts for my resource?"-- "Tell me more about these alerts. How many critical alerts are there?"-- "Show me all the alerts in my resource group"-- "List all the alerts for the subscription"-- "Show me all alerts triggered during the last 24 hours"--## Answer questions about Azure Monitor Investigator (preview) --Use Microsoft Copilot in Azure (preview) to ask questions about your resources and to run Azure Monitor Investigator. You can ask to run an investigation on a resource to learn about what happened, possible causes, and ways to troubleshoot the issue. --### Sample prompts --Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor Investigator. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. --- "Why is this resource not working properly?"-- "Is there any anomaly in my AKS resource?" -- "Run investigation on my resource"-- "What is causing the issue in this resource?"-- "Had an alert in my HCI at 8 am this morning, run an anomaly investigation for me"-- "Run anomaly detection at 10/27/2023, 8:48:53 PM"--## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure Monitor](/azure/azure-monitor/) and [how to use it with AKS clusters](/azure/aks/monitor-aks). |
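If you want to reproduce one of these metric charts outside of Copilot, the same platform metrics are available through the CLI. This is a minimal sketch, assuming a VM resource and the "Percentage CPU" platform metric; the resource ID, interval, and time offset are placeholder choices.

```azurecli
# Pull hourly CPU metrics for a VM over the last day (placeholder resource ID)
az monitor metrics list \
  --resource <vm-resource-id> \
  --metric "Percentage CPU" \
  --interval PT1H \
  --offset 24h \
  --output table
```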
copilot | Improve Storage Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/improve-storage-accounts.md | - Title: Improve security and resiliency of storage accounts using Microsoft Copilot in Azure
-description: Learn how Microsoft Copilot in Azure can improve the security posture and data resiliency of storage accounts. Previously updated : 04/25/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Improve security and resiliency of storage accounts using Microsoft Copilot in Azure
--Microsoft Copilot in Azure (preview) can provide contextual and dynamic responses to harden the security posture and enhance data resiliency of [storage accounts](/azure/storage/common/storage-account-overview).
--Responses are dynamic and based on your specific storage account and settings. Based on your prompts, Microsoft Copilot in Azure runs a security check or a data resiliency check, and provides specific recommendations to improve your storage account.
--When you ask Microsoft Copilot in Azure about improving the security of storage accounts, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify the storage resource for which you want information.
----## Sample prompts
--Here are a few examples of the kinds of prompts you can use to improve and protect your storage accounts. Modify these prompts based on your real-life scenarios, or try additional prompts to get advice on specific areas.
--- "How can I make this storage account more secure?"-- "Does this storage account follow security best practices?"-- "Is this storage account vulnerable?"-- "How can I prevent this storage account from being deleted?"-- "How do I protect this storage account's data from data loss or theft?"-- "Prevent malicious users from accessing this storage account."--## Examples
--You can ask "**How can I make this storage account more secure?**" If you're already working with a storage account, Microsoft Copilot in Azure asks if you'd like to run a security check on that resource. If it's not clear which storage account you're asking about, you'll be prompted to select one. After the check, you'll see specific recommendations about things you can do to align your storage account with security best practices.
---You can also say things like "**Prevent this storage account from data loss during a disaster recovery situation**." After confirming you'd like Microsoft Copilot in Azure to run a data resiliency check, you'll see specific recommendations for protecting its data.
---## Next steps
--- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure Storage](/azure/storage/common/storage-introduction). |
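The recommendations that come back from a security check often map to a handful of account-level settings. Here is a minimal sketch of how some common ones can be applied from the CLI; the names are placeholders, and you should review each setting against your own requirements before applying it.

```azurecli
# Tighten common storage account security settings (placeholder names)
az storage account update \
  --name <storage-account-name> \
  --resource-group <resource-group-name> \
  --https-only true \
  --min-tls-version TLS1_2 \
  --allow-blob-public-access false

# Guard against accidental deletion with a resource lock
az lock create \
  --name DoNotDelete \
  --lock-type CanNotDelete \
  --resource-group <resource-group-name> \
  --resource <storage-account-name> \
  --resource-type Microsoft.Storage/storageAccounts
```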
copilot | Manage Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/manage-access.md | - Title: Manage access to Microsoft Copilot in Azure -description: Learn how administrators can manage user access to Microsoft Copilot in Azure. Previously updated : 08/21/2024---- - build-2024 - - copilot-learning-hub -----# Manage access to Microsoft Copilot in Azure --By default, Copilot in Azure is available to all users in a tenant. However, [Global Administrators](/entra/identity/role-based-access-control/permissions-reference#global-administrator) can manage access to Copilot in Azure for their organization. Access can also be optionally granted to specific Microsoft Entra users or groups. --If Copilot in Azure is not available for a user, they'll see an unauthorized message when they select the **Copilot** button in the Azure portal. --> [!NOTE] -> In some cases, your tenant may not have access to Copilot in Azure by default. Global Administrators can enable access by following the steps described in this article at any time. --As always, Microsoft Copilot in Azure only has access to resources that the user has access to. It can only take actions that the user has permission to perform, and requires confirmation before making changes. Copilot in Azure complies with all existing access management rules and protections such as Azure role-based access control (Azure RBAC), Privileged Identity Management, Azure Policy, and resource locks. ---## Manage user access to Microsoft Copilot in Azure --To manage access to Microsoft Copilot in Azure for users in your tenant, any Global Administrator in that tenant can follow these steps. --1. [Elevate your access](/azure/role-based-access-control/elevate-access-global-admin?tabs=azure-portal#step-1-elevate-access-for-a-global-administrator) so that your Global Administrator account can manage all subscriptions in your tenant. --1. In the Azure portal, search for **Copilot for Azure admin center** and select it. --1. In **Copilot for Azure admin center**, under **Settings**, select **Access management**. --1. Select the toggle next to **On for entire tenant** to change it to **Off for entire tenant**. --1. To grant access to specific Microsoft Entra users or groups, select **Manage RBAC roles**. --1. Assign the **Copilot for Azure User** role to specific users or groups. For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal). --1. When you're finished, [remove your elevated access](/azure/role-based-access-control/elevate-access-global-admin?tabs=azure-portal#step-2-remove-elevated-access). --Global Administrators for a tenant can change the **Access management** selection at any time. --> [!IMPORTANT] -> In order to use Microsoft Copilot in Azure, your organization must allow websocket connections to `https://directline.botframework.com`. Please ask your network administrator to enable this connection. ----## Next steps --- [Learn more about Microsoft Copilot in Azure](overview.md).-- Read the [Responsible AI FAQ for Microsoft Copilot in Azure](responsible-ai-faq.md).-- Explore the [capabilities](capabilities.md) of Microsoft Copilot in Azure and learn how to [write effective prompts](write-effective-prompts.md). |
copilot | Optimize Code Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/optimize-code-application-insights.md | - Title: Discover performance recommendations with Code Optimizations using Microsoft Copilot in Azure -description: Learn about scenarios where Microsoft Copilot in Azure can use Application Insight Code Optimizations to help optimize your apps. Previously updated : 11/20/2023---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Discover performance recommendations with Code Optimizations using Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can provide [Code Optimizations](/azure/azure-monitor/insights/code-optimizations) for Application Insights resources that have an active [Application Insights Profiler](/azure/azure-monitor/profiler/profiler-settings). This lets you view recommendations tailored to your app to help optimize its performance. --When you ask Microsoft Copilot in Azure to provide these recommendations, it automatically pulls context from an open Application Insights blade or App Service blade to display available recommendations specific to that app. If the context isn't clear, you'll be prompted to choose an Application Insights resource from a resource selector page. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use with Code Optimizations. Modify these prompts based on your real-life scenarios, or try additional prompts about specific areas for optimization. --- "Show my code performance recommendations"-- "Any available app code optimizations?"-- "Code optimizations in my app"-- "My app code is running slow"-- "Make my app faster with a code change"--## Examples --In this example, Microsoft Copilot in Azure responds to the prompt, "**Any code performance optimizations?**" The response notes that there are 6 recommendations, providing the option to view either the top recommendation or all recommendations at once. ---When the **Review all** option is selected, Microsoft Copilot in Azure displays all recommendations. You can then select any recommendation to see more details. --## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Code Optimizations](/azure/azure-monitor/insights/code-optimizations). |
copilot | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/overview.md | - Title: Microsoft Copilot in Azure overview -description: Microsoft Copilot in Azure is an AI-powered tool to help you do more with Azure. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 ---hideEdit: true ---# What is Microsoft Copilot in Azure? --Microsoft Copilot in Azure (preview) is an AI-powered tool to help you do more with Azure. With Microsoft Copilot in Azure, you can gain new insights, discover more benefits of the cloud, and orchestrate across both cloud and edge. Copilot leverages Large Language Models (LLMs), the Azure control plane, and insights about your Azure environment to help you work more efficiently. ---Microsoft Copilot in Azure can help you navigate the hundreds of services and thousands of resource types that Azure offers. It unifies knowledge and data across hundreds of services to increase productivity, reduce costs, and provide deep insights. Copilot in Azure can help you learn about Azure by answering questions, and it can provide information tailored to your own Azure resources and environment. By letting you [express your goals in natural language](write-effective-prompts.md), Copilot in Azure can simplify your Azure management experience. --You can access Copilot in Azure in the Azure portal or [through the Azure mobile app](../azure-portal/mobile-app/microsoft-copilot-in-azure.md). Throughout a conversation, Copilot in Azure answers questions, generates queries, performs tasks, and safely acts on your behalf. It makes high-quality recommendations and takes actions while respecting your organization's policy and privacy. Copilot in Azure can access all of the resources that you have permission to access, and can take actions that you have permission to perform, with your confirmation required for any actions. For more information about what Copilot in Azure can do, see [Capabilities of Microsoft Copilot in Azure](capabilities.md). --Microsoft Copilot in Azure (preview) is made available to customers under the terms governing their subscription to Microsoft Azure Services, including the Microsoft Copilot in Azure section of the [Microsoft Product Terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/EAEAS). Please review these terms carefully as they contain important conditions and obligations governing your use of Microsoft Copilot in Azure. --## Manage access --By default, Copilot in Azure is available to all users in a tenant. However, [Global Administrators](/entra/identity/role-based-access-control/permissions-reference#global-administrator) can choose to control access to Copilot in Azure for their organization. --For more information, see [Manage access to Microsoft Copilot in Azure](manage-access.md). --> [!IMPORTANT] -> In order to use Microsoft Copilot in Azure, your organization must allow websocket connections to `https://directline.botframework.com`. Please ask your network administrator to enable this connection. --## Next steps --- Learn about [some of the things you can do with Microsoft Copilot in Azure](capabilities.md).-- Review our [Responsible AI FAQ for Microsoft Copilot in Azure](responsible-ai-faq.md).-- Explore the [Microsoft Copilot in Azure video series](/shows/microsoft-copilot-in-azure/). |
copilot | Query Attack Surface | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/query-attack-surface.md | - Title: Query your attack surface with Defender EASM using Microsoft Copilot in Azure -description: Learn how Microsoft Copilot in Azure can help query Attack Surface Insights from Defender EASM. Previously updated : 04/25/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Query your attack surface with Defender EASM using Microsoft Copilot in Azure --[Microsoft Defender External Attack Surface Management (Defender EASM)](/azure/external-attack-surface-management/overview) scans inventory assets and collects robust contextual metadata that powers Attack Surface Insights. These insights help an organization understand what their attack surface looks like, where the risk resides, and what assets they need to focus on. --> [!IMPORTANT] -> Use of Copilot in Azure to query Defender EASM is included with Copilot for Security and requires [security compute units (SCUs)](/copilot/security/get-started-security-copilot#security-compute-units). You can provision SCUs and increase or decrease them at any time. For more information on SCUs, see [Get started with Microsoft Copilot](/copilot/security/get-started-security-copilot) and [Manage usage of security compute units](/copilot/security/manage-usage). -> -> To use Copilot in Azure to query Defender EASM, you or your admin team must be a member of the appropriate role in Copilot for Security and must have access to a Defender EASM resource. For information on supported roles, see [Understand authentication in Microsoft Copilot for Security](/copilot/security/authentication). --With Microsoft Copilot in Azure (preview), you can use natural language to ask questions and better understand your organization's attack surface. Through Defender EASM's extensive querying capabilities, you can extract asset metadata and key asset information, even if you don't have an advanced Defender EASM querying skillset. --When you ask Microsoft Copilot in Azure about your attack surface, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify which Defender EASM resource to use. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use to query attack surface data collected by Defender EASM. Modify these prompts based on your real-life scenarios, or try additional prompts to get advice on specific areas. --- "Tell me about Defender EASM high priority attack surface insights."-- "What are my externally facing assets?"-- "Find all the page and host assets in my inventory with the IP address (address)"-- "Show me all assets that require investigation."-- "Do I have any domains that are expiring within 30 days?"-- "What assets are using jQuery version 3.1.0?"-- "Get the hosts with port X open in my attack surface?"-- "Which of my assets have a registrant email of `name@example.com`?"-- "Which of my assets have services containing 'Azure' and vulnerabilities on them?"--## Example --You can use a natural language query to better understand your attack surface. In this example, the query is "**find all the page and host assets in my inventory with an ip address that is (list of IP addresses)**". Copilot in Azure queries your Defender EASM inventory and provides details about the assets matching your criteria. 
You can then follow up with additional questions as needed. ---## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Defender EASM](/azure/external-attack-surface-management/overview). |
copilot | Responsible Ai Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/responsible-ai-faq.md | - Title: Responsible AI FAQ for Microsoft Copilot in Azure (preview) -description: Learn how Microsoft Copilot in Azure (preview) uses data and what to expect. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 ---hideEdit: true ---# Responsible AI FAQ for Microsoft Copilot in Azure (preview) --## What is Microsoft Copilot in Azure (preview)? --Microsoft Copilot in Azure is an AI companion that enables IT teams to operate, optimize, and troubleshoot applications and infrastructure more efficiently. With Microsoft Copilot in Azure, users can gain new insights into their workloads, unlock untapped Azure functionality, and orchestrate tasks across both cloud and edge. Copilot leverages Large Language Models (LLMs), the Azure control plane, and insights about a user's Azure and Arc-enabled assets. All of this is carried out within the framework of Azure's steadfast commitment to safeguarding the customer's data security and privacy. For an overview of how Microsoft Copilot in Azure works and a summary of Copilot capabilities, see [Microsoft Copilot in Azure (preview) overview](overview.md). --## Are Microsoft Copilot in Azure's results reliable? --Microsoft Copilot in Azure is designed to generate the best possible responses with the context it has access to. However, like any AI system, Microsoft Copilot in Azure's responses will not always be perfect. All of Microsoft Copilot in Azure's responses should be carefully tested, reviewed, and vetted before making changes to your Azure environment. --## How does Microsoft Copilot in Azure (preview) use data from my Azure environment? --Microsoft Copilot in Azure generates responses grounded in your Azure environment. Microsoft Copilot in Azure only has access to resources that you have access to and can only perform actions that you have the permissions to perform, after your confirmation. Microsoft Copilot in Azure will respect all existing access management and protections such as Azure role-based access control (Azure RBAC), Privileged Identity Management, Azure Policy, and resource locks. --## What data does Microsoft Copilot in Azure collect? --User-provided prompts and Microsoft Copilot in Azure's responses are not used to further train, retrain, or improve Azure OpenAI Service foundation models that generate responses. User-provided prompts and Microsoft Copilot in Azure's responses are collected and used to improve Microsoft products and services only when users have given explicit consent to include this information within feedback. We collect user engagement data, such as the number of chat sessions, session duration, the skill used in a particular session, thumbs up, thumbs down, and feedback. This information is retained and used as set forth in the [Microsoft Privacy Statement](https://privacy.microsoft.com/en-us/privacystatement). --## What should I do if I see unexpected or offensive content? --The Azure team has built Microsoft Copilot in Azure guided by our [AI principles](https://www.microsoft.com/ai/principles-and-approach) and [Responsible AI Standard](https://aka.ms/RAIStandardPDF). We have prioritized mitigations that reduce customers' exposure to offensive content. However, you might still see unexpected results. We're constantly working to improve our technology to prevent harmful content. 
--If you encounter harmful or inappropriate content in the system, please provide feedback or report a concern by selecting the downvote button on the response. --## How current is the information Microsoft Copilot in Azure provides? --We frequently update Microsoft Copilot in Azure to ensure it provides the latest information to you. In most cases, the information Microsoft Copilot in Azure provides will be up to date. However, there might be some delay between new Azure announcements and the time Microsoft Copilot in Azure is updated. --## Do all Azure services have the same level of integration with Microsoft Copilot in Azure? --No. Some Azure services have richer integration with Microsoft Copilot in Azure. We will continue to increase the number of scenarios and services that Microsoft Copilot in Azure supports. To learn more about some of the current capabilities, see [Microsoft Copilot in Azure (preview) capabilities](capabilities.md) and the articles in the **Enhanced scenarios** section. |
copilot | Troubleshoot App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/troubleshoot-app-service.md | - Title: Troubleshoot your apps faster with App Service using Microsoft Copilot in Azure -description: Learn how Microsoft Copilot in Azure can help you troubleshoot your web apps hosted with App Service. Previously updated : 05/28/2024---- - build-2024 -----# Troubleshoot your apps faster with App Service using Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can act as your expert companion for [Azure App Service](/azure/app-service/overview) and [Azure Functions](/azure/azure-functions/functions-overview) diagnostics and solutions. --Azure offers many troubleshooting tools for different types of issues with web apps and function apps. Rather than figure out which tool to use, you can ask Microsoft Copilot in Azure about the problem you're experiencing. Microsoft Copilot in Azure determines which tool is best suited to your question, whether it's related to high CPU usage, networking issues, getting a memory dump, or other issues. These tools provide diagnostics and suggestions to help you resolve problems you're experiencing. --Copilot in Azure can also help you understand diagnostic information in the Azure portal. For example, when you're looking at the **Diagnose and solve** page for a resource, or viewing diagnostics provided by a troubleshooting tool, you can ask Copilot in Azure to summarize the page, or to explain what an error means. --When you ask Microsoft Copilot in Azure for troubleshooting help, it automatically pulls context when possible, based on the current conversation or the app you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify the resource for which you want information. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use to access troubleshooting tools and understand diagnostic information. Modify these prompts based on your real-life scenarios, or try additional prompts to get help with different types of issues. --Troubleshooting: --- "My web app is down"-- "Why is my web app slow?"-- "Enable auto heal"-- "High CPU issue"-- "Troubleshoot performance issues with my app"-- "Analyze app latency?"-- "Take a memory dump"--Understanding available tools: --- "Can I track uptime and downtime of my web app over a specific time period?"-- "Is there a tool that can help me view event logs for my web app?"--Proactive practices: --- "Risk alerts for my app"-- "Are there any best practices for availability for this app?"-- "How can I make my app future-proof"--Summarization and explanation: --- "Give me a summary of these diagnostics."-- "Summarize this page."-- "What does this error mean?"-- "Can you tell me more about the 3rd diagnostic on this page?"-- "What are the next steps to resolve this error?"--## Examples --You can tell Microsoft Copilot in Azure "**my web app is down**." After you select the resource that you want to troubleshoot, Copilot in Azure opens the **App Service - Web App Down** tool so you can view diagnostics. ---On the **Web App Down** page, you can say "**Give me a summary of this page.**" Copilot in Azure summarizes the insights and provides some recommended solutions. ---For another example, you could say "**web app slow**." Copilot in Azure checks for potential root causes and shows you the results. It then offers to collect a profiling trace for further debugging. 
---If you say "**Take a memory dump**", Microsoft Copilot in Azure suggests opening the **Collect a Memory Dump** tool so that you can take a snapshot of the app's current state. In this example, Microsoft Copilot in Azure continues to work with the resource selected earlier in the conversation. ---## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure App Service](/azure/app-service/overview) and [Azure Functions](/azure/azure-functions/functions-overview). |
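Alongside the Copilot-driven diagnostics described above, a few Azure CLI checks cover common first steps when a web app is down or slow. This is a minimal sketch with placeholder app and resource group names; it isn't the output of the App Service troubleshooting tools.

```bash
# Placeholder names for illustration.
RG="TestRG"
APP="my-web-app"

# Check the app's current state and default hostname.
az webapp show -g "$RG" -n "$APP" --query "{state:state, host:defaultHostName}"

# Stream recent application and web server logs while reproducing the issue.
az webapp log tail -g "$RG" -n "$APP"

# Restart the app if diagnostics point to a hung worker process.
az webapp restart -g "$RG" -n "$APP"
```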
copilot | Understand Service Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/understand-service-health.md | - Title: Understand service health events and status using Microsoft Copilot in Azure -description: Learn about scenarios where Microsoft Copilot in Azure can provide information about service health events. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Understand service health events and status using Microsoft Copilot in Azure --You can ask Microsoft Copilot in Azure (preview) questions to get information from [Azure Service Health](/azure/service-health/overview). This provides a quick way to find out if there are any service health events impacting your Azure subscriptions. You can also get more information about a known service health event. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use to get service health information. Modify these prompts based on your real-life scenarios, or try additional prompts about specific service health events. --- "Am I impacted by any service health events?"-- "Is there any outage impacting me?"-- "Can you tell me more about this tracking ID {0}?"-- "Is the event with tracking ID {0} still active?"-- "What is the status of the event with tracking ID {0}"--## Examples --You can ask "**Is there any Azure outage ongoing?**" In this example, no outages or service health issues are found. If there are service health issues impacting your account, you can ask further questions to get more information. ---## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure Monitor](/azure/azure-monitor/). |
copilot | Use Guided Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/use-guided-deployments.md | - Title: Create resources using interactive deployments from Microsoft Copilot in Azure -description: Learn how Microsoft Copilot in Azure (preview) can provide quick or guided deployment assistance. Previously updated : 07/24/2024--------# Create resources using interactive deployments from Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can help you deploy certain resources and workloads by providing quick or guided deployment assistance. --Interactive deployments are currently available for select workloads. For other types of deployments, Copilot in Azure helps by [providing links to templates](#template-suggestions) that you can customize and deploy, often with various deployment options such as Azure CLI, Terraform, Bicep, or ARM. If a template isn't available for your scenario, Copilot in Azure provides information to help you choose and deploy the best resources for your scenario. ----## Deploy a LEMP stack on an Azure Linux VM --Copilot in Azure can help you deploy an NGINX web server, Azure MySQL Flexible Server, and PHP (the [LEMP stack](/azure/virtual-machines/linux/tutorial-lemp-stack)) on an Ubuntu Linux VM in Azure. To see the LEMP server in action, you can also install and configure a WordPress site. You can choose either a quick deployment or a guided deployment that provides step-by-step assistance. --### LEMP stack sample prompts --- "I want to deploy a LEMP stack on Ubuntu VM"-- "How do I create a single VM LEMP stack?"-- "I need a step-by-step guide to create a LEMP stack on a Linux VM. Can you help?"-- "Guided deployment for creating a LEMP stack infrastructure on Azure"-- "One click deployment for LEMP stack"--### LEMP stack example --You can say "**I want to deploy a LEMP stack on an Ubuntu VM**". Copilot in Azure checks for deployment experiences, and presents you with two deployment options: **Guided deployment** or **Quick deployment**. ---If you choose **Guided deployment** and select a subscription, Copilot in Azure launches a guided experience that walks you through each step of the deployment. ---After you complete your deployment, you can check and browse the WordPress website running on your VM. ---## Create a Linux virtual machine and connect via SSH --Copilot in Azure can help you [create a Linux VM and connect to it via SSH](/azure/virtual-machines/linux/quick-create-cli). You can choose either a quick deployment or a guided deployment that provides step-by-step assistance to handle the necessary tasks. These tasks include installing the latest Ubuntu image, provisioning the VM, generating a private key, and establishing the SSH connection. --### Linux VM sample prompts --- "How do I deploy a Linux VM?"-- "What are the steps to create a Linux VM on Azure and how do I SSH into it?"-- "I need a detailed guide on creating a Linux VM on Azure and establishing an SSH connection."-- "Deploy a Linux VM on Azure infrastructure."-- "Learn mode deployment for Linux VM"--### Linux VM example --You can say "**How do I create a Linux VM and SSH into it?**" You'll see two deployment options: **Guided deployment** or **Quick deployment**. If you choose the quick option and select a subscription, you can run the script to deploy the infrastructure. While the deployment is running, don't close or refresh the page. You'll see progress as each step of the deployment is completed. 
---## Create an AKS cluster with a custom domain and HTTPS --Copilot in Azure can help you [create an Azure Kubernetes Service (AKS) cluster](/azure/aks/learn/quick-kubernetes-deploy-cli) with an NGINX ingress controller and a custom domain. As with the other deployments, you can choose either a quick or guided deployment. --### AKS cluster sample prompts --- "Guide me through creating an AKS cluster."-- "How do I make a scalable AKS cluster?"-- "I'm new to AKS. Can you help me deploy a cluster?"-- "Detailed guide on deploying an AKS cluster on Azure"-- "One-click deployment for AKS cluster on Azure"--### AKS cluster example --When you say "**Seamless deployment for AKS cluster on Azure**", Microsoft Copilot in Azure presents you with two deployment options: **Guided deployment** or **Quick deployment**. In this example, the quick deployment option is selected. As with the other examples, you see progress as each step of the deployment is completed. ---## Template suggestions --If an interactive deployment isn't available, Copilot in Azure checks to see if there's a template available to help with your scenario. Where possible, multiple deployment options are provided, such as Azure CLI, Terraform, Bicep, or ARM. You can then download and customize the templates as desired. --If a template isn't available, Copilot in Azure provides information to help you achieve your goal. You can also revise your prompt to be more specific or ask if there are any related templates you could start from. --### Template suggestion sample prompts --- "I want to use OpenAI to build a chatbot."-- "Do you have a suggestion for a Python app?"-- "I want to use Azure OpenAI endpoints in a sample app."-- "How could I easily deploy a WordPress site?"-- "Any templates to start with using App service?"-- "Azure AI search + OpenAI template?"-- "Can you suggest a template for app services using SAP cloud SDK?"-- "Java app with Azure OpenAI?"-- "Can I use Azure OpenAI with React?"-- "Enterprise chat with GPT using Java?"-- "How can I deploy a sample app using Enterprise chat with GPT and java?"-- "I want to use Azure functions to build an OpenAI app"-- "How can I deploy container apps with Azure OpenAI?"-- "Do you have a template for using Azure AI search?"-- "Do you have a template for using Node js in Azure?"-- "I want a WordPress app using App services."--## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn how to [deploy virtual machines effectively using Microsoft Copilot in Azure](deploy-vms-effectively.md). |
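For the "create a Linux VM and SSH into it" scenario above, a quick deployment roughly automates steps like the following. This is a minimal sketch with placeholder names and region; the script Copilot actually generates for you may differ.

```bash
# Placeholder names and region; the script Copilot generates for you may differ.
az group create --name TestRG --location eastus

az vm create \
  --resource-group TestRG \
  --name myLinuxVM \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys

# Connect over SSH using the VM's public IP address.
PUBLIC_IP=$(az vm show -d -g TestRG -n myLinuxVM --query publicIps -o tsv)
ssh azureuser@"$PUBLIC_IP"
```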
copilot | Work Aks Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/work-aks-clusters.md | - Title: Work with AKS clusters efficiently using Microsoft Copilot in Azure -description: Learn how Microsoft Copilot in Azure can help you be more efficient when working with Azure Kubernetes Service (AKS). Previously updated : 07/29/2024---- - build-2024 -----# Work with AKS clusters efficiently using Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can help you work more efficiently with [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) clusters. --When you ask Microsoft Copilot in Azure for help with AKS, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify a cluster. --This video shows how Copilot in Azure can assist with AKS cluster management and configurations. --> [!VIDEO https://learn-video.azurefd.net/vod/player?show=microsoft-copilot-in-azure&ep=microsoft-copilot-in-azure-series-kubectl] ----## Run cluster commands --You can use Microsoft Copilot in Azure to run kubectl commands based on your prompts. When you make a request that can be achieved by a kubectl command, you'll see the command along with the option to execute it directly in the **Run command** pane. This pane lets you [run commands on your cluster through the Azure API](/azure/aks/access-private-cluster?tabs=azure-portal), without directly connecting to the cluster. You can also copy the generated command and run it directly. --This video shows how Copilot in Azure can assist with kubectl commands for managing AKS clusters. --> [!VIDEO https://learn-video.azurefd.net/vod/player?show=microsoft-copilot-in-azure&ep=microsoft-copilot-in-azure-series-kubectl] --### Cluster command sample prompts --Here are a few examples of the kinds of prompts you can use to run kubectl commands on an AKS cluster. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. --- "List all of my failed pods in this cluster"-- "Check the rollout status for deployment `aksdeployment`"-- "Get all pods that are in pending states in all namespaces"-- "Can you delete my deployment named `my-deployment` in namespace `my-namespace`?"-- "Scale the number of replicas of my deployment `my-deployment` to 5"--### Cluster command example --You can say **"List all namespaces in my cluster."** If you're not already working with a cluster, you'll be prompted to select one. Microsoft Copilot in Azure shows you the kubectl command to perform your request, and asks if you'd like to execute the command. When you confirm, the **Run command** pane opens with the generated command included. ---## Enable IP address authorization --Use Microsoft Copilot in Azure to quickly make changes to the IP addresses that are allowed to access an AKS cluster. When you reference your own IP address, Microsoft Copilot in Azure can add it to the authorized IP ranges, without your providing the exact address. If you want to include alternative IP addresses, Microsoft Copilot in Azure asks if you want to open the **Networking** pane for your AKS cluster and helps you edit the relevant field. --### IP address sample prompts --Here are a few examples of the kinds of prompts you can use to manage the IP addresses that can access an AKS cluster. 
Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. --- "Allow my IP to access my AKS cluster"-- "Add my IP address to the allowlist of my AKS cluster's network policies"-- "Add my IP address to the authorized IP ranges of AKS cluster's networking configuration"-- "Add IP CIDR to my AKS cluster's authorized IP ranges"-- "Update my AKS cluster's authorized IP ranges"--## Manage cluster backups --Microsoft Copilot in Azure can help streamline the process of installing the Azure [Backup extension](/azure/backup/azure-kubernetes-service-backup-overview) to an AKS cluster. On clusters where the extension is already installed, it helps you [configure backups](/azure/backup/azure-kubernetes-service-cluster-backup#configure-backups) and view existing backups. --When you ask for help with backups, you'll be prompted to select a cluster. From there, Microsoft Copilot in Azure prompts you to open the **Backup** pane for that cluster, where you can proceed with installing the extension, configuring backups, or viewing existing backups. --### Backup sample prompts --Here are a few examples of the kinds of prompts you can use to manage AKS cluster backups. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. --- "Install backup extension on my AKS cluster"-- "Configure AKS backup"-- "Manage backup extension on my AKS cluster"-- "I want to view the backups on my AKS cluster"--### Backup example --You can say **"Install AKS backup"** to start the process of installing the AKS backup extension. After you select a cluster, you'll be prompted to open its **Backup** pane. From there, select **Launch install backup** to open the experience. After reviewing the prerequisites for the extension, you can step through the installation process. ---## Update AKS pricing tier --Use Microsoft Copilot in Azure to make changes to your [AKS pricing tier](/azure/aks/free-standard-pricing-tiers). When you request an update to your pricing tier, you're prompted to confirm, and then Microsoft Copilot in Azure makes the change for you. --You can also get information about different pricing tiers, helping you to make informed decisions before changing your clusters' pricing tier. --### Pricing tier sample prompts --Here are a few examples of the kinds of prompts you can use to manage your AKS pricing tier. Modify these prompts based on your real-life scenarios, or try additional prompts to make different kinds of changes. --- "What is my AKS pricing tier?"-- "Update my AKS cluster pricing tier"-- "Upgrade AKS cluster pricing tier to Standard"-- "Downgrade AKS cluster pricing tier to Free"-- "What are the limitations of the Free pricing tier?"-- "What do you get with the Premium AKS pricing tier?"--## Work with Kubernetes YAML files --Microsoft Copilot in Azure can help you create [Kubernetes YAML files](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests) to apply to AKS clusters. --For more information, see [Create Kubernetes YAML files using Microsoft Copilot in Azure](generate-kubernetes-yaml.md). --## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes). |
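For reference, here's a minimal sketch of the kinds of commands that correspond to several of the AKS prompts above. Cluster, resource group, namespace, and deployment names are placeholders, and the commands Copilot generates for you may differ.

```bash
# Placeholder names for illustration.
RG="TestRG"
CLUSTER="myAKSCluster"

# "List all of my failed pods in this cluster"
kubectl get pods --all-namespaces --field-selector status.phase=Failed

# "Scale the number of replicas of my deployment my-deployment to 5"
kubectl scale deployment my-deployment --namespace my-namespace --replicas=5

# "Allow my IP to access my AKS cluster"
MY_IP=$(curl -s https://ifconfig.me)
az aks update -g "$RG" -n "$CLUSTER" --api-server-authorized-ip-ranges "${MY_IP}/32"

# "Upgrade AKS cluster pricing tier to Standard"
az aks update -g "$RG" -n "$CLUSTER" --tier standard
```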
copilot | Work Smarter Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/work-smarter-edge.md | - Title: Work smarter with your Azure Stack HCI clusters using Microsoft Copilot in Azure -description: Learn about scenarios where Microsoft Copilot in Azure can help you work with your Azure Stack HCI clusters. Previously updated : 05/28/2024---- - ignite-2023 - - ignite-2023-copilotinAzure - - build-2024 -----# Work smarter with your Azure Stack HCI clusters using Microsoft Copilot in Azure --Microsoft Copilot in Azure (preview) can help you identify problems and get information about your [Azure Stack HCI](/azure-stack/hci/overview) clusters. --When you ask Microsoft Copilot in Azure for information about the state of your hybrid infrastructure, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context of a query isn't clear, you'll be prompted to clarify what you're looking for. ----## Sample prompts --Here are a few examples of the kinds of prompts you can use to work with your Azure Stack HCI clusters. Modify these prompts based on your real-life scenarios, or try additional prompts to get different types of information. --- "Summarize my HCI clusters"-- "Tell me more about the alerts"-- "Find any anomalies in my HCI clusters"-- "Find any anomalies from the most recent alert"--## Examples --In this example, Microsoft Copilot in Azure responds to the prompt "**summarize my HCI clusters**" with details about the number of clusters, their status, and any alerts that affect them. ---If you follow up by asking "**tell me more about the alerts**", Microsoft Copilot in Azure provides more details about the current alerts. ---## Next steps --- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.-- Learn more about [Azure Stack HCI](/azure-stack/hci/overview). |
copilot | Write Effective Prompts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/write-effective-prompts.md | - Title: Write effective prompts for Microsoft Copilot in Azure -description: Maximize productivity and intent understanding with prompt engineering in Microsoft Copilot in Azure. Previously updated : 04/16/2024---- - build-2024 -----# Write effective prompts for Microsoft Copilot in Azure --Prompt engineering is the process of designing prompts that elicit the best and most accurate responses from large language models (LLMs) like Microsoft Copilot in Azure (preview). As these models become more sophisticated, understanding how to create effective prompts becomes even more essential. --This article explains how to use prompt engineering to create effective prompts for Microsoft Copilot in Azure. ---## What is prompt engineering? --Prompt engineering involves strategically crafting inputs for AI models like Copilot in Azure, enhancing their ability to deliver precise, relevant, and valuable outcomes. These models rely on pattern recognition from their training data, lacking real-world understanding or awareness of user goals. By incorporating specific contexts, examples, constraints, and directives into prompts, you can significantly elevate the response quality. --Good prompt engineering practices help you unlock more of Copilot in Azure's potential for code generation, recommendations, documentation retrieval, and navigation. By crafting your prompts thoughtfully, you can reduce the chance of seeing irrelevant suggestions. Prompt engineering is a crucial technique to help improve responses and complete tasks more efficiently. Taking the time to write great prompts ultimately fosters efficient code development, drives down cost, and minimizes errors by providing clear guidelines and expectations. --## Tips for writing better prompts --Microsoft Copilot in Azure can't read your mind. To get meaningful help, guide it: ask for shorter replies if its answers are too long, request complex details if replies are too basic, and specify the format you have in mind. Taking the time to write detailed instructions and refine your prompts helps you get what you're looking for. --The following tips can be useful when considering how to write effective prompts. --### Be clear and specific --Start with a clear intent. For example, if you say "Check performance," Microsoft Copilot in Azure won't know what you're referring to. Instead, be more specific with prompts like "Check the performance of Azure SQL Database in the last 24 hours." --For code generation, specify the language and the desired outcome. For example: --- **Create a YAML file that represents ...**-- **Generate CLI script to ...**-- **Give me a Kusto query to retrieve ...**-- **Help me deploy my workload by generating Terraform that ...**--### Set expectations --The words you use help shape Microsoft Copilot in Azure's responses. Slightly different verbs can return different results, so consider the best ways to phrase your requests. 
For example: --- For high-level information, use phrases like **How to** or **Create a guide**.-- For actionable responses, use words like **Generate**, **Deploy**, or **Stop**.-- To fetch information and display it in your chat, use terms like **Fetch**, **List**, or **Retrieve**.-- To change your view or navigate to a new page, try phrases like **Show me**, **Take me to**, or **Navigate to**.--You can also mention your expertise level to tailor the advice to your understanding, whether you're a beginner or an expert. --### Add context about your scenario --Detail your goals and why you're undertaking a task to get more precise assistance, or clarify the technologies you're interested in. For example, instead of just saying **Deploy Azure function**, describe your end goal in detail, such as **Deploy Azure function for processing data from IoT devices with a new resource**. --### Break down your requests --For complex issues or tasks, break down your request into smaller, manageable parts. For example: **First, identify virtual machines that are running right now. After you have a working query, stop them.** You can also try using separate prompts for different parts of a larger scenario. --### Customize your code --When asking for on-demand code generation, specify known parameters, resource names, and locations. When you do so, Microsoft Copilot in Azure generates code with those values, so that you don't have to update them yourself. For example, rather than saying **Give me a CLI script to create a storage account**, you can say **Give me a CLI script to create a storage account named Storage1234 in the TestRG resource group in the EastUS region.** --### Use Azure terminology --When possible, use Azure-specific terms for resources, services, and tasks. Copilot in Azure may not grasp your intent if it doesn't know which parts of Azure you're referring to. If you aren't sure about which term to use, you can ask Copilot in Azure about general information about your scenario, then use the terms it provides in your prompt. --### Use the feedback loop --If you don't get the response you were looking for, try again, using the previous response to help refine your prompts. For example, you can ask Copilot in Azure to tell you more about a previous response or to explain more about one aspect. For generated code, you can ask to change one aspect or add another step. Don't be afraid to experiment to see what works best. --To leave feedback on any response that Microsoft Copilot in Azure provides, use the thumbs up/down control. This feedback helps us understand your expectations so that we can improve the Copilot in Azure experience over time. --## Next steps --- Learn about [some of the things you can do with Microsoft Copilot in Azure](capabilities.md).-- Review our [Responsible AI FAQ for Microsoft Copilot in Azure](responsible-ai-faq.md). |
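As an illustration of the "Customize your code" tip above, a prompt like "Give me a CLI script to create a storage account named Storage1234 in the TestRG resource group in the EastUS region" might yield something along these lines. This is a hedged sketch rather than Copilot's literal output; note that storage account names must be lowercase, so Storage1234 appears here as storage1234.

```bash
# Illustrative only; storage account names must be 3-24 lowercase letters and numbers,
# so "Storage1234" is shown here as "storage1234".
az storage account create \
  --name storage1234 \
  --resource-group TestRG \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2
```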
cost-management-billing | Cost Management Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-management-error-codes.md | For more information, see [Set up AWS integration with Cost Management](aws-inte ## Create a support request -If you're facing an error not listed above or need more help, file a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) and specify the issue type as **Billing**. +If you're facing an error not listed above or need more help, file a [support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) and specify the issue type as **Billing**. ## Next steps |
cost-management-billing | Reservation Utilization Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reservation-utilization-alerts.md | The following table explains the fields in the alert rule form. | | | | | | Alert type|Mandatory | The type of alert that you want to create. | Reservation utilization | | Services | Optional | Select if you want to filter the alert rule for any specific reservation type. **Note**: If you haven't applied a filter, then the alert rule monitors all available services by default. |Virtual machine, SQL Database, and so on. |-| Reservations | Optional | Select if you want to filter the alert rule for any specific reservations. **Note**: If you haven't applied a filter, then the alert rule monitors all available reservations by default. | Contoso\_Sub\_alias-SQL\_Server\_Standard\_Edition. | +| Reservations | Optional | Select if you want to filter the alert rule for any specific reservations. **Note**: If you haven't applied a filter, then the alert rule monitors all available reservations by default. | Contoso\_Sub\_alias-SQL\_Server\_Standard\_Edition. | | Utilization percentage | Mandatory | When any of the reservations have a utilization that is less than the target percentage, then the alert notification is sent. | Utilization is less than 95% | | Time grain | Mandatory | Choose the time over which reservation utilization value should be averaged. For example, if you choose Last 7-days, then the alert rule evaluates the last 7-day average reservation utilization of all reservations. **Note**: Last day reservation utilization is subject to change because the usage data refreshes. So, Cost Management relies on the last 7-day or 30-day averaged utilization, which is more accurate. | Last 7-days, Last 30-days| | Start on | Mandatory | The start date for the alert rule. | Current or any future date | The following information provides more detail. **Permissions needed to view reservations** - For partners to review reservations in the customer tenant, partners require foreign principal access to the customer subscription. The default permissions required for managing reservations are explained at [Who can manage a reservation by default](../reservations/view-reservations.md#who-can-manage-a-reservation-by-default). +> [!NOTE] +> Filtering options to monitor specific reservation categories or individual reservations aren't supported within this scope. As a result, the alert rule evaluates the utilization of all available reservations by default. + ## Next steps -If you haven't already set up cost alerts for budgets, credits, or department spending quotas, see [Use cost alerts to monitor usage and spending](cost-mgt-alerts-monitor-usage-spending.md). +If you haven't already set up cost alerts for budgets, credits, or department spending quotas, see [Use cost alerts to monitor usage and spending](cost-mgt-alerts-monitor-usage-spending.md). |
cost-management-billing | Save Share Views | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/save-share-views.md | To view the dashboard after you've pinned it, from the Azure portal menu, select 1. Select the **Pin** symbol to the right of the page header. 1. From the dashboard, you can now remove the original tile. -For more advanced dashboard customizations, you can also export the dashboard, customize the dashboard JSON, and upload a new dashboard. Dashboard creations can include other tile sizes or names without saving new views. For more information, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md). +For more advanced dashboard customizations, you can also export the dashboard, customize the dashboard JSON, and upload a new dashboard. Dashboard creations can include other tile sizes or names without saving new views. For more information, see [Create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards). ## Download data or charts If you selected 'Add a CSV download link' when creating the alert rule, you will ## Next steps -- For more information about creating dashboards, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md).+- For more information about creating dashboards, see [Create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards). - To learn more about Cost Management, see [Cost Management + Billing documentation](../index.yml). |
cost-management-billing | Add Change Subscription Administrator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/add-change-subscription-administrator.md | - Title: Add or change Azure subscription administrators -description: Describes how to add or change an Azure subscription administrator using Azure role-based access control (Azure RBAC). This article describes how to add or change the administrator role for a user us This article applies to a Microsoft Online Service Program (pay-as-you-go) account or a Visual Studio account. If you have a Microsoft Customer Agreement (Azure plan) account, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md). If you have an Azure Enterprise Agreement, see [Manage Azure Enterprise Agreement roles](understand-ea-roles.md). -Microsoft recommends that you manage access to resources using Azure RBAC. Classic administrative roles are retired. For more information, see [Prepare for Azure classic administrator roles retirement](classic-administrator-retire.md). ## Determine account billing administrator |
cost-management-billing | Avoid Unused Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/avoid-unused-subscriptions.md | When your subscription gets blocked, you receive another notification. The notif ## Related content -- [Create a support request in the Azure portal](../../azure-portal/supportability/how-to-create-azure-support-request.md)+- [Create a support request in the Azure portal](/azure/azure-portal/supportability/how-to-create-azure-support-request) |
cost-management-billing | Discover Cloud Footprint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/discover-cloud-footprint.md | + + Title: Discover your Microsoft cloud footprint FAQ +description: This article helps to answer frequently asked questions that customers have about their Microsoft cloud footprint. ++ Last updated : 09/17/2024+++++++# Discover your Microsoft cloud footprint FAQ ++This article helps to answer frequently asked questions that customers have about their Microsoft cloud footprint. Your cloud footprint commonly includes, but isn't limited to, legal entities, billing accounts, billing profiles, tenants, subscriptions, and so on. ++This article provides links to other articles that you can use to determine your entire Microsoft cloud footprint. +++## What are common Microsoft customer identifiers and how do I find them? ++The following table shows some common identifiers that Microsoft customers have. They're starting points to help you understand your Microsoft cloud footprint. ++| Identifier | Related documentation | +| | | +| **Tenant ID** | • [What is Microsoft Entra ID?](/entra/fundamentals/whatis) <br> • [How to find your tenant ID](/entra/fundamentals/how-to-find-tenant) <br> • [Change your organization's address and technical contact in the Microsoft 365 admin center](/microsoft-365/admin/manage/change-address-contact-and-more#what-do-the-organization-information-fields-mean) | +| **Billing Account ID** | • [View your billing accounts in Azure portal](view-all-accounts.md) <br> • [Understand your Microsoft business billing account](/microsoft-365/commerce/manage-billing-accounts) | +| **Subscription ID** | • [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id) <br> • [What Microsoft business subscriptions do I have?](/microsoft-365/admin/admin-overview/what-subscription-do-i-have) | +| **Agreement ID** | **Microsoft Customer Agreement:** <br> • [Microsoft Customer Agreement documentation](../microsoft-customer-agreement/index.yml) <br><br>**Microsoft Partner Agreement:** <br> • [The Microsoft Partner Agreement (MPA) for CSP](/partner-center/enroll/microsoft-partner-agreement) <br><br>**Enterprise Agreement:** <br> • [EA Billing administration on the Azure portal](direct-ea-administration.md) | +| **Legal Entity** | It's your company name, address, phone number, and so on. | +| **Microsoft Partner Network (MPN) ID** | [Add, change, or delete a Microsoft 365 subscription advisor partner](/microsoft-365/admin/misc/add-partner) | +| **Domain name** | [Add and replace your onmicrosoft.com fallback domain in Microsoft 365](/microsoft-365/admin/setup/add-or-replace-your-onmicrosoftcom-domain) | +++## Can I view all my accounts across the Microsoft cloud? ++No. The ability to view accounts depends on your agreement and who manages it. ++### Customer managed or Microsoft managed ++Both types of Microsoft Customer Agreement accounts are visible in the Billing account list views in the Microsoft 365 admin center and Azure portal. ++With these accounts, customers purchase directly from Microsoft or through a field seller. The customer owns the billing accounts and pays Microsoft directly for the service. ++### Partner managed ++Billing accounts for the end customer that purchases through a Cloud Solution Provider (CSP) partner aren't in the list views of Microsoft 365 admin center or Azure portal. 
++## How do I view my billing account and tenant information? ++Azure customers can use the following information to view their billing accounts and tenants: ++[Billing accounts and scopes in the Azure portal](view-all-accounts.md). The article provides details about all types of Azure accounts. ++Microsoft 365 customers can use the following information to view their billing accounts and tenants: ++- [Understand your Microsoft business billing account](/microsoft-365/commerce/manage-billing-accounts) +- [Manage your Microsoft business billing profiles](/microsoft-365/commerce/billing-and-payments/manage-billing-profiles) +++## How can I view every billing account created for a tenant? ++Global Administrators can view all direct billing accounts at the organization level and can elevate their billing account permissions. Purchases made for individual use by non-administrators are only visible to and managed by the original purchaser. ++For more information, see [Elevate access to manage billing accounts](elevate-access-global-admin.md). ++## How can I view employees participating in other tenants? ++Global Administrators can view log data showing employee activity in other tenants. For more information, see [Cross-tenant access activity workbook in Azure AD](/entra/identity/monitoring-health/workbook-cross-tenant-access-activity). ++## How do I view my partner billing information? ++As a partner, you can see your partner billing account, billing group, and associated customer information in Azure. This information isn't currently available in the Microsoft 365 admin center. ++- [View your billing accounts in Azure portal](view-all-accounts.md#microsoft-partner-agreement) +- [Get started with your Microsoft Partner Agreement billing account](../understand/mpa-overview.md) ++A customer can't view the billing account or billing group that's managed by a partner because the partner makes purchases on behalf of their customer. ++## How do I view my organization's billing account and tenant information? ++A customer can view their billing accounts and related tenant information in the Azure portal or Microsoft 365 admin center. ++Get access to your billing account: ++- [Understand your Microsoft business billing account](/microsoft-365/commerce/manage-billing-accounts#assign-billing-account-roles) +- [Billing roles for Microsoft Customer Agreements](understand-mca-roles.md) ++After you get access, here's how you can view the Billing account and related tenant info. ++- Customers can view their Azure Billing Accounts and tenants: + - [View your billing accounts in Azure portal](view-all-accounts.md) ++The article provides details about all types of Azure accounts. ++- Customers can view their Microsoft 365 billing accounts and tenants: + - [Understand your Microsoft business billing account](/microsoft-365/commerce/manage-billing-accounts) + - [Manage your Microsoft business billing profiles](/microsoft-365/commerce/billing-and-payments/manage-billing-profiles) ++## How do I manage purchases that I made myself? ++Administrators can manage trials and purchases that they made, and self-service purchases and trials made by non-administrators. For more information, see [Manage self-service purchases and trials (for admins)](/microsoft-365/commerce/subscriptions/manage-self-service-purchases-admins). ++## How can I get a list of subscriptions and tenants for a billing account? 
++You can view the tenants associated with a subscription in the Azure portal on the **Subscriptions** page under **Cost Management + Billing**. ++## How can I view the Azure tenant that I am currently signed in to? ++You can view your tenant information in the Azure portal using the information at [Manage Azure portal settings and preferences](/azure/azure-portal/set-preferences). ++## How can I organize costs for my billing accounts? ++You can organize your invoice for each of your billing accounts using the information in [Organize your invoice based on your needs](mca-section-invoice.md). ++## How do I understand different associated tenant types? ++When you add an associated billing tenant, you can enable two access settings: ++**Billing Management** - Allows billing account owners to assign roles to users in the associated billing tenant. Essentially, it gives them permission to access billing information and make purchasing decisions. ++**Provisioning** - Allows the invited tenant's subscriptions to be billed to the inviting billing account. ++- If you want to understand the differences between tenant types in Azure, see [Manage billing across multiple tenants using associated billing tenants](manage-billing-across-tenants.md). +- If you want to understand the differences between tenant types in Microsoft 365, see [Manage billing across multiple tenants in the Microsoft 365 admin center](/microsoft-365/commerce/billing-and-payments/manage-multi-tenant-billing). ++## How do I add an associated billing tenant? ++For Azure: ++1. Sign in to the Azure portal. +2. Search for **Cost Management + Billing**. +3. Select **Access control (IAM)** on the left side of the page. +4. Select **Associated billing tenants** at the top of the page. +5. Select **Add** and provide the tenant ID or domain name for the tenant you want to add. You can also give it a friendly name. +6. Choose one or both options for access settings: Billing management and Provisioning. For more information, see [Manage billing across multiple tenants using associated billing tenants](manage-billing-across-tenants.md). ++For Microsoft 365: ++1. Sign in to the [Microsoft 365 admin center](https://go.microsoft.com/fwlink/p/?linkid=2024339). +2. Under **Billing** > **Billing Accounts**, select a billing account. +3. Select the **Associated billing tenants** tab. ++There you can manage the existing associated billing tenants or add new ones. For more information, see [Manage billing across multiple tenants in the Microsoft 365 admin center](/microsoft-365/commerce/billing-and-payments/manage-multi-tenant-billing). ++## How can I see which tenants my users are authenticating to? ++You can view tenants with B2B relationships in the [Cross-tenant access activity workbook](/entra/identity/monitoring-health/workbook-cross-tenant-access-activity). ++> [!NOTE] +> The tenants that your users access by B2B authentication aren't necessarily part of your organization. ++## How can I take over unmanaged directories owned by my organization? ++Review the domains in your registrar that aren't verified to your tenant. If you can't register a domain that you reserved in your registrar, it might be associated with an unmanaged directory. Global administrators can claim or take over unmanaged directories (also called _unmanaged tenants_) that were created by members of their organization through free sign-up offers.
If your registrar shows you pay for a domain that isn't part of your home tenant, it might be used in an unmanaged directory. For more information, see [Admin takeover of an unmanaged directory](/entra/identity/users/domains-admin-takeover). ++## How can I regain access to a tenant owned by my organization? ++If you're locked out of a tenant, you must open a [support ticket](https://support.microsoft.com/topic/global-customer-service-phone-numbers-c0389ade-5640-e588-8b0e-28de8afeb3f2). The Data Protection Team can help you: ++- Reset credentials of an administrator account +- Claim ownership of normal tenants ++## How can I restrict users in my organization from creating new tenants? ++You can enable a tenant-level policy to restrict users from creating new tenants in their organization. For more information, see [Restrict new organization creation, Microsoft Entra tenant policy](/azure/devops/organizations/accounts/azure-ad-tenant-policy-restrict-org-creation). ++You can also restrict subscriptions from moving from one tenant to another. It's a common pattern that an employee might follow when they create a tenant. For more information, see [Manage Azure subscription policies](manage-azure-subscription-policy.md). ++## How can I review audit logs for tenants created by users in my organization? ++You can view audit logs in the Microsoft Entra admin center. Events relating to tenant creation are tagged as Directory Management. For more information, see [Azure Active Directory (Azure AD) audit activity reference](/azure/active-directory/reports-monitoring/reference-audit-activities). ++To learn about notifications for audit log events, follow the tutorial in [Enable security notifications for audit log events](/entra/identity/authentication/tutorial-enable-security-notifications-for-audit-logs). ++## Related content ++- For Azure: + - [Billing accounts and scopes in the Azure portal](view-all-accounts.md) +- For Microsoft 365: + - [Understand your Microsoft business billing account](/microsoft-365/commerce/manage-billing-accounts) + - [Manage your Microsoft business billing profiles](/microsoft-365/commerce/billing-and-payments/manage-billing-profiles) |
cost-management-billing | Link Partner Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md | Before you link your partner ID, your customer must give you access to their Azu - **Service principal**: Your customer can add an app or script from your organization in their directory and assign any Azure role. The identity of the app or script is known as a service principal. -- **Azure Lighthouse**: Your customer can delegate a subscription (or resource group) so that your users can work on it from within your tenant. For more information, see [Azure Lighthouse](../../lighthouse/overview.md).+- **Azure Lighthouse**: Your customer can delegate a subscription (or resource group) so that your users can work on it from within your tenant. For more information, see [Azure Lighthouse](/azure/lighthouse/overview). ## Link to a partner ID You can't see the customer in the reports due to following reasons The link between the partner ID and the account is done for each customer tenant. Link the partner ID in each customer tenant. -However, if you're managing customer resources through [Azure Lighthouse](../../lighthouse/overview.md), you should create the link in your service provider tenant, using an account that has access to the customer resources. +However, if you're managing customer resources through [Azure Lighthouse](/azure/lighthouse/overview), you should create the link in your service provider tenant, using an account that has access to the customer resources. -**How do I link my partner ID if my company uses [Azure Lighthouse](../../lighthouse/overview.md) to access customer resources?** +**How do I link my partner ID if my company uses [Azure Lighthouse](/azure/lighthouse/overview) to access customer resources?** For Azure Lighthouse activities to be recognized, you need to associate your Partner ID with at least one user account that has access to each of your onboarded customer subscriptions. The association is needed in your service provider tenant, rather than in each customer tenant. For simplicity, we recommend creating a service principal account in your tenant If you've already onboarded a customer, you can link the partner ID to a user account that already has permission to work in that customer's tenant so that you don't have to perform another deployment. -For more information, see [Onboard a customer to Azure Lighthouse](../../lighthouse/how-to/onboard-customer.md). +For more information, see [Onboard a customer to Azure Lighthouse](/azure/lighthouse/how-to/onboard-customer). **Does linking a partner ID work with Azure Stack?** |
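The partner ID association described above can also be created from the command line. The following is a minimal sketch, assuming the Az.ManagementPartner PowerShell module and that you're signed in as the user or service principal that has access to the customer (or Azure Lighthouse delegated) resources; the partner ID values are placeholders.

```powershell
# Install the management partner module if it isn't already available.
Install-Module -Name Az.ManagementPartner -Scope CurrentUser

# Link the signed-in identity to your Microsoft partner ID (placeholder value).
New-AzManagementPartner -PartnerId 123456

# Verify, change, or remove the association later.
Get-AzManagementPartner
Update-AzManagementPartner -PartnerId 123457
Remove-AzManagementPartner -PartnerId 123457
```

For Azure Lighthouse scenarios, run this in the service provider tenant with an account that has access to the onboarded customer subscriptions, as noted above.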
cost-management-billing | Manage Consumption Commitment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-consumption-commitment.md | + + Title: Manage a Microsoft Azure Consumption Commitment resource +description: Learn how to manage your Microsoft Azure Consumption Commitment (MACC) resource, including moving it across resource groups or subscriptions. +++++ Last updated : 09/18/2024++#customer intent: As a Microsoft Customer Agreement billing owner, I want to learn about managing a MACC so that I can move it when needed. +++# Manage a Microsoft Azure Consumption Commitment resource under a subscription ++When you accept a Microsoft Azure Consumption Commitment (MACC) in a Microsoft Customer Agreement, the MACC resource gets placed in a subscription and resource group. The resource contains the metadata related to the MACC, including the status of the MACC, commitment amount, start date, end date, and System ID. You can view the metadata in the Azure portal. +++## Move MACC across resource groups or subscriptions ++You can move the MACC resource to another resource group or subscription. Moving it works the same way as moving other Azure resources. ++Moving a MACC resource to another subscription or resource group is a metadata change. The move doesn't affect the commitment. The destination resource group or subscription must be within the same billing profile where the MACC is currently located. ++### To move a MACC ++Here are the high-level steps to move a MACC resource. For more information about moving an Azure resource, see [Move Azure resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). ++1. In the [Azure portal](https://portal.azure.com), navigate to **Resource groups**. +2. Select the resource group that contains the MACC resource. +3. Select the MACC resource. +4. At the top of the page, select **Move** and then select **Move to another subscription** or **Move to another resource group**. +5. Follow the instructions to move the MACC resource. +6. After the move is complete, verify that the MACC resource is in the new resource group or subscription. ++After a MACC moves, its resource URI changes. ++### To view the MACC resource URI ++1. In the [Azure portal](https://portal.azure.com), search for **Microsoft Azure Consumption Commitments**. +2. Select the MACC resource. +3. On the Overview page, in the left navigation menu, expand **Settings**, and then select **Properties**. +4. The MACC resource URI is the **ID** value. ++Here's an example image: ++++## Delete MACC ++You can't delete an active MACC resource. The MACC must be **Expired** or **Canceled** before you can delete it. ++## Related content ++- [Move Azure resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md) |
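As a rough command-line equivalent of the portal steps above, the move can also be scripted with Azure PowerShell. This is a sketch only: the resource URI, resource group, and subscription values are placeholders, and the destination must stay within the same billing profile as noted above.

```powershell
# MACC resource URI copied from the portal (Settings > Properties > ID); placeholder value.
$maccId = "/subscriptions/<subscription-id>/resourceGroups/<source-rg>/providers/<macc-provider>/<macc-name>"

# Move the MACC resource to another resource group in the same subscription.
Move-AzResource -ResourceId $maccId -DestinationResourceGroupName "<destination-rg>"

# Or move it to a resource group in a different subscription.
Move-AzResource -ResourceId $maccId `
    -DestinationSubscriptionId "<destination-subscription-id>" `
    -DestinationResourceGroupName "<destination-rg>"
```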
cost-management-billing | Mpa Request Ownership | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mpa-request-ownership.md | Azure Marketplace products, which are available for subscriptions that are manag Access for existing users, groups, or service principals that was assigned using [Azure role-based access control (Azure RBAC role)](../../role-based-access-control/overview.md) isn't affected during the transition. The partner won't get any new Azure RBAC role access to the subscriptions. -The partners should work with the customer to get access to subscriptions. The partners need to get either Admin on Behalf Of - AOBO or [Azure Lighthouse](../../lighthouse/concepts/cloud-solution-provider.md) access to open support tickets. +The partners should work with the customer to get access to subscriptions. The partners need to get either Admin on Behalf Of - AOBO or [Azure Lighthouse](/azure/lighthouse/concepts/cloud-solution-provider) access to open support tickets. ### Power BI connectivity |
cost-management-billing | Switch Azure Offer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/switch-azure-offer.md | There's no service downtime for any users associated with the subscription. Howe #### Quota increases are reset -When you switch offers, any [limit or quota increases above the default limit](../../azure-portal/supportability/regional-quota-requests.md) are reset. There's no service downtime, even if you have more resources beyond the default limit. For example, you're using 200 cores on your subscription, then switching offers resets your cores quota back to the default of 20 cores. The VMs that use the 200 cores are unaffected and would continue to run. If you don't make another quota increase request, however, you can't provision any more cores. +When you switch offers, any [limit or quota increases above the default limit](/azure/azure-portal/supportability/regional-quota-requests) are reset. There's no service downtime, even if you have more resources beyond the default limit. For example, you're using 200 cores on your subscription, then switching offers resets your cores quota back to the default of 20 cores. The VMs that use the 200 cores are unaffected and would continue to run. If you don't make another quota increase request, however, you can't provision any more cores. #### Billing |
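If you want to confirm where your vCPU quota landed after an offer switch like the one described above, a quick check is possible with Azure PowerShell. This is a sketch assuming the Az.Compute module and a signed-in session; the region name is a placeholder.

```powershell
# Show current vCPU usage against the (possibly reset) regional quota limits.
Get-AzVMUsage -Location "eastus" |
    Where-Object { $_.Name.LocalizedValue -like "*vCPUs*" } |
    Select-Object @{ n = 'Quota'; e = { $_.Name.LocalizedValue } }, CurrentValue, Limit |
    Format-Table
```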
cost-management-billing | Cannot Create Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/cannot-create-vm.md | When you use an EA dev/test subscription under an account that isn't marked as d For other assistance, follow these links: -* [How to manage an Azure support request](../../azure-portal/supportability/how-to-manage-azure-support-request.md) +* [How to manage an Azure support request](/azure/azure-portal/supportability/how-to-manage-azure-support-request) * [Azure support ticket REST API](/rest/api/support) * Engage with us on [X](https://x.com/azuresupport) * Get help from your peers in the [Microsoft question and answer](/answers/products/azure) |
cost-management-billing | How To Create Azure Support Request Ea | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/how-to-create-azure-support-request-ea.md | If you're still unable to resolve the issue, continue creating your support requ Next, we collect more details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer. -1. On the Details tab, complete the **Problem details** section so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can upload a file, such as a log file or output from diagnostics. For more information on file uploads, see [File upload guidelines](../../azure-portal/supportability/how-to-manage-azure-support-request.md#file-upload-guidelines). +1. On the Details tab, complete the **Problem details** section so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can upload a file, such as a log file or output from diagnostics. For more information on file uploads, see [File upload guidelines](/azure/azure-portal/supportability/how-to-manage-azure-support-request#file-upload-guidelines). 1. In the **Share diagnostic information** section, select **Yes** or **No**. Selecting **Yes** allows Azure support to gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. If you prefer not to share this information, select **No**. In some cases, there will be more options to choose from. If you have an MSA, have an administrator create an organizational account for y Follow these links to learn more: -* [How to manage an Azure support request](../../azure-portal/supportability/how-to-manage-azure-support-request.md) +* [How to manage an Azure support request](/azure/azure-portal/supportability/how-to-manage-azure-support-request) * [Azure support ticket REST API](/rest/api/support) * Engage with us on [X](https://x.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure) |
cost-management-billing | Understand Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-usage.md | +#customer intent: As a billing administrator, I want to understand usage and charges so that I can manage Azure billing. # Understand the terms in your Azure usage and charges file The detailed usage and charges file contains daily rated usage based on negotiat purchases (for example, reservations, Marketplace fees), and refunds for the specified period. Fees don't include credits, taxes, or other charges or discounts. You manually download the usage and charges file. -The information in the usage and charges file is the same information that's [exported from Cost Management](../costs/tutorial-export-acm-data.md). And, it's the same information that's retrieved from the Cost Details API. For more information about choosing a method to get cost details, see [Choose a cost details solution](../automate/usage-details-best-practices.md). +The information in the usage and charges file is the same information that gets [exported from Cost Management](../costs/tutorial-export-acm-data.md). And, it's the same information that gets retrieved from the Cost Details API. For more information about choosing a method to get cost details, see [Choose a cost details solution](../automate/usage-details-best-practices.md). ++## Charges for account types The following table covers which charges are included for each account type. Account type | Azure usage | Marketplace usage | Purchases | Refunds | | | | Enterprise Agreement (EA) | Yes | Yes | Yes | No Microsoft Customer Agreement (MCA) | Yes | Yes | Yes | Yes-Pay-as-you-go (PAYG) | Yes | Yes | No | No +Pay-as-you-go | Yes | Yes | No | No To learn more about Marketplace orders (also known as external services), see [Understand your Azure external service charges](understand-azure-marketplace-charges.md). If you have usage or charges that you don't recognize, there are several things - Find people responsible for the resource and engage with them - Analyze the audit logs - Analyze user permissions to the resource's parent scope-- Create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to help identify the charges+- To help identify the charges, create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) For more information, see [Analyze unexpected charges](analyze-unexpected-charges.md). -Note that Azure doesn't log most user actions. Instead, Microsoft logs resource usage for billing. If you notice a usage spike in the past and you didn't have logging enabled, Microsoft can't pinpoint the cause. Enable logging for the service that you want to view the increased usage for so that the appropriate technical team can assist you with the issue. ++Azure doesn't log most user actions. Instead, Microsoft logs resource usage for billing. If you notice a usage spike in the past and you didn't have logging enabled, Microsoft can't pinpoint the cause. Enable logging for the service that you want to view the increased usage for so that the appropriate technical team can assist you with the issue. ## Need help? Contact us. |
data-factory | Connector Deprecation Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md | Title: Planned connector deprecations for Azure Data Factory description: This page describes future deprecations for some connectors of Azure Data Factory.--++ Previously updated : 09/13/2024 Last updated : 09/19/2024 # Planned connector deprecations for Azure Data Factory This article describes future deprecations for some connectors of Azure Data Fac | Connector|Release stage |End of Support Date |Disabled Date | |:-- |:-- |:-- | :-- | -| [Google BigQuery (legacy)](connector-google-bigquery-legacy.md)  | End of support announced and new version available | October 31, 2024 | January 10, 2024 | -| [MariaDB (legacy driver version)](connector-mariadb.md)  | End of support announced and new version available | October 31, 2024 | January 10, 2024 | -| [MySQL (legacy driver version)](connector-mysql.md)  | End of support announced and new version available | October 31, 2024| January 10, 2024| -| [Salesforce (legacy)](connector-salesforce-legacy.md)   | End of support announced and new version available | October 11, 2024 | January 10, 2024 | -| [Salesforce Service Cloud (legacy)](connector-salesforce-service-cloud-legacy.md)   | End of support announced and new version available | October 11, 2024 |January 10, 2024 | -| [PostgreSQL (legacy)](connector-postgresql-legacy.md)   | End of support announced and new version available |October 31, 2024 | January 10, 2024 | -| [Snowflake (legacy)](connector-snowflake-legacy.md)   | End of support announced and new version available | October 31, 2024 | January 10, 2024 | +| [Google BigQuery (legacy)](connector-google-bigquery-legacy.md)  | End of support announced and new version available | October 31, 2024 | January 10, 2025 | +| [MariaDB (legacy driver version)](connector-mariadb.md)  | End of support announced and new version available | October 31, 2024 | January 10, 2025 | +| [MySQL (legacy driver version)](connector-mysql.md)  | End of support announced and new version available | October 31, 2024| January 10, 2025| +| [Salesforce (legacy)](connector-salesforce-legacy.md)   | End of support announced and new version available | October 11, 2024 | January 10, 2025| +| [Salesforce Service Cloud (legacy)](connector-salesforce-service-cloud-legacy.md)   | End of support announced and new version available | October 11, 2024 |January 10, 2025 | +| [PostgreSQL (legacy)](connector-postgresql-legacy.md)   | End of support announced and new version available |October 31, 2024 | January 10, 2025 | +| [Snowflake (legacy)](connector-snowflake-legacy.md)   | End of support announced and new version available | October 31, 2024 | January 10, 2025 | | [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) | End of support announced |December 31, 2024 | December 31, 2024 | | [Concur (Preview)](connector-concur.md) | End of support announced | December 31, 2024 | December 31, 2024 | | [Drill](connector-drill.md) | End of support announced | December 31, 2024 | December 31, 2024 | |
data-manager-for-agri | Concepts Llm Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-llm-apis.md | Our copilot templates make generative AI in agriculture a reality. - An instance of [Azure Data Manager for Agriculture](quickstart-install-data-manager-for-agriculture.md) - An instance of [Azure OpenAI Service](/azure/ai-services/openai/how-to/create-resource) created in your Azure subscription - [Azure Key Vault](/azure/key-vault/general/quick-create-portal)-- [Azure Container Registry](../container-registry/container-registry-get-started-portal.md)+- [Azure Container Registry](/azure/container-registry/container-registry-get-started-portal) ## High-level architecture |
databox-online | Azure Stack Edge Deploy Aks On Azure Stack Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md | Before you begin, ensure that: - You have a Microsoft account with credentials to access Azure portal, and access to an Azure Stack Edge Pro GPU device. The Azure Stack Edge device is configured and activated using instructions in [Set up and activate your device](azure-stack-edge-gpu-deploy-checklist.md). - You have at least one virtual switch created and enabled for compute on your Azure Stack Edge device. For detailed steps, see [Create virtual switches](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=single-node#configure-virtual-switches). - You have a client to access your device that's running a supported operating system. If using a Windows client, make sure that it's running PowerShell 5.0 or later.-- Before you enable Azure Arc on the Kubernetes cluster, make sure that you've enabled and registered `Microsoft.Kubernetes` and `Microsoft.KubernetesConfiguration` resource providers against your subscription. For detailed steps, see [Register resource providers via Azure CLI](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#register-providers-for-azure-arc-enabled-kubernetes).+- Before you enable Azure Arc on the Kubernetes cluster, make sure that you've enabled and registered `Microsoft.Kubernetes` and `Microsoft.KubernetesConfiguration` resource providers against your subscription. For detailed steps, see [Register resource providers via Azure CLI](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli#register-providers-for-azure-arc-enabled-kubernetes). - If you intend to deploy Azure Arc for Kubernetes cluster, you need to create a resource group. You must have owner level access to the resource group. To verify the access level for the resource group, go to **Resource group** > **Access control (IAM)** > **View my access**. Under **Role assignments**, you must be listed as an Owner. Before you begin, ensure that: Depending on the workloads you intend to deploy, you may need to ensure the following **optional** steps are also completed: -- If you intend to deploy [custom locations](../azure-arc/platform/conceptual-custom-locations.md) on your Arc-enabled cluster, you need to register the `Microsoft.ExtendedLocation` resource provider against your subscription.+- If you intend to deploy [custom locations](/azure/azure-arc/platform/conceptual-custom-locations) on your Arc-enabled cluster, you need to register the `Microsoft.ExtendedLocation` resource provider against your subscription. You must fetch the custom location object ID and use it to enable custom locations via the PowerShell interface of your device. Depending on the workloads you intend to deploy, you may need to ensure the foll PS /home/user> ``` - For more information, see [Create and manage custom locations in Arc-enabled Kubernetes](../azure-arc/kubernetes/custom-locations.md). + For more information, see [Create and manage custom locations in Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/custom-locations). - If deploying Kubernetes or PMEC workloads: - You may have selected a specific workload profile using the local UI or using PowerShell.
Detailed steps are documented for the local UI in [Configure compute IPs](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1) and for PowerShell in [Change Kubernetes workload profiles](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-workload-profiles). |
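The prerequisites above point to the Azure CLI steps for registering the `Microsoft.Kubernetes`, `Microsoft.KubernetesConfiguration`, and (optionally) `Microsoft.ExtendedLocation` resource providers. For readers working in PowerShell instead, a minimal equivalent sketch, assuming the Az module and an account with permission to register providers on the subscription, looks like this:

```powershell
# Register the resource providers required for Azure Arc-enabled Kubernetes.
Register-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes
Register-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration

# Only needed if you plan to deploy custom locations on the Arc-enabled cluster.
Register-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation

# Registration is asynchronous; re-run until RegistrationState shows 'Registered'.
Get-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes |
    Select-Object ProviderNamespace, RegistrationState
```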
databox-online | Azure Stack Edge Gpu 2101 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2101-release-notes.md | The following table provides a summary of known issues in the 2101 release. |**3.**|Kubernetes |Edge container registry does not work when web proxy is enabled.|The functionality will be available in a future release. | |**4.**|Kubernetes |Edge container registry does not work with IoT Edge modules.| | |**5.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration&view=aspnetcore-3.1&preserve-view=true#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**6.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**6.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**7.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**8.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**9.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. | |
databox-online | Azure Stack Edge Gpu 2103 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2103-release-notes.md | The following table provides a summary of known issues carried over from the pre |**20**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**21.**|Kubernetes Dashboard | *Https* endpoint for Kubernetes Dashboard with SSL certificate is not supported. | | |**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**24.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**25.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**26.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. | |
databox-online | Azure Stack Edge Gpu 2105 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2105-release-notes.md | The following table provides a summary of known issues carried over from the pre |**20**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**21.**|Kubernetes Dashboard | *Https* endpoint for Kubernetes Dashboard with SSL certificate is not supported. | | |**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**24.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**25.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**26.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. | |
databox-online | Azure Stack Edge Gpu 2106 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2106-release-notes.md | The following table provides a summary of known issues carried over from the pre |**20**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**21.**|Kubernetes Dashboard | *Https* endpoint for Kubernetes Dashboard with SSL certificate is not supported. | | |**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**24.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**25.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**26.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. | |
databox-online | Azure Stack Edge Gpu 2110 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2110-release-notes.md | The following table provides a summary of known issues carried over from the pre |**15.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. || |**16**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**17.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**18.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**18.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**19.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**20.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**21.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. | |
databox-online | Azure Stack Edge Gpu 2111 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2111-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. | |
databox-online | Azure Stack Edge Gpu 2202 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2202-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy is not supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. | |
databox-online | Azure Stack Edge Gpu 2203 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2203-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2205 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2205-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2207 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2207-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2209 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2209-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2210 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2210-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2301 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2301-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2303 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2303-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2304 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2304-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2309 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2309-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you might not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2312 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2312-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you might not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2403 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2403-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you might not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu 2407 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2407-release-notes.md | The following table provides a summary of known issues carried over from the pre |**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || |**17.**|Internet Explorer|If enhanced security features are enabled, you might not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|-|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters). | |**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |
databox-online | Azure Stack Edge Gpu Deploy Arc Kubernetes Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md | This article shows you how to enable Azure Arc on an existing Kubernetes cluster This procedure assumes that you have read and understood the following articles: - [Kubernetes workloads on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-workload-management.md)-- [What is Azure Arc-enabled Kubernetes (Preview)?](../azure-arc/kubernetes/overview.md)+- [What is Azure Arc-enabled Kubernetes (Preview)?](/azure/azure-arc/kubernetes/overview) ## Prerequisites Before you enable Azure Arc on the Kubernetes cluster, you need to enable and re ![Register Kubernetes resource providers 3](media/azure-stack-edge-gpu-connect-powershell-interface/register-k8-resource-providers-4.png) -You can also register resource providers via the `az cli`. For more information, see [Register the two providers for Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md#register-providers-for-azure-arc-enabled-kubernetes). +You can also register resource providers via the `az cli`. For more information, see [Register the two providers for Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/quickstart-connect-cluster#register-providers-for-azure-arc-enabled-kubernetes). ## Create service principal, assign role Follow these steps to configure the Kubernetes cluster for Azure Arc management: [10.128.44.240]: PS> ``` -A conceptual overview of these agents is available [here](../azure-arc/kubernetes/conceptual-agent-overview.md). +A conceptual overview of these agents is available [here](/azure/azure-arc/kubernetes/conceptual-agent-overview). ### Remove Arc from the Kubernetes cluster To remove the Azure Arc management, follow these steps: > [!NOTE]-> By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. You need to set `--sync-garbage-collection` in Arc OperatorParams to allow the deletion of resources when deleted from git repository. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters) +> By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. You need to set `--sync-garbage-collection` in Arc OperatorParams to allow the deletion of resources when deleted from git repository. For more information, see [Delete a configuration](/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster#additional-parameters) ## Next steps |
databox-online | Azure Stack Edge Gpu Deploy Sample Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module.md | Before you begin, make sure you have: - You've access to a GPU enabled 1-node Azure Stack Edge Pro device. This device is activated with a resource in Azure. See [Activate the device](azure-stack-edge-gpu-deploy-activate.md). - You've configured compute on this device. Follow the steps in [Tutorial: Configure compute on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-configure-compute.md).-- An Azure Container Registry (ACR). Go to the **Access keys** blade and make a note of the ACR login server, username, and password. For more information, go to [Quickstart: Create a private container registry using the Azure portal](../container-registry/container-registry-get-started-portal.md#create-a-container-registry).+- An Azure Container Registry (ACR). Go to the **Access keys** blade and make a note of the ACR login server, username, and password. For more information, go to [Quickstart: Create a private container registry using the Azure portal](/azure/container-registry/container-registry-get-started-portal#create-a-container-registry). - The following development resources on a Windows client: - [Azure CLI 2.0 or later](https://aka.ms/installazurecliwindows) - [Docker CE](https://store.docker.com/editions/community/docker-ce-desktop-windows). You may have to create an account to download and install the software. |
databox-online | Azure Stack Edge Gpu Deploy Stateless Application Git Ops Guestbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook.md | This article shows you how to build and deploy a simple, multi-tier web applicat The deployment is done using GitOps on the Azure Arc-enabled Kubernetes cluster on your Azure Stack Edge Pro device. -This procedure is intended for people who have reviewed the [Kubernetes workloads on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-workload-management.md) and are familiar with the concepts of [What is Azure Arc-enabled Kubernetes (Preview)](../azure-arc/kubernetes/overview.md). +This procedure is intended for people who have reviewed the [Kubernetes workloads on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-workload-management.md) and are familiar with the concepts of [What is Azure Arc-enabled Kubernetes (Preview)](/azure/azure-arc/kubernetes/overview). > [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. |
databox-online | Azure Stack Edge Gpu Edge Container Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-edge-container-registry.md | This article describes how to enable the Edge container registry and use it from ### About Edge container registry -Containerized compute applications run on container images and these images are stored in registries. Registries can be public such as Docker Hub, private, or cloud provider managed such as Azure Container Registry. For more information, see [About registries, repositories, and images](../container-registry/container-registry-concepts.md). +Containerized compute applications run on container images and these images are stored in registries. Registries can be public such as Docker Hub, private, or cloud provider managed such as Azure Container Registry. For more information, see [About registries, repositories, and images](/azure/container-registry/container-registry-concepts). An Edge container registry provides a repository at the Edge, on your Azure Stack Edge Pro device. You can use this registry to store and manage your private container images. |
databox-online | Azure Stack Edge Gpu Kubernetes Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-kubernetes-overview.md | For more information on deploying Kubernetes cluster, go to [Deploy a Kubernetes ### Kubernetes and Azure Arc -Azure Arc is a hybrid management tool that will allow you to deploy applications on your Kubernetes clusters. Azure Arc also allows you to use Azure Monitor for containers to view and monitor your clusters. For more information, go to [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md). For information on Azure Arc pricing, go to [Azure Arc pricing](https://azure.microsoft.com/services/azure-arc/#pricing). +Azure Arc is a hybrid management tool that will allow you to deploy applications on your Kubernetes clusters. Azure Arc also allows you to use Azure Monitor for containers to view and monitor your clusters. For more information, go to [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview). For information on Azure Arc pricing, go to [Azure Arc pricing](https://azure.microsoft.com/services/azure-arc/#pricing). <!-- confirm with Anoob/Rohan if this needs to be updated as Azure Arc is now GA--> |
databox-online | Azure Stack Edge Gpu Kubernetes Workload Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-kubernetes-workload-management.md | There are three primary ways of deploying your workloads. Each of these deployme - **Azure Arc-enabled Kubernetes deployment**: Azure Arc-enabled Kubernetes is a hybrid management tool that will allow you to deploy applications on your Kubernetes clusters. You connect to the Kubernetes cluster on your Azure Stack Edge Pro device via the `azure-arc namespace`. The agents deployed in this namespace are responsible for connectivity to Azure. You apply the deployment configuration by using the GitOps-based configuration management. - Azure Arc-enabled Kubernetes will also allow you to use Azure Monitor for containers to view and monitor your cluster. For more information, go to [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md). + Azure Arc-enabled Kubernetes will also allow you to use Azure Monitor for containers to view and monitor your cluster. For more information, go to [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview). Beginning March 2021, Azure Arc-enabled Kubernetes will be generally available to the users and standard usage charges apply. As a valued preview customer, the Azure Arc-enabled Kubernetes will be available to you at no charge for Azure Stack Edge device(s). To avail the preview offer, create a [Support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest): |
ddos-protection | Ddos Response Strategy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-response-strategy.md | Azure DDoS Protection identifies and mitigates DDoS attacks without any user int ### When to contact Microsoft support -Azure DDoS Network Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, including when you should engage the DRR team, see [DDoS Rapid Response](ddos-rapid-response.md). Azure DDoS IP Protection customers should create a request to connect with Microsoft support. To learn more, see [Create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +Azure DDoS Network Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, including when you should engage the DRR team, see [DDoS Rapid Response](ddos-rapid-response.md). Azure DDoS IP Protection customers should create a request to connect with Microsoft support. To learn more, see [Create a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Post-attack steps |
defender-for-iot | Plan Prepare Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/plan-prepare-deploy.md | Prepare a workstation from where you can run Defender for IoT deployment activit - Terminal software, such as PuTTY -- A supported browser for connecting to sensor consoles and the Azure portal. For more information, see [recommended browsers for the Azure portal](../../../azure-portal/azure-portal-supported-browsers-devices.md#recommended-browsers).+- A supported browser for connecting to sensor consoles and the Azure portal. For more information, see [recommended browsers for the Azure portal](/azure/azure-portal/azure-portal-supported-browsers-devices#recommended-browsers). - Required firewall rules configured, with access open for required interfaces. For more information, see [Networking requirements](../networking-requirements.md). |
defender-for-iot | Eiot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md | This procedure describes how to install Enterprise IoT monitoring software on [y In the **Sites and sensors** page, Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**. For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md). > [!TIP]-> If you don't see your Enterprise IoT data in Defender for IoT as expected, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](../../azure-portal/set-preferences.md). +> If you don't see your Enterprise IoT data in Defender for IoT as expected, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](/azure/azure-portal/set-preferences). > > If you still don't view your data as expected, [validate your sensor setup](extra-deploy-enterprise-iot.md#validate-your-enterprise-iot-sensor-setup) from the CLI. |
defender-for-iot | How To Manage Device Inventory For Organizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md | Use the **Device inventory** page in [Defender for IoT](https://portal.azure.com For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot). +>[!Note] +> +>Currently, devices discovered in the Azure portal aren't synchronized with Defender XDR, and therefore the list of devices discovered could be different in each portal. +> + ## View the device inventory To view detected devices in the **Device inventory** page in the Azure portal, go to **Defender for IoT** > **Device inventory**. |
defender-for-iot | Activate Deploy Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/legacy-central-management/activate-deploy-management.md | Activate your on-premises management console using a downloaded file from the Az 1. In the **Plans** grid, select your subscription. - If you don't see the subscription that you're looking for, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](../../../azure-portal/set-preferences.md). + If you don't see the subscription that you're looking for, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](/azure/azure-portal/set-preferences). 1. In the toolbar, select **Download on-premises management console activation file**. The activation file downloads. |
deployment-environments | How To Request Quota Increase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-request-quota-increase.md | This guide explains how to submit a support request to increase the number of re If your organization uses Deployment Environments extensively, you might encounter a quota limit during deployment. When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase or a quota increase) to extend the number of resources available. The request process allows the Azure Deployment Environments team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments. -To learn more about the general process for creating Azure support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +To learn more about the general process for creating Azure support requests, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Prerequisites |
dev-box | How To Get Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-get-help.md | If you can't resolve the issue, open a support request to contact Azure support: **[Contact Microsoft Azure Support - Microsoft Support](https://support.microsoft.com/topic/contact-microsoft-azure-support-2315e669-8b1f-493b-5fb1-d88a8736ffe4).** -- To learn more about support requests, refer to: [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).+- To learn more about support requests, refer to: [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Support for dev team leads Developer team leads are often assigned the DevCenter Project Admin role. Project Admins have access to manage projects and pools. If you don't see your issue in the discussion forum, you can report it to the ## Next steps -- To learn more about support requests, refer to: [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).+- To learn more about support requests, refer to: [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). |
dev-box | How To Request Quota Increase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-request-quota-increase.md | To complete the support request form, configure the remaining settings. When you ## Related content - Check the default quota for each resource type by subscription type with [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits).-- Learn more about the general [process for creating Azure support requests](../azure-portal/supportability/how-to-create-azure-support-request.md).+- Learn more about the general [process for creating Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request). |
devtest-labs | Devtest Lab Reference Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-reference-architecture.md | DevTest Labs has no built-in quotas or limits, but other Azure resources that la - Resources per resource group per resource type. The default limit for [resources per resource group per resource type is 800](../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits). Putting all VMs in the same resource group hits this limit much sooner, especially if the VMs have many extra disks. -- Storage accounts. Every lab in DevTest Labs comes with a storage account. The Azure quota for [number of storage accounts per region per subscription is 250](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-storage-limits) by default. So the maximum number of DevTest Labs in one region is also 250. With a quota increase, you can create up to 500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](../quotas/storage-account-quota-requests.md).+- Storage accounts. Every lab in DevTest Labs comes with a storage account. The Azure quota for [number of storage accounts per region per subscription is 250](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-storage-limits) by default. So the maximum number of DevTest Labs in one region is also 250. With a quota increase, you can create up to 500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](/azure/quotas/storage-account-quota-requests). - Role assignments. A role assignment gives a user or principal access to a resource. Azure has a limit of [2,000 role assignments per subscription](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-rbac-limits). |
digital-twins | Troubleshoot Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-performance.md | If you're still experiencing performance issues after troubleshooting with the s Follow these steps: 1. Gather [metrics](how-to-monitor.md#metrics-and-alerts) and [logs](how-to-monitor.md#diagnostics-logs) for your instance.-2. Navigate to [Azure Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. Use the prompts to provide details of your issue, see recommended solutions, share your metrics/log files, and submit any other information that the support team can use to help investigate your issue. For more information on creating support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +2. Navigate to [Azure Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. Use the prompts to provide details of your issue, see recommended solutions, share your metrics/log files, and submit any other information that the support team can use to help investigate your issue. For more information on creating support requests, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Next steps |
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | Outbound endpoints have the following limitations: - IPv6 enabled subnets aren't supported. - DNS private resolver doesn't support Azure ExpressRoute FastPath.-- DNS private resolver isn't compatible with [Azure Lighthouse](../lighthouse/overview.md).+- DNS private resolver isn't compatible with [Azure Lighthouse](/azure/lighthouse/overview). - To see if Azure Lighthouse is in use, search for **Service providers** in the Azure portal and select **Service provider offers**. ## Next steps |
dns | Tutorial Alias Tm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-tm.md | For more information, see [Resolve errors for resource provider registration](/a Create a virtual network and a subnet to place your web servers in. 1. In the Azure portal, enter *virtual network* in the search box at the top of the portal, and then select **Virtual networks** from the search results.-1. In **Virtual networks**, select **+ Create**. -1. In **Create virtual network**, enter or select the following information in the **Basics** tab: +2. In **Virtual networks**, select **+ Create**. +3. In **Create virtual network**, enter or select the following information in the **Basics** tab: | Setting | Value | ||-| Create a virtual network and a subnet to place your web servers in. | Name | Enter *myTMVNet*. | | Region | Select your region. | -1. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page. -1. In the **IP Addresses** tab, enter the following information: +4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page. +5. In the **IP Addresses** tab, enter the following information: | Setting | Value | ||-| | IPv4 address space | Enter *10.10.0.0/16*. | -1. Select **+ Add subnet**, and enter this information in the **Add subnet**: +6. Select **+ Add subnet**, and enter this information in the **Add subnet**: | Setting | Value | ||-| | Subnet name | Enter *WebSubnet*. | | Subnet address range | Enter *10.10.0.0/24*. | -1. Select **Add**. -1. Select the **Review + create** tab or select the **Review + create** button. -1. Select **Create**. +7. Select **Add**. +8. Select the **Review + create** tab or select the **Review + create** button. +9. Select **Create**. ## Create web server virtual machines Create two Windows Server virtual machines, and install IIS web server on them, Create two Windows Server 2019 virtual machines. 1. In the Azure portal, enter *virtual machine* in the search box at the top of the portal, and then select **Virtual machines** from the search results.-1. In **Virtual machines**, select **+ Create** and then select **Azure virtual machine**. -1. In **Create a virtual machine**, enter or select the following information in the **Basics** tab: +2. In **Virtual machines**, select **+ Create** and then select **Azure virtual machine**. +3. In **Create a virtual machine**, enter or select the following information in the **Basics** tab: | Setting | Value | ||-| Create two Windows Server 2019 virtual machines. | Public inbound ports | Select **None**. | -1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. --1. In the **Networking** tab, enter or select the following information: +4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. +5. In the **Networking** tab, enter or select the following information: | Setting | Value | ||-| Create two Windows Server 2019 virtual machines. | Public inbound ports | Select **Allow selected ports**. | | Select inbound ports | Select **HTTP (80)**, **HTTPS (443)** and **RDP (3389)**. | -1. Select **Review + create**. -1. Review the settings, and then select **Create**. -1. Repeat previous steps to create the second virtual machine. Enter *Web-02* in the **Virtual machine name** and *Web-02-ip* in the **Name** of **Public IP**. For the other settings, use the same information from the previous steps used with first virtual machine. +6. 
Select **Review + create**. +7. Review the settings, and then select **Create**. +8. Repeat previous steps to create the second virtual machine. Enter *Web-02* in the **Virtual machine name** and *Web-02-ip* in the **Name** of **Public IP**. For the other settings, use the same information from the previous steps used with first virtual machine. Each virtual machine deployment may take a few minutes to complete. Each virtual machine deployment may take a few minutes to complete. Install IIS on both **Web-01** and **Web-02** virtual machines. 1. In the **Connect** page of **Web-01** virtual machine, select **RDP** and then **Download RDP File**.-1. Open *Web-01.rdp* file, and select **Connect**. -1. Enter the username and password entered during virtual machine creation. -1. On the **Server Manager** dashboard, select **Manage** then **Add Roles and Features**. -1. Select **Server Roles** or select **Next** three times. On the **Server Roles** screen, select **Web Server (IIS)**. -1. Select **Add Features**, and then select **Next**. +2. Open *Web-01.rdp* file, and select **Connect**. +3. Enter the username and password entered during virtual machine creation. +4. On the **Server Manager** dashboard, select **Manage** then **Add Roles and Features**. +5. Select **Server Roles** or select **Next** three times. On the **Server Roles** screen, select **Web Server (IIS)**. +6. Select **Add Features**, and then select **Next**. :::image type="content" source="./media/tutorial-alias-tm/iis-web-server-installation.png" alt-text="Screenshot of Add Roles and Features Wizard in Windows Server 2019 showing how to install the I I S Web Server by adding the Web Server role."::: -1. Select **Confirmation** or select **Next** three times, and then select **Install**. The installation process takes a few minutes to finish. -1. After the installation finishes, select **Close**. -1. Go to *C:\inetpub\wwwroot* and open *iisstart.htm* with Notepad or any editor of your choice to edit the default IIS web page. -1. Replace all the text in the file with `Hello World from Web-01` and save the changes to *iisstart.htm*. -1. Open a web browser. Browse to **localhost** to verify that the default IIS web page appears. +7. Select **Confirmation** or select **Next** three times, and then select **Install**. The installation process takes a few minutes to finish. +8. After the installation finishes, select **Close**. +9. Go to *C:\inetpub\wwwroot* and open *iisstart.htm* with Notepad or any editor of your choice to edit the default IIS web page. +10. Replace all the text in the file with `Hello World from Web-01` and save the changes to *iisstart.htm*. +11. Open a web browser. Browse to **localhost** to verify that the default IIS web page appears. :::image type="content" source="./media/tutorial-alias-tm/iis-on-web-01-vm-in-web-browser.png" alt-text="Screenshot of Internet Explorer showing the I I S Web Server default page of first virtual machine."::: -1. Repeat previous steps to install IIS web server on **Web-02** virtual machine. Use `Hello World from Web-02` to replace all the text in *iisstart.htm*. +12. Repeat previous steps to install IIS web server on **Web-02** virtual machine. Use `Hello World from Web-02` to replace all the text in *iisstart.htm*. ### Add a DNS label Public IP addresses need DNS labels to work with Traffic Manager. 1. In the Azure portal, enter *TMResourceGroup* in the search box at the top of the portal, and then select **TMResourceGroup** from the search results.-1. 
In the **TMResourceGroup** resource group, select the **Web-01-ip** public IP address. -1. Under **Settings**, select **Configuration**. -1. Enter *web01pip* in the **DNS name label**. -1. Select **Save**. +2. In the **TMResourceGroup** resource group, select the **Web-01-ip** public IP address. +3. Under **Settings**, select **Configuration**. +4. Enter *web01pip* in the **DNS name label**. +5. Select **Save**. :::image type="content" source="./media/tutorial-alias-tm/ip-dns-name-label-inline.png" alt-text="Screenshot of the Configuration page of Azure Public IP Address showing D N S name label." lightbox="./media/tutorial-alias-tm/ip-dns-name-label-expanded.png"::: -1. Repeat the previous steps for the **Web-02-ip** public IP address and enter *web02pip* in the **DNS name label**. +6. Repeat the previous steps for the **Web-02-ip** public IP address and enter *web02pip* in the **DNS name label**. ## Create a Traffic Manager profile 1. In the **Overview** page of **Web-01-ip** public IP address, note the IP address for later use. Repeat this step for the **Web-02-ip** public IP address.-1. In the Azure portal, enter *Traffic Manager profile* in the search box at the top of the portal, and then select **Traffic Manager profiles**. -1. Select **+ Create**. -1. In the **Create Traffic Manager profile** page, enter or select the following information: +2. In the Azure portal, enter *Traffic Manager profile* in the search box at the top of the portal, and then select **Traffic Manager profiles**. +3. Select **+ Create**. +4. In the **Create Traffic Manager profile** page, enter or select the following information: | Setting | Value | ||-| Public IP addresses need DNS labels to work with Traffic Manager. :::image type="content" source="./media/tutorial-alias-tm/create-traffic-manager-profile.png" alt-text="Screenshot of the Create Traffic Manager profile page showing the selected settings."::: -1. Select **Create**. --1. After **TM-alias-test** deployment finishes, select **Go to resource**. -1. In the **Endpoints** page of **TM-alias-test** Traffic Manager profile, select **+ Add** and enter or select the following information: +5. Select **Create**. +6. After **TM-alias-test** deployment finishes, select **Go to resource**. +7. In the **Endpoints** page of **TM-alias-test** Traffic Manager profile, select **+ Add** and enter or select the following information: | Setting | Value | ||-| Public IP addresses need DNS labels to work with Traffic Manager. :::image type="content" source="./media/tutorial-alias-tm/add-endpoint-tm-inline.png" alt-text="Screenshot of the Endpoints page in Traffic Manager profile showing selected settings for adding an endpoint." lightbox="./media/tutorial-alias-tm/add-endpoint-tm-expanded.png"::: -1. Select **Add**. --1. Repeat the last two steps to create the second endpoint. Enter or select the following information: +8. Select **Add**. +9. Repeat the last two steps to create the second endpoint. Enter or select the following information: | Setting | Value | ||-| Public IP addresses need DNS labels to work with Traffic Manager. Create an alias record that points to the Traffic Manager profile. 1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal, and then select **contoso.com** DNS zone from the search results.-1. In the **Overview** page of **contoso.com** DNS zone, select the **+ Record set** button. -1. In the **Add record set**, leave the **Name** box empty to represent the apex domain name. An example is `contoso.com`. -1. 
Select **A** for the **Type**. -1. Select **Yes** for the **Alias record set**, and then select the **Azure Resource** for the **Alias type**. -1. Select the **TM-alias-test** Traffic Manager profile for the **Azure resource**. -1. Select **OK**. +2. On the **Overview** page of **contoso.com** DNS zone, select the **+ Record set** button. +3. In **Add record set**, leave the **Name** box empty to represent the apex domain name. An example is `contoso.com`. +4. Select **A** for the **Type**. +5. Select **Yes** for the **Alias record set**, and then select the **Azure Resource** for the **Alias type**. +6. Select the **TM-alias-test** Traffic Manager profile for the **Azure resource**. +7. Select **OK**. :::image type="content" source="./media/tutorial-alias-tm/add-record-set-tm-inline.png" alt-text="Screenshot of adding an alias record to refer to the Traffic Manager profile." lightbox="./media/tutorial-alias-tm/add-record-set-tm-expanded.png"::: +> [!NOTE] +> DNS Queries to your newly aliased Traffic Manager recordset are displayed in your Traffic Manager profile billing. For more information on Traffic Manager billing, see [Traffic Manager pricing](https://azure.microsoft.com/pricing/details/traffic-manager). + ## Test the alias record 1. From a web browser, browse to `contoso.com` or your apex domain name. You see the IIS default page with `Hello World from Web-01`. The Traffic Manager directed traffic to **Web-01** IIS web server because it has the highest priority. Close the web browser and shut down **Web-01** virtual machine. Wait a few minutes for the virtual machine to completely shut down.-1. Open a new web browser, and browse again to `contoso.com` or your apex domain name. -1. You should see the IIS default page with `Hello World from Web-02`. The Traffic Manager handled the situation and directed traffic to the second IIS server after shutting down the first server that has the highest priority. +2. Open a new web browser, and browse again to `contoso.com` or your apex domain name. +3. You should see the IIS default page with `Hello World from Web-02`. The Traffic Manager handled the situation and directed traffic to the second IIS server after shutting down the first server that has the highest priority. ## Clean up resources When no longer needed, you can delete all resources created in this tutorial by following these steps: 1. On the Azure portal menu, select **Resource groups**.-1. Select the **TMResourceGroup** resource group. -1. On the **Overview** page, select **Delete resource group**. -1. Enter *TMResourceGroup* and select **Delete**. -1. On the Azure portal menu, select **All resources**. -1. Select **contoso.com** DNS zone. -1. On the **Overview** page, select the **@** record created in this tutorial. -1. Select **Delete** and then **Yes**. +2. Select the **TMResourceGroup** resource group. +3. On the **Overview** page, select **Delete resource group**. +4. Enter *TMResourceGroup* and select **Delete**. +5. On the Azure portal menu, select **All resources**. +6. Select **contoso.com** DNS zone. +7. On the **Overview** page, select the **@** record created in this tutorial. +8. Select **Delete** and then **Yes**. ## Next steps |
event-grid | Event Schema Container Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-container-registry.md | The connectedRegistry object has the following properties: ## Tutorials and how-tos |Title |Description | |||-| [Quickstart: send container registry events](../container-registry/container-registry-event-grid-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure CLI to send Container Registry events. | +| [Quickstart: send container registry events](/azure/container-registry/container-registry-event-grid-quickstart?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure CLI to send Container Registry events. | ## Next steps |
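The quickstart linked above drives the whole flow from the Azure CLI; as a rough sketch of the core step, subscribing a webhook endpoint to a registry's events could look something like the following. The registry name, resource group, and endpoint URL are placeholders, and the endpoint must complete Event Grid's validation handshake before it receives events.

```azurecli-interactive
# Resource ID of the container registry that will emit events.
acrid=$(az acr show --name <registry-name> --resource-group <resource-group> --query id -o tsv)

# Route the registry's events to a webhook endpoint.
az eventgrid event-subscription create \
  --name acr-events-to-webhook \
  --source-resource-id $acrid \
  --endpoint https://<your-endpoint>/api/updates
```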
event-grid | Handler Webhooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-webhooks.md | See the following articles for an overview and examples of using webhooks as eve ||| | Quickstart: create and route custom events with - [Azure CLI](custom-event-quickstart.md), [PowerShell](custom-event-quickstart-powershell.md), and [portal](custom-event-quickstart-portal.md). | Shows how to send custom events to a WebHook. | | Quickstart: route Blob storage events to a custom web endpoint with - [Azure CLI](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json), [PowerShell](../storage/blobs/storage-blob-event-quickstart-powershell.md?toc=%2fazure%2fevent-grid%2ftoc.json), and [portal](blob-event-quickstart-portal.md). | Shows how to send blob storage events to a WebHook. |-| [Quickstart: send container registry events](../container-registry/container-registry-event-grid-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure CLI to send Container Registry events. | +| [Quickstart: send container registry events](/azure/container-registry/container-registry-event-grid-quickstart?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure CLI to send Container Registry events. | | [Overview: receive events to an HTTP endpoint](receive-events.md) | Describes how to validate an HTTP endpoint to receive events from an event subscription, and receive and deserialize events. | |
event-grid | Create Topic Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/create-topic-subscription.md | In this quickstart, you create a topic in Event Grid on Kubernetes, create a sub ## Prerequisites -1. [Connect your Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md). +1. [Connect your Kubernetes cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster). 1. [Install Event Grid extension on Kubernetes cluster](install-k8s-extension.md). This extension deploys Event Grid to a Kubernetes cluster. As an Azure location extension, a custom location lets you use your Azure Arc-en customlocationid=$(az customlocation show -n $customlocationname -g $resourcegroupname --query id -o tsv) ``` - For more information on creating custom locations, see [Create and manage custom locations on Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/custom-locations.md). + For more information on creating custom locations, see [Create and manage custom locations on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/custom-locations). ## Create a topic In this section, you create a topic in the custom location you created in the previous step. Update resource group and Event Grid topic names before running the command. Update the location if you're using a location other than East US. |
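To make the topic-creation step after the custom-location lookup concrete, here's a hedged sketch that reuses the `$resourcegroupname` and `$customlocationid` variables from the quoted snippet; the topic name, location, and input schema are illustrative values, so adjust them as the quickstart describes.

```azurecli-interactive
# Create an Event Grid topic projected into the Arc custom location.
az eventgrid topic create \
  --name sample-topic-1 \
  --resource-group $resourcegroupname \
  --location eastus \
  --kind azurearc \
  --extended-location-name $customlocationid \
  --extended-location-type customlocation \
  --input-schema CloudEventSchemaV1_0
```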
event-grid | Get Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/get-support.md | If you run into an issue, [create an Azure support request](https://portal.azure 1. If you're experiencing problems in any other step of the installation steps, type **Event Grid on Kubernetes with Azure Arc** and select that option. 1. For **Resource**, select a suitable Azure resource or select **General question** if there's no resource available. 1. In the **Summary** field, provide a succinct description of your problem.-1. For Problem type, select **Cluster connect** if you're having problems with [connecting your cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md). For other issues, select a suitable option. For example, select **Management issues** if you're experiencing problems while deploying Event Grid or creating a topic or event subscription. +1. For Problem type, select **Cluster connect** if you're having problems with [connecting your cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster). For other issues, select a suitable option. For example, select **Management issues** if you're experiencing problems while deploying Event Grid or creating a topic or event subscription. 1. For Problem subtype, select a suitable option. 1. Select **Next: Solutions**. The Solutions tab is shown. 1. Read through the suggested solutions. If you do not find a suitable solution or the solution did not solve the issue, select **Next: Details** at the bottom of the page. |
event-grid | Install K8s Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/install-k8s-extension.md | -This article guides you through the steps to install Event Grid on an [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) cluster. +This article guides you through the steps to install Event Grid on an [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview) cluster. For brevity, this article refers to "Event Grid on Kubernetes extension" as "Event Grid on Kubernetes" or just "Event Grid". Following are the supported Kubernetes distributions to which Event Grid can be ## Event Grid Extension-The operation that installs an Event Grid service instance on a Kubernetes cluster is the creation of an Azure Arc cluster extension, which deploys both an **Event Grid broker** and an **Event Grid operator**. For more information on the function of the broker and operator, see [Event Grid on Kubernetes components](concepts.md#event-grid-on-kubernetes-components). [Azure Arc cluster extension](../../azure-arc/kubernetes/conceptual-extensions.md) feature provides lifecycle management using Azure Resource Manager (ARM) control plane operations to Event Grid deployed to Azure Arc-enabled Kubernetes clusters. +The operation that installs an Event Grid service instance on a Kubernetes cluster is the creation of an Azure Arc cluster extension, which deploys both an **Event Grid broker** and an **Event Grid operator**. For more information on the function of the broker and operator, see [Event Grid on Kubernetes components](concepts.md#event-grid-on-kubernetes-components). [Azure Arc cluster extension](/azure/azure-arc/kubernetes/conceptual-extensions) feature provides lifecycle management using Azure Resource Manager (ARM) control plane operations to Event Grid deployed to Azure Arc-enabled Kubernetes clusters. > [!NOTE]-> The preview version of the service only supports a single instance of the Event Grid extension on a Kubernetes cluster as the Event Grid extension is currently defined as a cluster-scoped extension. There is no support for namespace-scoped deployments for Event Grid yet that would allow for multiple instances to be deployed to a cluster. For more information, see [Extension scope](../../azure-arc/kubernetes/conceptual-extensions.md#extension-scope). +> The preview version of the service only supports a single instance of the Event Grid extension on a Kubernetes cluster as the Event Grid extension is currently defined as a cluster-scoped extension. There is no support for namespace-scoped deployments for Event Grid yet that would allow for multiple instances to be deployed to a cluster. For more information, see [Extension scope](/azure/azure-arc/kubernetes/conceptual-extensions#extension-scope). ## Prerequisites Before proceeding with the installation of Event Grid, make sure the following prerequisites are met. Before proceeding with the installation of Event Grid, make sure the following p 1. A cluster running on one of the [supported Kubernetes distributions](#supported-kubernetes-distributions). 1. [An Azure subscription](https://azure.microsoft.com/free/). 1. [PKI Certificates](#pki-certificate-requirements) to be used for establishing an HTTPS connection with the Event Grid broker.-1. [Connect your cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md). +1. [Connect your cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster). 
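The prerequisite of connecting the cluster to Azure Arc is itself a single CLI call; a minimal sketch, assuming the `connectedk8s` CLI extension is installed and your current kubeconfig context points at the target cluster:

```azurecli-interactive
# Register the current Kubernetes cluster with Azure Arc
# (creates a connected cluster resource in the given resource group).
az connectedk8s connect \
  --name <cluster-name> \
  --resource-group <resource-group>
```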
## Getting support If you run into an issue, see the [Troubleshooting](#troubleshooting) section for help with common conditions. If you still have problems, [create an Azure support request](get-support.md#how-to-create-a-support-request). To establish a secure HTTPS communication with the Event Grid broker and Event G ``` > [!IMPORTANT]- > A Custom Location needs to be created before attempting to deploy Event Grid topics. To create a custom location, you can select the **Context** page at the bottom 5 minutes after the "Your deployment is complete" notification is shown. Alternatively, you can create a custom location using the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.ExtendedLocation%2FCustomLocations). For more information, see the [Custom Location documentation](../../azure-arc/kubernetes/custom-locations.md). + > A Custom Location needs to be created before attempting to deploy Event Grid topics. To create a custom location, you can select the **Context** page at the bottom 5 minutes after the "Your deployment is complete" notification is shown. Alternatively, you can create a custom location using the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.ExtendedLocation%2FCustomLocations). For more information, see the [Custom Location documentation](/azure/azure-arc/kubernetes/custom-locations). 1. After the deployment succeeds, you'll be able to see an entry on the **Extensions** page with the name you provided to your Event Grid extension. If you see **Pending** for the **Install status**, wait for a few minutes, and then select **Refresh** on the toolbar. :::image type="content" source="./media/install-k8s-extension/extension-installed.png" alt-text="Event Grid extension - installed"::: To establish a secure HTTPS communication with the Event Grid broker and Event G 1. Create a Kubernetes extension that installs Event Grid components on your cluster. - For parameters ``cluster-name`` and ``resource-group``, you must use the same names provided when you [connected your cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md). + For parameters ``cluster-name`` and ``resource-group``, you must use the same names provided when you [connected your cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster). ``release-namespace`` is the namespace where Event Grid components will be deployed into. The default is **eventgrid-system**. You might want to provide a value to override the default. For example, you might want to have a single namespace for all Azure Arc-enabled services deployed to your cluster. If the namespace provided doesn't exist, it's created for you. > [!IMPORTANT]- > During the preview version, ``cluster`` is the only scope supported when creating or updating an Event Grid extension. That means the service only supports a single instance of the Event Grid extension on a Kubernetes cluster.There is no support for namespace-scoped deployments yet. For more information, see [Extension scope](../../azure-arc/kubernetes/conceptual-extensions.md#extension-scope). + > During the preview version, ``cluster`` is the only scope supported when creating or updating an Event Grid extension. That means the service only supports a single instance of the Event Grid extension on a Kubernetes cluster.There is no support for namespace-scoped deployments yet. 
For more information, see [Extension scope](/azure/azure-arc/kubernetes/conceptual-extensions#extension-scope). ```azurecli-interactive az k8s-extension create \ To establish a secure HTTPS communication with the Event Grid broker and Event G ### Custom location > [!IMPORTANT]-> A Custom Location needs to be created before attempting to deploy Event Grid topics. You can create a custom location using the [Azure portal](../../azure-arc/kubernetes/custom-locations.md#create-custom-location). +> A Custom Location needs to be created before attempting to deploy Event Grid topics. You can create a custom location using the [Azure portal](/azure/azure-arc/kubernetes/custom-locations#create-custom-location). ## Troubleshooting To establish a secure HTTPS communication with the Event Grid broker and Event G **Problem**: When you navigate to **Azure Arc** and select **Kubernetes cluster** on the left-hand side menu, the page displayed doesn't show the Kubernetes cluster where I intent to install Event Grid. -**Resolution**: Your Kubernetes cluster isn't registered with Azure. Follow the steps in article [Connect an existing Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md). If you have a problem during this step, file a [support request](#getting-support) with the Azure Arc-enabled Kubernetes team. +**Resolution**: Your Kubernetes cluster isn't registered with Azure. Follow the steps in article [Connect an existing Kubernetes cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster). If you have a problem during this step, file a [support request](#getting-support) with the Azure Arc-enabled Kubernetes team. ### Event Grid extension issues To establish a secure HTTPS communication with the Event Grid broker and Event G ## Next steps-[Create a custom location](../../azure-arc/kubernetes/custom-locations.md) and then follow instructions in the quick start [Route cloud events to Webhooks with Azure Event Grid on Kubernetes](create-topic-subscription.md). +[Create a custom location](/azure/azure-arc/kubernetes/custom-locations) and then follow instructions in the quick start [Route cloud events to Webhooks with Azure Event Grid on Kubernetes](create-topic-subscription.md). |
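Because the note above stresses that a custom location must exist before any Event Grid topics are deployed, the step can also be scripted instead of done in the portal. A hedged sketch, assuming the connected cluster and the Event Grid extension already exist; every angle-bracketed name is a placeholder.

```azurecli-interactive
# Resource IDs of the connected cluster and the Event Grid extension.
hostid=$(az connectedk8s show --name <cluster-name> --resource-group <resource-group> --query id -o tsv)
extensionid=$(az k8s-extension show --cluster-type connectedClusters \
  --cluster-name <cluster-name> --resource-group <resource-group> \
  --name <extension-name> --query id -o tsv)

# Create the custom location that topics and event subscriptions will target.
az customlocation create \
  --name <custom-location-name> \
  --resource-group <resource-group> \
  --namespace <kubernetes-namespace> \
  --host-resource-id $hostid \
  --cluster-extension-ids $extensionid
```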
event-grid | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/overview.md | Regardless of the edition of Event Grid you use, there's an **event publisher** ## Event Grid on Kubernetes with Azure Arc-Event Grid on Kubernetes with Azure Arc is an offering that allows you to run Event Grid on your own Kubernetes cluster. This capability is enabled by the use of [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). Through Azure Arc-enabled Kubernetes, a [supported Kubernetes cluster](install-k8s-extension.md#supported-kubernetes-distributions) connects to Azure. Once connected, you're able to [install Event Grid](install-k8s-extension.md) on it. +Event Grid on Kubernetes with Azure Arc is an offering that allows you to run Event Grid on your own Kubernetes cluster. This capability is enabled by the use of [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview). Through Azure Arc-enabled Kubernetes, a [supported Kubernetes cluster](install-k8s-extension.md#supported-kubernetes-distributions) connects to Azure. Once connected, you're able to [install Event Grid](install-k8s-extension.md) on it. ### Use case Event Grid on Kubernetes supports various event-driven integration scenarios. However, the main encompassing scenario supported and expressed as a user story is: Event Grid on Kubernetes with Azure Arc is offered without charge during its pre ## Next steps Follow these steps in the order to start routing events using Event Grid on Kubernetes. -1. [Connect your cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md). +1. [Connect your cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster). 1. [Install an Event Grid extension](install-k8s-extension.md), which is the actual resource that deploys Event Grid to a Kubernetes cluster. To learn more about the extension, see [Event Grid Extension](install-k8s-extension.md#event-grid-extension) section to learn more. -1. [Create a custom location](../../azure-arc/kubernetes/custom-locations.md). A custom location represents a namespace in the cluster and it's the place where topics and event subscriptions are deployed. +1. [Create a custom location](/azure/azure-arc/kubernetes/custom-locations). A custom location represents a namespace in the cluster and it's the place where topics and event subscriptions are deployed. 1. [Create a topic and one or more event subscriptions](create-topic-subscription.md). 1. [Publish events](create-topic-subscription.md). |
event-grid | Outlook Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/outlook-events.md | When an event is triggered, the Event Grid service sends data about that event t "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce", "type": "Microsoft.Graph.MessageCreated", "source": "/tenants/<tenant-id>/applications/<application-id>",- "subject": "Messages/<messaeg-id>", + "subject": "Users/<user-id>/Messages/<message-id>", "time": "2024-05-22T22:24:31.3062901Z", "datacontenttype": "application/json", "specversion": "1.0", When an event is triggered, the Event Grid service sends data about that event t "SubscriptionExpirationDateTime": "2024-06-22T23:56:30.1307708Z", "ChangeType": "created", "subscriptionId": "MTE1MTVlYTktMjVkZS00MjY3LWI1YzYtMjg0NzliZmRhYWQ2",- "resource": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Messages('<message id>')", + "resource": "Users/<user-id>/Messages/<message-id>", "clientState": "<client state>",+ "tenantId": "<tenant-id>", "resourceData": { "Id": "<message id>", "@odata.etag": "<tag id>",- "@odata.id": "https://outlook.office365.com/api/beta/Users('userId@tenantId')/Messages('<message id>')", + "@odata.id": "Users/<user-id>/Messages/<message-id>", "@odata.type": "#Microsoft.OutlookServices.Message", "OtherResourceData": "<some other resource data>" } |
expressroute | Get Correlation Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/get-correlation-id.md | This guide walks you through the steps to obtain the operation correlation ID fr ## Next steps -* File a support request with the correlation ID to help troubleshoot your issue. For more information, see [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +* File a support request with the correlation ID to help troubleshoot your issue. For more information, see [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). |
extended-zones | Request Quota Increase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/extended-zones/request-quota-increase.md | In this section, you request a quota increase in the Azure portal. 1. On the **My quotas** page, select the quota you want to increase from the **Quota name** column. Make sure that the **Adjustable** column shows **Yes** for this quota. > [!TIP]- > You can request an increase for a quota that is non-adjustable by submitting a support request. For more information, see [Request an increase for non-adjustable quotas](../quotas/per-vm-quota-requests.md#request-an-increase-for-non-adjustable-quotas). + > You can request an increase for a quota that is non-adjustable by submitting a support request. For more information, see [Request an increase for non-adjustable quotas](/azure/quotas/per-vm-quota-requests#request-an-increase-for-non-adjustable-quotas). 1. Select **New Quota Request**, then select **Enter a new limit**. |
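Before filing the increase request described above, it can help to see how much of the current quota is already consumed. A rough sketch with the core CLI, assuming the quota in question is a compute quota surfaced per location:

```azurecli-interactive
# List current usage against compute quotas for a location.
az vm list-usage --location <location> --output table
```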
governance | Migrating From Dsc Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/whats-new/migrating-from-dsc-extension.md | - Title: Planning a change from Desired State Configuration extension for Linux to machine configuration -description: Guidance for moving from Desired State Configuration extension to the machine configuration feature of Azure Policy. Previously updated : 02/01/2024----# Planning a change from Desired State Configuration extension for Linux to machine configuration --Machine configuration is the latest implementation of functionality that has been provided by the -PowerShell Desired State Configuration (DSC) extension for Linux virtual machines in Azure. When -possible, you should plan to move your content and machines to the new service. This article -provides guidance on developing a migration strategy. --New features in machine configuration: --- Advanced reporting through Azure Resource Graph including resource ID and state-- Manage multiple configurations for the same machine-- When machines drift from the desired state, you control when remediation occurs-- Linux machines consume PowerShell-based DSC resources--Before you begin, it's a good idea to read the conceptual overview information at the page -[Azure Policy's machine configuration][01]. --## Major differences --Configurations are deployed through the DSC extension for Linux in a "push" model, where the -operation is completed asynchronously. The deployment doesn't return until the configuration has -finished running inside the virtual machine. After deployment, no further information is returned -to Resource Manager. The monitoring and drift are managed within the machine. --Machine configuration processes configurations in a "pull" model. The extension is deployed to a -virtual machine and then jobs are executed based on machine configuration assignment details. It -isn't possible to view the status while the configuration in real time as it's being applied inside -the machine. It's possible to watch and correct drift from Azure Resource Manager after the -configuration is applied. --The DSC extension included **privateSettings** where secrets could be passed to the configuration, -such as passwords or shared keys. Secrets management hasn't yet been implemented for machine -configuration. --### Considerations for whether to migrate existing machines or only new machines --Machine configuration uses DSC version 3 with PowerShell version 7. DSC version 3 can coexist with -older versions of DSC in [Linux][02]. The implementations are separate. However, there's no -conflict detection. --For machines only intended to exist for days or weeks, update the deployment templates and switch -from the DSC extension to machine configuration. After testing, use the updated templates to build -future machines. --If a machine is planned to exist for months or years, you might choose to change which -configuration features of Azure manage the machine to take advantage of new features. --Using both platforms to manage the same configuration isn't advised. --## Understand migration --The best approach to migration is to recreate, test, and redeploy content first, and then use the -new solution for new machines. --The expected steps for migration are: --1. Download and expand the `.zip` package used for the DSC extension. -1. Examine the Managed Object Format (MOF) file and resources to understand the scenario. -1. 
Create custom DSC resources in PowerShell classes. -1. Update the MOF file to use the new resources. -1. Use the machine configuration authoring module to create, test, and publish a new package. -1. Use machine configuration for future deployments rather than DSC extension. --#### Consider decomposing complex configuration files --Machine configuration can manage multiple configurations per machine. Many configurations written -for the DSC extension for Linux assumed the limitation of managing a single configuration per -machine. To take advantage of the expanded capabilities offered by machine configuration, large -configuration files can be divided into many smaller configurations where each handles a specific -scenario. --There's no orchestration in machine configuration to control the order of how configurations are -sorted. Keep steps in a configuration together in one package if they must happen sequentially. --### Test content in Azure machine configuration --Read the page [How to create custom machine configuration package artifacts][03] to evaluate -whether your content from the DSC extension can be used with machine configuration. --When you reach the step [Author a configuration][04], use the MOF file from the DSC extension -package as the basis for creating a new MOF file and custom DSC resources. You must have the custom -PowerShell modules available in `$env:PSModulePath` before you can create a machine configuration -package. --#### Update deployment templates --If your deployment templates include the DSC extension (see [examples][05]), there are two changes -required. --First, replace the DSC extension with the [extension for the machine configuration feature][01]. --Then, add a [machine configuration assignment][06] that associates the new configuration package -(and hash value) with the machine. --#### Older nx\* modules for Linux DSC aren't compatible with DSCv3 --The modules that shipped with DSC for Linux on GitHub were created in the C programming language. -In the latest version of DSC, which is used by the machine configuration feature, modules for Linux -are written in PowerShell classes. None of the original resources are compatible with the new -platform. --As a result, new Linux packages require custom module development. --Linux content authored using Chef Inspec is still supported but should only be used for legacy -configurations. --#### Updated nx\* module functionality --A new open-source [nxtools module][07] has been released to help make managing Linux systems easier -for PowerShell users. --The module helps with managing common tasks such as: --- Managing users and groups-- Performing file system operations-- Managing services-- Performing archive operations-- Managing packages--The module includes class-based DSC resources for Linux and built-in machine configuration -packages. --To give feedback about this functionality, open an issue on the documentation. We currently _don't_ -accept PRs for this project, and support is best effort. --#### Do I need to add the Reasons property to custom resources? --Implementing the [Reasons property][08] provides a better experience when viewing the results of -a configuration assignment from the Azure portal. If the `Get` method in a module doesn't include -**Reasons**, generic output is returned with details from the properties returned by the `Get` -method. Therefore, it's optional for migration. 
--### Removing a configuration the DSC extension assigned in Linux --In previous versions of DSC, the DSC extension assigned a configuration through the Local -Configuration Manager (LCM). It's recommended to remove the DSC extension and reset the LCM. --> [!IMPORTANT] -> Removing a configuration in Local Configuration Manager doesn't "roll back" the settings in Linux -> that were set by the configuration. The action of removing the configuration only causes the LCM -> to stop managing the assigned configuration. The settings remain in place. --Use the `Remove.py` script as documented in -[Performing DSC Operations from the Linux Computer][09] --## Next steps --- [Develop a custom machine configuration package][10].-- Use the **GuestConfiguration** module to [create an Azure Policy definition][12] for at-scale- management of your environment. -- [Assign your custom policy definition][13] using Azure portal.--<!-- Reference link definitions --> -[01]: ../overview.md -[02]: /powershell/dsc/getting-started/lnxgettingstarted -[03]: ../how-to/develop-custom-package/2-create-package.md -[04]: ../how-to/develop-custom-package/2-create-package.md#author-a-configuration -[05]: /azure/virtual-machines/extensions/dsc-template -[06]: ../concepts/assignments.md -[07]: https://github.com/azure/nxtools#getting-started -[08]: ./psdsc-in-machine-configuration.md#special-requirements-for-get -[09]: https://github.com/Microsoft/PowerShell-DSC-for-Linux#performing-dsc-operations-from-the-linux-computer -[10]: ../how-to/develop-custom-package/overview.md -[12]: ../how-to/create-policy-definition.md -[13]: ../../policy/assign-policy-portal.md |
governance | Policy For Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md | Azure Policy makes it possible to manage and report on the compliance state of y Azure Policy for Kubernetes supports the following cluster environments: - [Azure Kubernetes Service (AKS)](/azure/aks/what-is-aks), through **Azure Policy's **Add-on** for AKS**-- [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md), through **Azure Policy's **Extension** for Arc**+- [Azure Arc enabled Kubernetes](/azure/azure-arc/kubernetes/overview), through **Azure Policy's **Extension** for Arc** > [!IMPORTANT] > The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Follow the instructions to [remove the add-ons](#remove-the-add-on). And the following output for clusters using managed identity: This article describes how to [create](#create-azure-policy-extension), [show extension status](#show-azure-policy-extension), and [delete](#delete-azure-policy-extension) the Azure Policy for Kubernetes extension. -For an overview of the extensions platform, see [Azure Arc cluster extensions](../../../azure-arc/kubernetes/conceptual-extensions.md). +For an overview of the extensions platform, see [Azure Arc cluster extensions](/azure/azure-arc/kubernetes/conceptual-extensions). ### Prerequisites If you already deployed Azure Policy for Kubernetes on an Azure Arc cluster usin 1. Ensure your Kubernetes cluster is a supported distribution. > [!NOTE]- > Azure Policy for Arc extension is supported on [the following Kubernetes distributions](../../../azure-arc/kubernetes/validation-program.md). + > Azure Policy for Arc extension is supported on [the following Kubernetes distributions](/azure/azure-arc/kubernetes/validation-program). -1. Ensure you met all the common prerequisites for Kubernetes extensions listed [here](../../../azure-arc/kubernetes/extensions.md) including [connecting your cluster to Azure Arc](../../../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli). +1. Ensure you met all the common prerequisites for Kubernetes extensions listed [here](/azure/azure-arc/kubernetes/extensions) including [connecting your cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli). > [!NOTE] > Azure Policy extension is supported for Arc enabled Kubernetes clusters [in these regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). For more information about troubleshooting the Add-on for Kubernetes, see the of the Azure Policy troubleshooting article. For Azure Policy extension for Arc extension related issues, go to:-- [Azure Arc enabled Kubernetes troubleshooting](../../../azure-arc/kubernetes/troubleshooting.md)+- [Azure Arc enabled Kubernetes troubleshooting](/azure/azure-arc/kubernetes/troubleshooting) For Azure Policy related issues, go to: - [Inspect Azure Policy logs](#logging) |
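For the Arc-enabled Kubernetes path described above, the extension install is also scriptable; a minimal sketch, assuming the cluster is already connected to Azure Arc and `<cluster-name>`/`<resource-group>` are placeholders:

```azurecli-interactive
# Install the Azure Policy extension on an Arc-enabled Kubernetes cluster.
az k8s-extension create \
  --cluster-type connectedClusters \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --extension-type Microsoft.PolicyInsights \
  --name azurepolicy

# Confirm that the extension reached the 'Installed' provisioning state.
az k8s-extension show \
  --cluster-type connectedClusters \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --name azurepolicy
```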
governance | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md | Specifically, some useful governance actions you can enforce with Azure Policy i - Enforcing the consistent application of taxonomic tags - Requiring resources to send diagnostic logs to a Log Analytics workspace -It's important to recognize that with the introduction of [Azure Arc](../../azure-arc/overview.md), you can extend your +It's important to recognize that with the introduction of [Azure Arc](/azure/azure-arc/overview), you can extend your policy-based governance across different cloud providers and even to your local datacenters. All Azure Policy data and objects are encrypted at rest. For more information, see |
governance | General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md | Troubleshoot your policy assignment's enforcement by doing the following steps: - The mode should be `all` for all resource types. - The mode should be `indexed` if the policy definition checks for tags or location. 1. Ensure that the scope of the resource isn't [excluded](../concepts/assignment-structure.md#excluded-scopes) or [exempt](../concepts/exemption-structure.md).-1. Verify that the resource payload matches the policy logic. This verification can be done by [capturing an HTTP Archive (HAR) trace](../../../azure-portal/capture-browser-trace.md) or reviewing the Azure Resource Manager template (ARM template) properties. +1. Verify that the resource payload matches the policy logic. This verification can be done by [capturing an HTTP Archive (HAR) trace](/azure/azure-portal/capture-browser-trace) or reviewing the Azure Resource Manager template (ARM template) properties. 1. For other common issues and solutions, see [Troubleshoot: Compliance not as expected](#scenario-compliance-isnt-as-expected). If you still have an issue with your duplicated and customized built-in policy definition or custom definition, create a support ticket under **Authoring a policy** to route the issue correctly. |
governance | Query Language | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md | To support the _Open Query_ portal experience, Azure Resource Graph Explorer has The scope of the subscriptions or [management groups](../../management-groups/overview.md) from which resources are returned by a query defaults to a list of subscriptions based on the context of the authorized user. If a management group or a subscription list isn't defined, the query scope is-all resources, and includes [Azure Lighthouse](../../../lighthouse/overview.md) delegated +all resources, and includes [Azure Lighthouse](/azure/lighthouse/overview) delegated resources. The list of subscriptions or management groups to query can be manually defined to change the scope |
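Because the default scope follows the caller's context as described above, it is often clearer to pass the scope explicitly when querying. A minimal sketch using the Azure CLI `resource-graph` extension; the subscription and management group IDs are placeholders.

```azurecli-interactive
# Requires the resource-graph CLI extension: az extension add --name resource-graph

# Scope a query to specific subscriptions...
az graph query -q "Resources | summarize count() by type" \
  --subscriptions 11111111-1111-1111-1111-111111111111 22222222-2222-2222-2222-222222222222

# ...or to one or more management groups.
az graph query -q "Resources | summarize count() by type" \
  --management-groups "MyManagementGroup"
```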
governance | First Query Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-portal.md | Queries that result in a list can also be pinned to the dashboard. The feature i When a query is run from the portal, you can select **Directory** to change the query's scope for the directory, management group, or subscription of the resources you want to query. When **Pin to dashboard** is selected, the results are added to your Azure dashboard with the scope used when the query was run. -For more information about working with dashboards, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md). +For more information about working with dashboards, see [Create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards). ## Clean up resources |
governance | Keyboard Shortcuts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/keyboard-shortcuts.md | -This article lists the keyboard shortcuts that work in the Azure Resource Graph Explorer page of the Azure portal. For a list of global keyboard shortcuts or a list of keyboard shortcuts available for other pages, visit [Keyboard shortcuts in the Azure portal](../../../azure-portal/azure-portal-keyboard-shortcuts.md). +This article lists the keyboard shortcuts that work in the Azure Resource Graph Explorer page of the Azure portal. For a list of global keyboard shortcuts or a list of keyboard shortcuts available for other pages, visit [Keyboard shortcuts in the Azure portal](/azure/azure-portal/azure-portal-keyboard-shortcuts). ## Keyboard shortcuts for editing queries This article lists the keyboard shortcuts that work in the Azure Resource Graph ## Next steps -- [Keyboard shortcuts in the Azure portal](../../../azure-portal/azure-portal-keyboard-shortcuts.md)+- [Keyboard shortcuts in the Azure portal](/azure/azure-portal/azure-portal-keyboard-shortcuts) - [Understanding the Azure Resource Graph query language](../concepts/query-language.md) |
governance | General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/troubleshoot/general.md | There are several methods of dealing with throttled requests: #### Issue -Customers with access to more than 1,000 subscriptions, including cross-tenant subscriptions with [Azure Lighthouse](../../../lighthouse/overview.md), can't fetch data across all subscriptions in a single call to Azure Resource Graph. +Customers with access to more than 1,000 subscriptions, including cross-tenant subscriptions with [Azure Lighthouse](/azure/lighthouse/overview), can't fetch data across all subscriptions in a single call to Azure Resource Graph. #### Cause |
hdinsight | Cluster Availability Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-availability-monitor-logs.md | As an example, run the **Availability rate** sample query by selecting **Run** o > [!NOTE] > Availability rate is measured over a 24-hour period, so your cluster will need to run for at least 24 hours before you see accurate availability rates. -You can pin this table to a shared dashboard by clicking **Pin** in the upper-right corner. If you don't have any writable shared dashboards, you can see how to create one here: [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md#publish-and-share-a-dashboard). +You can pin this table to a shared dashboard by clicking **Pin** in the upper-right corner. If you don't have any writable shared dashboards, you can see how to create one here: [Create and share dashboards in the Azure portal](/azure/azure-portal/azure-portal-dashboards#publish-and-share-a-dashboard). ## Azure Monitor alerts |
hdinsight | Apache Ambari Troubleshoot Stale Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-stale-alerts.md | If your problem wasn't mentioned here or you're unable to solve it, visit one of * Connect with [@AzureSupport](https://x.com/azuresupport) on X. This is the official Microsoft Azure account for improving customer experience. It connects the Azure community to the right resources: answers, support, and experts. -* If you need more help, submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). To get there, select Help (**?**) from the portal menu or open the **Help + support** pane. For more information, see [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). +* If you need more help, submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). To get there, select Help (**?**) from the portal menu or open the **Help + support** pane. For more information, see [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Support for subscription management and billing is included with your Microsoft Azure subscription. Technical support is available through the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Disk Space | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-disk-space.md | If you didn't see your problem, or are unable to solve your issue, visit one of * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Invalidnetworkconfigurationerrorcode Cluster Creation Fails | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Lost Key Vault Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-lost-key-vault-access.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Port Conflict | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-port-conflict.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Wasbs Storage Exception | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-wasbs-storage-exception.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Yarn Log Invalid Bcfile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-yarn-log-invalid-bcfile.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hbase Troubleshoot Bindexception Address Use | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-bindexception-address-use.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hbase Troubleshoot Hbase Hbck Inconsistencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-hbase-hbck-inconsistencies.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hbase Troubleshoot Phoenix Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-phoenix-connectivity.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hbase Troubleshoot Phoenix No Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-phoenix-no-data.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hbase Troubleshoot Storage Exception Reset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-storage-exception-reset.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hbase Troubleshoot Timeouts Hbase Hbck | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-timeouts-hbase-hbck.md | If you didn't see your problem or are unable to solve your issue, visit one of t - Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).+- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hbase Troubleshoot Unassigned Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-unassigned-regions.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Data Retention Issues Expired Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-data-retention-issues-expired-data.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: `answers`, `support`, and `experts`. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Hbase Performance Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-hbase-performance-issues.md | If your problem remains unresolved, visit one of the following channels for more - Connect with [@AzureSupport](https://x.com/azuresupport). This is the official Microsoft Azure account for improving customer experience. It connects the Azure community to the right resources: answers, support, and experts. -- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Your Microsoft Azure subscription includes access to subscription management and billing support, and technical support is provided through one of the [Azure support plans](https://azure.microsoft.com/support/plans/).+- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Your Microsoft Azure subscription includes access to subscription management and billing support, and technical support is provided through one of the [Azure support plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-rest-api.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Hdinsight Hadoop Provision Linux Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md | Different cluster types have different node types, numbers of nodes, and node si If you're just trying out HDInsight, we recommend you use one Worker node. For more information about HDInsight pricing, see [HDInsight pricing](https://go.microsoft.com/fwLink/?LinkID=282635&clcid=0x409). > [!NOTE]-> The cluster size limit varies among Azure subscriptions. Contact [Azure billing support](../azure-portal/supportability/how-to-create-azure-support-request.md) to increase the limit. +> The cluster size limit varies among Azure subscriptions. Contact [Azure billing support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to increase the limit. When you use the Azure portal to configure the cluster, the node size is available through the **Configuration + pricing** tab. In the portal, you can also see the cost associated with the different node sizes. |
hdinsight | Hdinsight Troubleshoot Failed Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-failed-cluster.md | HDInsight relies on several Azure services. It runs virtual servers on Azure HDI #### Check Azure service usage limits If you are launching a large cluster, or have launched many clusters simultaneously, a cluster can fail if you have exceeded an Azure service limit. Service limits vary, depending on your Azure subscription. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).-You can request that Microsoft increase the number of HDInsight resources available (such as VM cores and VM instances) with a [Resource Manager core quota increase request](../azure-portal/supportability/regional-quota-requests.md). +You can request that Microsoft increase the number of HDInsight resources available (such as VM cores and VM instances) with a [Resource Manager core quota increase request](/azure/azure-portal/supportability/regional-quota-requests). #### Check the release version |
hdinsight | Hive Llap Sizing Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-llap-sizing-guide.md | If setting these values didn't resolve your issue, visit one of the following... * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). * ##### **Other References:** * [Configure other LLAP properties](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/performance-tuning/content/hive_setup_llap.html) |
hdinsight | Interactive Query Troubleshoot Outofmemory Overhead Exceeded | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-outofmemory-overhead-exceeded.md | If setting this value didn't resolve your issue, visit one of the following... * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, please review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, please review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Interactive Query Troubleshoot Tez Hangs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-tez-hangs.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Llap Schedule Based Autoscale Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md | If the above guidelines didn't resolve your query, visit one of the following. * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). ## **Other references:** * [Interactive Query in Azure HDInsight](./apache-interactive-query-get-started.md) |
hdinsight | Troubleshoot Gateway Timeout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/troubleshoot-gateway-timeout.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Kafka Troubleshoot Insufficient Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-troubleshoot-insufficient-domains.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Quota Increase Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/quota-increase-request.md | To request a quota increase, do the following steps: > [!NOTE] > If you need to increase the HDInsight core quota in a private region, [submit an approved list request](https://aka.ms/canaryintwhitelist). -You can [contact support to request a quota increase](../azure-portal/supportability/regional-quota-requests.md). +You can [contact support to request a quota increase](/azure/azure-portal/supportability/regional-quota-requests). There are some fixed quota limits. For example, a single Azure subscription can have at most 10,000 cores. For details on these limits, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). |
hdinsight | Apache Spark Troubleshoot Outofmemory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-outofmemory.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Apache Spark Troubleshoot Sparkexception Kryo Serialization Failed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-sparkexception-kryo-serialization-failed.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Apache Troubleshoot Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-troubleshoot-spark.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Debug Wasb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/troubleshoot-debug-wasb.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Troubleshoot Jupyter Notebook Convert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/troubleshoot-jupyter-notebook-convert.md | If you didn't see your problem or are unable to solve your issue, visit one of t * Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). +* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
hdinsight | Zookeeper Troubleshoot Quorum Fails | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/zookeeper-troubleshoot-quorum-fails.md | If you didn't see your problem or are unable to solve your issue, visit one of t - Get answers from Azure experts through [Azure Community Support](https://azure.microsoft.com/support/community/). - Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.-- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).+- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). |
healthcare-apis | Business Continuity Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/business-continuity-disaster-recovery.md | The support team handles the backups and restores of the FHIR database. To resto - Name of the service. - Restore point date and time within the last seven days. If the requested restore point is not available, we will use the nearest one available, unless you tell us otherwise. Include this information in your support request. -Learn more: [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md) +Learn more: [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) For a large or active database, the restore might take several hours to several days. The restoration process involves taking a snapshot of your database at a certain time and then creating a new database to point your FHIR service to. During the restoration process, the server may return an HTTP Status code response with 503, meaning the service is temporarily unavailable and can't handle the request at the moment. After the restoration process completes, the support team updates the ticket with a status that the operation has been completed to restore the requested service. |
healthcare-apis | Convert Data Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-configuration.md | To access and use the default templates for your conversion requests, ensure tha > > The default templates are provided to help you get started with your data conversion workflow. These default templates are _not_ intended for production and might change when Microsoft releases updates for the FHIR service. To have consistent data conversion behavior across different versions of the FHIR service, you must do the following. >-> 1. Host your own copy of the templates in an [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md) instance. +> 1. Host your own copy of the templates in an [Azure Container Registry (ACR)](/azure/container-registry/container-registry-intro) instance. > 2. Register the templates to the FHIR service. > 3. Use your registered templates in your API calls. > 4. Verify that the conversion behavior meets your requirements. In the example code, two example custom fields `customfield_message` and `custom ## Host your own templates -We recommend that you host your own copy of templates in an [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md) instance. ACR can be used to host your custom templates and support with versioning. +We recommend that you host your own copy of templates in an [Azure Container Registry (ACR)](/azure/container-registry/container-registry-intro) instance. ACR can be used to host your custom templates and support with versioning. Hosting your own templates and using them for `$convert-data` operations involves the following seven steps. Hosting your own templates and using them for `$convert-data` operations involve ### Step 1: Create an Azure Container Registry instance -Read the [Introduction to container registries in Azure](../../container-registry/container-registry-intro.md) and follow the instructions for creating your own ACR instance. We recommend that you place your ACR instance in the same resource group as your FHIR service. +Read the [Introduction to container registries in Azure](/azure/container-registry/container-registry-intro) and follow the instructions for creating your own ACR instance. We recommend that you place your ACR instance in the same resource group as your FHIR service. ### Step 2: Push the templates to your Azure Container Registry instance After you create an ACR instance, you can use the **FHIR Converter: Push Templates** command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push your custom templates to your ACR instance. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose. To maintain different versions of custom templates in your Azure Container Registry, you can push the image containing your custom templates into your ACR instance with different image tags. -* For more information about ACR registries, repositories, and artifacts, see [About registries, repositories, and artifacts](../../container-registry/container-registry-concepts.md). -* For more information about image tag best practices, see [Recommendations for tagging and versioning container images](../../container-registry/container-registry-image-tag-version.md).
+* For more information about ACR registries, repositories, and artifacts, see [About registries, repositories, and artifacts](/azure/container-registry/container-registry-concepts). +* For more information about image tag best practices, see [Recommendations for tagging and versioning container images](/azure/container-registry/container-registry-image-tag-version). To reference specific template versions in the API, be sure to use the exact image name and tag that contains the versioned template to be used. For the API parameter `templateCollectionReference`, use the appropriate **image name + tag** (for example: `<RegistryServer>/<imageName>:<imageTag>`). You can register up to 20 ACR servers in the FHIR service. There are many methods for securing ACR using the built-in firewall depending on your particular use case. -* [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md) -* [Configure public IP network rules](../../container-registry/container-registry-access-selected-networks.md) -* [Azure Container Registry mitigating data exfiltration with dedicated data endpoints](../../container-registry/container-registry-dedicated-data-endpoints.md) -* [Restrict access to a container registry using a service endpoint in an Azure virtual network](../../container-registry/container-registry-vnet.md) -* [Allow trusted services to securely access a network-restricted container registry](../../container-registry/allow-access-trusted-services.md) -* [Configure rules to access an Azure container registry behind a firewall](../../container-registry/container-registry-firewall-access-rules.md) +* [Connect privately to an Azure container registry using Azure Private Link](/azure/container-registry/container-registry-private-link) +* [Configure public IP network rules](/azure/container-registry/container-registry-access-selected-networks) +* [Azure Container Registry mitigating data exfiltration with dedicated data endpoints](/azure/container-registry/container-registry-dedicated-data-endpoints) +* [Restrict access to a container registry using a service endpoint in an Azure virtual network](/azure/container-registry/container-registry-vnet) +* [Allow trusted services to securely access a network-restricted container registry](/azure/container-registry/allow-access-trusted-services) +* [Configure rules to access an Azure container registry behind a firewall](/azure/container-registry/container-registry-firewall-access-rules) * [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519) > [!NOTE] |
healthcare-apis | Convert Data Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-faq.md | Depending on the version of `$convert-data` you're using, you can: * Use the [troubleshooting guide](convert-data-troubleshoot.md) for the FHIR service in Azure Health Data Services version of the `$convert-data` operation. -* Open a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) for the FHIR service in Azure Health Data Services version of the `$convert-data` operation. +* Open a [support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) for the FHIR service in Azure Health Data Services version of the `$convert-data` operation. * Leave a comment on the [GitHub repository](https://github.com/microsoft/FHIR-Converter/issues) for the open source version of the FHIR converter. |
healthcare-apis | Migration Strategies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-strategies.md | Compare the differences between Azure API for FHIR and Azure Health Data Service |Capabilities|Azure API for FHIR|Azure Health Data Services| |||--| |**Settings**|Supported: <br> • Local RBAC <br> • SMART on FHIR Proxy|Planned deprecation: <br> • Local RBAC (9/6/23) <br> • SMART on FHIR Proxy (9/21/26)|-|**Data storage Volume**|More than 4 TB|Current support is 4 TB. Open an [Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you need more than 4 TB| +|**Data storage Volume**|More than 4 TB|Current support is 4 TB. Open an [Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) if you need more than 4 TB| |**Data ingress**|Tools available in OSS|`$import` operation| |**Autoscaling**|Supported on request and incurs charge|Enabled by default at no extra charge| |**Search parameters**|Bundle type supported: Batch <br> • Include and revinclude, iterate modifier not supported <br> • Sorting supported by first name, family name, birthdate and clinical date|Bundle type supported: Batch and transaction <br> • Selectable search parameters <br> • Include, revinclude, and iterate modifier is supported <br>• Sorting supported by string and dateTime fields| |
healthcare-apis | Configure Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/configure-metrics.md | The screenshot shows an example of a line chart that monitors the **Number of In ## Save metrics as a tile on an Azure dashboard -To keep your MedTech service metrics settings and view the metrics again later, pin them as a tile on an Azure dashboard. For steps, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md). +To keep your MedTech service metrics settings and view the metrics again later, pin them as a tile on an Azure dashboard. For steps, see [Create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards). To learn more about advanced metrics display and sharing options, see [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics). |
healthcare-apis | Deploy Bicep Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md | Complete the following five steps to deploy the MedTech service using Azure Powe Connect-AzAccount ``` -2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). +2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id). ```azurepowershell Set-AzContext <AzureSubscriptionId> Complete the following five steps to deploy the MedTech service using the Azure az login ``` -2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). +2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id). ```azurecli az account set <AzureSubscriptionId> |
healthcare-apis | Deploy Json Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-json-powershell-cli.md | Complete the following five steps to deploy the MedTech service using Azure Powe Connect-AzAccount ``` -2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). +2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id). ```azurepowershell Set-AzContext <AzureSubscriptionId> Complete the following five steps to deploy the MedTech service using the Azure az login ``` -2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). +2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id). ```azurecli az account set <AzureSubscriptionId> |
healthcare-apis | Network Access Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/network-access-security.md | Here's a list of features that can make outbound connections from Azure Health D - **Export**: [Allow FHIR service export as a Microsoft Trusted Service](fhir/configure-export-data.md) - **Import**: [Allow FHIR service import as a Microsoft Trusted Service](fhir/configure-import-data.md)-- **Convert**: [Allow trusted services access to Azure Container Registry](../container-registry/allow-access-trusted-services.md)+- **Convert**: [Allow trusted services access to Azure Container Registry](/azure/container-registry/allow-access-trusted-services) - **Events**: [Allow trusted services access to Azure Event Hubs](../event-hubs/event-hubs-service-endpoints.md) - **Customer-managed keys**: [Allow trusted services access to Azure Key Vault](/azure/key-vault/general/overview-vnet-service-endpoints) |
iot-edge | How To Continuous Integration Continuous Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment.md | Unless otherwise specified, the procedures in this article do not explore all th >[!TIP] >If you're creating a new solution, clone your repository locally first. Then, when you create the solution you can choose to create it directly in the repository folder. You can easily commit and push the new files from there. -* A container registry where you can push module images. You can use [Azure Container Registry](../container-registry/index.yml) or a third-party registry. +* A container registry where you can push module images. You can use [Azure Container Registry](/azure/container-registry/) or a third-party registry. * An active Azure [IoT hub](../iot-hub/iot-hub-create-through-portal.md) with at least two IoT Edge devices for testing the separate test and production deployment stages. You can follow the quickstart articles to create an IoT Edge device on [Linux](quickstart-linux.md) or [Windows](quickstart.md) For more information about using Azure Repos, see [Share your code with Visual Studio and Azure Repos](/azure/devops/repos/git/share-your-code-in-git-vs). |
iot-edge | How To Visual Studio Develop Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md | This article assumes that you use a machine running Windows as your development * Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine to build and run your module images. For example, install [Docker Community Edition](https://docs.docker.com/install/). * To develop modules with **Linux containers**, use a Windows computer that meets the [requirements for Docker Desktop](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install). -* Create an [Azure Container Registry](../container-registry/index.yml) or [Docker Hub](https://docs.docker.com/docker-hub/repos/#viewing-repository-tags) to store your module images. +* Create an [Azure Container Registry](/azure/container-registry/) or [Docker Hub](https://docs.docker.com/docker-hub/repos/#viewing-repository-tags) to store your module images. > [!TIP] > You can use a local Docker registry for prototype and testing purposes instead of a cloud registry. |
iot-edge | Production Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md | This checklist is a starting point for firewall rules: Since the IP address of an IoT hub can change without notice, always use the FQDN to allowlist configuration. To learn more, see [Understanding the IP address of your IoT Hub](../iot-hub/iot-hub-understand-ip-address.md). -Some of these firewall rules are inherited from Azure Container Registry. For more information, see [Configure rules to access an Azure container registry behind a firewall](../container-registry/container-registry-firewall-access-rules.md). +Some of these firewall rules are inherited from Azure Container Registry. For more information, see [Configure rules to access an Azure container registry behind a firewall](/azure/container-registry/container-registry-firewall-access-rules). -You can enable dedicated data endpoints in your Azure Container registry to avoid wildcard allowlisting of the *\*.blob.core.windows.net* FQDN. For more information, see [Enable dedicated data endpoints](../container-registry/container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints). +You can enable dedicated data endpoints in your Azure Container registry to avoid wildcard allowlisting of the *\*.blob.core.windows.net* FQDN. For more information, see [Enable dedicated data endpoints](/azure/container-registry/container-registry-firewall-access-rules#enable-dedicated-data-endpoints). > [!NOTE] > To provide a consistent FQDN between the REST and data endpoints, beginning **June 15, 2020** the Microsoft Container Registry data endpoint will change from `*.cdn.mscr.io` to `*.data.mcr.microsoft.com` Before you deploy modules to production IoT Edge devices, ensure that you contro In the tutorials and other documentation, we instruct you to use the same container registry credentials on your IoT Edge device as you use on your development machine. These instructions are only intended to help you set up testing and development environments more easily, and should not be followed in a production scenario. -For a more secured access to your registry, you have a choice of [authentication options](../container-registry/container-registry-authentication.md). A popular and recommended authentication is to use an Active Directory service principal that's well suited for applications or services to pull container images in an automated or otherwise unattended (headless) manner, as IoT Edge devices do. Another option is to use repository-scoped tokens, which allow you to create long- or short-lived identities that exist only in the Azure Container Registry they were created in and scope access to the repository level. +For a more secured access to your registry, you have a choice of [authentication options](/azure/container-registry/container-registry-authentication). A popular and recommended authentication is to use an Active Directory service principal that's well suited for applications or services to pull container images in an automated or otherwise unattended (headless) manner, as IoT Edge devices do. Another option is to use repository-scoped tokens, which allow you to create long- or short-lived identities that exist only in the Azure Container Registry they were created in and scope access to the repository level.
-To create a service principal, run the two scripts as described in [create a service principal](../container-registry/container-registry-auth-service-principal.md#create-a-service-principal). These scripts do the following tasks: +To create a service principal, run the two scripts as described in [create a service principal](/azure/container-registry/container-registry-auth-service-principal#create-a-service-principal). These scripts do the following tasks: * The first script creates the service principal. It outputs the Service principal ID and the Service principal password. Store these values securely in your records. -* The second script creates role assignments to grant to the service principal, which can be run subsequently if needed. We recommend applying the **acrPull** user role for the `role` parameter. For a list of roles, see [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md). +* The second script creates role assignments to grant to the service principal, which can be run subsequently if needed. We recommend applying the **acrPull** user role for the `role` parameter. For a list of roles, see [Azure Container Registry roles and permissions](/azure/container-registry/container-registry-roles). To authenticate using a service principal, provide the service principal ID and password that you obtained from the first script. Specify these credentials in the deployment manifest. To authenticate using a service principal, provide the service principal ID and * For the password or client secret, specify the service principal password. -To create repository-scoped tokens, follow [create a repository-scoped token](../container-registry/container-registry-repository-scoped-permissions.md). +To create repository-scoped tokens, follow [create a repository-scoped token](/azure/container-registry/container-registry-repository-scoped-permissions). To authenticate using repository-scoped tokens, provide the token name and password that you obtained after creating your repository-scoped token. Specify these credentials in the deployment manifest. |
iot-edge | Tutorial Configure Est Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md | The Dockerfile uses Ubuntu 18.04, a [Cisco library called `libest`](https://gith 1. You should see `--BEGIN CERTIFICATE--` midway through the output. Retrieving the certificate verifies that the server is reachable and can present its certificate. > [!TIP]-> To run this container in the cloud, build the image and [push the image to Azure Container Registry](../container-registry/container-registry-get-started-portal.md). Then, follow the [quickstart to deploy to Azure Container Instance](/azure/container-instances/container-instances-quickstart-portal). +> To run this container in the cloud, build the image and [push the image to Azure Container Registry](/azure/container-registry/container-registry-get-started-portal). Then, follow the [quickstart to deploy to Azure Container Instance](/azure/container-instances/container-instances-quickstart-portal). ## Download CA certificate |
iot-edge | Tutorial Deploy Custom Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md | In this tutorial, you learn how to: * Configure your environment for Linux container development by completing [Tutorial: Develop IoT Edge modules using Visual Studio Code](tutorial-develop-for-linux.md). After completing the tutorial, you should have the following prerequisites available in your development environment: * A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).- * A container registry, like [Azure Container Registry](../container-registry/index.yml). + * A container registry, like [Azure Container Registry](/azure/container-registry/). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639). * Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. |
iot-edge | Tutorial Deploy Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md | Before beginning this tutorial, do the tutorial to set up your development envir * A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstart to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).-* A container registry, like [Azure Container Registry](../container-registry/index.yml). +* A container registry, like [Azure Container Registry](/azure/container-registry/). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639). * Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. |
iot-edge | Tutorial Store Data Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-store-data-sql-server.md | Before beginning this tutorial, you should have gone through the previous tutori * A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * ARM devices, like Raspberry Pis, cannot run SQL Server. If you want to use SQL on an ARM device, you can use [Azure SQL Edge](../azure-sql-edge/overview.md).-* A container registry, like [Azure Container Registry](../container-registry/index.yml). +* A container registry, like [Azure Container Registry](/azure/container-registry/). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639). * Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. |
iot-hub | How To Routing Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-portal.md | The procedures that are described in the article use the following resources: ### Azure portal -This article uses the Azure portal to work with IoT Hub and other Azure services. To learn more about how to use the Azure portal, see [What is the Azure portal?](../azure-portal/azure-portal-overview.md). +This article uses the Azure portal to work with IoT Hub and other Azure services. To learn more about how to use the Azure portal, see [What is the Azure portal?](/azure/azure-portal/azure-portal-overview). ### IoT hub |
iot-hub | Iot Hub Ip Filtering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ip-filtering.md | Use IP filter to receive traffic only from a specified range of IP addresses and ## Default setting -To get to the IP Filter settings page of your IoT hub, select **Networking** > **Public access**, then choose **Selected IP Ranges**: +To get to the IP Filter settings page of your IoT hub, select **Security settings** > **Networking** > **Public access**, then choose **Selected IP Ranges**: :::image type="content" source="media/iot-hub-ip-filtering/ip-filter-default.png" alt-text="Screenshot showing how to set default IP filter settings."::: To add an IP filter rule, select **Add IP Filter Rule**. To quickly add your com :::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-add-rule.png" alt-text="Screenshot showing how to add an IP filter rule to an IoT hub."::: -After selecting **Add IP Filter Rule**, fill in the fields. These fields are pre-filled for you if you selected to add your client IP address. +After selecting **Add IP Filter Rule**, fill in the fields. These fields are prefilled for you if you selected to add your client IP address. :::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-after-selecting-add.png" alt-text="Screenshot that shows what to do after adding an IP filter rule."::: After selecting **Add IP Filter Rule**, fill in the fields. These fields are pre After filling in the fields, select **Save** to save the rule. You see an alert notifying you that the update is in progress. - The **Add** option is disabled when you reach the maximum of 100 IP filter rules. To edit an existing rule, select the data you want to change, make the change, then select **Save** to save your edit. ## Delete an IP filter rule -To delete an IP filter rule, select the trash can icon on that row and then select **Save**. The rule is removed and the change is saved. +To delete an IP filter rule, select the trash can icon on that row, and then select **Save**. The rule is removed and the change is saved. :::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-delete-rule.png" alt-text="Screenshot showing how to delete an IoT Hub IP filter rule."::: To apply the IP filter rules to the built-in Event Hubs compatible endpoint, che By enabling this option, your IP filter rules are replicated to the built-in endpoint, so only trusted IP ranges can access it. -If you disable this option, the built-in endpoint is accessible to all IP addresses. This behavior may be useful if you want to read from the endpoint with services with source IP addresses which may change over time like Azure Stream Analytics. +If you disable this option, the built-in endpoint is accessible to all IP addresses. This behavior can be useful if you want to read from the endpoint with services with source IP addresses which might change over time like Azure Stream Analytics. ## How filter rules are applied The IP filter rules are applied at the IoT Hub service level. Therefore, the IP filter rules apply to all connections from devices and back-end apps using any supported protocol. Also, you can choose if the [built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md) (not via the IoT Hub connection string) are bound to these rules. -Any connection attempt from an IP address that isn't explicitly allowed receives an unauthorized 401 status code and description. The response message does not mention the IP rule. 
Rejecting IP addresses can prevent other Azure services such as Azure Stream Analytics, Azure Virtual Machines, or the Device Explorer in Azure portal from interacting with the IoT hub. +Any connection attempt from an IP address that isn't explicitly allowed receives an unauthorized 401 status code and description. The response message doesn't mention the IP rule. Rejecting IP addresses can prevent other Azure services such as Azure Stream Analytics, Azure Virtual Machines, or the Device Explorer in Azure portal from interacting with the IoT hub. > [!NOTE] > If you want to use Azure Stream Analytics (ASA) to read messages from an IoT hub with IP filter enabled, **disable** the **Apply IP filters to the built-in endpoint** option, and then use the event hub-compatible name and endpoint of your IoT hub to manually add an [Event Hubs stream input](../stream-analytics/stream-analytics-define-inputs.md#stream-data-from-event-hubs) in the ASA. -### Ordering +### Azure portal -IP filter rules are *allow* rules and applied without ordering. Only IP addresses that you add are allowed to connect to IoT Hub. +IP filter rules are also applied when using IoT Hub through Azure portal. This is because API calls to the IoT Hub service are made directly using your browser with your credentials, which is consistent with other Azure services. To access IoT Hub using Azure portal when IP filter is enabled, add your computer's IP address to the allowlist. -For example, if you want to accept addresses in the range `192.168.100.0/22` and reject everything else, you only need to add one rule in the grid with address range `192.168.100.0/22`. +### Ordering -### Azure portal +IP filter rules are *allow* rules and are applied without ordering. Only IP addresses that you add are allowed to connect to IoT Hub. -IP filter rules are also applied when using IoT Hub through Azure portal. This is because API calls to the IoT Hub service are made directly using your browser with your credentials, which is consistent with other Azure services. To access IoT Hub using Azure portal when IP filter is enabled, add your computer's IP address to the allowlist. +For example, if you want to accept addresses in the range `192.168.100.0/22` and reject everything else, you only need to add one rule in the grid with address range `192.168.100.0/22`. ## Retrieve and update IP filters using Azure CLI -Your IoT Hub's IP filters can be retrieved and updated through [Azure CLI](/cli/azure/). +Your IoT hub's IP filters can be retrieved and updated through [Azure CLI](/cli/azure/). To retrieve current IP filters of your IoT Hub, run: To retrieve current IP filters of your IoT Hub, run: az resource show -n <iothubName> -g <resourceGroupName> --resource-type Microsoft.Devices/IotHubs ``` -This will return a JSON object where your existing IP filters are listed under the `properties.networkRuleSets` key: +This returns a JSON object where your existing IP filters are listed under the `properties.networkRuleSets` key: ```json { To remove an existing IP filter in your IoT Hub, run: az resource update -n <iothubName> -g <resourceGroupName> --resource-type Microsoft.Devices/IotHubs --add properties.networkRuleSets.ipRules <ipFilterIndexToRemove> ``` -Here, `<ipFilterIndexToRemove>` must correspond to the ordering of IP filters in your IoT Hub's `properties.networkRuleSets.ipRules`. +Here, `<ipFilterIndexToRemove>` corresponds to the ordering of IP filters in your IoT hub's `properties.networkRuleSets.ipRules`. 
## Retrieve and update IP filters using Azure PowerShell $iothubResource | Set-AzResource -Force ## Update IP filter rules using REST -You may also retrieve and modify your IoT Hub's IP filter using Azure resource Provider's REST endpoint. See `properties.networkRuleSets` in [createorupdate method](/rest/api/iothub/iothubresource/createorupdate). +You can also retrieve and modify your IoT Hub's IP filter using Azure resource Provider's REST endpoint. See `properties.networkRuleSets` in [createorupdate method](/rest/api/iothub/iothubresource/createorupdate). ## Next steps |
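As a rough illustration of the generic `az resource update` syntax discussed in this row, the commands below add one allow rule and then remove the rule at index 0. This is a sketch only: the `filterName`/`action`/`ipMask` property names and the sample address range are assumptions about the IoT Hub `networkRuleSets` schema, and removal uses the CLI's `--remove` operation with the rule's index.

```azurecli
# Add an allow rule for a sample address range (placeholder values).
az resource update -n <iothubName> -g <resourceGroupName> \
  --resource-type Microsoft.Devices/IotHubs \
  --add properties.networkRuleSets.ipRules \
  "{\"filterName\":\"AllowCorpRange\",\"action\":\"Allow\",\"ipMask\":\"192.168.100.0/22\"}"

# Remove the rule at index 0 of the ipRules array.
az resource update -n <iothubName> -g <resourceGroupName> \
  --resource-type Microsoft.Devices/IotHubs \
  --remove properties.networkRuleSets.ipRules 0
```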
iot-operations | Howto Prepare Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md | An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure Io To prepare your Azure Arc-enabled Kubernetes cluster, you need: -* Hardware that meets the [system requirements](../../azure-arc/kubernetes/system-requirements.md). +* Hardware that meets the [system requirements](/azure/azure-arc/kubernetes/system-requirements). ### [AKS Edge Essentials](#tab/aks-edge-essentials) The [AksEdgeQuickStartForAio.ps1](https://github.com/Azure/AKS-Edge/blob/main/to | Placeholder | Value | | -- | -- |- | SUBSCRIPTION_ID | The ID of your Azure subscription. If you don't know your subscription ID, see [Find your Azure subscription](../../azure-portal/get-subscription-tenant-id.md#find-your-azure-subscription). | - | TENANT_ID | The ID of your Microsoft Entra tenant. If you don't know your tenant ID, see [Find your Microsoft Entra tenant](../../azure-portal/get-subscription-tenant-id.md#find-your-microsoft-entra-tenant). | + | SUBSCRIPTION_ID | The ID of your Azure subscription. If you don't know your subscription ID, see [Find your Azure subscription](/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription). | + | TENANT_ID | The ID of your Microsoft Entra tenant. If you don't know your tenant ID, see [Find your Microsoft Entra tenant](/azure/azure-portal/get-subscription-tenant-id#find-your-microsoft-entra-tenant). | | RESOURCE_GROUP_NAME | The name of an existing resource group or a name for a new resource group to be created. | | LOCATION | An Azure region close to you. For the list of currently supported Azure regions, see [Supported regions](../overview-iot-operations.md#supported-regions). | | CLUSTER_NAME | A name for the new cluster to be created. | pod/metrics-agent-6588f97dc-455j8 2/2 Running 0 ## Create sites -A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. An IT administrator creates sites and assigns Azure IoT Operations instances to them. To learn more, see [What is Azure Arc site manager (preview)?](../../azure-arc/site-manager/overview.md). +A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. An IT administrator creates sites and assigns Azure IoT Operations instances to them. To learn more, see [What is Azure Arc site manager (preview)?](/azure/azure-arc/site-manager/overview). ## Next steps |
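For a cluster that isn't created by the quickstart script, the Azure Arc connection step itself can be done with the `connectedk8s` CLI extension. This is a generic sketch that reuses the placeholder names from the table above; it is not a replacement for the AksEdgeQuickStartForAio.ps1 flow.

```azurecli
# Install or update the Azure Arc extension for Kubernetes.
az extension add --upgrade --name connectedk8s

# Create (or reuse) the resource group named in the placeholder table.
az group create --name RESOURCE_GROUP_NAME --location LOCATION

# Connect the cluster in the current kubectl context to Azure Arc.
az connectedk8s connect --name CLUSTER_NAME --resource-group RESOURCE_GROUP_NAME
```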
iot-operations | Howto Manage Assets Remotely | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-manage-assets-remotely.md | _OPC UA servers_ are software applications that communicate with assets. OPC UA An _asset endpoint_ is a custom resource in your Kubernetes cluster that connects OPC UA servers to connector for OPC UA modules. This connection enables a connector for OPC UA to access an asset's data points. Without an asset endpoint, data can't flow from an OPC UA server to the connector for OPC UA and MQTT broker. After you configure the custom resources in your cluster, a connection is established to the downstream OPC UA server and the server forwards telemetry to the connector for OPC UA. -A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. Your IT administrator creates sites and assigns Azure IoT Operations instances to them. To learn more, see [What is Azure Arc site manager (preview)?](../../azure-arc/site-manager/overview.md). +A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. Your IT administrator creates sites and assigns Azure IoT Operations instances to them. To learn more, see [What is Azure Arc site manager (preview)?](/azure/azure-arc/site-manager/overview). In the operations experience web UI, an _instance_ represents an Azure IoT Operations cluster. An instance can have one or more asset endpoints. To sign in to the operations experience, go to the [operations experience](https ## Select your site -After you sign in, the web UI displays a list of sites. Each site is a collection of Azure IoT Operations instances where you can configure and manage your assets. A site typically represents a physical location where you have physical assets deployed. Sites make it easier for you to locate and manage assets. Your [IT administrator is responsible for grouping instances into sites](../../azure-arc/site-manager/overview.md). Any Azure IoT Operations instances that aren't assigned to a site appear in the **Unassigned instances** node. Select the site that you want to use: +After you sign in, the web UI displays a list of sites. Each site is a collection of Azure IoT Operations instances where you can configure and manage your assets. A site typically represents a physical location where you have physical assets deployed. Sites make it easier for you to locate and manage assets. Your [IT administrator is responsible for grouping instances into sites](/azure/azure-arc/site-manager/overview). Any Azure IoT Operations instances that aren't assigned to a site appear in the **Unassigned instances** node. Select the site that you want to use: :::image type="content" source="media/howto-manage-assets-remotely/site-list.png" alt-text="Screenshot that shows a list of sites in the operations experience."::: |
iot-operations | Quickstart Add Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-add-assets.md | Browse to the [operations experience](https://iotoperations.azure.com) in your b ## Select your site -A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. Your IT administrator creates [sites and assigns Azure IoT Operations instances to them](../../azure-arc/site-manager/overview.md). Because you're working with a new deployment, there are no sites yet. You can find the cluster you created in the previous quickstart by selecting **Unassigned instances**. In the operations experience, an instance represents a cluster where you deployed Azure IoT Operations. +A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. Your IT administrator creates [sites and assigns Azure IoT Operations instances to them](/azure/azure-arc/site-manager/overview). Because you're working with a new deployment, there are no sites yet. You can find the cluster you created in the previous quickstart by selecting **Unassigned instances**. In the operations experience, an instance represents a cluster where you deployed Azure IoT Operations. ## Select your instance |
iot-operations | Overview Iot Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/overview-iot-operations.md | There are two core elements in the Azure IoT Operations Preview architecture: * **Azure IoT Operations Preview**. The set of data services that run on Azure Arc-enabled edge Kubernetes clusters. It includes the following * The _MQTT broker_ is an edge-native MQTT broker that powers event-driven architectures. * The _connector for OPC UA_ handles the complexities of OPC UA communication with OPC UA servers and other leaf devices.-* The _operations experience_ is a web UI that provides a unified experience for operational technologists to manage assets and data processor pipelines in an Azure IoT Operations deployment. An IT administrator can use [Azure Arc site manager (preview)](../azure-arc/site-manager/overview.md) to group Azure IoT Operations instances by physical location and make it easier for OT users to find instances. +* The _operations experience_ is a web UI that provides a unified experience for operational technologists to manage assets and data processor pipelines in an Azure IoT Operations deployment. An IT administrator can use [Azure Arc site manager (preview)](/azure/azure-arc/site-manager/overview) to group Azure IoT Operations instances by physical location and make it easier for OT users to find instances. ## Deploy |
lab-services | How To Request Capacity Increase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-request-capacity-increase.md | To complete the support request, enter the following information: - For more information about capacity limits, see [Capacity limits in Azure Lab Services](capacity-limits.md). - Learn more about the different [virtual machine sizes in the administrator's guide](./administrator-guide.md#vm-sizing).-- Learn more about the general [process for creating Azure support requests](../azure-portal/supportability/how-to-create-azure-support-request.md).+- Learn more about the general [process for creating Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request). |
lighthouse | Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/architecture.md | - Title: Azure Lighthouse architecture -description: Learn about the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship. Previously updated : 07/10/2024----# Azure Lighthouse architecture --Azure Lighthouse helps service providers simplify customer engagement and onboarding experiences, while managing delegated resources at scale with agility and precision. Authorized users, groups, and service principals can work directly in the context of a customer subscription without having an account in that customer's Microsoft Entra tenant or being a co-owner of the customer's tenant. The mechanism used to support this access is called Azure delegated resource management. ---> [!TIP] -> Azure Lighthouse can also be used [within an enterprise which has multiple Microsoft Entra tenants of its own](enterprise.md) to simplify cross-tenant management. --This topic discusses the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship. --> [!NOTE] -> Onboarding a customer to Azure Lighthouse requires a deployment by a non-guest account in the customer's tenant who has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or which contains the resource groups that are being onboarded). --## Delegation resources created in the customer tenant --When a customer's subscription or resource group is onboarded to Azure Lighthouse, two resources are created: the **registration definition** and the **registration assignment**. You can use [APIs and management tools](cross-tenant-management-experience.md#apis-and-management-tool-support) to access these resources, or work with them [in the Azure portal](../how-to/view-manage-customers.md). --### Registration definition --The registration definition contains the details of the Azure Lighthouse offer (the managing tenant ID and the authorizations that assign built-in roles to specific users, groups, and/or service principals in the managing tenant). --A registration definition is created at the subscription level for each delegated subscription, or in each subscription that contains a delegated resource group. When using APIs to create a registration definition, you'll need to work at the subscription level. For instance, using Azure PowerShell, you'll need to use New-AzureRmDeployment before you create a new registration definition ([New-AzManagedServicesDefinition](/powershell/module/az.managedservices/new-azmanagedservicesdefinition)), rather than using New-AzureRmResourceGroupDeployment. --### Registration assignment --The registration assignment assigns the registration definition to a specific scope; that is, the onboarded subscription(s) and/or resource group(s). --A registration assignment is created in each delegated scope, so it will either be at the subscription level or the resource group level, depending on what was onboarded. --Each registration assignment must reference a valid registration definition at the subscription level, tying the authorizations for that service provider to the delegated scope and thus granting access. 
--## Logical projection --Azure Lighthouse creates a logical projection of resources from one tenant onto another tenant. This lets authorized service provider users sign in to their own tenant with authorization to work in delegated customer subscriptions and resource groups. Users in the service provider's tenant can then perform management operations on behalf of their customers, without having to sign in to each individual customer tenant. --Whenever a user, group, or service principal in the service provider tenant accesses resources in a customer's tenant, Azure Resource Manager receives a request. Resource Manager authenticates these requests, just as it does for requests made by users within the customer's own tenant. For Azure Lighthouse, it does this by confirming that two resources (the registration definition and the registration assignment) are present in the customer's tenant. If so, Resource Manager authorizes the access according to the information defined by those resources. ---Activity from users in the service provider's tenant is tracked in the activity log, which is stored in the customer's tenant. This allows the customer to [see what changes were made and by whom](../how-to/view-service-provider-activity.md). --## How Azure Lighthouse works --At a high level, here's how Azure Lighthouse works for the managing tenant: --1. Identify the [roles](tenants-users-roles.md#role-support-for-azure-lighthouse) that your groups, service principals, or users will need to manage the customer's Azure resources. -2. Specify this access and onboard the customer to Azure Lighthouse either by [publishing a Managed Service offer to Azure Marketplace](../how-to/publish-managed-services-offers.md), or by [deploying an Azure Resource Manager template](../how-to/onboard-customer.md). This onboarding process creates the two resources described above (registration definition and registration assignment) in the customer's tenant. -3. Once the customer has been onboarded, authorized users sign in to your managing tenant and perform tasks at the specified customer scope (subscription or resource group) per the access that you defined. Customers can review all actions taken, and they can remove access at any time. --While in most cases only one service provider will be managing specific resources for a customer, it's possible for the customer to create multiple delegations for the same subscription or resource group, allowing multiple service providers to have access. This scenario also enables ISV scenarios that [project resources from the service provider's tenant to multiple customers](isv-scenarios.md#saas-based-multitenant-offerings). --## Next steps --- Review [Azure CLI](/cli/azure/managedservices) and [Azure PowerShell](/powershell/module/az.managedservices) commands for working with registration definitions and registration assignments.-- Learn about [enhanced services and scenarios](cross-tenant-management-experience.md#enhanced-services-and-scenarios) for Azure Lighthouse.-- Learn more about how [tenants, users, and roles](tenants-users-roles.md) work with Azure Lighthouse. |
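The architecture text above points to the Azure CLI `managedservices` commands for working with registration definitions and assignments. As a hedged sketch of what inspecting those delegation resources in an onboarded subscription might look like (the subscription ID is a placeholder):

```azurecli
# Set the context to the onboarded (customer) subscription.
az account set --subscription <customer-subscription-id>

# List registration definitions (the Azure Lighthouse offers) in the subscription.
az managedservices definition list --output table

# List registration assignments, including the definition details for each delegated scope.
az managedservices assignment list --include-definition true --output table
```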
lighthouse | Cloud Solution Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cloud-solution-provider.md | - Title: Cloud Solution Provider program considerations -description: For CSP partners, Azure delegated resource management helps improve security and control by enabling granular permissions. Previously updated : 12/07/2023----# Azure Lighthouse and the Cloud Solution Provider program --If you're a [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partner, you can already access the Azure subscriptions created for your customers through the CSP program by using the Administer On Behalf Of (AOBO) functionality. This access allows you to directly support, configure, and manage your customers' subscriptions. --With [Azure Lighthouse](../overview.md), you can use Azure delegated resource management along with AOBO. This helps improve security and reduces unnecessary access by enabling more granular permissions for your users. It also allows for greater efficiency and scalability, as your users can work across multiple customer subscriptions using a single login in your tenant. --> [!TIP] -> To help safeguard customer resources, be sure to review and follow our [recommended security practices](recommended-security-practices.md) along with the [partner security requirements](/partner-center/partner-security-requirements). --## Administer on Behalf of (AOBO) --With AOBO, any user with the [Admin Agent](/partner-center/permissions-overview#manage-csp-commercial-transactions-in-partner-center-microsoft-entra-id-and-csp-roles) role in your tenant will have AOBO access to Azure subscriptions that you create through the CSP program. Any users who need access to any customers' subscriptions must be members of this group. AOBO doesn't allow the flexibility to create distinct groups that work with different customers, or to enable different roles for groups or users. --![Diagram showing tenant management using AOBO.](../media/csp-1.jpg) --## Azure Lighthouse --Using Azure Lighthouse, you can assign different groups to different customers or roles, as shown in the following diagram. Because users will have the appropriate level of access through [Azure delegated resource management](architecture.md), you can reduce the number of users who have the Admin Agent role (and thus have full AOBO access). --![Diagram showing tenant management using AOBO and Azure Lighthouse.](../media/csp-2.jpg) --Azure Lighthouse helps improve security by limiting unnecessary access to your customers' resources. It also gives you more flexibility to manage multiple customers at scale, using the [Azure built-in role](tenants-users-roles.md#role-support-for-azure-lighthouse) that's most appropriate for each user's duties, without granting a user more access than necessary. --To further minimize the number of permanent assignments, you can [create eligible authorizations](../how-to/create-eligible-authorizations.md) to grant additional permissions to your users on a just-in-time basis. --Onboarding a subscription that you created through the CSP program follows the steps described in [Onboard a customer to Azure Lighthouse](../how-to/onboard-customer.md). Any user who has the Admin Agent role in the customer's tenant can perform this onboarding. --> [!TIP] -> [Managed Service offers](managed-services-offers.md) with private plans aren't supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. 
Instead, you can onboard these subscriptions to Azure Lighthouse by [using Azure Resource Manager templates](../how-to/onboard-customer.md). --> [!NOTE] -> The [**My customers** page in the Azure portal](../how-to/view-manage-customers.md) now includes a **Cloud Solution Provider (Preview)** section, which displays billing info and resources for CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more info, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md). -> -> CSP customers may appear in this section whether or not they have also been onboarded to Azure Lighthouse. If they have, they'll also appear in the **Customers** section, as described in [View and manage customers and delegated resources](../how-to/view-manage-customers.md). Similarly, a CSP customer does not have to appear in the **Cloud Solution Provider (Preview)** section of **My customers** in order for you to onboard them to Azure Lighthouse. --## Link your partner ID to track your impact on delegated resources --Members of the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) can link a partner ID with the credentials used to manage delegated customer resources. This link allows Microsoft to identify and recognize partners who drive Azure customer success. It also allows [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partners to receive [partner earned credit (PEC)](/partner-center/partner-earned-credit) for customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). --To earn recognition for Azure Lighthouse activities, you'll need to [link your partner ID](../../cost-management-billing/manage/link-partner-id.md) with at least one user account in your managing tenant, and ensure that the linked account has access to each of your onboarded subscriptions. For simplicity, we recommend creating a service principal account in your tenant, associating it with your Partner ID, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for partner earned credit](/partner-center/azure-roles-perms-pec). --For more information, see [Link a partner ID](../../cost-management-billing/manage/link-partner-id.md). --## Next steps --- Learn about [cross-tenant management experiences](cross-tenant-management-experience.md).-- Learn how to [onboard a subscription to Azure Lighthouse](../how-to/onboard-customer.md).-- Learn about the [Cloud Solution Provider program](/partner-center/csp-overview). |
lighthouse | Cross Tenant Management Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cross-tenant-management-experience.md | - Title: Cross-tenant management experiences -description: Azure Lighthouse enables and enhances cross-tenant experiences in many Azure services. Previously updated : 06/18/2024----# Cross-tenant management experiences --As a service provider, you can use [Azure Lighthouse](../overview.md) to manage your customers' Azure resources from within your own Microsoft Entra tenant. Many common tasks and services can be performed across these managed tenants. --> [!TIP] -> Azure Lighthouse can also be used [within an enterprise which has multiple Microsoft Entra tenants of its own](enterprise.md) to simplify cross-tenant administration. --## Understanding tenants and delegation --A Microsoft Entra tenant is a representation of an organization. It's a dedicated instance of Microsoft Entra ID that an organization receives when they create a relationship with Microsoft by signing up for Azure, Microsoft 365, or other services. Each Microsoft Entra tenant is distinct and separate from other Microsoft Entra tenants, and has its own tenant ID (a GUID). For more information, see [What is Microsoft Entra ID?](/entra/fundamentals/whatis) --Typically, in order to manage Azure resources for a customer, service providers must sign in to the Azure portal using an account associated with that customer's tenant. In this scenario, an administrator in the customer's tenant must create and manage user accounts for the service provider. --With Azure Lighthouse, the onboarding process specifies users in the service provider's tenant who are assigned roles to delegated subscriptions and resource groups in the customer's tenant. These users can then sign in to the Azure portal, using their own credentials, and work on resources belonging to all of the customers to which they have access. Users in the managing tenant can see all of these customers by visiting the [My customers](../how-to/view-manage-customers.md) page in the Azure portal. They can also work on resources directly within the context of that customer's subscription, either in the Azure portal or via APIs. --Azure Lighthouse provides flexibility to manage resources for multiple customers without having to sign in to different accounts in different tenants. For example, a service provider may have two customers with different responsibilities and access levels. Using Azure Lighthouse, authorized users can sign in to the service provider's tenant and access all of the delegated resources across these customers, according to the [roles they've been assigned](tenants-users-roles.md) for each delegation. --![Diagram showing resources for two customers managed through one service provider tenant.](../media/azure-delegated-resource-management-service-provider-tenant.jpg) --## APIs and management tool support --You can perform management tasks on delegated resources in the Azure portal, or you can use APIs and management tools such as Azure CLI and Azure PowerShell. All existing APIs can be used on delegated resources, as long as the functionality is supported for cross-tenant management and the user has the appropriate permissions. --The Azure PowerShell [Get-AzSubscription cmdlet](/powershell/module/Az.Accounts/Get-AzSubscription) shows the `TenantId` for the managing tenant by default. 
The `HomeTenantId` and `ManagedByTenantIds` attributes for each subscription allow you to identify whether a returned subscription belongs to a managed tenant or to your managing tenant. --Similarly, Azure CLI commands such as [az account list](/cli/azure/account#az-account-list) show the `homeTenantId` and `managedByTenants` attributes. If you don't see these values when using Azure CLI, try clearing your cache by running `az account clear` followed by `az login --identity`. --In the Azure REST API, the [Subscriptions - Get](/rest/api/resources/subscriptions/get) and [Subscriptions - List](/rest/api/resources/subscriptions/list) commands include `ManagedByTenant`. --> [!NOTE] -> In addition to tenant information related to Azure Lighthouse, tenants shown by these APIs may also reflect partner tenants for Azure Databricks or Azure managed applications. --We also provide APIs that are specific to performing Azure Lighthouse tasks. For more info, see the **Reference** section. --## Enhanced services and scenarios --Most Azure tasks and services can be used with delegated resources across managed tenants, assuming the appropriate roles are granted. Below are some of the key scenarios where cross-tenant management can be especially effective. --[Azure Arc](../../azure-arc/index.yml): --- Manage hybrid servers at scale - [Azure Arc-enabled servers](../../azure-arc/servers/overview.md):- - [Onboard servers](../../azure-arc/servers/learn/quick-enable-hybrid-vm.md) to delegated customer subscriptions and/or resource groups in Azure - - Manage Windows Server or Linux machines outside Azure that are connected to delegated subscriptions - - Manage connected machines using Azure constructs, such as Azure Policy and tagging - - Ensure the same set of [policies are applied](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md) across customers' hybrid environments - - Use Microsoft Defender for Cloud to [monitor compliance across customers' hybrid environments](/azure/defender-for-cloud/quickstart-onboard-machines?pivots=azure-arc) -- Manage hybrid Kubernetes clusters at scale - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md):- - [Connect Kubernetes clusters](../../azure-arc/kubernetes/quickstart-connect-cluster.md) to delegated subscriptions and/or resource groups - - [Use GitOps](../../azure-arc/kubernetes/tutorial-use-gitops-flux2.md) to deploy configurations to connected clusters - - Perform management tasks such as [enforcing policies across connected clusters](../../governance/policy/concepts/policy-for-kubernetes.md#install-azure-policy-extension-for-azure-arc-enabled-kubernetes) --[Azure Automation](../../automation/index.yml): --- Use Automation accounts to access and work with delegated resources--[Azure Backup](../../backup/index.yml): --- Back up and restore customer data using Azure Backup. Currently, the following Azure workloads are supported: Azure Virtual Machines (Azure VM), Azure Files, SQL Server on Azure VMs, SAP HANA on Azure VMs. 
Workloads which leverage [Backup vault](../../backup/backup-vault-overview.md) (such as Azure Database for PostgreSQL, Azure Blob, Azure Managed Disk, and Azure Kubernetes Services) currently aren't fully supported.-- View data for all delegated customer resources in [Backup center](../../backup/backup-center-overview.md)-- Use the [Backup Explorer](../../backup/monitor-azure-backup-with-backup-explorer.md) to help view operational information of backup items (including Azure resources not yet configured for backup) and monitoring information (jobs and alerts) for delegated subscriptions. The Backup Explorer is currently available only for Azure VM data.-- Use [Backup reports](../../backup/configure-reports.md) across delegated subscriptions to track historical trends, analyze backup storage consumption, and audit backups and restores.--[Azure Blueprints](../../governance/blueprints/overview.md): --- Use Azure Blueprints to orchestrate the deployment of resource templates and other artifacts (requires [additional access](https://www.wesleyhaakman.org/preparing-azure-lighthouse-customer-subscriptions-for-azure-blueprints/) to prepare the customer subscription)--[Azure Cost Management + Billing](../../cost-management-billing/index.yml): --- From the managing tenant, CSP partners can view, manage, and analyze pre-tax consumption costs (not inclusive of purchases) for customers who are under the Azure plan. The cost is based on retail rates and the Azure role-based access control (Azure RBAC) access that the partner has for the customer's subscription. Currently, you can view consumption costs at retail rates for each individual customer subscription based on Azure RBAC access.--[Azure Key Vault](/azure/key-vault/general/): --- Create Key Vaults in customer tenants-- Use a managed identity to create Key Vaults in customer tenants--[Azure Kubernetes Service (AKS)](/azure/aks/): --- Manage hosted Kubernetes environments and deploy and manage containerized applications within customer tenants-- Deploy and manage clusters in customer tenants-- [Use Azure Monitor for containers](/azure/aks/monitor-aks) to monitor performance across customer tenants--[Azure Migrate](../../migrate/index.yml): --- Create migration projects in the customer tenant and migrate VMs--[Azure Monitor](/azure/azure-monitor/): --- View alerts for delegated subscriptions, with the ability to view and refresh alerts across all subscriptions-- View activity log details for delegated subscriptions-- [Log analytics](/azure/azure-monitor/logs/workspace-design#multiple-tenant-strategies): Query data from remote workspaces in multiple tenants (note that automation accounts used to access data from workspaces in customer tenants must be created in the same tenant)-- Create, view, and manage [alerts](/azure/azure-monitor/alerts/alerts-create-new-alert-rule) in customer tenants-- Create alerts in customer tenants that trigger automation, such as Azure Automation runbooks or Azure Functions, in the managing tenant through webhooks-- Create [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings) in workspaces created in customer tenants, to send resource logs to workspaces in the managing tenant-- For SAP workloads, [monitor SAP Solutions metrics with an aggregated view across customer tenants](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-lighthouse-and-azure-monitor-for-sap-solutions-to/ba-p/1537293)-- For Azure AD B2C, [route sign-in and auditing 
logs](../../active-directory-b2c/azure-monitor.md) to different monitoring solutions--[Azure Networking](../../networking/fundamentals/networking-overview.md): --- Deploy and manage [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) and virtual network interface cards (vNICs) within managed tenants-- Deploy and configure [Azure Firewall](../../firewall/overview.md) to protect customers' Virtual Network resources-- Manage connectivity services such as [Azure Virtual WAN](../../virtual-wan/virtual-wan-about.md), [Azure ExpressRoute](../../expressroute/expressroute-introduction.md), and [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md)-- Use Azure Lighthouse to support key scenarios for the [Azure Networking MSP Program](../../networking/networking-partners-msp.md)--[Azure Policy](../../governance/policy/index.yml): --- Create and edit policy definitions within delegated subscriptions-- Deploy policy definitions and policy assignments across multiple tenants-- Assign customer-defined policy definitions within delegated subscriptions-- Customers see policies authored by the service provider alongside any policies they've authored themselves-- Can [remediate deployIfNotExists or modify assignments within the managed tenant](../how-to/deploy-policy-remediation.md)-- Note that viewing compliance details for non-compliant resources in customer tenants is not currently supported--[Azure Resource Graph](../../governance/resource-graph/index.yml): --- See the tenant ID in returned query results, allowing you to identify whether a subscription belongs to a managed tenant--[Azure Service Health](/azure/service-health/): --- Monitor the health of customer resources with Azure Resource Health-- Track the health of the Azure services used by your customers--[Azure Site Recovery](../../site-recovery/index.yml): --- Manage disaster recovery options for Azure virtual machines in customer tenants (note that you can't use `RunAs` accounts to copy VM extensions)--[Azure Virtual Machines](/azure/virtual-machines/): --- Use virtual machine extensions to provide post-deployment configuration and automation tasks on Azure VMs-- Use boot diagnostics to troubleshoot Azure VMs-- Access VMs with serial console-- Integrate VMs with Azure Key Vault for passwords, secrets, or cryptographic keys for disk encryption by using [managed identity through policy](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/create-keyvault-secret), ensuring that secrets are stored in a Key Vault in the managed tenants-- Note that you can't use Microsoft Entra ID for remote login to VMs--[Microsoft Defender for Cloud](/azure/defender-for-cloud/): --- Cross-tenant visibility- - Monitor compliance with security policies and ensure security coverage across all tenants' resources - - Continuous regulatory compliance monitoring across multiple tenants in a single view - - Monitor, triage, and prioritize actionable security recommendations with secure score calculation -- Cross-tenant security posture management- - Manage security policies - - Take action on resources that are out of compliance with actionable security recommendations - - Collect and store security-related data -- Cross-tenant threat detection and protection- - Detect threats across tenants' resources - - Apply advanced threat protection controls such as just-in-time (JIT) VM access - - Harden network security group configuration with Adaptive Network Hardening - - Ensure servers are running only the applications and 
processes they should be with adaptive application controls - - Monitor changes to important files and registry entries with File Integrity Monitoring (FIM) -- Note that the entire subscription must be delegated to the managing tenant; Microsoft Defender for Cloud scenarios are not supported with delegated resource groups--[Microsoft Sentinel](../../sentinel/multiple-tenants-service-providers.md): --- Manage Microsoft Sentinel resources [in customer tenants](../../sentinel/multiple-tenants-service-providers.md)-- [Track attacks and view security alerts across multiple tenants](https://techcommunity.microsoft.com/t5/azure-sentinel/using-azure-lighthouse-and-azure-sentinel-to-monitor-across/ba-p/1043899)-- [View incidents](../../sentinel/multiple-workspace-view.md) across multiple Microsoft Sentinel workspaces spread across tenants--Support requests: --- [Open support requests from **Help + support**](../../azure-portal/supportability/how-to-create-azure-support-request.md#getting-started) in the Azure portal for delegated resources (selecting the support plan available to the delegated scope)-- Use the [Azure Quota API](/rest/api/reserved-vm-instances/quotaapi) to view and manage Azure service quotas for delegated customer resources--## Current limitations --With all scenarios, be aware of the following current limitations: --- Requests handled by Azure Resource Manager can be performed using Azure Lighthouse. The operation URIs for these requests start with `https://management.azure.com`. However, requests that are handled by an instance of a resource type (such as Key Vault secrets access or storage data access) aren't supported with Azure Lighthouse. The operation URIs for these requests typically start with an address that is unique to your instance, such as `https://myaccount.blob.core.windows.net` or `https://mykeyvault.vault.azure.net/`. The latter are also typically data operations rather than management operations.-- Role assignments must use [Azure built-in roles](../../role-based-access-control/built-in-roles.md). All built-in roles are currently supported with Azure Lighthouse, except for Owner or any built-in roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission. The User Access Administrator role is supported only for limited use in [assigning roles to managed identities](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) are not supported. For more information, see [Role support for Azure Lighthouse](tenants-users-roles.md#role-support-for-azure-lighthouse).-- For users in the managed tenant, role assignments made through Azure Lighthouse aren't shown under Access Control (IAM) or with CLI tools such as `az role assignment list`. These assignments are only visible in the Azure portal in the **Delegations** section of Azure Lighthouse, or through the Azure Lighthouse API.-- While you can onboard subscriptions that use Azure Databricks, users in the managing tenant can't launch Azure Databricks workspaces on a delegated subscription.-- While you can onboard subscriptions and resource groups that have resource locks, those locks won't prevent actions from being performed by users in the managing tenant. 
[Deny assignments](../../role-based-access-control/deny-assignments.md) that protect system-managed resources (system-assigned deny assignments), such as those created by Azure managed applications or Azure Blueprints, do prevent users in the managing tenant from acting on those resources. However, users in the customer tenant can't create their own deny assignments.-- Delegation of subscriptions across a [national cloud](../../active-directory/develop/authentication-national-cloud.md) and the Azure public cloud, or across two separate national clouds, is not supported.--## Next steps --- Onboard your customers to Azure Lighthouse, either by [using Azure Resource Manager templates](../how-to/onboard-customer.md) or by [publishing a private or public managed services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md).-- [View and manage customers](../how-to/view-manage-customers.md) by going to **My customers** in the Azure portal.-- Learn more about [Azure Lighthouse architecture](architecture.md). |
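The APIs and management tool support section above notes that `az account list` exposes `homeTenantId` and `managedByTenants`, so you can tell delegated subscriptions apart from your own. A small hedged sketch of that check; the JMESPath expression is illustrative, not taken from the article:

```azurecli
# Show, for each visible subscription, its home tenant and any managing tenants.
az account list \
  --query "[].{name:name, homeTenantId:homeTenantId, managedByTenants:managedByTenants}"
```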
lighthouse | Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/enterprise.md | - Title: Azure Lighthouse in enterprise scenarios -description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Microsoft Entra tenants. Previously updated : 07/10/2024----# Azure Lighthouse in enterprise scenarios --A common scenario for [Azure Lighthouse](../overview.md) involves a service provider that manages resources in its customers' Microsoft Entra tenants. The capabilities of Azure Lighthouse can also be used to simplify cross-tenant management within an enterprise that uses multiple Microsoft Entra tenants. --## Single vs. multiple tenants --For most organizations, management is easier with a single Microsoft Entra tenant. Having all resources within one tenant allows centralization of management tasks by designated users, user groups, or service principals within that tenant. We recommend using one tenant for your organization whenever possible. --Some organizations may need to use multiple Microsoft Entra tenants. This might be a temporary situation, as when acquisitions have taken place and a long-term tenant consolidation strategy hasn't been defined yet. Other times, organizations may need to maintain multiple tenants on an ongoing basis due to wholly independent subsidiaries, geographical or legal requirements, or other considerations. --In cases where a [multitenant architecture](/azure/architecture/guide/multitenant/overview) is required, Azure Lighthouse can help centralize and streamline management operations. By using Azure Lighthouse, users in one managing tenant can perform [cross-tenant management functions](cross-tenant-management-experience.md) in a centralized, scalable manner. --## Tenant management architecture --To use Azure Lighthouse in an enterprise, you'll need to determine which tenant will include the users who perform management operations on the other tenants. In other words, you will need to designate one tenant as the managing tenant for the other tenants. --For example, say your organization has a single tenant that we'll call *Tenant A*. Your organization then acquires *Tenant B* and *Tenant C*, and you have business reasons that require you to maintain them as separate tenants. However, you'd like to use the same policy definitions, backup practices, and security processes for all of them, with management tasks performed by the same set of users. --Since Tenant A already includes users in your organization who have been performing those tasks for Tenant A, you can onboard subscriptions within Tenant B and Tenant C, which allows the same users in Tenant A to perform those tasks across all tenants. --![Diagram showing users in Tenant A managing resources in Tenant B and Tenant C.](../media/enterprise-azure-lighthouse.jpg) --## Security and access considerations --In most enterprise scenarios, you'll want to delegate a full subscription to Azure Lighthouse. You can also choose to delegate only specific resource groups within a subscription. --Either way, be sure to [follow the principle of least privilege when defining which users will have access to delegated resources](recommended-security-practices.md#assign-permissions-to-groups-using-the-principle-of-least-privilege). Doing so helps to ensure that users only have the permissions needed to perform the required tasks and reduces the chance of inadvertent errors. 
--Azure Lighthouse only provides logical links between a managing tenant and managed tenants, rather than physically moving data or resources. Furthermore, the access always goes in only one direction, from the managing tenant to the managed tenants. Users and groups in the managing tenant should use multifactor authentication when performing management operations on managed tenant resources. --Enterprises with internal or external governance and compliance guardrails can use [Azure Activity logs](/azure/azure-monitor/essentials/activity-log) to meet their transparency requirements. When enterprise tenants have established managing and managed tenant relationships, users in each tenant can view logged activity to see actions taken by users in the managing tenant. --## Onboarding considerations --Subscriptions (or resource groups within a subscription) can be onboarded to Azure Lighthouse either by deploying Azure Resource Manager templates or through Managed Services offers published to Azure Marketplace. --Since enterprise users will typically have direct access to the enterprise's tenants, and there's no need to market or promote a management offering, it's usually faster and more straightforward to deploy Azure Resource Manager templates. While the [onboarding guidance](../how-to/onboard-customer.md) refers to service providers and customers, enterprises can use the same processes to onboard their tenants. --If you prefer, tenants within an enterprise can be onboarded by [publishing a Managed Services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). To ensure that the offer is only available to the appropriate tenants, be sure that your plans are marked as private. With a private plan, you provide the subscription IDs for each tenant that you plan to onboard, and no one else will be able to get your offer. --## Azure AD B2C --[Azure Active Directory B2C (Azure AD B2C)](../../active-directory-b2c/overview.md) provides business-to-customer identity as a service. When you delegate a resource group through Azure Lighthouse, you can use Azure Monitor to route Azure Active Directory B2C (Azure AD B2C) sign-in and auditing logs to different monitoring solutions. You can retain the logs for long-term use, or integrate with third-party security information and event management (SIEM) tools to gain insights into your environment. --For more information, see [Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md). --## Terminology notes --For cross-tenant management within the enterprise, references to service providers in the Azure Lighthouse documentation can be understood to apply to the managing tenant within an enterprise, that is, the tenant that includes the users who will manage resources in other tenants through Azure Lighthouse. Similarly, any references to customers can be understood to apply to the tenants that are delegating resources to be managed through users in the managing tenant. --For instance, in the example described above, Tenant A can be thought of as the service provider tenant (the managing tenant) and Tenant B and Tenant C can be thought of as the customer tenants. --Continuing with that example, Tenant A users with the appropriate permissions can [view and manage delegated resources](../how-to/view-manage-customers.md) in the **My customers** page of the Azure portal. 
Likewise, Tenant B and Tenant C users with the appropriate permissions can [view and manage the resources that have been delegated](../how-to/view-manage-service-providers.md) to Tenant A in the **Service providers** page of the Azure portal. --## Next steps --- Explore options for [resource organization in multitenant architectures](/azure/architecture/guide/multitenant/approaches/resource-organization).-- Learn about [cross-tenant management experiences](cross-tenant-management-experience.md).-- Learn more about [how Azure Lighthouse works](architecture.md). |
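For the Tenant A, Tenant B, and Tenant C example above, the ARM-template onboarding path comes down to a parameters file that names the managing tenant and the authorizations it should receive. The following is a minimal sketch only: the offer name, display name, and bracketed IDs are placeholders you would replace, and the Contributor role ID (b24988ac-6180-42a0-ab88-20f7382dd24c) is taken from the Azure built-in roles list. The parameter structure follows the subscription onboarding samples referenced in the onboarding guidance.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "mspOfferName": {
      "value": "Tenant A central operations"
    },
    "mspOfferDescription": {
      "value": "Cross-tenant management of Tenant B by Tenant A"
    },
    "managedByTenantId": {
      "value": "<Tenant A tenant ID>"
    },
    "authorizations": {
      "value": [
        {
          "principalId": "<object ID of a Tenant A operations group>",
          "principalIdDisplayName": "Tenant A operations team",
          "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
        }
      ]
    }
  }
}
```

Tenant B and Tenant C would each be onboarded with their own deployment of a template that uses these parameters, while the group object ID stays the same, so the same Tenant A users gain access across all onboarded tenants.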
lighthouse | Isv Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/isv-scenarios.md | - Title: Azure Lighthouse in ISV scenarios -description: ISVs can use the capabilities of Azure Lighthouse for more flexibility with customer offerings. Previously updated : 07/10/2024----# Azure Lighthouse in ISV scenarios --A typical scenario for [Azure Lighthouse](../overview.md) involves a service provider that manages resources in its customers' Microsoft Entra tenants. Independent Software Vendors (ISVs) using SaaS-based offerings with their customers may also benefit from the capabilities of Azure Lighthouse. Using Azure Lighthouse can be especially helpful for ISVs who offer managed services that require access to a customer's subscription scope. --## Managed Service offers in Azure Marketplace --As an ISV, you may already have published solutions to the Azure Marketplace. If you offer managed services to your customers, you can do so by publishing a Managed Service offer. These offers streamline the onboarding process and make your services more scalable to as many customers as needed. Azure Lighthouse supports a wide range of [management tasks and scenarios](cross-tenant-management-experience.md#enhanced-services-and-scenarios) that can be used to provide value to your customers. --For more information, see [Publish a Managed Service offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). --## Using Azure Lighthouse with Azure managed applications --[Azure managed applications](../../azure-resource-manager/managed-applications/overview.md) are another way that ISVs can provide services to their customers. You can use Azure Lighthouse along with your Azure managed applications to enable enhanced scenarios. --For more information, see [Azure Lighthouse and Azure managed applications](managed-applications.md). --## SaaS-based multitenant offerings --An additional scenario is where the ISV hosts resources in a subscription in their own tenant, then uses Azure Lighthouse to let customers access those specific resources. Once this access is granted, the customer can log in to their own tenant and access the resources as needed. The ISV maintains their IP in their own tenant, and can use their own support plan to raise tickets related to the solution hosted in their tenant, rather than the customer's plan. Since the resources are in the ISV's tenant, all actions can be performed directly by the ISV, such as logging into VMs, installing apps, and performing maintenance tasks. --In this scenario, users in the customer's tenant are essentially granted access as a "managing tenant," even though the customer isn't managing the ISV's resources. Because the customer is directly accessing the ISV's tenant, it's important to grant only the minimum permissions necessary, so that they can't make changes to the solution or access other ISV resources. --To enable this architecture, the ISV needs to obtain the object ID for a user group in the customer's Microsoft Entra tenant, along with their tenant ID. The ISV then builds an ARM template granting this user group the appropriate permissions, and [deploys it on the ISV's subscription](../how-to/onboard-customer.md) that contains the resources that the customer will access. --## Next steps --- Learn about [cross-tenant management experiences](cross-tenant-management-experience.md).-- Learn more about [Azure Lighthouse architecture](architecture.md). |
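To make the reversed delegation in the SaaS-based multitenant scenario more concrete, the sketch below shows the parameter values an ISV might place in its onboarding parameters file. It is illustrative, not prescriptive: the bracketed values are placeholders supplied by the customer, and Reader (acdd72a7-3385-48ef-bd42-f606fba81ae7) is used only as an example of a deliberately minimal built-in role, in line with the least-privilege guidance above.

```json
{
  "managedByTenantId": {
    "value": "<customer's Microsoft Entra tenant ID>"
  },
  "authorizations": {
    "value": [
      {
        "principalId": "<object ID of the customer's user group>",
        "principalIdDisplayName": "Customer access to hosted solution",
        "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7"
      }
    ]
  }
}
```

Because the resources live in the ISV's subscription, the managedByTenantId here is the customer's tenant ID, which is the reverse of the usual service provider arrangement.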
lighthouse | Managed Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-applications.md | - Title: Azure Lighthouse and Azure managed applications -description: Understand how Azure Lighthouse and Azure managed applications can be used together. Previously updated : 07/10/2024----# Azure Lighthouse and Azure managed applications --Both [Azure managed applications](../../azure-resource-manager/managed-applications/overview.md) and [Azure Lighthouse](../overview.md) work by enabling a service provider to access resources that reside in the customer's tenant. It can be helpful to understand the differences in the way that they work, the scenarios that they help to enable, and how they can be used together. --> [!TIP] -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](enterprise.md) can use the same processes and tools. --## Comparing Azure Lighthouse and Azure managed applications --This table illustrates some high-level differences that may impact whether you might choose to use Azure Lighthouse or Azure managed applications. In some cases, you may want to design a solution that uses them together. --|Consideration |Azure Lighthouse |Azure managed applications | -|||| -|Typical user |Service providers or enterprises managing multiple tenants |Independent Software Vendors (ISVs) | -|Scope of cross-tenant access |Subscription(s) or resource group(s) |Resource group (scoped to a single application) | -|Purchasable in Azure Marketplace |No (offers can be published to Azure Marketplace, but customers are billed separately) |Yes | -|IP protection |Yes (IP can remain in the service provider's tenant) |Yes (If the ISV chooses to restrict customer access with deny assignments, the managed resource group is locked to customers) | -|Deny assignments |No |Yes | --### Azure Lighthouse --With [Azure Lighthouse](../overview.md), a service provider can perform a wide range of management tasks directly on a customer's subscription (or resource group). This access is achieved through a [logical projection](architecture.md#logical-projection), allowing service providers to sign in to their own tenant and access resources that belong to the customer's tenant. The customer can determine which subscriptions or resource groups to delegate to the service provider, and the customer maintains full access to those resources. They can also remove the service provider's access at any time. --To use Azure Lighthouse, customers are onboarded either by [deploying ARM templates](../how-to/onboard-customer.md) or through a [Managed Service offer in Azure Marketplace](managed-services-offers.md). You can track your impact on customer engagements by [linking your partner ID](../../cost-management-billing/manage/link-partner-id.md). --Azure Lighthouse is typically used when a service provider will perform management tasks for a customer on an ongoing basis. To learn more about how Azure Lighthouse works at a technical level, see [Azure Lighthouse architecture](architecture.md). --### Azure managed applications --[Azure managed applications](../../azure-resource-manager/managed-applications/overview.md) allow an ISV/publisher to offer cloud solutions that are easy for customers to deploy and use in their own subscriptions. --In a managed application, the resources used by the application are bundled together and deployed to a resource group that can be managed by the ISV/publisher. 
This "managed resource group" is present in the customer's subscription, but identities in the publisher's tenant can have access to it. When publishing an offer in Microsoft Partner Center, the publisher can choose whether they enable or disable management access by the publisher itself. In addition, the publisher can restrict customer access (using deny assignments), or grant the customer full access. --Managed applications support [customized Azure portal experiences](../../azure-resource-manager/managed-applications/concepts-view-definition.md) and [integration with custom providers](../../azure-resource-manager/managed-applications/tutorial-create-managed-app-with-custom-provider.md). These options can be used to deliver a more customized and integrated experience, making it easier for customers to perform some management tasks themselves. --Managed applications can be [published to Azure Marketplace](../../marketplace/azure-app-offer-setup.md), either as a private offer for a specific customer's use, or as public offers that multiple customers can purchase. They can also be delivered to users within your organization by [publishing managed applications to your service catalog](../../azure-resource-manager/managed-applications/publish-service-catalog-app.md). You can deploy both service catalog and Marketplace instances using ARM templates, which can include a commercial marketplace partner's unique identifier to track [customer usage attribution](../../marketplace/azure-partner-customer-usage-attribution.md). --Azure managed applications are typically used for a specific customer need that can be achieved through a turnkey solution that is fully managed by the service provider. --## Using Azure Lighthouse and Azure managed applications together --While Azure Lighthouse and Azure managed applications use different access mechanisms to achieve different goals, there may be scenarios where it makes sense for a service provider to use both of them with the same customer. --For example, a customer might want managed services delivered by a service provider through Azure Lighthouse, so that they have visibility into the partner's actions along with continued control of their delegated subscription. However, the service provider may not want the customer to access certain resources that will be stored in the customer's tenant, or allow any customized actions on those resources. To meet these goals, the service provider can publish a private offer as a managed application. The managed application can include a resource group that is deployed in the customer's tenant, but that can't be accessed directly by the customer. --Customers might also be interested in managed applications from multiple service providers, whether or not they also use managed services via Azure Lighthouse from any of those service providers. Additionally, partners in the Cloud Solution Provider (CSP) program can resell certain managed applications published by other ISVs to customers that they support through Azure Lighthouse. With a wide range of options, service providers can choose the right balance to meet their customers' needs while restricting access to resources when appropriate. --## Next steps --- Learn about [Azure managed applications](../../azure-resource-manager/managed-applications/overview.md).-- Learn how to [onboard a subscription to Azure Lighthouse](../how-to/onboard-customer.md).-- Learn about [ISV scenarios with Azure Lighthouse](isv-scenarios.md). |
lighthouse | Managed Services Offers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-services-offers.md | - Title: Managed Service offers in Azure Marketplace -description: Offer your Azure Lighthouse management services to customers through Managed Services offers in Azure Marketplace. Previously updated : 03/07/2024----# Managed Service offers in Azure Marketplace --This article describes the **Managed Service** offer type in [Azure Marketplace](https://azuremarketplace.microsoft.com). Managed Service offers allow you to offer resource management services to customers through [Azure Lighthouse](../overview.md). You can make these offers available to all potential customers, or only to one or more specific customers. Since you bill customers directly for costs related to these managed services, there are no fees charged by Microsoft. --## Understand Managed Service offers --Managed Service offers streamline the process of onboarding customers to Azure Lighthouse. When a customer purchases an offer in Azure Marketplace, they'll be able to specify which subscriptions and/or resource groups should be onboarded. --For each offer, you define the access that users in your organization will have to work on resources in the customer tenant. This is done through a manifest that specifies the Microsoft Entra users, groups, and service principals that will have access to customer resources, along with [roles that define their level of access](tenants-users-roles.md#role-support-for-azure-lighthouse). --> [!NOTE] -> Managed Service offers may not be available in Azure Government and other national clouds. --## Public and private plans --Each Managed Service offer includes one or more plans. Plans can be either private or public. --If you want to limit your offer to specific customers, you can publish a private plan. When you do so, the plan can only be purchased for the specific subscription IDs that you provide. For more info, see [Private plans](/partner-center/marketplace/private-plans). --> [!NOTE] -> Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. --Public plans let you promote your services to new customers. These are usually more appropriate when you only require limited access to the customer's tenant. Once you've established a relationship with a customer, if they decide to grant your organization additional access, you can do so either by publishing a new private plan for that customer only, or by [onboarding them for further access using Azure Resource Manager templates](../how-to/onboard-customer.md). --If appropriate, you can include both public and private plans in the same offer. --> [!IMPORTANT] -> Once a plan has been published as public, you can't change it to private. To control which customers can accept your offer and delegate resources, use a private plan. With a public plan, you can't restrict availability to certain customers or even to a certain number of customers (although you can stop selling the plan completely if you choose). -> -> After a customer accepts an offer, you can [remove access to a delegation](../how-to/remove-delegation.md) only if you included an **Authorization** with the **Role Definition** set to [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when you published the offer. 
You can also reach out to the customer and ask them to [remove your access](../how-to/view-manage-service-providers.md#remove-service-provider-offers). --## Publish Managed Service offers --To learn how to publish a Managed Service offer, see [Publish a Managed Service offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). --## Next steps --- Learn about Azure Lighthouse [architecture](architecture.md) and [cross-tenant management experiences](cross-tenant-management-experience.md).-- Learn about the [commercial marketplace](/partner-center/marketplace/overview).-- [Publish Managed Service offers](../how-to/publish-managed-services-offers.md) to Azure Marketplace. |
lighthouse | Recommended Security Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/recommended-security-practices.md | - Title: Recommended security practices -description: When using Azure Lighthouse, it's important to consider security and access control. Previously updated : 11/28/2023----# Recommended security practices --When using [Azure Lighthouse](../overview.md), it's important to consider security and access control. Users in your tenant will have direct access to customer subscriptions and resource groups, so you'll want to take steps to maintain your tenant's security. You'll also want to make sure you only allow the access that's needed to effectively manage your customers' resources. This topic provides recommendations to help you do so. --> [!TIP] -> These recommendations also apply to [enterprises managing multiple tenants](enterprise.md) with Azure Lighthouse. --<a name='require-azure-ad-multi-factor-authentication'></a> --## Require Microsoft Entra multifactor authentication --[Microsoft Entra multifactor authentication](/entra/identity/authentication/concept-mfa-howitworks) (also known as two-step verification) helps prevent attackers from gaining access to an account by requiring multiple authentication steps. You should require Microsoft Entra multifactor authentication for all users in your managing tenant, including users who will have access to delegated customer resources. --We recommend that you ask your customers to implement Microsoft Entra multifactor authentication in their tenants as well. --> [!IMPORTANT] -> Conditional access policies that are set on a customer's tenant don't apply to users who access that customer's resources through Azure Lighthouse. Only policies set on the managing tenant apply to those users. We strongly recommend requiring Microsoft Entra multifactor authentication for both the managing tenant and the managed (customer) tenant. --## Assign permissions to groups, using the principle of least privilege --To make management easier, use Microsoft Entra groups for each role required to manage your customers' resources. This lets you add or remove individual users to the group as needed, rather than assigning permissions directly to each user. --> [!IMPORTANT] -> In order to add permissions for a Microsoft Entra group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members](/entra/fundamentals/how-to-manage-groups#create-a-basic-group-and-add-members). --When creating your permission structure, be sure to follow the principle of least privilege so that users only have the permissions needed to complete their job, helping to reduce the chance of inadvertent errors. --For example, you may want to use a structure like this: --|Group name |Type |principalId |Role definition |Role definition ID | -|||||| -|Architects |User group |\<principalId\> |Contributor |b24988ac-6180-42a0-ab88-20f7382dd24c | -|Assessment |User group |\<principalId\> |Reader |acdd72a7-3385-48ef-bd42-f606fba81ae7 | -|VM Specialists |User group |\<principalId\> |VM Contributor |9980e02c-c2be-4d73-94e8-173b1dc7cf3c | -|Automation |Service principal name (SPN) |\<principalId\> |Contributor |b24988ac-6180-42a0-ab88-20f7382dd24c | --Once you've created these groups, you can assign users as needed. Only add the users who truly need to have access. 
Be sure to review group membership regularly and remove any users that are no longer appropriate or necessary to include. --Keep in mind that when you [onboard customers through a public managed service offer](../how-to/publish-managed-services-offers.md), any group (or user or service principal) that you include will have the same permissions for every customer who purchases the plan. To assign different groups to work with each customer, you'll need to publish a separate private plan that is exclusive to each customer, or onboard customers individually by using Azure Resource Manager templates. For example, you could publish a public plan that has very limited access, then work with the customer directly to onboard their resources for additional access by using a customized Azure Resource Manager template that grants only the access needed. --> [!TIP] -> You can also create *eligible authorizations* that let users in your managing tenant temporarily elevate their role. By using eligible authorizations, you can minimize the number of permanent assignments of users to privileged roles, helping to reduce security risks related to privileged access by users in your tenant. This feature has specific licensing requirements. For more information, see [Create eligible authorizations](../how-to/create-eligible-authorizations.md). --## Next steps --- Review the [security baseline information](/security/benchmark/azure/baselines/lighthouse-security-baseline) to understand how guidance from the Microsoft cloud security benchmark applies to Azure Lighthouse.-- [Deploy Microsoft Entra multifactor authentication](/entra/identity/authentication/howto-mfa-getstarted).-- Learn about [cross-tenant management experiences](cross-tenant-management-experience.md). |
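As a hedged illustration, the example group structure in the table above could map to an authorizations array like the following when onboarding a customer with an Azure Resource Manager template. The principal IDs are placeholders; the role definition IDs are the built-in role IDs already shown in the table.

```json
"authorizations": {
  "value": [
    {
      "principalId": "<Architects group object ID>",
      "principalIdDisplayName": "Architects",
      "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
    },
    {
      "principalId": "<Assessment group object ID>",
      "principalIdDisplayName": "Assessment",
      "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7"
    },
    {
      "principalId": "<VM Specialists group object ID>",
      "principalIdDisplayName": "VM Specialists",
      "roleDefinitionId": "9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
    },
    {
      "principalId": "<Automation service principal object ID>",
      "principalIdDisplayName": "Automation",
      "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
    }
  ]
}
```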
lighthouse | Tenants Users Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/tenants-users-roles.md | - Title: Tenants, users, and roles in Azure Lighthouse scenarios -description: Understand how Microsoft Entra tenants, users, and roles can be used in Azure Lighthouse scenarios. Previously updated : 07/10/2024----# Tenants, users, and roles in Azure Lighthouse scenarios --Before onboarding customers for [Azure Lighthouse](../overview.md), it's important to understand how Microsoft Entra tenants, users, and roles work, and how they can be used in Azure Lighthouse scenarios. --A *tenant* is a dedicated and trusted instance of Microsoft Entra ID. Typically, each tenant represents a single organization. Azure Lighthouse enables [logical projection](architecture.md#logical-projection) of resources from one tenant to another tenant. This allows users in the managing tenant (such as one belonging to a service provider) to access delegated resources in a customer's tenant, or lets [enterprises with multiple tenants centralize their management operations](enterprise.md). --In order to achieve this logical projection, a subscription (or one or more resource groups within a subscription) in the customer tenant must be *onboarded* to Azure Lighthouse. This onboarding process can be done either [through Azure Resource Manager templates](../how-to/onboard-customer.md) or by [publishing a public or private offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). --With either onboarding method, you'll need to define *authorizations*. Each authorization includes a **principalId** (a Microsoft Entra user, group, or service principal in the managing tenant) combined with a built-in role that defines the specific permissions that will be granted for the delegated resources. --> [!NOTE] -> Unless explicitly specified, references to a "user" in the Azure Lighthouse documentation can apply to a Microsoft Entra user, group, or service principal in an authorization. --## Best practices for defining users and roles --When creating your authorizations, we recommend the following best practices: --- In most cases, you'll want to assign permissions to a Microsoft Entra user group or service principal, rather than to a series of individual user accounts. Doing so lets you add or remove access for individual users through your tenant's Microsoft Entra ID, without having to [update the delegation](../how-to/update-delegation.md) every time your individual access requirements change.-- Follow the principle of least privilege. To reduce the chance of inadvertent errors, users should have only the permissions needed to perform their specific job. For more information, see [Recommended security practices](../concepts/recommended-security-practices.md).-- Include an authorization with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) so that you can [remove access to the delegation](../how-to/remove-delegation.md) if needed. 
If this role isn't assigned, access to delegated resources can only be removed by a user in the customer's tenant.-- Be sure that any user who needs to [view the My customers page in the Azure portal](../how-to/view-manage-customers.md) has the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role that includes Reader access).--> [!IMPORTANT] -> In order to add permissions for a Microsoft Entra group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members using Microsoft Entra ID](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). --## Role support for Azure Lighthouse --When you define an authorization, each user account must be assigned one of the [Azure built-in roles](../../role-based-access-control/built-in-roles.md). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) aren't supported. --All [built-in roles](../../role-based-access-control/built-in-roles.md) are currently supported with Azure Lighthouse, with the following exceptions: --- The [Owner](../../role-based-access-control/built-in-roles.md#owner) role isn't supported.-- The [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role is supported, but only for the limited purpose of [assigning roles to a managed identity in the customer tenant](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). No other permissions typically granted by this role will apply. If you define a user with this role, you must also specify the role(s) that this user can assign to managed identities.-- Any roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission aren't supported.-- Roles that include any of the following [actions](../../role-based-access-control/role-definitions.md#actions) aren't supported:-- - */write - - */delete - - Microsoft.Authorization/* - - Microsoft.Authorization/*/write - - Microsoft.Authorization/*/delete - - Microsoft.Authorization/roleAssignments/write - - Microsoft.Authorization/roleAssignments/delete - - Microsoft.Authorization/roleDefinitions/write - - Microsoft.Authorization/roleDefinitions/delete - - Microsoft.Authorization/classicAdministrators/write - - Microsoft.Authorization/classicAdministrators/delete - - Microsoft.Authorization/locks/write - - Microsoft.Authorization/locks/delete - - Microsoft.Authorization/denyAssignments/write - - Microsoft.Authorization/denyAssignments/delete --> [!IMPORTANT] -> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md#actions) specified for each role. Even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission aren't supported, there are cases where actions included in a supported role may allow access to data. This generally occurs when data is exposed through access keys, not accessed via the user's identity. For example, the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data. 
--In some cases, a role that was previously supported with Azure Lighthouse may become unavailable. For example, if the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission is added to a role that previously didn't have that permission, that role can no longer be used when onboarding new delegations. Users who had already been assigned that role will still be able to work on previously delegated resources, but they won't be able to perform any tasks that use the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission. --As soon as a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md). Similarly, if a role becomes unavailable, you may still see it in Partner Center for a while, but you won't be able to publish new offers using such roles. --<a name='transferring-delegated-subscriptions-between-azure-ad-tenants'></a> --## Transferring delegated subscriptions between Microsoft Entra tenants --If a subscription is [transferred to another Microsoft Entra tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account), the [registration definition and registration assignment resources](architecture.md#delegation-resources-created-in-the-customer-tenant) created through the [Azure Lighthouse onboarding process](../how-to/onboard-customer.md) are preserved. This means that access granted through Azure Lighthouse to managing tenants remains in effect for that subscription (or for delegated resource groups within that subscription). --The only exception is if the subscription is transferred to a Microsoft Entra tenant to which it had been previously delegated. In this case, the delegation resources for that tenant are removed and the access granted through Azure Lighthouse no longer applies, since the subscription now belongs directly to that tenant (rather than being delegated to it through Azure Lighthouse). However, if that subscription was also delegated to other managing tenants, those other managing tenants will retain the same access to the subscription. --## Next steps --- Learn about [recommended security practices for Azure Lighthouse](recommended-security-practices.md).-- Onboard your customers to Azure Lighthouse, either by [using Azure Resource Manager templates](../how-to/onboard-customer.md) or by [publishing a private or public managed services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). |
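The best practices above can be reflected directly in the authorizations you define. The sketch below is illustrative only: it assigns roles to a group rather than to individual users, and it pairs a working role with the Managed Services Registration assignment delete role so that the managing tenant can later remove its own access. The principal IDs are placeholders, and the role IDs shown are the commonly published built-in role IDs for Contributor and the registration assignment delete role; confirm them against the built-in roles reference before use.

```json
"authorizations": {
  "value": [
    {
      "principalId": "<managing tenant group object ID>",
      "principalIdDisplayName": "Operations team",
      "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
    },
    {
      "principalId": "<managing tenant group object ID>",
      "principalIdDisplayName": "Operations team (delegation removal)",
      "roleDefinitionId": "91c1777a-f3dc-4fae-b103-61d183457e46"
    }
  ]
}
```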
lighthouse | Create Eligible Authorizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/create-eligible-authorizations.md | - Title: Create eligible authorizations -description: When onboarding customers to Azure Lighthouse, you can let users in your managing tenant elevate their role on a just-in-time basis. Previously updated : 06/03/2024-----# Create eligible authorizations --When onboarding customers to Azure Lighthouse, you create authorizations to grant specified Azure built-in roles to users in your managing tenant. You can also create eligible authorizations that use [Microsoft Entra Privileged Identity Management (PIM)](/entra/id-governance/privileged-identity-management/pim-configure) to let users in your managing tenant temporarily elevate their role. This lets you grant additional permissions on a just-in-time basis so that users only have those permissions for a set duration. --Creating eligible authorizations lets you minimize the number of permanent assignments of users to privileged roles, helping to reduce security risks related to privileged access by users in your tenant. --This topic explains how eligible authorizations work and how to create them when [onboarding a customer to Azure Lighthouse](onboard-customer.md). --## License requirements --Creating eligible authorizations requires an [Enterprise Mobility + Security E5 (EMS E5)](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/compare-plans-and-pricing) or [Microsoft Entra ID P2](https://www.microsoft.com/security/business/microsoft-entra-pricing) license. --The EMS E5 or Microsoft Entra ID P2 license must be held by the managing tenant, not the customer tenant. --Any extra costs associated with an eligible role will apply only during the period of time in which the user elevates their access to that role. --For information about licenses for users, see [Microsoft Entra ID Governance licensing fundamentals](/entra/id-governance/licensing-fundamentals). --## How eligible authorizations work --An eligible authorization defines a role assignment that requires the user to activate the role when they need to perform privileged tasks. When they activate the eligible role, they'll have the full access granted by that role for the specified period of time. --Users in the customer tenant can review all role assignments, including those in eligible authorizations, before the onboarding process. --Once a user successfully activates an eligible role, they will have that elevated role on the delegated scope for a pre-configured time period, in addition to their permanent role assignment(s) for that scope. --Administrators in the managing tenant can review all Privileged Identity Management activities by viewing the audit log in the managing tenant. Customers can view these actions in the Azure activity log for the delegated subscription. --## Eligible authorization elements --You can create an eligible authorization when onboarding customers with Azure Resource Manager templates or by publishing a Managed Services offer to Azure Marketplace. Each eligible authorization must include three elements: the user, the role, and the access policy. --### User --For each eligible authorization, you provide the Principal ID for either an individual user or a Microsoft Entra group in the managing tenant. Along with the Principal ID, you must provide a display name of your choice for each authorization. 
--If a group is provided in an eligible authorization, any member of that group will be able to elevate their own individual access to that role, per the access policy. --You can't use eligible authorizations with service principals, since there's currently no way for a service principal account to elevate its access and use an eligible role. You also can't use eligible authorizations with `delegatedRoleDefinitionIds` that a User Access Administrator can [assign to managed identities](deploy-policy-remediation.md). --> [!NOTE] -> For each eligible authorization, be sure to also create a permanent (active) authorization for the same Principal ID with a different role, such as Reader (or another Azure built-in role that includes Reader access). If you don't include a permanent authorization with Reader access, the user won't be able to elevate their role in the Azure portal. --### Role --Each eligible authorization needs to include an [Azure built-in role](../../role-based-access-control/built-in-roles.md) that the user will be eligible to use on a just-in-time basis. --The role can be any Azure built-in role that is [supported for Azure delegated resource management](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse), except for User Access Administrator. --> [!IMPORTANT] -> If you include multiple eligible authorizations that use the same role, each of the eligible authorizations must have the same access policy settings. --### Access policy --The access policy defines the multifactor authentication requirements, the length of time a user will be activated in the role before it expires, and whether approvers are required. --#### Multifactor authentication --Specify whether or not to require [Microsoft Entra multifactor authentication](/entra/identity/authentication/concept-mfa-howitworks) in order for an eligible role to be activated. --#### Maximum duration --Define the total length of time for which the user will have the eligible role. The minimum value is 30 minutes and the maximum is 8 hours. --#### Approvers --The approvers element is optional. If you include it, you can specify up to 10 users or user groups in the managing tenant who can approve or deny requests from a user to activate the eligible role. --You can't use a service principal account as an approver. Also, approvers can't approve their own access; if an approver is also included as the user in an eligible authorization, a different approver will have to grant access in order for them to elevate their role. --If you don't include any approvers, the user will be able to activate the eligible role whenever they choose. --## Create eligible authorizations using Managed Services offers --To onboard your customer to Azure Lighthouse, you can publish Managed Services offers to Azure Marketplace. When [creating your offers in Partner Center](publish-managed-services-offers.md), you can now specify whether the **Access type** for each [Authorization](../../marketplace/create-managed-service-offer-plans.md#authorizations) should be **Active** or **Eligible**. --When you select **Eligible**, the user in your authorization will be able to activate the role according to the access policy you configure. You must set a maximum duration between 30 minutes and 8 hours, and specify whether you'll require multifactor authentication. You can also add up to 10 approvers if you choose to use them, providing a display name and a principal ID for each one.
--Be sure to review the details in the [Eligible authorization elements](#eligible-authorization-elements) section when configuring your eligible authorizations in Partner Center. --## Create eligible authorizations using Azure Resource Manager templates --To onboard your customer to Azure Lighthouse, you use an [Azure Resource Manager template along with a corresponding parameters file](onboard-customer.md#create-an-azure-resource-manager-template) that you modify. The template you choose will depend on whether you're onboarding an entire subscription, a resource group, or multiple resource groups within a subscription. --To include eligible authorizations when you onboard a customer, use one of the templates from the [delegated-resource-management-eligible-authorizations section of our samples repo](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-eligible-authorizations). We provide templates with and without approvers included, so that you can use the one that works best for your scenario. --|To onboard this (with eligible authorizations) |Use this Azure Resource Manager template |And modify this parameter file | -|||| -|Subscription |[subscription.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription.json) |[subscription.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription.parameters.json) | -|Subscription (with approvers) |[subscription-managing-tenant-approvers.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription-managing-tenant-approvers.json) |[subscription-managing-tenant-approvers.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription-managing-tenant-approvers.parameters.json) | -|Resource group |[rg.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg.json) |[rg.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg.parameters.json) | -|Resource group (with approvers) |[rg-managing-tenant-approvers.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg-managing-tenant-approvers.json) |[rg-managing-tenant-approvers.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg-managing-tenant-approvers.parameters.json) | -|Multiple resource groups within a subscription |[multiple-rg.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/multiple-rg.json) |[multiple-rg.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/multiple-rg.parameters.json) | -|Multiple resource groups within a subscription (with approvers) 
|[multiple-rg-managing-tenant-approvers.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/multiple-rg-managing-tenant-approvers.json) |[multiple-rg-managing-tenant-approvers.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/multiple-rg-managing-tenant-approvers.parameters.json) | --The **subscription-managing-tenant-approvers.json** template, which can be used to onboard a subscription with eligible authorizations (including approvers), is shown below. --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-08-01/subscriptionDeploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "mspOfferName": { - "type": "string", - "metadata": { - "description": "Specify a unique name for your offer" - } - }, - "mspOfferDescription": { - "type": "string", - "metadata": { - "description": "Name of the Managed Service Provider offering" - } - }, - "managedByTenantId": { - "type": "string", - "metadata": { - "description": "Specify the tenant id of the Managed Service Provider" - } - }, - "authorizations": { - "type": "array", - "metadata": { - "description": "Specify an array of objects, containing tuples of Azure Active Directory principalId, a Azure roleDefinitionId, and an optional principalIdDisplayName. The roleDefinition specified is granted to the principalId in the provider's Active Directory and the principalIdDisplayName is visible to customers." - } - }, - "eligibleAuthorizations": { - "type": "array", - "metadata": { - "description": "Provide the authorizations that will have just-in-time role assignments on customer environments with support for approvals from the managing tenant" - } - } - }, - "variables": { - "mspRegistrationName": "[guid(parameters('mspOfferName'))]", - "mspAssignmentName": "[guid(parameters('mspOfferName'))]" - }, - "resources": [ - { - "type": "Microsoft.ManagedServices/registrationDefinitions", - "apiVersion": "2020-02-01-preview", - "name": "[variables('mspRegistrationName')]", - "properties": { - "registrationDefinitionName": "[parameters('mspOfferName')]", - "description": "[parameters('mspOfferDescription')]", - "managedByTenantId": "[parameters('managedByTenantId')]", - "authorizations": "[parameters('authorizations')]", - "eligibleAuthorizations": "[parameters('eligibleAuthorizations')]" - } - }, - { - "type": "Microsoft.ManagedServices/registrationAssignments", - "apiVersion": "2020-02-01-preview", - "name": "[variables('mspAssignmentName')]", - "dependsOn": [ - "[resourceId('Microsoft.ManagedServices/registrationDefinitions/', variables('mspRegistrationName'))]" - ], - "properties": { - "registrationDefinitionId": "[resourceId('Microsoft.ManagedServices/registrationDefinitions/', variables('mspRegistrationName'))]" - } - } - ], - "outputs": { - "mspOfferName": { - "type": "string", - "value": "[concat('Managed by', ' ', parameters('mspOfferName'))]" - }, - "authorizations": { - "type": "array", - "value": "[parameters('authorizations')]" - }, - "eligibleAuthorizations": { - "type": "array", - "value": "[parameters('eligibleAuthorizations')]" - } - } - } -``` --### Define eligible authorizations in your parameters file --The [subscription-managing-tenant-approvers.parameters.json sample 
template](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription-managing-tenant-approvers.parameters.json) can be used to define authorizations, including eligible authorizations, when onboarding a subscription. --Each of your eligible authorizations must be defined in the `eligibleAuthorizations` parameter. This example includes one eligible authorization. --This template also includes the `managedbyTenantApprovers` element, which adds a `principalId` who will be required to approve all attempts to activate the eligible roles that are defined in the `eligibleAuthorizations` element. --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentParameters.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "mspOfferName": { - "value": "Relecloud Managed Services" - }, - "mspOfferDescription": { - "value": "Relecloud Managed Services" - }, - "managedByTenantId": { - "value": "<insert the managing tenant id>" - }, - "authorizations": { - "value": [ - { - "principalId": "00000000-0000-0000-0000-000000000000", - "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7", - "principalIdDisplayName": "PIM group" - } - ] - }, - "eligibleAuthorizations":{ - "value": [ - { - "justInTimeAccessPolicy": { - "multiFactorAuthProvider": "Azure", - "maximumActivationDuration": "PT8H", - "managedByTenantApprovers": [ - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "PIM-Approvers" - } - ] - }, - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Tier 2 Support", - "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c" -- } - ] - } - } -} -``` --Each entry within the `eligibleAuthorizations` parameter contains [three elements](#eligible-authorization-elements) that define an eligible authorization: `principalId`, `roleDefinitionId`, and `justInTimeAccessPolicy`. --`principalId` specifies the ID for the Microsoft Entra user or group to which this eligible authorization will apply. --`roleDefinitionId` contains the role definition ID for an [Azure built-in role](../../role-based-access-control/built-in-roles.md) that the user will be eligible to use on a just-in-time basis. If you include multiple eligible authorizations that use the same `roleDefinitionId`, they all must have identical settings for `justInTimeAccessPolicy`. --`justInTimeAccessPolicy` specifies three elements: --- `multiFactorAuthProvider` can either be set to **Azure**, which will require authentication using Microsoft Entra multifactor authentication, or to **None** if no multifactor authentication will be required.-- `maximumActivationDuration` sets the total length of time for which the user will have the eligible role. This value must use the ISO 8601 duration format. The minimum value is PT30M (30 minutes) and the maximum value is PT8H (8 hours). For simplicity, we recommend using values in half-hour increments only, such as PT6H for 6 hours or PT6H30M for 6.5 hours.-- `managedByTenantApprovers` is optional. If you include it, it must contain one or more combinations of a principalId and a principalIdDisplayName who will be required to approve any activation of the eligible role.--For more information about these elements, see the [Eligible authorization elements](#eligible-authorization-elements) section. 
--## Elevation process for users --After you onboard a customer to Azure Lighthouse, any eligible roles you included will be available to the specified user (or to users in any specified groups). --Each user can elevate their access at any time by visiting the **My customers** page in the Azure portal, selecting a delegation, and then selecting **Manage eligible roles**. After that, they can follow the [steps to activate the role](/entra/id-governance/privileged-identity-management/pim-resource-roles-activate-your-roles) in Microsoft Entra Privileged Identity Management. --If approvers have been specified, the user won't have access to the role until approval is granted by a designated [approver from the managing tenant](#approvers). All of the approvers will be notified when approval is requested, and the user won't be able to use the eligible role until approval is granted. Approvers will also be notified when that happens. For more information about the approval process, see [Approve or deny requests for Azure resource roles in Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-resource-roles-approval-workflow). --Once the eligible role has been activated, the user will have that role for the full duration specified in the eligible authorization. After that time period, they will no longer be able to use that role, unless they repeat the elevation process and elevate their access again. --## Next steps --- Learn how to [onboard customers to Azure Lighthouse using ARM templates](onboard-customer.md).-- Learn how to [onboard customers using Managed Services offers](publish-managed-services-offers.md).-- Learn more about [Microsoft Entra Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-configure).-- Learn more about [tenants, users, and roles in Azure Lighthouse](../concepts/tenants-users-roles.md). |
lighthouse | Deploy Policy Remediation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/deploy-policy-remediation.md | - Title: Deploy a policy that can be remediated within a delegated subscription -description: To deploy policies that use a remediation task via Azure Lighthouse, you need to create a managed identity in the customer tenant. Previously updated : 07/16/2024----# Deploy a policy that can be remediated within a delegated subscription --[Azure Lighthouse](../overview.md) allows service providers to create and edit policy definitions within a delegated subscription. To deploy policies that use a [remediation task](../../governance/policy/how-to/remediate-resources.md) (that is, policies with the [deployIfNotExists](../../governance/policy/concepts/effect-deploy-if-not-exists.md) or [modify](../../governance/policy/concepts/effect-modify.md) effect), you must create a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) in the customer tenant. This managed identity can be used by Azure Policy to deploy the template within the policy. This article describes the steps that are required to enable this scenario, both when you onboard the customer for Azure Lighthouse, and when you deploy the policy itself. --> [!TIP] -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. --## Create a user who can assign roles to a managed identity in the customer tenant --When you [onboard a customer to Azure Lighthouse](onboard-customer.md), you define authorizations that grant access to delegated resources in the customer tenant. Each authorization specifies a **principalId** that corresponds to a Microsoft Entra user, group, or service principal in the managing tenant, and a **roleDefinitionId** that corresponds to the [Azure built-in role](../../role-based-access-control/built-in-roles.md) that will be granted. --To allow a **principalId** to assign roles to a managed identity in the customer tenant, you must set its **roleDefinitionId** to **User Access Administrator**. While this role isn't generally supported for Azure Lighthouse, it can be used in this specific scenario. Granting this role to this **principalId** allows it to assign specific built-in roles to managed identities. These roles are defined in the **delegatedRoleDefinitionIds** property, and can include any [supported Azure built-in role](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse) except for User Access Administrator or Owner. --After the customer is onboarded, the **principalId** created in this authorization will be able to assign these built-in roles to managed identities in the customer tenant. It won't have any other permissions normally associated with the User Access Administrator role. --> [!NOTE] -> [Role assignments](../../role-based-access-control/role-assignments-steps.md#step-5-assign-role) across tenants must currently be done through APIs, not in the Azure portal. --This example shows a **principalId** with the User Access Administrator role. This user will be able to assign two built-in roles to managed identities in the customer tenant: Contributor and Log Analytics Contributor.
--```json -{ - "principalId": "3kl47fff-5655-4779-b726-2cf02b05c7c4", - "principalIdDisplayName": "Policy Automation Account", - "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9", - "delegatedRoleDefinitionIds": [ - "b24988ac-6180-42a0-ab88-20f7382dd24c", - "92aaf0da-9dab-42b6-94a3-d43ce8d16293" - ] -} -``` --## Deploy policies that can be remediated --After you create the user with the necessary permissions, that user can deploy policies that use remediation tasks within delegated customer subscriptions. --For example, let's say you wanted to enable diagnostics on Azure Key Vault resources in the customer tenant, as illustrated in this [sample](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/policy-enforce-keyvault-monitoring). A user in the managing tenant with the appropriate permissions (as described above) would deploy an [Azure Resource Manager template](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/policy-enforce-keyvault-monitoring/enforceAzureMonitoredKeyVault.json) to enable this scenario. --Creating the policy assignment to use with a delegated subscription must currently be done through APIs, not in the Azure portal. When doing so, the **apiVersion** must be set to **2019-04-01-preview** or later to include the new **delegatedManagedIdentityResourceId** property. This property allows you to include a managed identity that resides in the customer tenant (in a subscription or resource group that has been onboarded to Azure Lighthouse). --The following example shows a role assignment with a **delegatedManagedIdentityResourceId**. --```json -"type": "Microsoft.Authorization/roleAssignments", - "apiVersion": "2019-04-01-preview", - "name": "[parameters('rbacGuid')]", - "dependsOn": [ - "[variables('policyAssignment')]" - ], - "properties": { - "roleDefinitionId": "[concat(subscription().id, '/providers/Microsoft.Authorization/roleDefinitions/', variables('rbacContributor'))]", - "principalType": "ServicePrincipal", - "delegatedManagedIdentityResourceId": "[concat(subscription().id, '/providers/Microsoft.Authorization/policyAssignments/', variables('policyAssignment'))]", - "principalId": "[toLower(reference(concat('/providers/Microsoft.Authorization/policyAssignments/', variables('policyAssignment')), '2018-05-01', 'Full' ).identity.principalId)]" - } -``` --> [!TIP] -> A [similar sample](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/policy-add-or-replace-tag) is available to demonstrate how to deploy a policy that adds or removes a tag (using the modify effect) to a delegated subscription. --## Next steps --- Learn about [Azure Policy](../../governance/policy/index.yml).-- Learn about [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). |
lighthouse | Manage Hybrid Infrastructure Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/manage-hybrid-infrastructure-arc.md | - Title: Manage hybrid infrastructure at scale with Azure Arc -description: Azure Lighthouse helps you effectively manage customers' machines and Kubernetes clusters outside of Azure. Previously updated : 12/01/2023----# Manage hybrid infrastructure at scale with Azure Arc --[Azure Lighthouse](../overview.md) can help service providers use Azure Arc to manage customers' hybrid environments, with visibility across all managed Microsoft Entra tenants. --[Azure Arc](../../azure-arc/overview.md) helps simplify complex and distributed environments across on-premises, edge and multicloud, enabling deployment of Azure services anywhere and extending Azure management to any infrastructure. --With [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), customers can manage Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. Through Azure Lighthouse, service providers can then manage these connected non-Azure machines along with their customers' Azure resources. --[Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters outside of Azure. When a Kubernetes cluster is connected to Azure Arc, it appears in the Azure portal with an Azure Resource Manager ID and a managed identity. Through Azure Lighthouse, service providers can connect Kubernetes clusters and manage them along with their customer's Azure Kubernetes Service (AKS) clusters and other Azure resources. --> [!TIP] -> Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md). --## Manage hybrid servers at scale with Azure Arc-enabled servers --As a service provider, you can connect and disconnect on-premises Windows Server or Linux machines outside Azure to your customer's subscription. When you [generate a script to connect a server](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm), use the `--user-tenant-id` parameter to specify your managing tenant, with the `--tenant-id` parameter indicating the customer's tenant. --When viewing resources for a delegated subscription in the Azure portal, you'll see these connected machines labeled with **Azure Arc**. You can manage these connected machines using Azure constructs, such as Azure Policy and tagging, just as you would manage the customer's Azure resources. You can also work across customer tenants to manage all connected machines together. --For example, you can [ensure the same set of policies are applied across customers' hybrid machines](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md). You can also use Microsoft Defender for Cloud to monitor compliance across all of your customers' hybrid environments, or [use Azure Monitor to collect data directly](../../azure-arc/servers/learn/tutorial-enable-vm-insights.md) into a Log Analytics workspace. [Virtual machine extensions](../../azure-arc/servers/manage-vm-extensions.md) can be deployed to non-Azure Windows and Linux VMs, simplifying management of your customers' hybrid machines.
--## Manage hybrid Kubernetes clusters at scale with Azure Arc-enabled Kubernetes --You can manage Kubernetes clusters that have been [connected to a customer's subscription with Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md), just as if they were running in Azure. --If your customer has created a service principal account to onboard Kubernetes clusters to Azure Arc, you can access this account so that you can [onboard and manage clusters](../../azure-arc/kubernetes/quickstart-connect-cluster.md). To do so, a user in the managing tenant must have been granted the [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) when the subscription containing the service principal account was [onboarded to Azure Lighthouse](onboard-customer.md). --You can deploy [configurations and Helm charts](../../azure-arc/kubernetes/tutorial-use-gitops-flux2.md) using [GitOps for connected clusters](../../azure-arc/kubernetes/conceptual-gitops-flux2.md). --You can also [monitor connected clusters](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters) with Azure Monitor, use tagging to organize clusters, and [use Azure Policy for Kubernetes](/azure/governance/policy/concepts/policy-for-kubernetes?toc=%2Fazure%2Fazure-arc%2Fkubernetes%2Ftoc.json&bc=%2Fazure%2Fazure-arc%2Fkubernetes%2Fbreadcrumb%2Ftoc.json) to manage and report on compliance state. --## Next steps --- Explore the [Azure Arc Jumpstart](https://azurearcjumpstart.com/).-- Learn about [supported cloud operations for Azure Arc-enabled servers](../../azure-arc/servers/overview.md#supported-cloud-operations).-- Learn about [accessing connected Kubernetes clusters through the Azure portal](../../azure-arc/kubernetes/kubernetes-resource-view.md). |
lighthouse | Manage Sentinel Workspaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/manage-sentinel-workspaces.md | - Title: Manage Microsoft Sentinel workspaces at scale -description: Azure Lighthouse helps you effectively manage Microsoft Sentinel across delegated customer resources. Previously updated : 07/16/2024----# Manage Microsoft Sentinel workspaces at scale --[Azure Lighthouse](../overview.md) allows service providers to perform operations at scale across several Microsoft Entra tenants at once, making management tasks more efficient. --[Microsoft Sentinel](../../sentinel/overview.md) delivers security analytics and threat intelligence, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response. With Azure Lighthouse, you can manage multiple Microsoft Sentinel workspaces across tenants at scale. This enables scenarios such as running queries across multiple workspaces, or creating workbooks to visualize and monitor data from your connected data sources to gain insights. IP such as queries and playbooks remain in your managing tenant, but can be used to perform security management in the customer tenants. --This topic provides an overview of how Azure Lighthouse lets you use Microsoft Sentinel in a scalable way for cross-tenant visibility and managed security services. --> [!TIP] -> Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md). --> [!NOTE] -> You can manage delegated resources that are located in different [regions](../../availability-zones/az-overview.md#regions). However, you can't delegate resources across a national cloud and the Azure public cloud, or across two separate [national clouds](../../active-directory/develop/authentication-national-cloud.md). --## Architectural considerations --For a managed security service provider (MSSP) who wants to build a Security-as-a-Service offering using Microsoft Sentinel, a single security operations center (SOC) may be needed to centrally monitor, manage, and configure multiple Microsoft Sentinel workspaces deployed within individual customer tenants. Similarly, enterprises with multiple Microsoft Entra tenants may want to centrally manage multiple Microsoft Sentinel workspaces deployed across their tenants. --This model of centralized management has the following advantages: --- Ownership of data remains with each managed tenant.-- Supports requirements to store data within geographical boundaries.-- Ensures data isolation, since data for multiple customers isn't stored in the same workspace.-- Prevents data exfiltration from the managed tenants, helping to ensure data compliance.-- Related costs are charged to each managed tenant, rather than to the managing tenant.-- Data from all data sources and data connectors that are integrated with Microsoft Sentinel (such as Microsoft Entra Activity Logs, Office 365 logs, or Microsoft Threat Protection alerts) remains within each customer tenant.-- Reduces network latency.-- Easy to add or remove new subsidiaries or customers.-- Able to use a multi-workspace view when working through Azure Lighthouse.-- To protect your intellectual property, you can use playbooks and workbooks to work across tenants without sharing code directly with customers. 
Only analytic and hunting rules will need to be saved directly in each customer's tenant.--> [!IMPORTANT] -> If workspaces are only created in customer tenants, the **Microsoft.SecurityInsights** and **Microsoft.OperationalInsights** resource providers must also be [registered](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on a subscription in the managing tenant. --An alternate deployment model is to create one Microsoft Sentinel workspace in the managing tenant. In this model, Azure Lighthouse enables log collection from data sources across managed tenants. However, there are some data sources that can't be connected across tenants, such as Microsoft Defender XDR. Because of this limitation, this model isn't suitable for many service provider scenarios. --## Granular Azure role-based access control (Azure RBAC) --Each customer subscription that an MSSP will manage must be [onboarded to Azure Lighthouse](onboard-customer.md). This allows designated users in the managing tenant to access and perform management operations on Microsoft Sentinel workspaces deployed in customer tenants. --When creating your authorizations, you can assign Microsoft Sentinel built-in roles to users, groups, or service principals in your managing tenant. Common roles include: --- [Microsoft Sentinel Reader](../../role-based-access-control/built-in-roles.md#microsoft-sentinel-reader)-- [Microsoft Sentinel Responder](../../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder)-- [Microsoft Sentinel Contributor](../../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor)--You may also want to assign other built-in roles to perform additional functions. For information about specific roles that can be used with Microsoft Sentinel, see [Roles and permissions in Microsoft Sentinel](../../sentinel/roles.md). --After you onboard your customers, designated users can log into your managing tenant and [directly access the customer's Microsoft Sentinel workspace](../../sentinel/multiple-tenants-service-providers.md#how-to-access-microsoft-sentinel-in-managed-tenants) with the roles that were assigned. --## View and manage incidents across workspaces --If you work with Microsoft Sentinel resources for multiple customers, you can view and manage incidents in multiple workspaces across different tenants at once. For more information, see [Work with incidents in many workspaces at once](../../sentinel/multiple-workspace-view.md) and [Extend Microsoft Sentinel across workspaces and tenants](../../sentinel/extend-sentinel-across-workspaces-tenants.md). --> [!NOTE] -> Be sure that the users in your managing tenant have been assigned both read and write permissions on all of the managed workspaces. If a user only has read permissions on some workspaces, warning messages may appear when selecting incidents in those workspaces, and the user won't be able to modify those incidents or any others selected along with them (even if the user has write permissions for the others). --## Configure playbooks for mitigation --[Playbooks](../../sentinel/tutorial-respond-threats-playbook.md) can be used for automatic mitigation when an alert is triggered. These playbooks can be run manually, or they can run automatically when specific alerts are triggered. 
The playbooks can be deployed either in the managing tenant or the customer tenant, with the response procedures configured based on which tenant's users should take action in response to a security threat. --## Create cross-tenant workbooks --[Azure Monitor workbooks in Microsoft Sentinel](../../sentinel/monitor-your-data.md) help you visualize and monitor data from your connected data sources to gain insights. You can use the built-in workbook templates in Microsoft Sentinel, or create custom workbooks for your scenarios. --You can deploy workbooks in your managing tenant and create at-scale dashboards to monitor and query data across customer tenants. For more information, see [Cross-workspace workbooks](../../sentinel/extend-sentinel-across-workspaces-tenants.md#using-cross-workspace-workbooks). --You can also deploy workbooks directly in an individual managed tenant for scenarios specific to that customer. --## Run Log Analytics and hunting queries across Microsoft Sentinel workspaces --Create and save Log Analytics queries for threat detection centrally in the managing tenant, including [hunting queries](../../sentinel/extend-sentinel-across-workspaces-tenants.md#hunt-across-multiple-workspaces). These queries can be run across all of your customers' Microsoft Sentinel workspaces by using the union operator and the [workspace() expression](/azure/azure-monitor/logs/workspace-expression). --For more information, see [Query multiple workspaces](../../sentinel/extend-sentinel-across-workspaces-tenants.md#query-multiple-workspaces). --## Use automation for cross-workspace management --You can use automation to manage multiple Microsoft Sentinel workspaces and configure [hunting queries](../../sentinel/hunting.md), playbooks, and workbooks. For more information, see [Manage multiple workspaces using automation](../../sentinel/extend-sentinel-across-workspaces-tenants.md#manage-multiple-workspaces-using-automation). --## Monitor security of Office 365 environments --Use Azure Lighthouse with Microsoft Sentinel to monitor the security of Office 365 environments across tenants. First, enable out-of-the-box [Office 365 data connectors](../../sentinel/data-connectors/office-365.md) in the managed tenant. Information about user and admin activities in Exchange and SharePoint (including OneDrive) can then be ingested into a Microsoft Sentinel workspace within the managed tenant. This information includes details about actions such as file downloads, access requests sent, changes to group events, and mailbox operations, along with details about the users who performed those actions. [Office 365 DLP alerts](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-office-365-dlp-events-into-azure-sentinel/ba-p/1031820) are also supported as part of the built-in Office 365 connector. --The [Microsoft Defender for Cloud Apps connector](../../sentinel/data-connectors/microsoft-defender-for-cloud-apps.md) lets you stream alerts and Cloud Discovery logs into Microsoft Sentinel. This connector offers visibility into cloud apps, provides sophisticated analytics to identify and combat cyberthreats, and helps you control how data travels. Activity logs for Defender for Cloud Apps can be [consumed using the Common Event Format (CEF)](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-box-com-activity-events-via-microsoft-cloud-app-security/ba-p/1072849).
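Whichever connectors supply the data, the cross-workspace query pattern described earlier can also be run from the command line. Here's a minimal Azure CLI sketch; the workspace IDs are placeholders, the SecurityAlert table is only an example, and the log-analytics CLI extension is assumed:

```azurecli
# Sketch with placeholder values: run a union query across two customers'
# Microsoft Sentinel workspaces from the managing tenant.
az monitor log-analytics query \
  --workspace "<workspace-guid-customer-1>" \
  --analytics-query "union SecurityAlert, workspace('<workspace-guid-customer-2>').SecurityAlert | summarize AlertCount = count() by TenantId"
```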
--After setting up Office 365 data connectors, you can use cross-tenant Microsoft Sentinel capabilities such as viewing and analyzing the data in workbooks, using queries to create custom alerts, and configuring playbooks to respond to threats. --## Protect intellectual property --When working with customers, you might want to protect intellectual property developed in Microsoft Sentinel, such as Microsoft Sentinel analytics rules, hunting queries, playbooks, and workbooks. There are different methods you can use to ensure that customers don't have complete access to the code used in these resources. --For more information, see [Protecting MSSP intellectual property in Microsoft Sentinel](../../sentinel/mssp-protect-intellectual-property.md). --## Next steps --- Learn about [Microsoft Sentinel](../../sentinel/overview.md).-- Review the [Microsoft Sentinel pricing page](https://azure.microsoft.com/pricing/details/azure-sentinel/).-- Explore [Microsoft Sentinel All-in-One](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/Sentinel-All-In-One), a project to speed up deployment and initial configuration tasks of a Microsoft Sentinel environment.-- Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md). |
lighthouse | Migration At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/migration-at-scale.md | - Title: Manage Azure Migrate projects at scale -description: Azure Lighthouse helps you effectively use Azure Migrate across delegated customer resources. Previously updated : 07/16/2024----# Manage Azure Migrate projects at scale with Azure Lighthouse --This topic provides an overview of how [Azure Lighthouse](../overview.md) can help you use [Azure Migrate](../../migrate/migrate-services-overview.md) in a scalable way across multiple Microsoft Entra tenants. --Azure Lighthouse allows service providers to perform operations at scale across several tenants at once, making management tasks more efficient. --Azure Migrate provides a centralized hub to assess on-premises servers, infrastructure, applications, and data, and to migrate them to Azure. --Azure Lighthouse integration with Azure Migrate lets service providers discover, assess, and migrate workloads for different customers at scale, rather than accessing each customer subscription individually. Service providers can have a single view of all of the Azure Migrate projects they manage across multiple customer tenants. Their customers have visibility into service provider actions, and they maintain control of their own environments. --> [!TIP] -> Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md). --Depending on your scenario, you can create the Azure Migrate project in the customer tenant or in your managing tenant. This article describes each model so you can determine which one best fits your customers' migration needs. --> [!NOTE] -> With Azure Lighthouse, partners can perform discovery, assessment, and migration for on-premises VMware VMs, Hyper-V VMs, physical servers, and AWS/GCP instances. For [VMware VM migration](../../migrate/server-migrate-overview.md), only the [agent-based migration method](../../migrate/tutorial-migrate-vmware-agent.md) can be used for a migration project in a delegated customer subscription. Migration using agentless replication is not currently supported through delegated access to the customer's scope. --## Create an Azure Migrate project in the customer tenant --One option when using Azure Lighthouse is to create the Azure Migrate project in the customer tenant. Users in the managing tenant can then select the customer subscription when creating a migration project. From the managing tenant, the service provider can perform the necessary migration operations. Examples of these operations are deploying the Azure Migrate appliance to discover the workloads, assessing workloads by grouping VMs and calculating cloud-related costs, reviewing VM readiness, and performing the actual migration. --In this scenario, no resources are created or stored in the managing tenant, even though the discovery and assessment steps are initiated and executed from that tenant. All of the resources, such as migration projects, assessment reports for on-premises workloads, and migrated resources at the target destination, are deployed in the delegated customer subscription. The service provider can access all customer projects from their own tenant and portal experience. --This approach minimizes context switching for service providers working across multiple customers, and lets customers keep all of their resources in their own tenants.
--A high-level workflow for this model is: --1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role. Before deploying the template, be sure to modify the parameter file to reflect your environment. -1. The designated user signs into the managing tenant in the Azure portal, then goes to Azure Migrate. This user [creates an Azure Migrate project](../../migrate/create-manage-projects.md), selecting the appropriate delegated customer subscription. -1. The user then [performs steps for discovery and assessment](../../migrate/tutorial-discover-vmware.md). -- For VMware VMs, before you configure the appliance, you can limit discovery to vCenter Server datacenters, clusters, a folder of clusters, hosts, a folder of hosts, or individual VMs. To set the scope, assign permissions on the account that the appliance uses to access the vCenter Server. This is useful if multiple customers' VMs are hosted on the hypervisor. You can't limit the discovery scope of Hyper-V. -- > [!NOTE] - > For migration of VMware virtual machines, only the agent-based method is currently supported when working in a delegated customer subscription. --1. When the target customer subscription is ready, proceed with the migration through the access granted by Azure Lighthouse. The migration project containing assessment results and migrated resources are created in the customer tenant under the target subscription. --> [!TIP] -> Prior to migration, a landing zone must be deployed to provision the foundation infrastructure resources and to prepare the subscription to which virtual machines will be migrated. The Owner built-in role may be required to access or create some resources in this landing zone. Because this role is not currently supported in Azure Lighthouse, the customer may need to provide [guest access](/entra/external-id/what-is-b2b) to the service provider, or delegate admin access via the [Cloud Solution Provider (CSP) subscription model](/partner-center/customers-revoke-admin-privileges). -> -> For more information about multi-tenant landing zones, see [Considerations and recommendations for multi-tenant Azure landing zone scenarios](/azure/cloud-adoption-framework/ready/landing-zone/design-area/multi-tenant/considerations-recommendations) and the [Multi-tenant Landing-Zones demo solution](https://github.com/Azure/Multi-tenant-Landing-Zones) on GitHub. --## Create an Azure Migrate project in the managing tenant --In this scenario, the migration project and all of the relevant resources reside in the managing tenant. Customers don't have direct access to the migration project, although assessments can be shared with customers if desired. As with the previous scenario, migration-related operations such as discovery and assessment are performed by users in the managing tenant, and the migration destination for each customer is the target subscription in their tenant. --This approach enables service providers to begin migration discovery and assessment projects quickly, abstracting those initial steps from customer subscriptions and tenants. --A high-level workflow for this model is: --1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). 
The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role. Before deploying the template, be sure to modify the parameter file to reflect your environment. -1. The designated user signs into the managing tenant in the Azure portal, then goes to Azure Migrate. This user [creates an Azure Migrate project](../../migrate/create-manage-projects.md) in a subscription belonging to the managing tenant. -1. The user then [performs steps for discovery and assessment](../../migrate/tutorial-discover-vmware.md). The on-premises VMs are discovered and assessed within the migration project created in the managing tenant, then migrated from there. -- If you manage multiple customers in the same Hyper-V host, you can discover all workloads at once. You can select customer-specific VMs in the same group, and then create an assessment. Migration is performed by selecting the appropriate customer's subscription as the target destination. There's no need to limit the discovery scope, and you can maintain a full overview of all customer workloads in one migration project. --1. When ready, proceed with the migration by selecting the delegated customer subscription as the target destination for replicating and migrating the workloads. The new resources are created in the customer subscription, while assessment data and resources pertaining to the migration project remain in the managing tenant. --## Partner recognition for customer migrations --As a member of the [Microsoft Cloud Partner Program](https://partner.microsoft.com), you can link your partner ID with the credentials used to manage delegated customer resources. This link allows Microsoft to attribute influence and Azure consumed revenue to your organization based on the tasks you perform for customers, including migration projects. --For more information, see [Link a partner ID](../../cost-management-billing/manage/link-partner-id.md). --## Next steps --- Learn more about [Azure Migrate](../../migrate/migrate-services-overview.md).-- Learn about other [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md) supported by Azure Lighthouse. |
lighthouse | Monitor At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/monitor-at-scale.md | - Title: Monitor delegated resources at scale -description: Azure Lighthouse helps you use Azure Monitor Logs in a scalable way across customer tenants. Previously updated : 05/23/2023-----# Monitor delegated resources at scale --As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../overview.md). Azure Lighthouse allows service providers to perform operations at scale across several tenants at once, making management tasks more efficient. --This topic shows you how to use [Azure Monitor Logs](/azure/azure-monitor/logs/data-platform-logs) in a scalable way across the customer tenants you're managing. Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md). --> [!NOTE] -> Be sure that users in your managing tenants have been granted the [necessary roles for managing Log Analytics workspaces](/azure/azure-monitor/logs/manage-access#azure-rbac) on your delegated customer subscriptions. --## Create Log Analytics workspaces --In order to collect data, you'll need to create Log Analytics workspaces. These Log Analytics workspaces are unique environments for data collected by Azure Monitor. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace. --We recommend creating these workspaces directly in the customer tenants. This way their data remains in their tenants rather than being exported into yours. Creating the workspaces in the customer tenants allows centralized monitoring of any resources or services supported by Log Analytics, giving you more flexibility on what types of data you monitor. Workspaces created in customer tenants are required in order to collect information from [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings). --> [!TIP] -> Any automation account used to access data from a Log Analytics workspace must be created in the same tenant as the workspace. --You can create a Log Analytics workspace by using the [Azure portal](/azure/azure-monitor/logs/quick-create-workspace), by using [Azure Resource Manager templates](/azure/azure-monitor/logs/resource-manager-workspace), or by using [Azure PowerShell](/azure/azure-monitor/logs/powershell-workspace-configuration). --> [!IMPORTANT] -> If all workspaces are created in customer tenants, the Microsoft.Insights resource providers must also be [registered](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on a subscription in the managing tenant. 
If your managing tenant doesn't have an existing Azure subscription, you can register the resource provider manually by using the following PowerShell commands: -> -> ```powershell -> $ManagingTenantId = "your-managing-Azure-AD-tenant-id" -> -> # Authenticate as a user with admin rights on the managing tenant -> Connect-AzAccount -Tenant $ManagingTenantId -> -> # Register the Microsoft.Insights resource providers Application Ids -> New-AzADServicePrincipal -ApplicationId 1215fb39-1d15-4c05-b2e3-d519ac3feab4 -Role Contributor -> New-AzADServicePrincipal -ApplicationId 6da94f3c-0d67-4092-a408-bb5d1cb08d2d -Role Contributor -> New-AzADServicePrincipal -ApplicationId ca7f3f0b-7d91-482c-8e09-c5d840d0eac5 -Role Contributor -> ``` --## Deploy policies that log data --Once you've created your Log Analytics workspaces, you can deploy [Azure Policy](../../governance/policy/overview.md) across your customer hierarchies so that diagnostic data is sent to the appropriate workspace in each tenant. The exact policies you deploy may vary, depending on the resource types that you want to monitor. --To learn more about creating policies, see [Tutorial: Create and manage policies to enforce compliance](../../governance/policy/tutorials/create-and-manage.md). This [community tool](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/tools/azure-diagnostics-policy-generator) provides a script to help you create policies to monitor the specific resource types that you choose. --When you've determined which policies to deploy, you can [deploy them to your delegated subscriptions at scale](policy-at-scale.md). --## Analyze the gathered data --After you've deployed your policies, data will be logged in the Log Analytics workspaces you've created in each customer tenant. To gain insights across all managed customers, you can use tools such as [Azure Monitor Workbooks](/azure/azure-monitor/visualize/workbooks-overview) to gather and analyze information from multiple data sources. --## Query data across customer workspaces --You can run [log queries](/azure/azure-monitor/logs/log-query-overview) to retrieve data across Log Analytics workspaces in different customer tenants by creating a union that includes multiple workspaces. By including the TenantID column, you can see which results belong to which tenants. --The following example query creates a union on the AzureDiagnostics table across workspaces in two separate customer tenants. The results show the Category, ResourceGroup, and TenantID columns. --``` Kusto -union AzureDiagnostics, -workspace("WS-customer-tenant-1").AzureDiagnostics, -workspace("WS-customer-tenant-2").AzureDiagnostics -| project Category, ResourceGroup, TenantId -``` --For more examples of queries across multiple Log Analytics workspaces, see [Create a log query across multiple workspaces and apps in Azure Monitor](/azure/azure-monitor/logs/cross-workspace-query). --> [!IMPORTANT] -> If you use an automation account used to query data from a Log Analytics workspace, that automation account must be created in the same tenant as the workspace. --## View alerts across customers --You can view [alerts](/azure/azure-monitor/alerts/alerts-overview) for delegated subscriptions in the customer tenants that you manage. --From your managing tenant, you can [create, view, and manage activity log alerts](/azure/azure-monitor/alerts/alerts-activity-log) in the Azure portal or through APIs and management tools. 
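As a rough illustration, an activity log alert could be created in a delegated customer subscription from the managing tenant with Azure CLI; the names, resource group, and the Service Health condition below are placeholders:

```azurecli
# Sketch with placeholder values: create an activity log alert in a delegated
# customer subscription, scoped to the whole subscription.
az account set --subscription "<delegated-customer-subscription-id>"

az monitor activity-log alert create \
  --name "service-health-alert" \
  --resource-group "<customer-resource-group>" \
  --scope "/subscriptions/<delegated-customer-subscription-id>" \
  --condition category=ServiceHealth

# Attach an existing action group so the right team is notified.
az monitor activity-log alert action-group add \
  --name "service-health-alert" \
  --resource-group "<customer-resource-group>" \
  --action-group "<action-group-resource-id>"
```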
--To refresh alerts automatically across multiple customers, use an [Azure Resource Graph](../../governance/resource-graph/overview.md) query to filter for alerts. You can pin the query to your dashboard and select all of the appropriate customers and subscriptions. For example, the query below will display severity 0 and 1 alerts, refreshing every 60 minutes. --```kusto -alertsmanagementresources -| where type == "microsoft.alertsmanagement/alerts" -| where properties.essentials.severity =~ "Sev0" or properties.essentials.severity =~ "Sev1" -| where properties.essentials.monitorCondition == "Fired" -| where properties.essentials.startDateTime > ago(60m) -| project StartTime=properties.essentials.startDateTime,name,Description=properties.essentials.description, Severity=properties.essentials.severity, subscriptionId -| sort by tostring(StartTime) -``` --## Next steps --- Try out the [Activity Logs by Domain](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/workbook-activitylogs-by-domain) workbook on GitHub.-- Explore this [MVP-built sample workbook](https://github.com/scautomation/Azure-Automation-Update-Management-Workbooks), which tracks patch compliance reporting by [querying Update Management logs](../../automation/update-management/query-logs.md) across multiple Log Analytics workspaces.-- Learn about other [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md). |
lighthouse | Monitor Delegation Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/monitor-delegation-changes.md | - Title: Monitor delegation changes in your managing tenant -description: Learn how to monitor all Azure Lighthouse delegation activity to your managing tenant. Previously updated : 05/23/2023-----# Monitor delegation changes in your managing tenant --As a service provider, you may want to be aware when customer subscriptions or resource groups are delegated to your tenant through [Azure Lighthouse](../overview.md), or when previously delegated resources are removed. --In the managing tenant, the [Azure activity log](/azure/azure-monitor/essentials/activity-log) tracks delegation activity at the tenant level. This logged activity includes any added or removed delegations from customer tenants. --This topic explains the permissions needed to monitor delegation activity to your tenant across all of your customers. It also includes a sample script that shows one method for querying and reporting on this data. --> [!IMPORTANT] -> All of these steps must be performed in your managing tenant, rather than in any customer tenants. -> -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. --## Enable access to tenant-level data --To access tenant-level Activity Log data, an account must be assigned the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) Azure built-in role at root scope (/). This assignment must be performed by a user who has the Global Administrator role with additional elevated access. --### Elevate access for a Global Administrator account --To assign a role at root scope (/), you will need to have the Global Administrator role with elevated access. This elevated access should be added only when you need to make the role assignment, then removed when you are done. --For detailed instructions on adding and removing elevation, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). --After you elevate your access, your account will have the User Access Administrator role in Azure at root scope. This role assignment allows you to view all resources and assign access in any subscription or management group in the directory, as well as to make role assignments at root scope. --### Assign the Monitoring Reader role at root scope --Once you have elevated your access, you can assign the appropriate permissions to an account so that it can query tenant-level activity log data. This account will need to have the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) Azure built-in role assigned at the root scope of your managing tenant. --> [!IMPORTANT] -> Granting a role assignment at root scope means that the same permissions will apply to every resource in the tenant. Because this is a broad level of access, we recommend [assigning this role to a service principal account and using that account to query data](#use-a-service-principal-account-to-query-the-activity-log). -> -> You can also assign the Monitoring Reader role at root scope to individual users or to user groups so that they can [view delegation information directly in the Azure portal](#view-delegation-changes-in-the-azure-portal). 
If you do so, be aware that this is a broad level of access which should be limited to the fewest number of users possible. --Use one of the following methods to make the root scope assignment. --#### PowerShell --```azurepowershell-interactive -# Log in first with Connect-AzAccount if you're not using Cloud Shell --# To assign the role to a group or service principal, use -ObjectId <objectId> instead of -SignInName -New-AzRoleAssignment -SignInName <yourLoginName> -Scope "/" -RoleDefinitionName "Monitoring Reader" -``` --#### Azure CLI --```azurecli-interactive -# Log in first with az login if you're not using Cloud Shell --az role assignment create --assignee 00000000-0000-0000-0000-000000000000 --role "Monitoring Reader" --scope "/" -``` --### Remove elevated access for the Global Administrator account --After you've assigned the Monitoring Reader role at root scope to the desired account, be sure to [remove the elevated access](../../role-based-access-control/elevate-access-global-admin.md) for the Global Administrator account, as this level of access will no longer be needed. --## View delegation changes in the Azure portal --Users who have been assigned the Monitoring Reader role at root scope can view delegation changes directly in the Azure portal. --1. Navigate to the **My customers** page, then select **Activity log** from the left-hand navigation menu. -1. Ensure that **Directory Activity** is selected in the filter near the top of the screen. --A list of delegation changes will appear. You can select **Edit columns** to show or hide the **Status**, **Event category**, **Time**, **Time stamp**, **Subscription**, **Event initiated by**, **Resource group**, **Resource type**, and **Resource** values. ---## Use a service principal account to query the activity log --Because the Monitoring Reader role at root scope is such a broad level of access, you may wish to assign the role to a service principal account and use that account to query data using the script below. --> [!IMPORTANT] -> Currently, tenants with a large amount of delegation activity may run into errors when querying this data. --When using a service principal account to query the activity log, we recommend the following best practices: --- [Create a new service principal account](../../active-directory/develop/howto-create-service-principal-portal.md) to be used only for this function, rather than assigning this role to an existing service principal used for other automation.-- Be sure that this service principal does not have access to any delegated customer resources.-- [Use a certificate to authenticate](../../active-directory/develop/howto-create-service-principal-portal.md#set-up-authentication) and [store it securely in Azure Key Vault](/azure/key-vault/general/security-features).-- Limit the users who have access to act on behalf of the service principal.--Once you've created a new service principal account with Monitoring Reader access to the root scope of your managing tenant, you can use it to query and report on delegation activity in your tenant. --[This Azure PowerShell script](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/tools/monitor-delegation-changes) can be used to query the past day of activity and report any added or removed delegations (or attempts that were not successful).
It queries the [Tenant Activity Log](/rest/api/monitor/TenantActivityLogs/List) data, then constructs the following values to report on delegations that are added or removed: --- **DelegatedResourceId**: The ID of the delegated subscription or resource group-- **CustomerTenantId**: The customer tenant ID-- **CustomerSubscriptionId**: The subscription ID that was delegated or that contains the resource group that was delegated-- **CustomerDelegationStatus**: The status change for the delegated resource (succeeded or failed)-- **EventTimeStamp**: The date and time at which the delegation change was logged--When querying this data, keep in mind: --- If multiple resource groups are delegated in a single deployment, separate entries will be returned for each resource group.-- Changes made to a previous delegation (such as updating the permission structure) will be logged as an added delegation.-- As noted above, an account must have the Monitoring Reader Azure built-in role at root scope (/) in order to access this tenant-level data.-- You can use this data in your own workflows and reporting. For example, you can use the [HTTP Data Collector API (preview)](/azure/azure-monitor/logs/data-collector-api) to log data to Azure Monitor from a REST API client, then use [action groups](/azure/azure-monitor/alerts/action-groups) to create notifications or alerts.--```azurepowershell-interactive -# Log in first with Connect-AzAccount if you're not using Cloud Shell --# Azure Lighthouse: Query Tenant Activity Log for registered/unregistered delegations for the last 1 day --$GetDate = (Get-Date).AddDays((-1)) --$dateFormatForQuery = $GetDate.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ") --# Getting Azure context for the API call -$currentContext = Get-AzContext --# Fetching new token -$azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile -$profileClient = [Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient]::new($azureRmProfile) -$token = $profileClient.AcquireAccessToken($currentContext.Tenant.Id) --$listOperations = @{ - Uri = "https://management.azure.com/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01&`$filter=eventTimestamp ge '$($dateFormatForQuery)'" - Headers = @{ - Authorization = "Bearer $($token.AccessToken)" - 'Content-Type' = 'application/json' - } - Method = 'GET' -} -$list = Invoke-RestMethod @listOperations --# First link can be empty - and point to a next link (or potentially multiple pages) -# While you get more data - continue fetching and add result -while($list.nextLink){ - $list2 = Invoke-RestMethod $list.nextLink -Headers $listOperations.Headers -Method Get - $data+=$list2.value; - $list.nextLink = $list2.nextlink; -} --$showOperations = $data; --if ($showOperations.operationName.value -eq "Microsoft.Resources/tenants/register/action") { - $registerOutputs = $showOperations | Where-Object -FilterScript { $_.eventName.value -eq "EndRequest" -and $_.resourceType.value -and $_.operationName.value -eq "Microsoft.Resources/tenants/register/action" } - foreach ($registerOutput in $registerOutputs) { - $eventDescription = $registerOutput.description | ConvertFrom-Json; - $registerOutputdata = [pscustomobject]@{ - Event = "An Azure customer has registered delegated resources to your Azure tenant"; - DelegatedResourceId = $eventDescription.delegationResourceId; - CustomerTenantId = $eventDescription.subscriptionTenantId; - CustomerSubscriptionId = 
$eventDescription.subscriptionId; - CustomerDelegationStatus = $registerOutput.status.value; - EventTimeStamp = $registerOutput.eventTimestamp; - } - $registerOutputdata | Format-List - } -} -if ($showOperations.operationName.value -eq "Microsoft.Resources/tenants/unregister/action") { - $unregisterOutputs = $showOperations | Where-Object -FilterScript { $_.eventName.value -eq "EndRequest" -and $_.resourceType.value -and $_.operationName.value -eq "Microsoft.Resources/tenants/unregister/action" } - foreach ($unregisterOutput in $unregisterOutputs) { - $eventDescription = $unregisterOutput.description | ConvertFrom-Json; - $unregisterOutputdata = [pscustomobject]@{ - Event = "An Azure customer has unregistered delegated resources from your Azure tenant"; - DelegatedResourceId = $eventDescription.delegationResourceId; - CustomerTenantId = $eventDescription.subscriptionTenantId; - CustomerSubscriptionId = $eventDescription.subscriptionId; - CustomerDelegationStatus = $unregisterOutput.status.value; - EventTimeStamp = $unregisterOutput.eventTimestamp; - } - $unregisterOutputdata | Format-List - } -} -else { - Write-Output "No new delegation events for tenant: $($currentContext.Tenant.TenantId)" -} -``` --## Next steps --- Learn how to [onboard customers to Azure Lighthouse](onboard-customer.md).-- Learn about [Azure Monitor](/azure/azure-monitor/) and the [Azure activity log](/azure/azure-monitor/essentials/activity-log).-- Review the [Activity Logs by Domain](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/workbook-activitylogs-by-domain) sample workbook to learn how to display Azure Activity logs across subscriptions with an option to filter them by domain name. |
lighthouse | Onboard Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/onboard-customer.md | - Title: Onboard a customer to Azure Lighthouse -description: Learn how to onboard a customer to Azure Lighthouse, allowing their resources to be accessed and managed by users in your tenant. Previously updated : 06/03/2024-----# Onboard a customer to Azure Lighthouse --This article explains how you, as a service provider, can onboard a customer to Azure Lighthouse. When you do so, delegated resources (subscriptions and/or resource groups) in the customer's Microsoft Entra tenant can be managed by users in your tenant through [Azure delegated resource management](../concepts/architecture.md). --> [!TIP] -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same process to set up Azure Lighthouse and consolidate their management experience. --You can repeat the onboarding process for multiple customers. When a user with the appropriate permissions signs in to your managing tenant, that user is authorized to perform management operations across customer tenancy scopes, without having to sign in to each individual customer tenant. --> [!NOTE] -> Customers can alternately be onboarded to Azure Lighthouse when they purchase a Managed Service offer (public or private) that you [publish to Azure Marketplace](publish-managed-services-offers.md). You can also use the onboarding process described here in conjunction with offers published to Azure Marketplace. --The onboarding process requires actions to be taken from within both the service provider's tenant and from the customer's tenant. All of these steps are described in this article. --## Gather tenant and subscription details --To onboard a customer's tenant, it must have an active Azure subscription. When you [create a template manually](#create-your-template-manually), you'll need to know the following: --- The tenant ID of the service provider's tenant (where you will be managing the customer's resources). -- The tenant ID of the customer's tenant (which will have resources managed by the service provider).-- The subscription IDs for each specific subscription in the customer's tenant that will be managed by the service provider (or that contains the resource group(s) that will be managed by the service provider).--If you don't know the ID for a tenant, you can [retrieve it by using the Azure portal, Azure PowerShell, or Azure CLI](/entra/fundamentals/how-to-find-tenant). --If you [create your template in the Azure portal](#create-your-template-in-the-azure-portal), your tenant ID is provided automatically. You don't need to know the customer's tenant or subscription details in order to create your template in the Azure portal. However, if you plan to onboard one or more resource groups in the customer's tenant (rather than the entire subscription), you'll need to know the names of each resource group. --## Define roles and permissions --As a service provider, you may want to perform multiple tasks for a single customer, requiring different access for different scopes. You can define as many authorizations as you need in order to assign the appropriate [Azure built-in roles](../../role-based-access-control/built-in-roles.md). Each authorization includes a `principalId` which refers to a Microsoft Entra user, group, or service principal in the managing tenant. 
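For reference, the `principalId` and `roleDefinitionId` values used in an authorization can be looked up from the managing tenant with Azure CLI; this is a minimal sketch, and the group and role names are placeholders:

```azurecli
# Sketch with placeholder values, run from the managing tenant.

# Object ID of a Microsoft Entra group to use as the principalId.
az ad group show --group "<group-display-name>" --query id --output tsv

# GUID of a built-in role to use as the roleDefinitionId.
az role definition list --name "Contributor" --query "[].name" --output tsv
```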
--> [!NOTE] -> Unless explicitly specified, references to a "user" in the Azure Lighthouse documentation can apply to a Microsoft Entra user, group, or service principal in an authorization. --To define authorizations in your template, you must include the ID values for each user, user group, or service principal in the managing tenant to which you want to grant access. You'll also need to include the role definition ID for each [built-in role](../../role-based-access-control/built-in-roles.md) you want to assign. When you [create your template in the Azure portal](#create-your-template-in-the-azure-portal), you can select the user account and role, and these ID values will be added automatically. If you are [creating a template manually](#create-your-template-manually), you can [retrieve user IDs by using the Azure portal, Azure PowerShell, or Azure CLI](../../role-based-access-control/role-assignments-template.md#get-object-ids) from within the managing tenant. --> [!TIP] -> We recommend assigning the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when onboarding a customer, so that users in your tenant can [remove access to the delegation](remove-delegation.md) later if needed. If this role is not assigned, delegated resources can only be removed by a user in the customer's tenant. --Whenever possible, we recommend using Microsoft Entra user groups for each assignment, rather than individual users. This gives you the flexibility to add or remove individual users from the group that has access, so that you don't have to repeat the onboarding process to make user changes. You can also assign roles to a service principal, which can be useful for automation scenarios. --> [!IMPORTANT] -> In order to add permissions for a Microsoft Entra group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Learn about groups and access rights in Microsoft Entra ID](/entra/fundamentals/concept-learn-about-groups). --When defining your authorizations, be sure to follow the principle of least privilege so that users only have the permissions needed to complete their job. For information about supported roles and best practices, see [Tenants, users, and roles in Azure Lighthouse scenarios](../concepts/tenants-users-roles.md). --> [!TIP] -> You can also create *eligible authorizations* that let users in your managing tenant temporarily elevate their role. This feature has specific licensing requirements. For more information, see [Create eligible authorizations](create-eligible-authorizations.md). --To track your impact across customer engagements and receive recognition, associate your Microsoft Cloud Partner Program ID with at least one user account that has access to each of your onboarded subscriptions. You'll need to perform this association in your service provider tenant. We recommend creating a service principal account in your tenant that is associated with your partner ID, then including that service principal every time you onboard a customer. For more info, see [Link a partner ID](../../cost-management-billing/manage/link-partner-id.md). --## Create an Azure Resource Manager template --To onboard your customer, you'll need to create an [Azure Resource Manager](../../azure-resource-manager/index.yml) template for your offer with the following information.
The `mspOfferName` and `mspOfferDescription` values will be visible to the customer in the [Service providers page](view-manage-service-providers.md) of the Azure portal once the template is deployed in the customer's tenant. --|Field |Definition | -||| -|`mspOfferName` |A name describing this definition. This value is displayed to the customer as the title of the offer and must be a unique value. | -|`mspOfferDescription` |A brief description of your offer (for example, "Contoso VM management offer"). This field is optional, but recommended so that customers have a clear understanding of your offer. | -|`managedByTenantId` |Your tenant ID. | -|`authorizations` |The `principalId` values for the users/groups/SPNs from your tenant, each with a `principalIdDisplayName` to help your customer understand the purpose of the authorization, and mapped to a built-in `roleDefinitionId` value to specify the level of access. | --You can create this template in the Azure portal, or by manually modifying the templates provided in our [samples repo](https://github.com/Azure/Azure-Lighthouse-samples/). --> [!IMPORTANT] -> The process described here requires a separate deployment for each subscription being onboarded, even if you are onboarding subscriptions in the same customer tenant. Separate deployments are also required if you are onboarding multiple resource groups within different subscriptions in the same customer tenant. However, onboarding multiple resource groups within a single subscription can be done in one deployment. -> -> Separate deployments are also required for multiple offers being applied to the same subscription (or resource groups within a subscription). Each offer applied must use a different `mspOfferName`. --### Create your template in the Azure portal --To create your template in the Azure portal, go to **My customers** and then select **Create ARM Template** from the overview page. --On the **Create ARM Template offer** Page, provide your **Name** and an optional **Description**. These values will be used for the `mspOfferName` and `mspOfferDescription` in your template, and they may be visible to your customer. The `managedByTenantId` value will be provided automatically, based on the Microsoft Entra tenant to which you are logged in. --Next, select either **Subscription** or **Resource group**, depending on the customer scope you want to onboard. If you select **Resource group**, you'll need to provide the name of the resource group to onboard. You can select the **+** icon to add additional resource groups in the same subscription if needed. (To onboard additional resource groups in a different subscription, you must create and deploy a separate template for that subscription.) --Finally, create your authorizations by selecting **+ Add authorization**. For each of your authorizations, provide the following details: --1. Select the **Principal type** depending on the type of account you want to include in the authorization. This can be either **User**, **Group**, or **Service principal**. In this example, we'll choose **User**. -1. Select the **+ Select user** link to open the selection pane. You can use the search field to find the user you'd like to add. Once you've done so, click **Select**. The user's **Principal ID** will be automatically populated. -1. Review the **Display name** field (populated based on the user you selected) and make changes, if desired. -1. Select the **Role** to assign to this user. -1. For **Access** type, select **Permanent** or **Eligible**. 
If you choose **Eligible**, you will need to specify options for maximum duration, multifactor authentication, and whether or not approval is required. For more information about these options, see [Create eligible authorizations](create-eligible-authorizations.md). The eligible authorizations feature can't be used with service principals. -1. Select **Add** to create your authorization. ---After you select **Add**, you'll return to the **Create ARM Template offer** screen. You can select **+ Add authorization** again to add as many authorizations as needed. --When you've added all of your authorizations, select **View template**. On this screen, you'll see a .json file that corresponds to the values you entered. Select **Download** to save a copy of this .json file. This template can then be [deployed in the customer's tenant](#deploy-the-azure-resource-manager-template). You can also edit it manually if you need to make any changes. --> [!IMPORTANT] -> The generated template file is not stored in the Azure portal. Be sure to download a copy before you navigate away from the **Show template** screen. --### Create your template manually --You can create your template by using an Azure Resource Manager template (provided in our [samples repo](https://github.com/Azure/Azure-Lighthouse-samples/)) and a corresponding parameter file that you modify to match your configuration and define your authorizations. If you prefer, you can include all of the information directly in the template, rather than using a separate parameter file. --The template you choose will depend on whether you are onboarding an entire subscription, a resource group, or multiple resource groups within a subscription. We also provide a template that can be used for customers who purchased a managed service offer that you published to Azure Marketplace, if you prefer to onboard their subscription(s) this way. 
--|To onboard this |Use this Azure Resource Manager template |And modify this parameter file | -|||| -|Subscription |[subscription.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/subscription/subscription.json) |[subscription.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/subscription/subscription.parameters.json) | -|Resource group |[rg.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/rg.json) |[rg.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/rg.parameters.json) | -|Multiple resource groups within a subscription |[multi-rg.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/multi-rg.json) |[multiple-rg.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/multiple-rg.parameters.json) | -|Subscription (when using an offer published to Azure Marketplace) |[marketplaceDelegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/marketplace-delegated-resource-management/marketplaceDelegatedResourceManagement.json) |[marketplaceDelegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/marketplace-delegated-resource-management/marketplaceDelegatedResourceManagement.parameters.json) | --If you want to include [eligible authorizations](create-eligible-authorizations.md#create-eligible-authorizations-using-azure-resource-manager-templates), select the corresponding template from the [delegated-resource-management-eligible-authorizations section of our samples repo](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-eligible-authorizations). --> [!TIP] -> While you can't onboard an entire management group in one deployment, you can deploy a policy to [onboard each subscription in a management group](onboard-management-group.md). You'll then have access to all of the subscriptions in the management group, although you'll have to work on them as individual subscriptions (rather than taking actions on the management group resource directly). --The following example shows a modified **subscription.parameters.json** file that can be used to onboard a subscription. The resource group parameter files (located in the [rg-delegated-resource-management](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management/rg) folder) have a similar format, but they also include an `rgName` parameter to identify the specific resource group(s) to be onboarded. 
--```json -{ - "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentParameters.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "mspOfferName": { - "value": "Fabrikam Managed Services - Interstellar" - }, - "mspOfferDescription": { - "value": "Fabrikam Managed Services - Interstellar" - }, - "managedByTenantId": { - "value": "00000000-0000-0000-0000-000000000000" - }, - "authorizations": { - "value": [ - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Tier 1 Support", - "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c" - }, - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Tier 1 Support", - "roleDefinitionId": "36243c78-bf99-498c-9df9-86d9f8d28608" - }, - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Tier 2 Support", - "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7" - }, - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Service Automation Account", - "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c" - }, - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Policy Automation Account", - "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9", - "delegatedRoleDefinitionIds": [ - "b24988ac-6180-42a0-ab88-20f7382dd24c", - "92aaf0da-9dab-42b6-94a3-d43ce8d16293" - ] - } - ] - } - } -} -``` --The last authorization in the example above adds a `principalId` with the User Access Administrator role (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9). When assigning this role, you must include the `delegatedRoleDefinitionIds` property and one or more supported Azure built-in roles. The user created in this authorization will be able to assign these roles to [managed identities](/entr). The user is also able to create support incidents. No other permissions normally associated with the User Access Administrator role will apply to this `principalId`. --## Deploy the Azure Resource Manager template --Once you have created your template, a user in the customer's tenant must deploy it within their tenant. A separate deployment is needed for each subscription that you want to onboard (or for each subscription that contains resource groups that you want to onboard). --When onboarding a subscription (or one or more resource groups within a subscription) using the process described here, the **Microsoft.ManagedServices** resource provider will be registered for that subscription. --> [!IMPORTANT] -> This deployment must be done by a non-guest account in the customer's tenant who has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or which contains the resource groups that are being onboarded). To find users who can delegate the subscription, a user in the customer's tenant can select the subscription in the Azure portal, open **Access control (IAM)**, and [view all users with the Owner role](../../role-based-access-control/role-assignments-list-portal.yml#list-owners-of-a-subscription). 
-> -> If the subscription was created through the [Cloud Solution Provider (CSP) program](../concepts/cloud-solution-provider.md), any user who has the [Admin Agent](/partner-center/permissions-overview#manage-commercial-transactions-in-partner-center-azure-ad-and-csp-roles) role in your service provider tenant can perform the deployment. --The deployment may be done by using PowerShell, by using Azure CLI, or in the Azure portal, as shown below. --### Deploy by using PowerShell --To deploy a single template: --```azurepowershell-interactive -# Log in first with Connect-AzAccount if you're not using Cloud Shell --# Deploy Azure Resource Manager template using a local template file -New-AzSubscriptionDeployment -Name <deploymentName> ` - -Location <AzureRegion> ` - -TemplateFile <pathToTemplateFile> ` - -Verbose --# Deploy Azure Resource Manager template that is located externally -New-AzSubscriptionDeployment -Name <deploymentName> ` - -Location <AzureRegion> ` - -TemplateUri <templateUri> ` - -Verbose -``` --To deploy a template with a separate parameter file: --```azurepowershell-interactive -# Log in first with Connect-AzAccount if you're not using Cloud Shell --# Deploy Azure Resource Manager template using template and parameter file locally -New-AzSubscriptionDeployment -Name <deploymentName> ` - -Location <AzureRegion> ` - -TemplateFile <pathToTemplateFile> ` - -TemplateParameterFile <pathToParameterFile> ` - -Verbose --# Deploy Azure Resource Manager template that is located externally -New-AzSubscriptionDeployment -Name <deploymentName> ` - -Location <AzureRegion> ` - -TemplateUri <templateUri> ` - -TemplateParameterUri <parameterUri> ` - -Verbose -``` --### Deploy by using Azure CLI --To deploy a single template: --```azurecli-interactive -# Log in first with az login if you're not using Cloud Shell --# Deploy Azure Resource Manager template using a local template file -az deployment sub create --name <deploymentName> \ - --location <AzureRegion> \ - --template-file <pathToTemplateFile> \ - --verbose --# Deploy Azure Resource Manager template that is located externally -az deployment sub create --name <deploymentName> \ - --location <AzureRegion> \ - --template-uri <templateUri> \ - --verbose -``` --To deploy a template with a separate parameter file: --```azurecli-interactive -# Log in first with az login if you're not using Cloud Shell --# Deploy Azure Resource Manager template using template and parameter file locally -az deployment sub create --name <deploymentName> \ - --location <AzureRegion> \ - --template-file <pathToTemplateFile> \ - --parameters <parameters/parameterFile> \ - --verbose --# Deploy external Azure Resource Manager template, with local parameter file -az deployment sub create --name <deploymentName> \ - --location <AzureRegion> \ - --template-uri <templateUri> \ - --parameters <parameterFile> \ - --verbose -``` --### Deploy in the Azure portal --To deploy a template in the Azure portal, follow the process described below. These steps must be done by a user in the customer tenant with the **Owner** role (or another role with the `Microsoft.Authorization/roleAssignments/write` permission). --1. From the [Service providers](view-manage-service-providers.md) page in the Azure portal, select **Service provider offers**. -1. Near the top of the screen, select the arrow next to **Add offer**, and then select **Add via template**.
-- :::image type="content" source="../media/add-offer-via-template.png" alt-text="Screenshot showing the Add via template option in the Azure portal."::: --1. Upload the template by dragging and dropping it, or select **Browse for files** to find and upload the template. -1. If applicable, select the **I have a separate parameter file** box, then upload your parameter file. -1. After you've uploaded your template (and parameter file if needed), select **Upload**. -1. In the **Custom deployment** screen, review the details that appear. If needed, you can make changes to these values in this screen, or by selecting **Edit parameters**. -1. Select **Review and create**, then select **Create**. --After a few minutes, you should see a notification that the deployment has completed. --> [!TIP] -> Alternately, from our [GitHub repo](https://github.com/Azure/Azure-Lighthouse-samples/), select the **Deploy to Azure** button shown next to the template you want to use (in the **Auto-deploy** column). The example template will open in the Azure portal. If you use this process, you must update the values for **Msp Offer Name**, **Msp Offer Description**, **Managed by Tenant Id**, and **Authorizations** before you select **Review and create**. --## Confirm successful onboarding --When a customer subscription has successfully been onboarded to Azure Lighthouse, users in the service provider's tenant will be able to see the subscription and its resources (if they have been granted access to it through the process above, either individually or as a member of a Microsoft Entra group with the appropriate permissions). To confirm this, check to make sure the subscription appears in one of the following ways. --### Confirm in the Azure portal --In the service provider's tenant: --1. Navigate to the [My customers page](view-manage-customers.md). -2. Select **Customers**. -3. Confirm that you can see the subscription(s) with the offer name you provided in the Resource Manager template. --> [!IMPORTANT] -> In order to see the delegated subscription in [My customers](view-manage-customers.md), users in the service provider's tenant must have been granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role which includes Reader access) when the subscription was onboarded. --In the customer's tenant: --1. Navigate to the [Service providers page](view-manage-service-providers.md). -2. Select **Service provider offers**. -3. Confirm that you can see the subscription(s) with the offer name you provided in the Resource Manager template. --> [!NOTE] -> It may take up to 15 minutes after your deployment is complete before the updates are reflected in the Azure portal. You may be able to see the updates sooner if you update your Azure Resource Manager token by refreshing the browser, signing in and out, or requesting a new token. 
--### Confirm by using PowerShell --```azurepowershell-interactive -# Log in first with Connect-AzAccount if you're not using Cloud Shell --Get-AzContext --# Confirm successful onboarding for Azure Lighthouse --Get-AzManagedServicesDefinition -Get-AzManagedServicesAssignment -``` --### Confirm by using Azure CLI --```azurecli-interactive -# Log in first with az login if you're not using Cloud Shell --az account list --# Confirm successful onboarding for Azure Lighthouse --az managedservices definition list -az managedservices assignment list -``` --If you need to make changes after the customer has been onboarded, you can [update the delegation](update-delegation.md). You can also [remove access to the delegation](remove-delegation.md) completely. --## Troubleshooting --If you are unable to successfully onboard your customer, or if your users have trouble accessing the delegated resources, check the following tips and requirements and try again. --- Users who need to view customer resources in the Azure portal must have been granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role which includes Reader access) during the onboarding process.-- The `managedByTenantId` value must not be the same as the tenant ID for the subscription being onboarded.-- You can't have multiple assignments at the same scope with the same `mspOfferName`.-- The **Microsoft.ManagedServices** resource provider must be registered for the delegated subscription. This should happen automatically during the deployment, but if not, you can [register it manually](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) (a short PowerShell sketch for this check appears below).-- Authorizations must not include any users with the [Owner](../../role-based-access-control/built-in-roles.md#owner) role, any roles with [DataActions](../../role-based-access-control/role-definitions.md#dataactions), or any roles that include [restricted actions](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse).-- Groups must be created with [**Group type**](/entra/fundamentals/concept-learn-about-groups#group-types) set to **Security** and not **Microsoft 365**.-- If access was granted to a group, check to make sure the user is a member of that group. If they aren't, you can [add them to the group using Microsoft Entra ID](/entra/fundamentals/how-to-manage-groups), without having to perform another deployment. Note that group owners are not necessarily members of the groups they manage, and may need to be added in order to have access.-- There may be an additional delay before access is enabled for [nested groups](/entra/fundamentals/how-to-manage-groups#add-a-group-to-another-group).-- The [Azure built-in roles](../../role-based-access-control/built-in-roles.md) that you include in authorizations must not include any deprecated roles. If an Azure built-in role becomes deprecated, any users who were onboarded with that role will lose access, and you won't be able to onboard additional delegations. To fix this, update your template to use only supported built-in roles, then perform a new deployment.--## Next steps --- Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).-- [View and manage customers](view-manage-customers.md) by going to **My customers** in the Azure portal.-- Learn how to [update](update-delegation.md) or [remove](remove-delegation.md) a delegation. |
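One of the troubleshooting items above is registration of the **Microsoft.ManagedServices** resource provider. The following is a minimal sketch (assuming the Az PowerShell module and a signed-in user in the customer's tenant who is allowed to register resource providers) of how you might check the registration state and register the provider manually if the deployment didn't do it for you.

```azurepowershell-interactive
# Log in first with Connect-AzAccount if you're not using Cloud Shell

# Check the registration state of the Microsoft.ManagedServices resource provider
Get-AzResourceProvider -ProviderNamespace Microsoft.ManagedServices |
    Select-Object ProviderNamespace, RegistrationState -Unique

# Register the resource provider manually if the state is NotRegistered
Register-AzResourceProvider -ProviderNamespace Microsoft.ManagedServices
```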
lighthouse | Onboard Management Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/onboard-management-group.md | - Title: Onboard all subscriptions in a management group -description: You can deploy an Azure Policy to delegate all subscriptions within a management group to an Azure Lighthouse managing tenant. Previously updated : 05/23/2023----# Onboard all subscriptions in a management group --[Azure Lighthouse](../overview.md) allows delegation of subscriptions and/or resource groups, but not [management groups](../../governance/management-groups/overview.md). However, you can use an [Azure Policy](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/policy-delegate-management-groups) to delegate all subscriptions within a management group to a managing tenant. --The policy uses the [deployIfNotExists](../../governance/policy/concepts/effects.md#deployifnotexists) effect to check whether each subscription within the management group has been delegated to the specified managing tenant. If a subscription is not already delegated, the policy creates the Azure Lighthouse assignment based on the values you provide in the parameters. You will then have access to all of the subscriptions in the management group, just as if they had each been onboarded manually. --When using this policy, keep in mind: --- Each subscription within the management group will have the same set of authorizations. To vary the users and roles who are granted access, you'll have to onboard subscriptions manually.-- While every subscription in the management group will be onboarded, you can't take actions on the management group resource through Azure Lighthouse. You'll need to select subscriptions to work on, just as you would if they were onboarded individually.--Unless specified below, all of these steps must be performed by a user in the customer's tenant with the appropriate permissions. --> [!TIP] -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. --## Register the resource provider across subscriptions --Typically, the **Microsoft.ManagedServices** resource provider is registered for a subscription as part of the onboarding process. When using the policy to onboard subscriptions in a management group, the resource provider must be registered in advance. This can be done by a Contributor or Owner user in the customer's tenant (or any user who has permissions to do the `/register/action` operation for the resource provider). For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). --You can use an [Azure Logic App to automatically register the resource provider across subscriptions](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/register-managed-services-rp-customer). This Logic App can be deployed in a customer's tenant with limited permissions that allow it to register the resource provider in each subscription within a management group. --We also provide an [Azure Logic App that can be deployed in the service provider's tenant](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/register-managed-services-rp-partner). This Logic App can register the resource provider across subscriptions in multiple tenants by [granting tenant-wide admin consent](../../active-directory/manage-apps/grant-admin-consent.md) to the Logic App.
Granting tenant-wide admin consent requires you to sign in as a user that is authorized to consent on behalf of the organization. Note that even if you use this option to register the provider across multiple tenants, you'll still need to deploy the policy individually for each management group. --## Create your parameters file --To assign the policy, deploy the [deployLighthouseIfNotExistManagementGroup.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/policy-delegate-management-groups/deployLighthouseIfNotExistManagementGroup.json) file from our samples repo, along with a [deployLighthouseIfNotExistsManagementGroup.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/policy-delegate-management-groups/deployLighthouseIfNotExistsManagementGroup.parameters.json) parameters file that you edit with your specific tenant and assignment details. These two files contain the same details that would be used to [onboard an individual subscription](onboard-customer.md). --The example below shows a parameters file which will delegate the subscriptions to the Relecloud Managed Services tenant, with access granted to two principalIDs: one for Tier 1 Support, and one automation account which can [assign the delegateRoleDefinitionIds to managed identities in the customer tenant](deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "managedByName": { - "value": "Relecloud Managed Services" - }, - "managedByDescription": { - "value": "Relecloud provides managed services to its customers" - }, - "managedByTenantId": { - "value": "00000000-0000-0000-0000-000000000000" - }, - "managedByAuthorizations": { - "value": [ - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Tier 1 Support", - "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c" - }, - { - "principalId": "00000000-0000-0000-0000-000000000000", - "principalIdDisplayName": "Automation Account - Full access", - "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9", - "delegatedRoleDefinitionIds": [ - "b24988ac-6180-42a0-ab88-20f7382dd24c", - "92aaf0da-9dab-42b6-94a3-d43ce8d16293", - "91c1777a-f3dc-4fae-b103-61d183457e46" - ] - } - ] - } - } -} -``` --## Assign the policy to a management group --Once you've edited the policy to create your assignments, you can assign it at the management group level. To learn how to assign a policy and view compliance state results, see [Quickstart: Create a policy assignment](../../governance/policy/assign-policy-portal.md). --The PowerShell script below shows how to add the policy definition under the specified management group, using the template and parameter file you created. You need to create the assignment and remediation task for existing subscriptions. --```azurepowershell-interactive -New-AzManagementGroupDeployment -Name <DeploymentName> -Location <location> -ManagementGroupId <ManagementGroupName> -TemplateFile <path to file> -TemplateParameterFile <path to parameter file> -verbose -``` --## Confirm successful onboarding --There are several ways to verify that the subscriptions in the management group were successfully onboarded. For more information, see [Confirm successful onboarding](onboard-customer.md#confirm-successful-onboarding). 
--If you keep the Logic App and policy active for your management group, any new subscriptions that are added to the management group will be onboarded as well. --## Next steps --- Learn more about [onboarding customers to Azure Lighthouse](onboard-customer.md).-- Learn about [Azure Policy](../../governance/policy/index.yml).-- Learn about [Azure Logic Apps](../../logic-apps/logic-apps-overview.md). |
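The policy assignment section above notes that existing subscriptions also need a remediation task before they're onboarded. The following is a hedged sketch (assuming the Az.PolicyInsights module; the management group name, assignment name, and remediation name are illustrative placeholders) of starting that remediation at the management group scope.

```azurepowershell-interactive
# Log in first with Connect-AzAccount if you're not using Cloud Shell

# Resource ID of the policy assignment created by the management group deployment
# (replace the placeholders with your own management group and assignment names)
$assignmentId = "/providers/Microsoft.Management/managementGroups/<ManagementGroupName>/providers/Microsoft.Authorization/policyAssignments/<assignmentName>"

# Start a remediation task so existing (noncompliant) subscriptions get delegated
Start-AzPolicyRemediation -Name "onboard-existing-subscriptions" `
    -ManagementGroupName "<ManagementGroupName>" `
    -PolicyAssignmentId $assignmentId
```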
lighthouse | Policy At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/policy-at-scale.md | - Title: Deploy Azure Policy to delegated subscriptions at scale -description: Azure Lighthouse lets you deploy a policy definition and policy assignment across multiple tenants. Previously updated : 07/16/2024-----# Deploy Azure Policy to delegated subscriptions at scale --As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../overview.md). Azure Lighthouse allows service providers to perform operations at scale across several tenants at once, making management tasks more efficient. --This topic explains how to use [Azure Policy](../../governance/policy/index.yml) to deploy a policy definition and policy assignment across multiple tenants using PowerShell commands. In this example, the policy definition ensures that storage accounts are secured by allowing only HTTPS traffic. You can use the same general process for any policy that you want to deploy. --> [!TIP] -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. --## Use Azure Resource Graph to query across customer tenants --You can use [Azure Resource Graph](../../governance/resource-graph/overview.md) to query across all subscriptions in customer tenants that you manage. In this example, we'll identify any storage accounts in these subscriptions that don't currently require HTTPS traffic. --```powershell -$MspTenant = "insert your managing tenantId here" --$subs = Get-AzSubscription --$ManagedSubscriptions = Search-AzGraph -Query "ResourceContainers | where type == 'microsoft.resources/subscriptions' | where tenantId != '$($mspTenant)' | project name, subscriptionId, tenantId" -subscription $subs.subscriptionId --Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Storage/storageAccounts' | project name, location, subscriptionId, tenantId, properties.supportsHttpsTrafficOnly" -subscription $ManagedSubscriptions.subscriptionId | convertto-json -``` --## Deploy a policy across multiple customer tenants --The following example shows how to use an [Azure Resource Manager template](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/policy-enforce-https-storage/enforceHttpsStorage.json) to deploy a policy definition and policy assignment across delegated subscriptions in multiple customer tenants. This policy definition requires all storage accounts to use HTTPS traffic. It prevents the creation of any new storage accounts that don't comply. Any existing storage accounts without the setting are marked as noncompliant. --```powershell -Write-Output "In total, there are $($ManagedSubscriptions.Count) delegated customer subscriptions to be managed" --foreach ($ManagedSub in $ManagedSubscriptions) -{ - Select-AzSubscription -SubscriptionId $ManagedSub.subscriptionId -- New-AzSubscriptionDeployment -Name mgmt ` - -Location eastus ` - -TemplateUri "https://raw.githubusercontent.com/Azure/Azure-Lighthouse-samples/master/templates/policy-enforce-https-storage/enforceHttpsStorage.json" ` - -AsJob -} -``` --> [!NOTE] -> While you can deploy policies across multiple tenants, currently you can't [view compliance details](../../governance/policy/how-to/determine-non-compliance.md#compliance-details) for non-compliant resources in these tenants. 
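Because each deployment in the loop above is started with `-AsJob`, it runs as a background PowerShell job. The following sketch uses only standard PowerShell job cmdlets to wait for those jobs and print their results before you move on to validation.

```azurepowershell-interactive
# Wait for the background deployment jobs started with -AsJob to finish
Get-Job | Wait-Job

# Review the state and output of each deployment job
Get-Job | ForEach-Object {
    Write-Output "$($_.Name): $($_.State)"
    Receive-Job -Job $_ -Keep | Out-String | Write-Output
}
```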
--## Validate the policy deployment --After you've deployed the Azure Resource Manager template, confirm that the policy definition was successfully applied by attempting to create a storage account with **EnableHttpsTrafficOnly** set to **false** in one of your delegated subscriptions. Because of the policy assignment, you should be unable to create this storage account. --```powershell -New-AzStorageAccount -ResourceGroupName (New-AzResourceGroup -name policy-test -Location eastus -Force).ResourceGroupName ` - -Name (get-random) ` - -Location eastus ` - -EnableHttpsTrafficOnly $false ` - -SkuName Standard_LRS ` - -Verbose -``` --## Clean up resources --When you're finished, you can remove the policy definition and assignment created by the deployment. --```powershell -foreach ($ManagedSub in $ManagedSubscriptions) -{ - select-azsubscription -subscriptionId $ManagedSub.subscriptionId -- Remove-AzSubscriptionDeployment -Name mgmt -AsJob -- $Assignment = Get-AzPolicyAssignment | where-object {$_.Name -like "enforce-https-storage-assignment"} -- if ([string]::IsNullOrEmpty($Assignment)) - { - Write-Output "Nothing to clean up - we're done" - } - else - { -- Remove-AzPolicyAssignment -Name 'enforce-https-storage-assignment' -Scope "/subscriptions/$($ManagedSub.subscriptionId)" -Verbose -- Write-Output "Deployment has been deleted - we're done" - } -} -``` --## Next steps --- Learn about [Azure Policy](../../governance/policy/index.yml).-- Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).-- Learn how to [deploy a policy that can be remediated](deploy-policy-remediation.md) within a delegated subscription. |
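The validation step above creates a **policy-test** resource group in one of the delegated subscriptions (the storage account itself is denied, but the resource group remains). The following is a small sketch, reusing the `$ManagedSubscriptions` collection from earlier, for removing that leftover resource group wherever it exists.

```azurepowershell-interactive
# Remove the policy-test resource group created during validation, if present
foreach ($ManagedSub in $ManagedSubscriptions)
{
    Select-AzSubscription -SubscriptionId $ManagedSub.subscriptionId

    if (Get-AzResourceGroup -Name "policy-test" -ErrorAction SilentlyContinue)
    {
        Remove-AzResourceGroup -Name "policy-test" -Force -Verbose
    }
}
```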
lighthouse | Publish Managed Services Offers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/publish-managed-services-offers.md | - Title: Publish a Managed Service offer to Azure Marketplace -description: Learn how to publish a Managed Service offer that onboards customers to Azure Lighthouse. Previously updated : 03/07/2024----# Publish a Managed Service offer to Azure Marketplace --In this article, you'll learn how to publish a public or private Managed Service offer to [Azure Marketplace](https://azuremarketplace.microsoft.com) using the [commercial marketplace](/partner-center/marketplace/overview) program in Partner Center. Customers who purchase the offer will then delegate subscriptions or resource groups, allowing you to manage them through [Azure Lighthouse](../overview.md). --## Publishing requirements --You must have a valid [commercial marketplace account in Partner Center](/partner-center/marketplace/create-account) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the commercial marketplace program. --Per the [Managed Service offer certification requirements](/legal/marketplace/certification-policies#700-managed-services), you must have [Solutions Partner designation](/partner-center/partner-capability-score) for Infrastructure (Azure) or Security in order to publish a Managed Service offer. --If you don't want to publish an offer to Azure Marketplace, or if you don't meet all the requirements, you can [onboard customers manually by using Azure Resource Manager templates](onboard-customer.md). --The following table can help determine whether to onboard customers by publishing a Managed Service offer or by using Azure Resource Manager templates. --|**Consideration** |**Managed Service offer** |**ARM templates** | -|||| -|Requires [commercial marketplace account in Partner Center](/partner-center/marketplace/create-account) |Yes |No | -|Requires [Solutions Partner designation](/partner-center/partner-capability-score) for Infrastructure (Azure) or Security |Yes |No | -|Available to new customers through Azure Marketplace |Yes |No | -|Can limit offer to specific customers |Yes (only with private plans, which can't be used with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program) |Yes | -|Can [automatically connect customers to your CRM system](/partner-center/marketplace/plan-managed-service-offer#customer-leads) |Yes |No | -|Requires customer acceptance in Azure portal |Yes |No | -|Can use automation to onboard multiple subscriptions, resource groups, or customers |No |Yes | -|Immediate access to new built-in roles and Azure Lighthouse features |Not always (generally available after some delay) |Yes | -|Customers can review and accept updated offers in the Azure portal | Yes | No | --> [!NOTE] -> Managed Service offers may not be available in Azure Government and other national clouds. --## Create your offer --For detailed instructions about how to create your offer, including all of the information and assets you'll need to provide, see [Create a Managed Service offer](/partner-center/marketplace/create-managed-service-offer). --To learn about the general publishing process, review the [commercial marketplace documentation](/partner-center/marketplace/overview). 
You should also review the [commercial marketplace certification policies](/legal/marketplace/certification-policies), particularly the [Managed Services](/legal/marketplace/certification-policies#700-managed-services) section. --Once a customer adds your offer, they will be able to delegate one or more subscriptions or resource groups, which will then be [onboarded to Azure Lighthouse](#the-customer-onboarding-process). --> [!IMPORTANT] -> Each plan in a Managed Service offer includes a **Manifest Details** section, where you define the Microsoft Entra entities in your tenant that will have access to the delegated resource groups and/or subscriptions for customers who purchase that plan. It's important to be aware that any group (or user or service principal) that you include will have the same permissions for every customer who purchases the plan. -> -> To assign different groups to work with each customer, you can publish a separate [private plan](/partner-center/marketplace/private-plans) that is exclusive to each customer. These private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. --## Publish your offer --Once you've completed all of the sections, your next step is to publish the offer. After you initiate the publishing process, your offer will go through several validation and publishing steps. For more information, see [Review and publish an offer to the commercial marketplace](/partner-center/marketplace/review-publish-offer). --You can [publish an updated version of your offer](/partner-center/marketplace/update-existing-offer) at any time. For example, you may want to add a new role definition to a previously published offer. When you do so, customers who have already added the offer will see an icon in the **Service providers** page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes and update to the new version](view-manage-service-providers.md#update-service-provider-offers). --## The customer onboarding process --After a customer adds your offer, they can [delegate one or more specific subscriptions or resource groups](view-manage-service-providers.md#delegate-resources), which will be onboarded to Azure Lighthouse. If a customer has accepted an offer but has not yet delegated any resources, they'll see a note at the top of the **Service provider offers** section of the **Service providers** page in the Azure portal. --> [!IMPORTANT] -> Delegation must be done by a non-guest account in the customer's tenant who has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or which contains the resource groups that are being onboarded). To find users who can delegate the subscription, a user in the customer's tenant can select the subscription in the Azure portal, open **Access control (IAM)**, and [view all users with the Owner role](../../role-based-access-control/role-assignments-list-portal.yml#list-owners-of-a-subscription). --Once the customer delegates a subscription (or one or more resource groups within a subscription), the **Microsoft.ManagedServices** resource provider is registered for that subscription, and users in your tenant will be able to access the delegated resources according to the authorizations that you defined in your offer. 
--> [!NOTE] -> To delegate additional subscriptions or resource groups to the same offer at a later time, the customer must [manually register the **Microsoft.ManagedServices** resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on each subscription before delegating. --If you publish an updated version of your offer, the customer can [review the changes in the Azure portal and accept the new version](view-manage-service-providers.md#update-service-provider-offers). --## Next steps --- Learn about the [commercial marketplace](/partner-center/marketplace/overview).-- Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).-- [View and manage customers](view-manage-customers.md) by going to **My customers** in the Azure portal. |
lighthouse | Remove Delegation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/remove-delegation.md | - Title: Remove access to a delegation -description: Learn how to remove access to resources that were delegated to a service provider for Azure Lighthouse. Previously updated : 07/10/2024----# Remove access to a delegation --When a customer's subscription or resource group has been delegated to a service provider for [Azure Lighthouse](../overview.md), that delegation can be removed if needed. Once a delegation is removed, the [Azure delegated resource management](../concepts/architecture.md) access that was previously granted to users in the service provider tenant will no longer apply. --Removing a delegation can be done by a user in either the customer tenant or the service provider tenant, as long as the user has the appropriate permissions. --> [!TIP] -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. --> [!IMPORTANT] -> When a customer subscription has multiple delegations from the same service provider, removing one delegation could cause users to lose access granted via the other delegations. This only occurs when the same `principalId` and `roleDefinitionId` combination is included in multiple delegations and then one of the delegations is removed. If this happens, you can fix the issue by repeating the [onboarding process](onboard-customer.md) for the delegations that you don't want to remove. --## Customers --Users in the customer's tenant who have a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), can remove service provider access to that subscription (or to resource groups in that subscription). To do so, the user can go to the [Service providers page](view-manage-service-providers.md#remove-service-provider-offers) of the Azure portal, find the offer on the **Service provider offers** screen, and select the trash can icon in the row for that offer. --After confirming the deletion, no users in the service provider's tenant will be able to access the resources that had been previously delegated. --## Service providers --Users in a managing tenant can remove access to delegated resources if they were granted the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) during the onboarding process. If this role isn't assigned to any service provider users, the delegation can only be removed by a user in the customer's tenant. --This example shows an assignment granting the **Managed Services Registration Assignment Delete Role** that can be included in a parameter file during the [onboarding process](onboard-customer.md): --```json - "authorizations": [ - { - "principalId": "cfa7496e-a619-4a14-a740-85c5ad2063bb", - "principalIdDisplayName": "MSP Operators", - "roleDefinitionId": "91c1777a-f3dc-4fae-b103-61d183457e46" - } - ] -``` --This role can also be selected in an **Authorization** when [creating a Managed Service offer](../../marketplace/plan-managed-service-offer.md) to publish to Azure Marketplace. --A user with this permission can remove a delegation in one of the following ways. --### Azure portal --1. Navigate to the [My customers page](view-manage-customers.md). -2. Select **Delegations**. -3. 
Find the delegation you want to remove, then select the trash can icon that appears in its row. --### PowerShell --```azurepowershell-interactive -# Log in first with Connect-AzAccount if you're not using Cloud Shell --# Sign in as a user from the managing tenant directory --Login-AzAccount --# Select the subscription that is delegated or that contains the delegated resource group(s) --Select-AzSubscription -SubscriptionName "<subscriptionName>" --# Get the registration assignment --Get-AzManagedServicesAssignment -Scope "/subscriptions/{delegatedSubscriptionId}" --# Delete the registration assignment --Remove-AzManagedServicesAssignment -Name "<Assignmentname>" -Scope "/subscriptions/{delegatedSubscriptionId}" -``` --### Azure CLI --```azurecli-interactive -# Log in first with az login if you're not using Cloud Shell --# Sign in as a user from the managing tenant directory --az login --# Select the subscription that is delegated or that contains the delegated resource group(s) --az account set -s <subscriptionId/name> --# List registration assignments --az managedservices assignment list --# Delete the registration assignment --az managedservices assignment delete --assignment <id or full resourceId> -``` --## Next steps --- Learn about [Azure Lighthouse architecture](../concepts/architecture.md).-- [View and manage customers](view-manage-customers.md) by going to **My customers** in the Azure portal.-- Learn how to [update a previous delegation](update-delegation.md). |
lighthouse | Update Delegation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/update-delegation.md | - Title: Update a delegation -description: Learn how to update a delegation for a customer previously onboarded to Azure Lighthouse. Previously updated : 05/23/2023-----# Update a delegation --After you have onboarded a subscription (or resource group) to Azure Lighthouse, you may need to make changes. For example, your customer may want you to take on additional management tasks that require a different Azure built-in role, or you might need to change the tenant to which a customer subscription is delegated. --> [!TIP] -> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same process to set up Azure Lighthouse and consolidate their management experience. --If you [onboarded your customer through Azure Resource Manager templates (ARM templates)](onboard-customer.md), a new deployment must be performed for that customer. Depending on what you are changing, you may want to update the original offer, or remove the original offer and create a new one. --- **If you are changing authorizations only**: You can update your delegation by changing the **authorizations** section of the ARM template.-- **If you are changing the managing tenant**: You must create a new ARM template using a different **mspOfferName** than your previous offer.--## Update your ARM template --To update your delegation, you will need to deploy an ARM template that includes the changes you'd like to make. --If you are only updating authorizations (such as adding a new user group with a role you hadn't previously included, or changing the role for an existing user), you can use the same **mspOfferName** as in the [ARM template](onboard-customer.md#create-an-azure-resource-manager-template) that you used for the previous delegation. Use your previous template as a starting point. Then, make the changes you need, such as replacing one Azure built-in role with another, or adding a completely new authorization to the template. --If you change the **mspOfferName**, this will be considered a new, separate offer. This is required if you are changing the managing tenant. --You don't need to change the **mspOfferName** if the managing tenant remains the same. In most cases, we recommend having only one **mspOfferName** in use by the same customer and managing tenant. If you do choose to create a new **mspOfferName** for your template, be sure that the customer's previous delegation is removed before deploying the new one. --## Remove the previous delegation --Before performing a new deployment, you may want to [remove access to the previous delegation](remove-delegation.md). This ensures that all previous permissions are removed, allowing you to start clean with the exact users/groups and roles that should apply going forward. --> [!IMPORTANT] -> If you use a new **mspOfferName** and keep any of the same **principalId** values, you must remove access to the previous delegation before deploying the new offer. If you don't remove the offer first, users who were previously granted permissions may lose access completely due to conflicting assignments. --If you are changing the managing tenant, you can leave the previous offer in place if you want both tenants to continue to have access. If you only want the new managing tenant to have access, the earlier offer must be removed.
This can be done either before or after you onboard the new offer. --If you are updating the offer to adjust authorizations only, and keeping the same **mspOfferName**, you don't have to remove the previous delegation. The new deployment will replace the previous delegation, and only the authorizations in the newest template will apply. ---Removing access to the delegation can be done by any user in the managing tenant who was granted the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) in the original delegation. If no user in your managing tenant has this role, you can ask the customer to [remove access to the offer in the Azure portal](view-manage-service-providers.md#remove-service-provider-offers). --> [!TIP] -> If you have removed the previous delegation but are unable to deploy the new ARM template, you may need to [remove the registration definition completely](/powershell/module/az.managedservices/remove-azmanagedservicesdefinition). This can be done by any user with a role that has the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), in the customer tenant. --## Deploy the ARM template --Your customer can [deploy the updated template](onboard-customer.md#deploy-the-azure-resource-manager-template) in the same way that they did previously: in the Azure portal, by using PowerShell, or by using Azure CLI. --After the deployment has been completed, [confirm that it was successful](onboard-customer.md#confirm-successful-onboarding). The updated authorizations will then be in effect for the subscription or resource group(s) that the customer has delegated. --## Updating Managed Service offers --If you onboarded your customer through a Managed Service offer published to Azure Marketplace, and you want to update authorizations, you can do so by [publishing a new version of your offer](../../marketplace/update-existing-offer.md) with updates to the [authorizations](../../marketplace/create-managed-service-offer-plans.md#authorizations) in the plan for that customer. The customer will then be able to [review the changes in the Azure portal and accept the updated version](view-manage-service-providers.md#update-service-provider-offers). --If you want to change the managing tenant, you'll need to [create and publish a new Managed Service offer](publish-managed-services-offers.md) for the customer to accept. --> [!IMPORTANT] -> We recommend not having multiple offers between the same customer and managing tenant. If you publish a new offer for a current customer that uses the same managing tenant, be sure that the earlier offer is removed before the customer accepts the newer offer. --## Next steps --- [View and manage customers](view-manage-customers.md) by going to **My customers** in the Azure portal.-- Learn how to [remove access to a delegation](remove-delegation.md) that was previously onboarded.-- Learn more about [Azure Lighthouse architecture](../concepts/architecture.md). |
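The tip above mentions that you may need to remove the registration definition completely before a new deployment succeeds. The following is a hedged sketch (run by a suitably privileged user in the customer tenant; exact parameter names can vary between Az.ManagedServices module versions, and the placeholder values are illustrative).

```azurepowershell-interactive
# Log in first with Connect-AzAccount if you're not using Cloud Shell

Select-AzSubscription -SubscriptionName "<subscriptionName>"

# List the registration definitions left over from the previous delegation
Get-AzManagedServicesDefinition -Scope "/subscriptions/{subscriptionId}"

# Remove the stale registration definition by its name (a GUID from the output above)
Remove-AzManagedServicesDefinition -Name "<definitionName>" -Scope "/subscriptions/{subscriptionId}"
```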
lighthouse | View Manage Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/view-manage-customers.md | - Title: View and manage customers and delegated resources in the Azure portal -description: As a service provider or enterprise using Azure Lighthouse, you can view delegated resources and subscriptions by going to My customers in the Azure portal. Previously updated : 07/10/2024----# View and manage customers and delegated resources in the Azure portal --Service providers using [Azure Lighthouse](../overview.md) can use the **My customers** page in the [Azure portal](https://portal.azure.com) to view delegated customer resources and subscriptions. --To view information about a customer, you must have been granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role that includes Reader access) when that customer was onboarded. --> [!TIP] -> While we'll refer to service providers and customers here, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same process to consolidate their management experience. --To access the **My customers** page in the Azure portal, enter "My customers" in the search box near the top of the Azure portal. You can also access this page from the main **Azure Lighthouse** page in the Azure portal by selecting **Manage your customers**. --The **Customers** section of the **My customers** page only shows information about customers who have delegated subscriptions or resource groups to your Microsoft Entra tenant through Azure Lighthouse. If you work with other customers (such as through the [Cloud Solution Provider (CSP) program](/partner-center/csp-overview)), you won't see those customers in the **Customers** section unless you [onboarded their resources to Azure Lighthouse](onboard-customer.md). However, you may see details about certain CSP customers in the [**Cloud Solution Provider (Preview)** section](#cloud-solution-provider-preview) lower on the page. --> [!NOTE] -> Your customers can view details about service providers by navigating to **Service providers** in the Azure portal. For more information, see [View and manage service providers](view-manage-service-providers.md). --## View and manage customer details --To view customer details, select **Customers** from the service menu of the **My customers** page. --For each customer, you'll see the customer's name and customer ID (tenant ID), along with the **Offer ID** and **Offer version** associated with the engagement. In the **Delegations** column, you'll see the number of delegated subscriptions and/or resource groups. --Options at the top of the page let you sort, filter, and group your customer information by specific customers, offers, or keywords. --To see additional details, use the following options: --- To see all of the subscriptions, offers, and delegations associated with a customer, select the customer's name.-- To see details about an offer and its delegations, select the offer name.-- To see details about role assignments for delegated subscriptions or resource groups, select the entry in the **Delegations** column.--> [!NOTE] -> If a customer renames a subscription after it's been delegated, you'll see the updated subscription name. However, if they rename their tenant, you may still see the older tenant name in some places in the Azure portal. 
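The customer details described above can also be approximated from the command line. The following is a minimal sketch (assuming the Az PowerShell module and a signed-in user in the managing tenant who was granted at least Reader access to the delegated resources); the tenant ID column shows which customer tenant each delegated subscription belongs to.

```azurepowershell-interactive
# Log in first with Connect-AzAccount if you're not using Cloud Shell

# List the subscriptions visible to this account, including delegated customer
# subscriptions, grouped by the tenant they belong to
Get-AzSubscription | Sort-Object TenantId | Format-Table Name, Id, TenantId
```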
--## View and manage delegations --Delegations show the subscription or resource group that has been delegated, along with the users and permissions that have access to it. To view this info, select **Delegations** on the left side of the **My customers** page. --Options at the top of the page let you sort, filter, and group this information by specific customers, offers, or keywords. --### View role assignments --The users and permissions associated with each delegation appear in the **Role assignments** column. You can select each entry to view more details. After you do so, select **Role assignments** to see the full list of users, groups, and service principals that have been granted access to the subscription or resource group. From there, you can select a particular user, group, or service principal name to see more information. --### Remove delegations --If you included users with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when onboarding a customer to Azure Lighthouse, those users can remove delegations by selecting the trash can icon that appears in the row for that delegation. When they do so, no users in the service provider's tenant will be able to access the resources that had been previously delegated. --For more information, see [Remove access to a delegation](remove-delegation.md). --## View delegation change activity --The **Activity log** section of the **My customers** page keeps track of every time that a customer subscription or resource group is delegated to your tenant. It also records whenever any previously delegated resources are removed. This information can only be viewed by users who have been [assigned the Monitoring Reader role at root scope](monitor-delegation-changes.md). --For more information, see [View delegation changes in the Azure portal](monitor-delegation-changes.md#view-delegation-changes-in-the-azure-portal). --## Work in the context of a delegated subscription --You can work directly in the context of a delegated subscription within the Azure portal, without switching the directory you're signed in to. To do so: --1. Select the **Settings** icon near the top of the Azure portal. -1. In the [Directories + subscriptions settings page](../../azure-portal/set-preferences.md#directories--subscriptions), ensure that the **Advanced filters** toggle is [turned off](../../azure-portal/set-preferences.md#subscription-filters). -1. In the **Default subscription filter** section, select the appropriate directory and subscription. (If you've been granted access to one or more resource groups, rather than to an entire subscription, select the subscription to which that resource group belongs. You'll then work in the context of that subscription, but will only be able to access the designated resource group(s).) ---After that, when you access a service that supports [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md), the service will default to the context of the delegated subscription that you included in your filter. --You can change the default subscription at any time by following the steps above and choosing a different subscription (or multiple subscriptions). If you want the filter to include all of the subscriptions to which you have access, select **All directories**, then check the **Select all** box. 
---> [!IMPORTANT] -> Checking the **Select all** box sets the filter to show all of the subscriptions to which you *currently* have access. If you later gain access to additional subscriptions (for example, after you've onboarded a new customer to Azure Lighthouse), these subscriptions will not automatically be added to your filter. You'll need to return to **Directories + subscriptions** and select the additional subscriptions (or uncheck and then recheck **Select all** again). --You can also work on delegated subscriptions or resource groups by selecting the subscription or resource group from within an individual service (as long as that service supports [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md#enhanced-services-and-scenarios)). --## Cloud Solution Provider (Preview) --A separate **Cloud Solution Provider (Preview)** section of the **My customers** page shows billing information and resources for your CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more information, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md). --These CSP customers appear in this section whether or not you also onboarded them to Azure Lighthouse. Similarly, a CSP customer doesn't have to appear in the **Cloud Solution Provider (Preview)** section of **My customers** in order for you to onboard them to Azure Lighthouse. --## Next steps --- Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).-- Learn how your customers can [view and manage service providers](view-manage-service-providers.md) by going to **Service providers** in the Azure portal. |
lighthouse | View Manage Service Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/view-manage-service-providers.md | - Title: View and manage service providers -description: Customers can view info about Azure Lighthouse service providers, service provider offers, and delegated resources in the Azure portal. Previously updated : 01/25/2024----# View and manage service providers --The **Service providers** page in the [Azure portal](https://portal.azure.com) gives customers control and visibility for their service providers who use [Azure Lighthouse](../overview.md). Customers can delegate specific resources, review new or updated offers, remove service provider access, and more. --To access the **Service providers** page in the Azure portal, enter "Service providers" in the search box near the top of the Azure portal. You can also select **All services** and then search for **Azure Lighthouse**, or search for "Azure Lighthouse" directly. From the Azure Lighthouse page, select **View service provider offers**. --> [!NOTE] -> To view the **Service providers** page, a user in the customer's tenant must have the [Reader built-in role](../../role-based-access-control/built-in-roles.md#reader) (or another built-in role which includes Reader access). -> -> To add or update offers, delegate resources, and remove offers, the user must have a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner). --Keep in mind that the **Service providers** page only shows information about the service providers that have access to the customer's subscriptions or resource groups through Azure Lighthouse. It doesn't show any information about additional service providers who don't use Azure Lighthouse. --## View service provider details --To view details about the current service providers who use Azure Lighthouse to work on the customer's tenant, select **Service provider offers** on the left side of the **Service providers** page. --For each offer, you'll see the service provider's name and the offer associated with it. You can select an offer to view a description and other details, including the role assignments that the service provider has been granted. --In the **Delegations** column, you can see how many subscriptions and/or resource groups have been delegated to the service provider for that offer. The service provider will be able to access and manage these subscriptions and/or resource groups according to the access levels specified in the offer. --## Add service provider offers --You can add a new service provider offer from the **Service provider offers** page. --To add an offer from the marketplace, select the **Add offer** button in the middle of the page, or select **Add offer** near the top of the page and then choose **Add via marketplace**. If [Managed Service offers](../concepts/managed-services-offers.md) have been published specifically for this customer, select **Private offers** to view them. Select an offer to review details. To add the offer, select **Create**. --To add an offer from a template, select **Add offer** near the top of the page and then choose **Add via template**. This will allow you to upload a template from your service provider and onboard your subscription (or resource group). For more information, see [Deploy in the Azure portal](onboard-customer.md#deploy-in-the-azure-portal).
--## Update service provider offers --After a customer has added an offer, a service provider may publish an updated version of the same offer to Azure Marketplace, such as to add a new role definition. If a new version of the offer has been published, the **Service provider offers** page shows an "update" icon in the row for that offer. Select this icon to see the differences between the current version of the offer and the new one. -- ![Update offer icon](../media/update-offer.jpg) --After reviewing the changes, you can choose to update to the new version. The authorizations and other settings specified in the new version will then apply to any subscriptions and/or resource groups that have been delegated for that offer. --## Remove service provider offers --You can remove a service provider offer at any time by selecting the trash can icon in the row for that offer. --After you confirm the deletion, that service provider will no longer have access to the resources that were formerly delegated for that offer. --> [!IMPORTANT] -> If a subscription has two or more offers from the same service provider, removing one of them could cause some service provider users to lose the access granted via the other delegations. This only occurs when the same user and role are included in multiple delegations and then one of the delegations is removed. To fix this, the [onboarding process](onboard-customer.md) should be repeated for the offers that you aren't removing. --## Delegate resources --Before a service provider can access and manage a customer's resources, one or more specific subscriptions and/or resource groups must be delegated. When a customer adds an offer without delegating any resources, a note appears at the top of the **Service provider offers** section. The service provider can't work on any resources in the customer's tenant until the delegation is completed. --To delegate subscriptions or resource groups: --1. Check the box for the row containing the service provider, offer, and name. Then select **Delegate resources** at the top of the screen. -1. In the **Offer details** section of the **Delegate resources** page, review the details about the service provider and offer. To review role assignments for the offer, select **Click here to see the details of the selected offer**. -1. In the **Delegate** section, select **Delegate subscriptions** or **Delegate resource groups**. -1. Choose the subscriptions and/or resource groups you'd like to delegate for this offer, then select **Add**. -1. Select the checkbox at the bottom of the page to confirm that you want to grant this service provider access to these resources, then select **Delegate**. --## View delegations --Delegations represent an association of specific customer resources (subscriptions and/or resource groups) with role assignments that grant permissions to the service provider for those resources. To view delegation details, select **Delegations** on the left side of the **Service providers** page. --Filters at the top of the page let you sort and group your delegation information. You can also filter by specific service providers, offers, or keywords. --> [!NOTE] -> When [viewing role assignments for the delegated scope in the Azure portal](../../role-based-access-control/role-assignments-list-portal.yml#list-role-assignments-at-a-scope) or via APIs, customers won't see role assignments for users from the service provider tenant who have access through Azure Lighthouse. 
Similarly, users in the service provider tenant won't see role assignments for users in a customer's tenant, regardless of the role they've been assigned. -> -> Note that [classic administrator](../../role-based-access-control/classic-administrators.md) assignments in a customer tenant may be visible to users in the managing tenant, or the other way around, because classic administrator roles don't use the Resource Manager deployment model. --## Audit and restrict delegations in your environment --Customers may want to review all subscriptions and/or resource groups that have been delegated to Azure Lighthouse. This is especially useful for those customers with a large number of subscriptions, or who have many users who perform management tasks. --We provide an [Azure Policy built-in policy definition](../../governance/policy/samples/built-in-policies.md#lighthouse) to [audit delegation of scopes to a managing tenant](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Lighthouse/Delegations_Audit.json). You can assign this policy to a management group that includes all of the subscriptions that you want to audit. When you check for compliance with this policy, any delegated subscriptions and/or resource groups (within the management group to which the policy is assigned) are shown in a noncompliant state. You can then review the results and confirm that there are no unexpected delegations. --Another [built-in policy definition](../../governance/policy/samples/built-in-policies.md#lighthouse) lets you [restrict delegations to specific managing tenants](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Lighthouse/AllowCertainManagingTenantIds_Deny.json). This policy can be assigned to a management group that includes any subscriptions for which you want to limit delegations. After the policy is deployed, any attempts to delegate a subscription to a tenant outside of the ones you specify will be denied. --For more information about how to assign a policy and view compliance state results, see [Quickstart: Create a policy assignment](../../governance/policy/assign-policy-portal.md). --## Next steps --- Learn more about [Azure Lighthouse](../overview.md).-- Learn how to [audit service provider activity](view-service-provider-activity.md).-- Learn how service providers can [view and manage customers](view-manage-customers.md) on the **My customers** page in the Azure portal.-- Learn how [enterprises managing multiple tenants](../concepts/enterprise.md) can use Azure Lighthouse to consolidate their management experience. |
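The audit described in the section above can also be scripted instead of assigned through the portal. The following is a minimal Azure CLI sketch, assuming the built-in definition's display name and a placeholder management group ID; confirm both against your environment before using it.

```azurecli
# Look up the built-in definition that audits delegation of scopes to a managing tenant.
# The display name below is an assumption; confirm it in the built-in policies list.
definitionName=$(az policy definition list \
  --query "[?displayName=='Audit delegation of scopes to a managing tenant'].name | [0]" \
  --output tsv)

# Assign the policy to a management group that contains the subscriptions you want to audit.
az policy assignment create \
  --name "audit-lighthouse-delegations" \
  --policy "$definitionName" \
  --scope "/providers/Microsoft.Management/managementGroups/<management-group-id>"
```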
lighthouse | View Service Provider Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/view-service-provider-activity.md | - Title: Monitor service provider activity -description: Customers can monitor logged activity to see actions performed by service providers through Azure Lighthouse. Previously updated : 05/23/2023----# Monitor service provider activity --Customers who have delegated subscriptions to service providers through [Azure Lighthouse](../overview.md) can [view Azure Activity log](/azure/azure-monitor/essentials/activity-log) data to see all actions taken. This data provides full visibility for actions that service providers take on delegated customer resources. The activity log also shows operations from users within the customer's own Microsoft Entra tenant. --## View activity log data --[View the activity log](/azure/azure-monitor/essentials/activity-log-insights#view-the-activity-log) from the **Monitor** menu in the Azure portal. Use the filters if you want to show results from a specific subscription. --You can also [view and retrieve activity log events](/azure/azure-monitor/essentials/activity-log#other-methods-to-retrieve-activity-log-events) programmatically. --> [!NOTE] -> Users in a service provider's tenant can view activity log results for a delegated subscription if they were granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role which includes Reader access) when that subscription was onboarded to Azure Lighthouse. --In the activity log, you'll see the name of the operation and its status, along with the date and time it was performed. The **Event initiated by** column shows which user performed the operation, whether it was a user in a service provider's tenant acting through Azure Lighthouse, or a user in the customer's own tenant. Note that the name of the user is shown, rather than the tenant or the role that the user has been assigned for that subscription. --> [!NOTE] -> Users from the service provider appear in the activity log, but these users and their role assignments aren't shown in **Access Control (IAM)** or when retrieving role assignment info via APIs. --Logged activity is available in the Azure portal for the past 90 days. You can also [store this data for a longer period](/azure/azure-monitor/essentials/activity-log-insights#retention-period) if needed. --## Set alerts for critical operations --To stay aware of critical operations that service providers (or users in the customer's own tenant) are performing, we recommend creating [activity log alerts](/azure/azure-monitor/alerts/alerts-types#activity-log-alerts). For example, you may want to track all administrative actions for a subscription, or be notified when any virtual machine in a particular resource group is deleted. When you create alerts, they'll include actions performed by users both in the customer's tenant and in any managing tenants. --For more information, see [Create, view, and manage activity log alerts](/azure/azure-monitor/alerts/alerts-activity-log). --## Create log queries --Log queries can help you analyze your logged activity or focus on specific items. For example, an audit might require you to report on all administrative-level actions performed on a subscription. You can create a query to filter on only these actions and sort the results by user, date, or another value. 
--For more information, see [Log queries in Azure Monitor](/azure/azure-monitor/logs/log-query-overview). --## View user activity across domains --To view activity from individual users across multiple domains, use the [Activity Logs by Domain](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/workbook-activitylogs-by-domain) sample workbook. --Results can be filtered by domain name. You can also apply additional filters such as category, level, or resource group. --## Next steps --- Learn how to [audit and restrict delegations](view-manage-service-providers.md#audit-and-restrict-delegations-in-your-environment).-- Learn more about [Azure Monitor](/azure/azure-monitor/).-- Learn how to [view and manage service provider offers](view-manage-service-providers.md) in the Azure portal. |
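The activity data described in this article can also be pulled with the Azure CLI rather than the portal. This is a minimal sketch, assuming you're signed in to the customer tenant and scoped to the delegated subscription; the JMESPath field names follow the standard activity log event schema.

```azurecli
# List administrative operations from the last seven days and show who initiated each one.
# Users from a managing tenant appear by user name, the same as in the portal.
az monitor activity-log list \
  --offset 7d \
  --query "[?category.value=='Administrative'].{Operation:operationName.value, Caller:caller, Status:status.value, Time:eventTimestamp}" \
  --output table
```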
lighthouse | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/overview.md | - Title: What is Azure Lighthouse? -description: Azure Lighthouse lets service providers deliver managed services for their customers with higher automation and efficiency at scale. Previously updated : 12/07/2023----# What is Azure Lighthouse? --Azure Lighthouse enables multitenant management with scalability, higher automation, and enhanced governance across resources. --With Azure Lighthouse, service providers can deliver managed services using [comprehensive and robust tooling built into the Azure platform](concepts/architecture.md). Customers maintain control over who has access to their tenant, which resources they can access, and what actions can be taken. [Enterprise organizations](concepts/enterprise.md) managing resources across multiple tenants can use Azure Lighthouse to streamline management tasks. --[Cross-tenant management experiences](concepts/cross-tenant-management-experience.md) let you work more efficiently with Azure services such as [Azure Policy](how-to/policy-at-scale.md), [Microsoft Sentinel](how-to/manage-sentinel-workspaces.md), [Azure Arc](how-to/manage-hybrid-infrastructure-arc.md), and many more. Users can see what changes were made and by whom [in the activity log](how-to/view-service-provider-activity.md), which is stored in the customer's tenant and can be viewed by users in the managing tenant. --![Diagram showing an overview of how Azure Lighthouse works.](media/azure-lighthouse-overview.jpg) --## Benefits --Azure Lighthouse helps service providers efficiently build and deliver managed services. Benefits include: --- **Management at scale**: Customer engagement and life-cycle operations to manage customer resources are easier and more scalable. Existing APIs, management tools, and workflows can be used with delegated resources, including machines hosted outside of Azure, regardless of the regions in which they're located.-- **Greater visibility and control for customers**: Customers have precise control over the scopes they delegate and the permissions that are allowed. They can [audit service provider actions](how-to/view-service-provider-activity.md) and remove access completely at any time.-- **Comprehensive and unified platform tooling**: Azure Lighthouse works with existing tools and APIs, [Azure managed applications](concepts/managed-applications.md), and partner programs like the [Cloud Solution Provider (CSP) program](concepts/cloud-solution-provider.md). This flexibility supports key service provider scenarios, including multiple licensing models such as EA, CSP and pay-as-you-go. You can integrate Azure Lighthouse into your existing workflows and applications, and track your impact on customer engagements by linking your partner ID.--## Capabilities --Azure Lighthouse includes multiple ways to help streamline engagement and management: --- **Azure delegated resource management**: [Manage your customers' Azure resources securely from within your own tenant](concepts/architecture.md), without having to switch context and control planes. Customer subscriptions and resource groups can be delegated to specified users and roles in the managing tenant, with the ability to remove access as needed.-- **New Azure portal experiences**: View cross-tenant information in the [**My customers** page](how-to/view-manage-customers.md) in the Azure portal. 
A corresponding [**Service providers** page](how-to/view-manage-service-providers.md) lets customers view and manage their service provider access.-- **Azure Resource Manager templates**: Use ARM templates to [onboard delegated customer resources](how-to/onboard-customer.md) and [perform cross-tenant management tasks](samples/index.md).-- **Managed Service offers in Azure Marketplace**: [Offer your services to customers](concepts/managed-services-offers.md) through private or public offers, and automatically onboard them to Azure Lighthouse.--> [!TIP] -> A similar offering, [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview), helps service providers onboard, monitor, and manage their Microsoft 365 customers at scale. --## Pricing and availability --There are no additional costs associated with using Azure Lighthouse to manage Azure resources. Any Azure customer or partner can use Azure Lighthouse. --## Cross-region and cloud considerations --Azure Lighthouse is a non-regional service. You can manage delegated resources that are located in different [regions](/azure/cloud-adoption-framework/ready/azure-setup-guide/regions). However, you can't delegate resources across a [national cloud](/entra/identity-platform/authentication-national-cloud) and the Azure public cloud, or across two separate national clouds. --## Support for Azure Lighthouse --For help with Azure Lighthouse, [open a support request](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. For **Issue type**, choose **Technical**. Select a subscription, then select **Lighthouse** (under **Monitoring & Management**). --## Next steps --- Learn [how Azure Lighthouse works on a technical level](concepts/architecture.md).-- Explore [cross-tenant management experiences](concepts/cross-tenant-management-experience.md).-- See how to [use Azure Lighthouse within an enterprise](concepts/enterprise.md).-- View [availability](https://azure.microsoft.com/global-infrastructure/services/?products=azure-lighthouse®ions=all) and [FedRAMP and DoD CC SRG audit scope](../azure-government/compliance/azure-services-in-fedramp-auditscope.md) details for Azure Lighthouse. |
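Because Azure Lighthouse works with existing tools and APIs, a customer can also inspect delegations from the command line. A minimal sketch, assuming the Azure CLI is signed in to the customer tenant:

```azurecli
# List the registration definitions (offers) and assignments (delegations)
# that grant a managing tenant access to resources in this tenant.
az managedservices definition list --output table
az managedservices assignment list --include-definition true --output table
```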
lighthouse | Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/index.md | - Title: Azure Lighthouse samples and templates -description: These samples and Azure Resource Manager templates help you onboard customers and support Azure Lighthouse scenarios. - Previously updated : 07/10/2024--# Azure Lighthouse samples --The following table includes links to key Azure Resource Manager templates for Azure Lighthouse. These files and more can also be found in the [Azure Lighthouse samples repository](https://github.com/Azure/Azure-Lighthouse-samples/). --## Onboard customers ---## Azure Policy ---## Azure Monitor ---## Additional cross-tenant scenarios ---## Next steps --- Learn about [Azure Lighthouse architecture and technical concepts](../concepts/architecture.md).-- Explore the [Azure Lighthouse samples repository](https://github.com/Azure/Azure-Lighthouse-samples/). |
lighthouse | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md | - Title: Built-in policy definitions for Azure Lighthouse -description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/06/2024----# Azure Policy built-in definitions for Azure Lighthouse --This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy -definitions for Azure Lighthouse. For additional Azure Policy built-ins for other -services, see -[Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md). --The name of each built-in policy definition links to the policy definition in the Azure portal. Use -the link in the **Version** column to view the source on the -[Azure Policy GitHub repo](https://github.com/Azure/azure-policy). --## Azure Lighthouse ---## Next steps --- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).-- Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../../governance/policy/concepts/effects.md). |
logic-apps | Azure Arc Enabled Logic Apps Create Deploy Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/azure-arc-enabled-logic-apps-create-deploy-workflows.md | For more information, review the following documentation: - [What is Azure Arc-enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) - [Single-tenant versus multitenant in Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md)-- [Azure Arc overview](../azure-arc/overview.md)+- [Azure Arc overview](/azure/azure-arc/overview) - [Azure Kubernetes Service overview](/azure/aks/intro-kubernetes)-- [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md)-- [Custom locations on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-custom-locations.md)+- [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview) +- [Custom locations on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-custom-locations) - [App Service, Functions, and Logic Apps on Azure Arc (Preview)](../app-service/overview-arc-integration.md) - [Set up an Azure Arc-enabled Kubernetes cluster to run App Service, Functions, and Logic Apps (Preview)](../app-service/manage-create-arc-environment.md) This section describes the common prerequisites across all the approaches and to For more information, review the following documentation: - [App Service, Functions, and Logic Apps on Azure Arc (Preview)](../app-service/overview-arc-integration.md)- - [Cluster extensions on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-extensions.md) + - [Cluster extensions on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-extensions) - [Set up an Azure Arc-enabled Kubernetes cluster to run App Service, Functions, and Logic Apps (Preview)](../app-service/manage-create-arc-environment.md) - [Change the default scaling behavior](#change-scaling) If you prefer to use container tools and deployment processes, you can container - Set up a Docker registry for hosting your container images. -- To containerize your logic app, add the following Dockerfile to your logic app project's root folder, and follow the steps for building and publishing an image to your Docker registry, for example, review [Tutorial: Build and deploy container images in the cloud with Azure Container Registry Tasks](../container-registry/container-registry-tutorial-quick-task.md).+- To containerize your logic app, add the following Dockerfile to your logic app project's root folder, and follow the steps for building and publishing an image to your Docker registry, for example, review [Tutorial: Build and deploy container images in the cloud with Azure Container Registry Tasks](/azure/container-registry/container-registry-tutorial-quick-task). > [!NOTE] > If you [use SQL as your storage provider](set-up-sql-db-storage-single-tenant-standard-workflows.md), make sure that you use an Azure Functions image version 3.3.1 or later. |
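To build and publish the image referenced above without a local Docker daemon, ACR Tasks can build and push in one step. A minimal sketch, run from the logic app project root where the Dockerfile was added; the registry name and image tag are placeholders:

```azurecli
# Build the container image from the project root and push it to the registry in one step.
az acr build \
  --registry <registry-name> \
  --image logicapps/my-workflows:1.0 \
  .
```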
logic-apps | Azure Arc Enabled Logic Apps Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/azure-arc-enabled-logic-apps-overview.md | For more information, review the following documentation: - [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md) - [Single-tenant versus multitenant in Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md)-- [Azure Arc overview](../azure-arc/overview.md)+- [Azure Arc overview](/azure/azure-arc/overview) - [Azure Kubernetes Service overview](/azure/aks/intro-kubernetes)-- [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md)+- [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview) - [What is Kubernetes?](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/) <a name="why-use"></a> For more information, review the following documentation: - [Single-tenant versus multitenant in Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md) - [Azure Kubernetes Service overview](/azure/aks/intro-kubernetes)-- [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md)-- [Custom locations on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-custom-locations.md)+- [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview) +- [Custom locations on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-custom-locations) - [App Service, Functions, and Logic Apps on Azure Arc (Preview)](../app-service/overview-arc-integration.md) - [Set up an Azure Arc-enabled Kubernetes cluster to run App Service, Functions, and Logic Apps (Preview)](../app-service/manage-create-arc-environment.md) |
logic-apps | Biztalk Server Azure Integration Services Migration Approaches | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-azure-integration-services-migration-approaches.md | Consider the following testing recommendations for your migration project: - Mock testing capabilities for HTTP actions and Azure connectors. - Configure tests to use different setting values from production. - - [Integration Playbook: Logic Apps Standard Testing](https://www.mikestephenson.me/2021/12/11/logic-app-standard-integration-testing/) from Michael Stephenson, Microsoft MVP + - [Integration Playbook: Logic Apps Standard Testing](https://mikestephenson.me/2021/12/11/logic-app-standard-integration-testing/) from Michael Stephenson, Microsoft MVP The [Integration Playbook testing framework](https://github.com/michaelstephensonuk/IntegrationPlaybook-LogicApp-Standard-Testing) builds on the Microsoft-provided test framework and supports additional scenarios: |
logic-apps | Biztalk Server To Azure Integration Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md | Administrators use the [BizTalk Server Administrator Console](/biztalk/core/usin #### Azure Integration Services -The [Azure portal](../azure-portal/azure-portal-overview.md) is a common tool that administrators and support personnel use to view and monitor the health of interfaces. For Azure Logic Apps, this experience includes rich transaction traces that are available through run history. +The [Azure portal](/azure/azure-portal/azure-portal-overview) is a common tool that administrators and support personnel use to view and monitor the health of interfaces. For Azure Logic Apps, this experience includes rich transaction traces that are available through run history. Granular [role-based access controls (RBAC)](../role-based-access-control/overview.md) are also available so you can manage and restrict access to Azure resources at various levels. |
logic-apps | Set Up Sql Db Storage Single Tenant Standard Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-sql-db-storage-single-tenant-standard-workflows.md | When you create your logic app using the **Logic App (Standard)** resource type | **Resource Group** | Yes | <*Azure-resource-group-name*> | The Azure resource group where you create your logic app and related resources. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a resource group named `Fabrikam-Workflows-RG`. | | **Type** | Yes | **Standard** | This logic app resource type runs in the single-tenant Azure Logic Apps environment and uses the [Standard usage, billing, and pricing model](logic-apps-pricing.md#standard-pricing). | | **Logic App name** | Yes | <*logic-app-name*> | The name to use for your logic app. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a logic app named `Fabrikam-Workflows`. <p><p>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Standard)** resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. |- | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. By default, **Workflow** is selected for deployment to single-tenant Azure Logic Apps. Azure creates an empty logic app resource where you have to add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Preview)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. | + | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. By default, **Workflow** is selected for deployment to single-tenant Azure Logic Apps. Azure creates an empty logic app resource where you have to add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](/azure/azure-arc/kubernetes/conceptual-custom-locations) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Preview)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. | | **Region** | Yes | <*Azure-region*> | The location to use for creating your resource group and resources. This example deploys the sample logic app to Azure and uses **West US**. <p>- If you selected **Docker Container**, select your custom location. <p>- To deploy to an [ASEv3](../app-service/environment/overview.md) resource, which must first exist, select that environment resource from the **Region** list. | ||||| |
managed-grafana | Concept Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-role-based-access-control.md | The following built-in roles are available in Azure Managed Grafana, each provid > | | | | > | <a name='grafana-admin'></a>[Grafana Admin](../role-based-access-control/built-in-roles/monitor.md#grafana-admin) | Perform all Grafana operations, including the ability to manage data sources, create dashboards, and manage role assignments within Grafana. | 22926164-76b3-42b3-bc55-97df8dab3e41 | > | <a name='grafana-editor'></a>[Grafana Editor](../role-based-access-control/built-in-roles/monitor.md#grafana-editor) | View and edit a Grafana instance, including its dashboards and alerts. | a79a5197-3a5c-4973-a920-486035ffd60f |+> | <a name='grafana-limited-viewer'></a>[Grafana Limited Viewer](../role-based-access-control/built-in-roles/monitor.md#grafana-limited-viewer) | View a Grafana home page. This role contains no permissions assigned by default and it is not available for Grafana v9 workspaces. | 41e04612-9dac-4699-a02b-c82ff2cc3fb5 | > | <a name='grafana-viewer'></a>[Grafana Viewer](../role-based-access-control/built-in-roles/monitor.md#grafana-viewer) | View a Grafana instance, including its dashboards and alerts. | 60921a7e-fef1-4a43-9b16-a26c52ad4769 | To access the Grafana user interface, users must possess one of these roles. -These permissions are included within the broader roles of resource group Contributor and resource group Owner roles. If you're not a resource group Contributor or resource group Owner, a User Access Administrator, you will need to ask a subscription Owner or resource group Owner to grant you one of the Grafana roles on the resource you want to access. +These permissions are included within the broader roles of resource group Contributor and resource group Owner roles. If you're not a resource group Contributor or a resource group Owner, you will need to ask a subscription Owner or resource group Owner to grant you one of the Grafana roles on the resource you want to access. ++You can find more information about the Grafana roles from the [Grafana documentation](https://grafana.com/docs/grafana/latest/administration/roles-and-permissions/#organization-roles). The Grafana Limited Viewer role in Azure maps to the "No Basic Role" in the Grafana docs. ## Adding a role assignment to an Azure Managed Grafana resource |
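Before adding a new assignment, you can check which Grafana roles are already assigned on a workspace by querying role assignments at the resource scope. A minimal sketch, assuming the Azure CLI; the resource ID format shown is for illustration and should be copied from your workspace's Overview page:

```azurecli
# List principals that hold a Grafana role on a Managed Grafana workspace.
grafanaId="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Dashboard/grafana/<workspace-name>"

az role assignment list \
  --scope "$grafanaId" \
  --query "[?contains(roleDefinitionName, 'Grafana')].{Principal:principalName, Role:roleDefinitionName}" \
  --output table
```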
managed-grafana | How To Share Grafana Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md | Azure Managed Grafana enables such collaboration by allowing you to set custom p ## Supported Grafana roles -Azure Managed Grafana supports the Grafana Admin, Grafana Editor, and Grafana Viewer roles: +Azure Managed Grafana supports the following Grafana roles: -- The Grafana Admin role provides full control of the instance including managing role assignments, viewing, editing, and configuring data sources.-- The Grafana Editor role provides read-write access to the dashboards in the instance.-- The Grafana Viewer role provides read-only access to dashboards in the instance.+- Grafana Admin: provides full control of the instance including managing role assignments, viewing, editing, and configuring data sources. +- Grafana Editor: provides read-write access to the dashboards in the instance. +- Grafana Limited Viewer: provides read-only access to the Grafana home page. This role contains no permissions assigned by default and it is not available for Grafana v9 workspaces. +- Grafana Viewer: provides read-only access to dashboards in the instance. -More details on Grafana roles can be found in the [Grafana documentation](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#compare-roles). --Grafana user roles and assignments are fully [integrated within Microsoft Entra ID](../role-based-access-control/built-in-roles.md#grafana-admin). You can assign a Grafana role to any Microsoft Entra user, group, service principal or managed identity, and grant them access permissions associated with that role. You can manage these permissions from the Azure portal or the command line. This section explains how to assign Grafana roles to users in the Azure portal. +Go to [Azure role-based access control within Azure Managed Grafana](./concept-role-based-access-control.md) for more information about these roles in Azure, and to [Organization roles](https://grafana.com/docs/grafana/latest/administration/roles-and-permissions/#organization-roles) to learn about Grafana roles from the Grafana website. The Grafana Limited Viewer role in Azure maps to the "No Basic Role" in the Grafana documentation. ## Add a Grafana role assignment +Grafana user roles and assignments are fully [integrated within Microsoft Entra ID](../role-based-access-control/built-in-roles.md#grafana-admin). You can assign a Grafana role to any Microsoft Entra user, group, service principal or managed identity, and grant them access permissions associated with that role. You can manage these permissions from the Azure portal or the command line. This section explains how to assign Grafana roles to users in the Azure portal. + ### [Portal](#tab/azure-portal) 1. Open your Azure Managed Grafana instance. Grafana user roles and assignments are fully [integrated within Microsoft Entra :::image type="content" source="media/share/iam-page.png" alt-text="Screenshot of Add role assignment in the Azure platform."::: -1. Select a Grafana role to assign among **Grafana Admin**, **Grafana Editor** or **Grafana Viewer**, then select **Next**. +1. Select a Grafana role to assign among **Grafana Admin**, **Grafana Editor**, **Grafana Limited Viewer** or **Grafana Viewer**, then select **Next**. 
:::image type="content" source="media/share/role-assignment.png" alt-text="Screenshot of the Grafana roles in the Azure platform."::: In the code below, replace the following placeholders: - `<roleNameOrId>`: - For Grafana Admin, enter `Grafana Admin` or `22926164-76b3-42b3-bc55-97df8dab3e41`. - For Grafana Editor, enter `Grafana Editor` or `a79a5197-3a5c-4973-a920-486035ffd60f`.+ - For Grafana Limited Viewer, enter `Grafana Limited Viewer` or `41e04612-9dac-4699-a02b-c82ff2cc3fb5`. - For Grafana Viewer, enter `Grafana Viewer` or `60921a7e-fef1-4a43-9b16-a26c52ad4769`. - `<scope>`: enter the full ID of the Azure Managed Grafana instance. |
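The CLI command that uses these placeholders isn't shown in this excerpt; the following is a minimal sketch of what such an assignment typically looks like, here granting the Grafana Limited Viewer role. The assignee value is a placeholder for a user, group, service principal, or managed identity.

```azurecli
# Assign a Grafana role on an Azure Managed Grafana instance.
az role assignment create \
  --assignee "<assignee-object-id-or-upn>" \
  --role "Grafana Limited Viewer" \
  --scope "<scope>"
```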
migrate | Onboard To Azure Arc With Azure Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/onboard-to-azure-arc-with-azure-migrate.md | -Azure Arc allows you to manage your hybrid IT estate with a single pane of glass by extending the Azure management experience to your on-premises servers that are not ideal candidates for migration. [Learn more](../azure-arc/servers/overview.md) about Azure Arc. +Azure Arc allows you to manage your hybrid IT estate with a single pane of glass by extending the Azure management experience to your on-premises servers that are not ideal candidates for migration. [Learn more](/azure/azure-arc/servers/overview) about Azure Arc. ## Before you get started Azure Arc allows you to manage your hybrid IT estate with a single pane of glass - _For Linux:_ On all target Linux servers, allow inbound connections on port 22 (SSH). - You can also add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the appliance. 2. The Azure Migrate appliance should have a network line of sight to the target servers. -- Be sure to verify the [prerequisites for Azure Arc](../azure-arc/servers/prerequisites.md) and review the following considerations:+- Be sure to verify the [prerequisites for Azure Arc](/azure/azure-arc/servers/prerequisites) and review the following considerations: - Onboarding to Azure Arc can only be initiated after the vCenter Server discovery and software inventory is completed. It may take up to 6 hours for software inventory to complete after it is turned on.- - The [Azure Arc Hybrid Connected Machine agent](../azure-arc/servers/learn/quick-enable-hybrid-vm.md) will be installed on the discovered servers during the Arc onboarding process. Make sure you provide credentials with administrator permissions on the servers to install and configure the agent. On Linux, provide the root account, and on Windows, provide an account that is a member of the Local Administrators group. - - Verify that the servers are running [a supported operating system](../azure-arc/servers/prerequisites.md#supported-operating-systems). - - Ensure that the Azure account is granted assignment to the [required Azure roles](../azure-arc/servers/prerequisites.md#required-permissions). - - Make sure [the required URLs](../azure-arc/servers/network-requirements.md#urls) are not blocked if the discovered servers connect through a firewall or proxy server to communicate over the Internet. - - Review the [regions supported](../azure-arc/servers/overview.md#supported-regions) for Azure Arc. + - The [Azure Arc Hybrid Connected Machine agent](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm) will be installed on the discovered servers during the Arc onboarding process. Make sure you provide credentials with administrator permissions on the servers to install and configure the agent. On Linux, provide the root account, and on Windows, provide an account that is a member of the Local Administrators group. + - Verify that the servers are running [a supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). + - Ensure that the Azure account is granted assignment to the [required Azure roles](/azure/azure-arc/servers/prerequisites#required-permissions). + - Make sure [the required URLs](/azure/azure-arc/servers/network-requirements#urls) are not blocked if the discovered servers connect through a firewall or proxy server to communicate over the Internet. 
+ - Review the [regions supported](/azure/azure-arc/servers/overview#supported-regions) for Azure Arc. - Azure Arc-enabled servers support up to 5,000 machine instances in a resource group. Once the vCenter Server discovery has been completed, software inventory (discov 3. In the **Region** drop-down list, select the Azure region to store the servers' metadata. -4. Provide the **Microsoft Entra service principal** details for onboarding at scale. Review this article to [create a service principal using the Azure portal or Azure PowerShell.](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) <br/> +4. Provide the **Microsoft Entra service principal** details for onboarding at scale. Review this article to [create a service principal using the Azure portal or Azure PowerShell.](/azure/azure-arc/servers/onboard-service-principal#create-a-service-principal-for-onboarding-at-scale) <br/> The following inputs are required: - **Directory (tenant) ID** - The [unique identifier (GUID)](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application) that represents your dedicated instance of Microsoft Entra ID. Once the vCenter Server discovery has been completed, software inventory (discov If you receive an error when onboarding to Azure Arc using the Azure Migrate appliance, the following section can help identify the probable cause and suggested steps to resolve your problem. -If you don't see the error code listed below or if the error code starts with **_AZCM_**, refer to [this guide for troubleshooting Azure Arc](../azure-arc/servers/troubleshoot-agent-onboard.md). +If you don't see the error code listed below or if the error code starts with **_AZCM_**, refer to [this guide for troubleshooting Azure Arc](/azure/azure-arc/servers/troubleshoot-agent-onboard). ### Error 60001 - UnableToConnectToPhysicalServer Unable to connect to server. Either you have provided incorrect credentials on t - The server hosts an unsupported operating system for Azure Arc onboarding. **Recommended actions** -- [Review the supported operating systems](../azure-arc/servers/prerequisites.md#supported-operating-systems) for Azure Arc. +- [Review the supported operating systems](/azure/azure-arc/servers/prerequisites#supported-operating-systems) for Azure Arc. ### Error 10002 - ScriptExecutionTimedOutOnVm |
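For the Microsoft Entra service principal input described earlier in this article, one way to create it is with the Azure CLI. A minimal sketch, assuming the built-in Azure Connected Machine Onboarding role and placeholder name and scope values; the command output supplies the directory (tenant) ID, application (client) ID, and secret that the appliance asks for.

```azurecli
# Create a service principal scoped to the resource group where Arc-enabled servers will be created.
az ad sp create-for-rbac \
  --name "Arc-Migrate-Onboarding" \
  --role "Azure Connected Machine Onboarding" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```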
migrate | Quickstart Create Migrate Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/quickstart-create-migrate-project.md | Title: Quickstart to create an Azure Migrate project using an Azure Resource Manager template. description: In this quickstart, you learn how to create an Azure Migrate project using an Azure Resource Manager template (ARM template). Previously updated : 07/28/2021 Last updated : 09/19/2024 ms. -+ # Quickstart: Create an Azure Migrate project using an ARM template To confirm that the Azure Migrate project was created, use the Azure portal. ## Next steps -In this quickstart, you created an Azure Migrate project. To learn more about Azure Migrate and its capabilities, continue to the Azure Migrate overview. --> [!div class="nextstepaction"] -> [Azure Migrate overview](migrate-services-overview.md) -> +In this quickstart, you created an Azure Migrate project. +- To learn more about Azure Migrate and its capabilities, continue to the [Azure Migrate overview](migrate-services-overview.md). +- Follow these tutorials to discover [VMware VMs](./vmware/tutorial-discover-vmware.md), [Hyper-V VMs](tutorial-discover-hyper-v.md), and [Physical servers](tutorial-discover-physical.md). |
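As an alternative to the portal-based deployment in this quickstart, the template can also be deployed with the Azure CLI. This is a sketch under assumptions: the local template file name and the project name parameter are placeholders and may differ from the actual quickstart template.

```azurecli
# Create a resource group and deploy the quickstart template into it.
az group create --name MigrateProjectRG --location westus2

az deployment group create \
  --resource-group MigrateProjectRG \
  --template-file azuredeploy.json \
  --parameters migrateProjectName=MyMigrateProject
```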
migrate | Tutorial App Containerization Aspnet App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-app-service.md | Title: ASP.NET app containerization and migration to App Service description: This tutorial demonstrates how to containerize ASP.NET applications and migrate them to Azure App Service.--++ ms. Previously updated : 03/06/2024 Last updated : 09/19/2024 # ASP.NET app containerization and migration to Azure App Service If you just created a free Azure account, you're the owner of your subscription. ![Screenshot that shows the User settings page.](./media/tutorial-discover-vmware/register-apps.png) + [!INCLUDE [global-admin-usage.md](includes/global-admin-usage.md)] + 10. If the **App registrations** option is set to **No**, ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the Application developer role to an account to allow the registration of Microsoft Entra apps. For more information, see [Assign roles to users](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md). ## Download and install the Azure Migrate App Containerization tool Parameterizing the configuration makes it available as a deploy-time parameter. ## Build container image -1. In the dropdown list, select an [Azure container registry](../container-registry/index.yml) that will be used to build and store the container images for the apps. You can use an existing Azure container registry or create a new one by selecting **Create new registry**: +1. In the dropdown list, select an [Azure container registry](/azure/container-registry/) that will be used to build and store the container images for the apps. You can use an existing Azure container registry or create a new one by selecting **Create new registry**: ![Screenshot that shows the Build images window.](./media/tutorial-containerize-apps-aks/build-aspnet-app.png) > [!NOTE]- > Only Azure container registries with the admin user account enabled are displayed. The admin user account is currently required for deploying an image from an Azure container registry to Azure App Service. For more information, see [Authenticate with an Azure container registry](../container-registry/container-registry-authentication.md#admin-account). + > Only Azure container registries with the admin user account enabled are displayed. The admin user account is currently required for deploying an image from an Azure container registry to Azure App Service. For more information, see [Authenticate with an Azure container registry](/azure/container-registry/container-registry-authentication#admin-account). 2. The Dockerfiles needed to build the container images for each selected application are generated at the beginning of the build step. Select **Review** to review the Dockerfile. You can also add any necessary customizations to the Dockerfile in the review step and save the changes before you start the build process. |
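Because the tool only lists registries with the admin user enabled, you may need to turn that setting on before the registry appears in the dropdown. A minimal Azure CLI sketch; the registry name is a placeholder:

```azurecli
# Enable the admin user on an existing registry, then retrieve the admin credentials.
az acr update --name <registry-name> --admin-enabled true
az acr credential show --name <registry-name>
```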
migrate | Tutorial App Containerization Aspnet Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-kubernetes.md | Title: Azure App Containerization ASP.NET; Containerization and migration of ASP.NET applications to Azure Kubernetes. description: Tutorial - Containerize & migrate ASP.NET applications to Azure Kubernetes Service.-- -ms. ++ Previously updated : 03/06/2024 Last updated : 09/19/2024 # ASP.NET app containerization and migration to Azure Kubernetes Service If you just created a free Azure account, you're the owner of your subscription. ![Screenshot of verification in User Settings if users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png) + [!INCLUDE [global-admin-usage.md](includes/global-admin-usage.md)] + 1. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Microsoft Entra App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md). ## Download and install Azure Migrate: App Containerization tool Parameterizing the configuration makes it available as a deployment time paramet To build a container image, follow these steps: -1. **Select Azure Container Registry**: Use the dropdown to select an [Azure Container Registry](../container-registry/index.yml) that will be used to build and store the container images for the apps. You can use an existing Azure Container Registry or choose to create a new one using the Create new registry option. +1. **Select Azure Container Registry**: Use the dropdown to select an [Azure Container Registry](/azure/container-registry/) that will be used to build and store the container images for the apps. You can use an existing Azure Container Registry or choose to create a new one using the Create new registry option. ![Screenshot for app ACR selection.](./media/tutorial-containerize-apps-aks/build-aspnet-app.png) |
migrate | Tutorial App Containerization Java App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md | Title: Containerization and migration of Java web applications to Azure App Service. description: Tutorial:Containerize & migrate Java web applications to Azure App Service.-- -ms. ++ Previously updated : 07/05/2024 Last updated : 09/19/2024 # Java web app containerization and migration to Azure App Service If you just created a free Azure account, you're the owner of your subscription. ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png) + [!INCLUDE [global-admin-usage.md](includes/global-admin-usage.md)] + 10. In case the 'App registrations' setting is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Microsoft Entra App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md). ## Download and install Azure Migrate: App Containerization tool Parameterizing the configuration makes it available as a deployment time paramet ## Build container image -1. **Select Azure Container Registry**: Use the dropdown to select an [Azure Container Registry](../container-registry/index.yml) that will be used to build and store the container images for the apps. You can use an existing Azure Container Registry or choose to create a new one using the Create new registry option. +1. **Select Azure Container Registry**: Use the dropdown to select an [Azure Container Registry](/azure/container-registry/) that will be used to build and store the container images for the apps. You can use an existing Azure Container Registry or choose to create a new one using the Create new registry option. ![Screenshot for app ACR selection.](./media/tutorial-containerize-apps-aks/build-java-app.png) > [!NOTE]-> Only Azure container registries with admin user enabled are displayed. The admin account is currently required for deploying an image from an Azure container registry to Azure App Service. [Learn more](../container-registry/container-registry-authentication.md#admin-account). +> Only Azure container registries with admin user enabled are displayed. The admin account is currently required for deploying an image from an Azure container registry to Azure App Service. [Learn more](/azure/container-registry/container-registry-authentication#admin-account). 2. **Review the Dockerfile**: The Dockerfile needed to build the container images for each selected application are generated at the beginning of the build step. Click **Review** to review the Dockerfile. You can also add any necessary customizations to the Dockerfile in the review step and save the changes before starting the build process. |
migrate | Tutorial App Containerization Java Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md | Title: Azure App Containerization Java; Containerization and migration of Java web applications to Azure Kubernetes. description: Tutorial:Containerize & migrate Java web applications to Azure Kubernetes Service.-- -ms. ++ Previously updated : 07/05/2024 Last updated : 09/19/2024 # Java web app containerization and migration to Azure Kubernetes Service If you just created a free Azure account, you're the owner of your subscription. ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png) + [!INCLUDE [global-admin-usage.md](includes/global-admin-usage.md)] + 1. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Microsoft Entra App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md). ## Download and install Azure Migrate: App Containerization tool Parameterizing the configuration makes it available as a deployment time paramet ## Build container image -1. **Select Azure Container Registry**: Use the dropdown to select an [Azure Container Registry](../container-registry/index.yml) that will be used to build and store the container images for the apps. You can use an existing Azure Container Registry or choose to create a new one using the Create new registry option. +1. **Select Azure Container Registry**: Use the dropdown to select an [Azure Container Registry](/azure/container-registry/) that will be used to build and store the container images for the apps. You can use an existing Azure Container Registry or choose to create a new one using the Create new registry option. ![Screenshot for app ACR selection.](./media/tutorial-containerize-apps-aks/build-java-app.png) |
migrate | Tutorial Discover Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md | After you have performed server discovery and software inventory using the Azure - | - **Supported Linux OS** | Ubuntu 20.04, RHEL 9 **Hardware configuration required** | 8 GB RAM, with 30 GB storage, 4 Core CPU- **Network Requirements** | Access to the following endpoints: <br/><br/> *.docker.io <br/></br> *.docker.com <br/><br/>api.snapcraft.io <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> https://legoonboarding.blob.core.windows.net </br></br> [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/network-requirements.md) <br/><br/>[Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints) + **Network Requirements** | Access to the following endpoints: <br/><br/> *.docker.io <br/></br> *.docker.com <br/><br/>api.snapcraft.io <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> https://legoonboarding.blob.core.windows.net </br></br> [Azure Arc-enabled Kubernetes network requirements](/azure/azure-arc/kubernetes/network-requirements) <br/><br/>[Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints) After copying the script, you can go to your Linux server, save the script as *Deploy.sh* on the server. #### [Bring your own Kubernetes cluster](#tab/K8-byoc) -1. In **Choose connected cluster**, you need to select an existing Azure Arc connected cluster from your subscription. If you don't have an existing connected cluster, you can Arc enable a Kubernetes cluster running on-premises by following the steps [here](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli). +1. In **Choose connected cluster**, you need to select an existing Azure Arc connected cluster from your subscription. If you don't have an existing connected cluster, you can Arc enable a Kubernetes cluster running on-premises by following the steps [here](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli). > [!Note] > You can only select an existing connected cluster that's deployed in the same region as your Azure Migrate project. |
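If your cluster isn't yet connected to Azure Arc, the connection can be made from any machine with kubectl access to it. A minimal sketch, assuming the connectedk8s CLI extension and placeholder names; remember that the connected cluster must be in the same region as your Azure Migrate project.

```azurecli
# Connect an existing on-premises Kubernetes cluster to Azure Arc.
# Use the same region as your Azure Migrate project.
az extension add --name connectedk8s

az connectedk8s connect \
  --name my-onprem-cluster \
  --resource-group MyMigrateRG \
  --location <region>
```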
migrate | Tutorial Migrate Aws Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md | A Mobility service agent must be preinstalled on the source AWS VMs to be migrat - [AWS System Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) - [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md)-- [Azure Arc for servers and custom script extensions](../azure-arc/servers/overview.md)+- [Azure Arc for servers and custom script extensions](/azure/azure-arc/servers/overview) - [Install Mobility agent for Windows](../site-recovery/vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-command-prompt-classic) - [Install Mobility agent for Linux](../site-recovery/vmware-physical-mobility-service-overview.md#linux-machine-1) |
migrate | Tutorial Migrate Gcp Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md | The first step of migration is to set up the replication appliance. To set up th A Mobility service agent must be preinstalled on the source GCP VMs to be migrated before you can initiate replication. The approach you choose to install the Mobility service agent might depend on your organization's preferences and existing tools. The "push" installation method built into Azure Site Recovery isn't currently supported. Approaches you might want to consider: - [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md)-- [Azure Arc for servers and custom script extensions](../azure-arc/servers/overview.md)+- [Azure Arc for servers and custom script extensions](/azure/azure-arc/servers/overview) - [Install Mobility agent for Windows](../site-recovery/vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-command-prompt-classic) - [Install Mobility agent for Linux](../site-recovery/vmware-physical-mobility-service-overview.md#linux-machine-1) |
migrate | Tutorial Migrate Physical Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md | The mobility service agent must be installed on the servers to get them discover A Mobility service agent must be preinstalled on the source physical machines to be migrated before you can start replication. The approach you choose to install the Mobility service agent might depend on your organization's preferences and existing tools. The "push" installation method built into Site Recovery isn't currently supported. Approaches you might want to consider: - [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md)-- [Azure Arc for servers and custom script extensions](../azure-arc/servers/overview.md)+- [Azure Arc for servers and custom script extensions](/azure/azure-arc/servers/overview) - [Install Mobility agent for Windows](../site-recovery/vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-command-prompt-classic) - [Install Mobility agent for Linux](../site-recovery/vmware-physical-mobility-service-overview.md#linux-machine-1) |
migrate | Tutorial Modernize Asp Net Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-modernize-asp-net-aks.md | The application is finally ready for migration: After successfully migrating your applications to AKS, you may explore the following articles to optimize your apps for cloud: -- Set up CI/CD with [Azure Pipelines](/azure/aks/devops-pipeline), [GitHub Actions](/azure/aks/kubernetes-action) or [through GitOps](../azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md).+- Set up CI/CD with [Azure Pipelines](/azure/aks/devops-pipeline), [GitHub Actions](/azure/aks/kubernetes-action) or [through GitOps](/azure/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd). - Use Azure Monitor to [monitor health and performance of AKS and your apps](/azure/aks/monitor-aks). - Harden the security posture of your AKS cluster and containers with [Microsoft Defender for Containers](/azure/defender-for-cloud/defender-for-containers-enable). - Optimize [Windows Dockerfiles](/virtualization/windowscontainers/manage-docker/optimize-windows-dockerfile?context=/azure/aks/context/aks-context). |
nat-gateway | Tutorial Dual Stack Outbound Nat Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer.md | Before you can validate outbound connectivity, make note of the IPv4 and IPv6 pu 1. Select **public-ip-nat**. -1. Make note of the address in **IP address**. In this example, it's **20.230.191.5**. +1. Make note of the address in **IP address**. In this example, it's **203.0.113.5**. 1. Return to **Public IP addresses**. 1. Select **public-ip-ipv6**. -1. Make note of the address in **IP address**. In this example, it's **2603:1030:c02:8::14**. +1. Make note of the address in **IP address**. In this example, it's **2001:DB8::14**. # [**CLI**](#tab/dual-stack-outbound-cli) azureuser@Azure:~$ az network public-ip show \ --name public-ip-nat \ --query ipAddress \ --output tsv-40.90.217.214 +203.0.113.5 ``` ### IPv6 azureuser@Azure:~$ az network public-ip show \ --name public-ip-ipv6 \ --query ipAddress \ --output tsv-2603:1030:c04:3::4d +2001:DB8::14 ``` Make note of both IP addresses. Use the IPs to verify the outbound connectivity ```output azureuser@vm-1:~$ curl -4 icanhazip.com- 20.230.191.5 + 203.0.113.5 ``` 1. At the command line, enter the following command to verify the IPv4 address. Make note of both IP addresses. Use the IPs to verify the outbound connectivity ```output azureuser@vm-1:~$ curl -6 icanhazip.com- 2603:1030:c02:8::14 + 2001:DB8::14 ``` 1. Close the bastion connection to **vm-1**. Make note of both IP addresses. Use the IPs to verify the outbound connectivity ```output azureuser@vm-1:~$ curl -4 icanhazip.com- 40.90.217.214 + 203.0.113.5 ``` 1. At the command line, enter the following command to verify the IPv4 address. Make note of both IP addresses. Use the IPs to verify the outbound connectivity ```output azureuser@vm-1:~$ curl -6 icanhazip.com- 2603:1030:c04:3::4d + 2001:DB8::14 ``` 1. Close the bastion connection to **vm-1**. |
network-watcher | Azure Monitor Agent With Connection Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/azure-monitor-agent-with-connection-monitor.md | To install agents for Azure virtual machines and Virtual Machine Scale Sets, see To make connection monitor recognize your on-premises machines as sources for monitoring, follow these steps: -* Enable your hybrid endpoints to [Azure Arc-enabled servers](../azure-arc/overview.md). +* Enable your hybrid endpoints to [Azure Arc-enabled servers](/azure/azure-arc/overview). -* Connect hybrid machines by installing the [Azure Connected Machine agent](../azure-arc/servers/overview.md) on each machine. +* Connect hybrid machines by installing the [Azure Connected Machine agent](/azure/azure-arc/servers/overview) on each machine. This agent doesn't deliver any other functionality, and it doesn't replace Azure Monitor agent. The Azure Connected Machine agent simply enables you to manage the Windows and Linux machines that are hosted outside of Azure on your corporate network or other cloud providers. |
network-watcher | Connection Monitor Connected Machine Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-connected-machine-agent.md | This article describes how to install the Azure Connected Machine agent. * An Azure account with an active subscription. If you don't already have an account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Administrator permissions to install and configure the Connected Machine agent. On Linux, you install and configure it using the root account, and on Windows, you use an account that's a member of the Local Administrators group.-* Register the Microsoft.HybridCompute, Microsoft.GuestConfiguration, and Microsoft.HybridConnectivity resource providers on your subscription. You can [register these resource providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) either ahead of time or as you're completing the steps in this article. -* Review the [agent prerequisites](../azure-arc/servers/prerequisites.md), and ensure that: - * Your target machine is running a supported [operating system](../azure-arc/servers/prerequisites.md#supported-operating-systems). - * Your account has the [required Azure built-in roles](../azure-arc/servers/prerequisites.md#required-permissions). - * The machine is in a [supported region](../azure-arc/overview.md). - * If the machine connects through a firewall or proxy server to communicate over the internet, the listed URLs in [Connected Machine agent network requirements](../azure-arc/servers/network-requirements.md#urls) aren't blocked. +* Register the Microsoft.HybridCompute, Microsoft.GuestConfiguration, and Microsoft.HybridConnectivity resource providers on your subscription. You can [register these resource providers](/azure/azure-arc/servers/prerequisites#azure-resource-providers) either ahead of time or as you're completing the steps in this article. +* Review the [agent prerequisites](/azure/azure-arc/servers/prerequisites), and ensure that: + * Your target machine is running a supported [operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). + * Your account has the [required Azure built-in roles](/azure/azure-arc/servers/prerequisites#required-permissions). + * The machine is in a [supported region](/azure/azure-arc/overview). + * If the machine connects through a firewall or proxy server to communicate over the internet, the listed URLs in [Connected Machine agent network requirements](/azure/azure-arc/servers/network-requirements#urls) aren't blocked. ## Generate an installation script For servers that are enabled with Azure Arc, you can take the previously mention Alternatively, you can use the PowerShell cmdlet `Connect-AzConnectedMachine` to download the Azure Connected Machine agent, install the agent, and register the machine with Azure Arc. The cmdlet downloads the Windows agent package (Windows Installer) from the Microsoft Download Center, and it downloads the Linux agent package from the Microsoft package repository. -Refer to the linked document to discover the required steps to install the [Azure Arc agent via PowerShell](../azure-arc/servers/onboard-powershell.md). +Refer to the linked document to discover the required steps to install the [Azure Arc agent via PowerShell](/azure/azure-arc/servers/onboard-powershell). 
## Connect hybrid machines to Azure from Windows Admin Center -You can enable Azure Arc-enabled servers for one or more Windows machines in your environment manually, or you can use the Windows Admin Center to deploy the Azure Connected Machine agent and register your on-premises servers without having to perform any steps outside of this tool. For more information about installing the Azure Arc agent via Windows Admin Center, see [Connect hybrid machines to Azure from Windows Admin Center](../azure-arc/servers/onboard-windows-admin-center.md). +You can enable Azure Arc-enabled servers for one or more Windows machines in your environment manually, or you can use the Windows Admin Center to deploy the Azure Connected Machine agent and register your on-premises servers without having to perform any steps outside of this tool. For more information about installing the Azure Arc agent via Windows Admin Center, see [Connect hybrid machines to Azure from Windows Admin Center](/azure/azure-arc/servers/onboard-windows-admin-center). ## Next step |
network-watcher | Diagnose Network Security Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-network-security-rules.md | In this section, you create a virtual network with two subnets and an Azure Bast ```azurepowershell-interactive # Create an Azure Bastion host.- New-AzBastion -ResourceGroupName 'myResourceGroup' -Name 'myVNet-Bastion' --PublicIpAddressRgName 'myResourceGroup' -PublicIpAddressName 'myBastionIp' -VirtualNetwork $vnet + New-AzBastion -ResourceGroupName 'myResourceGroup' -Name 'myVNet-Bastion' -PublicIpAddressRgName 'myResourceGroup' -PublicIpAddressName 'myBastionIp' -VirtualNetwork $vnet ``` # [**Azure CLI**](#tab/cli) |
openshift | Connect Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/connect-cluster.md | You can find the cluster console URL by running the following command, which wil Launch the console URL in a browser and login using the `kubeadmin` credentials. -![Azure Red Hat OpenShift login screen](media/aro4-login.png) - ## Install the OpenShift CLI Once you're logged into the OpenShift Web Console, select the **?** at the top right and then on **Command Line Tools**. Download the release appropriate to your machine. |
openshift | Create Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/create-cluster.md | Azure Red Hat OpenShift is a managed OpenShift service that lets you quickly dep If you choose to install and use the CLI locally, you'll need to run Azure CLI version 2.30.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). -Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription doesn't meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md). +Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription doesn't meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](/azure/azure-portal/supportability/per-vm-quota-requests). * For example, to check the current subscription quota of the smallest supported virtual machine family SKU "Standard DSv3": |
openshift | Howto Create Private Cluster 4X | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md | You can find the cluster console URL by running the following command, which loo Launch the console URL in a browser and sign in using the `kubeadmin` credentials. -![Screenshot that shows the Azure Red Hat OpenShift login screen.](media/aro4-login.png) - ## Install the OpenShift CLI Once you're signed into the OpenShift Web Console, select the **?** at the top right and then on **Command Line Tools**. Download the release appropriate to your machine. |
openshift | Howto Use Acr With Aro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-use-acr-with-aro.md | Azure Container Registry (ACR) is a managed container registry service that you ## Prerequisites -This guide assumes that you have an existing Azure Container Registry. If you do not, use the Azure portal or [Azure CLI instructions](../container-registry/container-registry-get-started-azure-cli.md) to create a container registry. +This guide assumes that you have an existing Azure Container Registry. If you do not, use the Azure portal or [Azure CLI instructions](/azure/container-registry/container-registry-get-started-azure-cli) to create a container registry. This article also assumes that you have an existing Azure Red Hat OpenShift cluster and have the `oc` CLI installed. If not, follow instructions in the [Create ARO cluster tutorial](create-cluster.md). hello-world 1/1 Running 0 30s ## Next steps -* [Azure Container Registry](../container-registry/container-registry-concepts.md) -* [Quickstart: Create a private container registry using the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) +* [Azure Container Registry](/azure/container-registry/container-registry-concepts) +* [Quickstart: Create a private container registry using the Azure CLI](/azure/container-registry/container-registry-get-started-azure-cli) |
openshift | Howto Use Key Vault Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-use-key-vault-secrets.md | keywords: azure, openshift, red hat, key vault Azure Key Vault Provider for Secrets Store CSI Driver allows you to get secret contents stored in an [Azure Key Vault instance](/azure/key-vault/general/basic-concepts) and use the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/introduction.html) to mount them into Kubernetes pods. This article explains how to use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift. > [!NOTE]-> As an alternative to the open source solution presented in this article, you can use [Azure Arc](../azure-arc/overview.md) to manage your ARO clusters along with its [Azure Key Vault Provider for Secrets Store CSI Driver extension](../azure-arc/kubernetes/tutorial-akv-secrets-provider.md). This method is fully supported by Microsoft and is recommended instead of the open source solution below. +> As an alternative to the open source solution presented in this article, you can use [Azure Arc](/azure/azure-arc/overview) to manage your ARO clusters along with its [Azure Key Vault Provider for Secrets Store CSI Driver extension](/azure/azure-arc/kubernetes/tutorial-akv-secrets-provider). This method is fully supported by Microsoft and is recommended instead of the open source solution below. ## Prerequisites |
operational-excellence | Relocation App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-app-service.md | This section is a planning checklist in the following areas: - **Analyze and plan for API (or application) dependencies** Cross-region communication is significantly less performant if the app in the target region reaches back to dependencies that are still in the source region. We recommend that you relocate all downstream dependencies as part of the workload relocation. However, *on-premises* resources are the exception, in particular those resources that are geographically closer to the target region (as may be the case for repatriation scenarios). - Azure Container Registry can be a downstream (runtime) dependency for App Service that's configured to run against Custom Container Images. It makes more sense for the Container Registry to be in the same region as the App itself. Consider uploading the required images to a new ACR in the target region. Otherwise, consider using the [geo-replication feature](../container-registry/container-registry-geo-replication.md) if you plan on keeping the images in the source region. + Azure Container Registry can be a downstream (runtime) dependency for App Service that's configured to run against Custom Container Images. It makes more sense for the Container Registry to be in the same region as the App itself. Consider uploading the required images to a new ACR in the target region. Otherwise, consider using the [geo-replication feature](/azure/container-registry/container-registry-geo-replication) if you plan on keeping the images in the source region. - **Analyze and plan for regional services.** Application Insights and Log Analytics data are regional services. Consider the creation of new Application Insights and Log Analytics storage in the target region. For App Insights, a new resource also impacts the connection string that must be updated as part of the change in App Configuration. |
operational-excellence | Relocation Container Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-container-registry.md | +- You can only relocate a registry within the same Active Directory tenant. This limitation applies to registries that are encrypted and unencrypted with a [customer-managed key](/azure/container-registry/tutorial-enable-customer-managed-keys). -- If the source registry has [availability zones](../reliability/availability-zones-overview.md) enabled, then the target region must also support availability zones. For more information on availability zone support for Azure Container Registry, see [Enable zone redundancy in Azure Container Registry](../container-registry/zone-redundancy.md).+- If the source registry has [availability zones](../reliability/availability-zones-overview.md) enabled, then the target region must also support availability zones. For more information on availability zone support for Azure Container Registry, see [Enable zone redundancy in Azure Container Registry](/azure/container-registry/zone-redundancy). To understand the possible downtimes involved, see [Cloud Adoption Framework for ``` -1. Use [ACR Tasks](../container-registry/container-registry-tasks-overview.md) to retrieve automation configurations of the source registry for import into the target registry. +1. Use [ACR Tasks](/azure/container-registry/container-registry-tasks-overview) to retrieve automation configurations of the source registry for import into the target registry. ### Export template -To get started, export a Resource Manager template. This template contains settings that describe your Container Registry. For more information on how to use exported templates, see [Use exported template from the Azure portal](../azure-resource-manager/templates/template-tutorial-Azure portale.md) and the [template reference](/azure/templates/microsoft.containerregistry/registries). -+To get started, export a Resource Manager template. This template contains settings that describe your Container Registry. For more information on how to use exported templates, see [Use exported template from the Azure portal](../azure-resource-manager/templates/template-tutorial-export-template.md) and the [template reference](/azure/templates/microsoft.containerregistry/registries). 1. In the [Azure portal](https://portal.azure.com), navigate to your source registry. 1. In the menu, under **Automation**, select **Export template** > **Download**. Inspect the registry properties in the template JSON file you downloaded, and ma - Validate all the associated resources detail in the downloaded template such as Registry scopeMaps, replications configuration, Diagnostic settings like log analytics. -- If the source registry is encrypted, then [encrypt the target registry using a customer-managed key](../container-registry/tutorial-enable-customer-managed-keys.md#enable-a-customer-managed-key-by-using-a-resource-manager-template) and update the template with settings for the required managed identity, key vault, and key. You can only enable the customer-managed key when you deploy the registry.+- If the source registry is encrypted, then [encrypt the target registry using a customer-managed key](/azure/container-registry/tutorial-enable-customer-managed-keys#enable-a-customer-managed-key-by-using-a-resource-manager-template) and update the template with settings for the required managed identity, key vault, and key. 
You can only enable the customer-managed key when you deploy the registry. az deployment group create --resource-group myResourceGroup \ After creating the registry in the target region: -1. Use the [az acr import](/cli/azure/acr#az-acr-import) command, or the equivalent PowerShell command `Import-AzContainerImage`, to import images and other artifacts you want to preserve from the source registry to the target registry. For command examples, see [Import container images to a container registry](../container-registry/container-registry-import-images.md). +1. Use the [az acr import](/cli/azure/acr#az-acr-import) command, or the equivalent PowerShell command `Import-AzContainerImage`, to import images and other artifacts you want to preserve from the source registry to the target registry. For command examples, see [Import container images to a container registry](/azure/container-registry/container-registry-import-images). 1. Use the Azure CLI commands [az acr repository list](/cli/azure/acr/repository#az-acr-repository-list) and [az acr repository show-tags](/cli/azure/acr/repository#az-acr-repository-show-tags), or Azure PowerShell equivalents, to help enumerate the contents of your source registry. 1. Run the import command for individual artifacts, or script it to run over a list of artifacts. -The following sample Azure CLI script enumerates the source repositories and tags and then imports the artifacts to a target registry in the same Azure subscription. Modify as needed to import specific repositories or tags. To import from a registry in a different subscription or tenant, see examples in [Import container images to a container registry](../container-registry/container-registry-import-images.md). +The following sample Azure CLI script enumerates the source repositories and tags and then imports the artifacts to a target registry in the same Azure subscription. Modify as needed to import specific repositories or tags. To import from a registry in a different subscription or tenant, see examples in [Import container images to a container registry](/azure/container-registry/container-registry-import-images). ```azurecli #!/bin/bash After you have successfully deployed the target registry, migrated content, and - To move registry resources to a new resource group either in the same subscription or a [new subscription], see [Move Azure resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md). -* Learn more about [importing container images](../container-registry/container-registry-import-images.md) to an Azure container registry from a public registry or another private registry. +* Learn more about [importing container images](/azure/container-registry/container-registry-import-images) to an Azure container registry from a public registry or another private registry. * See the [Resource Manager template reference](/azure/templates/microsoft.containerregistry/registries) for Azure Container Registry. |
operator-nexus | Concepts Nexus Workload Network Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-workload-network-types.md | This same cross-cluster IPAM capability is used to guarantee that containers con ## Nexus Relay -Nexus Kubernetes utilizes the [Arc](../azure-arc/overview.md) [Azure Relay](../azure-relay/relay-what-is-it.md) functionality by integrating the Nexus kubernetes Hybrid Relay infrastructure in each region where the Nexus Cluster service operates. +Nexus Kubernetes utilizes the [Arc](/azure/azure-arc/overview) [Azure Relay](../azure-relay/relay-what-is-it.md) functionality by integrating the Nexus kubernetes Hybrid Relay infrastructure in each region where the Nexus Cluster service operates. This setup uses dedicated Nexus relay infrastructure within Nexus owned subscriptions, ensuring that Nexus kubernetes cluster Arc Connectivity doesn't rely on shared public relay networks. Each Nexus kubernetes cluster and node instance is equipped with its own relay, and customers can manage Network ACL rules through the Nexus Cluster Azure Resource Manager APIs. These rules determine which networks can access both the az connectedk8s proxy and az ssh for their Nexus Arc resources within that specific on-premises Nexus Cluster. This feature enhances operator security by adhering to security protocols established after previous Arc/Relay security incidents, requiring remote Arc connectivity to have customer-defined network filters or ACLs. |
operator-nexus | Howto Monitor Naks Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-naks-cluster.md | Install latest version of the ## Monitor Nexus Kubernetes cluster ΓÇô VM layer -This how-to guide provides steps and utility scripts to [Arc connect](../azure-arc/servers/overview.md) the Nexus Kubernetes cluster Virtual Machines to Azure and enable monitoring agents for the collection of System logs from these VMs using [Azure Monitoring Agent](/azure/azure-monitor/agents/agents-overview). +This how-to guide provides steps and utility scripts to [Arc connect](/azure/azure-arc/servers/overview) the Nexus Kubernetes cluster Virtual Machines to Azure and enable monitoring agents for the collection of System logs from these VMs using [Azure Monitoring Agent](/azure/azure-monitor/agents/agents-overview). The instructions further capture details on how to set up log data collection into a Log Analytics workspace. The following resources provide you with support: |
operator-nexus | Howto Monitor Virtualized Network Functions Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-virtualized-network-functions-virtual-machines.md | Azure Arc-enabled servers let you manage Linux physical servers and Virtual Mach ### Prerequisites -Before you start, be sure to review the [prerequisites](../azure-arc/servers/prerequisites.md) and verify that your subscription, and resources meet the requirements. +Before you start, be sure to review the [prerequisites](/azure/azure-arc/servers/prerequisites) and verify that your subscription, and resources meet the requirements. Some of the prerequisites are: - Your VNF VM is connected to CloudServicesNetwork (the network that the VM uses to communicate with Operator Nexus services). echo "http\_proxy=http://169.254.0.11:3128" \>\> /etc/environment echo "https\_proxy=http://169.254.0.11:3128" \>\> /etc/environment ``` -- You have appropriate permissions on VNF VM to be able to run scripts, install package dependencies etc. For more information visit [link](../azure-arc/servers/prerequisites.md#required-permissions) for more details.+- You have appropriate permissions on VNF VM to be able to run scripts, install package dependencies etc. For more information visit [link](/azure/azure-arc/servers/prerequisites#required-permissions) for more details. - To use Azure Arc-enabled servers, the following Azure resource providers must be registered in your subscription: - Microsoft.HybridCompute - Microsoft.GuestConfiguration |
operator-nexus | Howto Virtual Machine Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-virtual-machine-image.md | Before you begin creating a virtual machine (VM) image, ensure you have the foll * This article requires version 2.49.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - * Azure Container Registry (ACR): Set up a working Azure Container Registry to store and manage your container images. ACR provides a secure and private registry for storing Docker images used in your VM image creation process. You can create an ACR by following the official documentation at [Azure Container Registry](../container-registry/container-registry-intro.md) documentation. + * Azure Container Registry (ACR): Set up a working Azure Container Registry to store and manage your container images. ACR provides a secure and private registry for storing Docker images used in your VM image creation process. You can create an ACR by following the official documentation at [Azure Container Registry](/azure/container-registry/container-registry-intro) documentation. * Docker: Install Docker on your local machine. Docker is a platform that enables you to build, package, and distribute applications as lightweight containers. You use Docker to build and package your VM image. You can download Docker from Docker's [official website](https://docs.docker.com/engine/install/). |
operator-nexus | Quickstarts Tenant Workload Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md | After setting up the cloud services network, you can use it to create a VM or cl > [!NOTE] > To ensure that the VNF image can be pulled correctly, ensure the ACR URL is in the egress allow list of the cloud services network that you will use with your Operator Nexus virtual machine. >-> In addition, if your ACR has dedicated data endpoints enabled, you will need to add all the new data-endpoints to the egress allow list. To find all the possible endpoints for your ACR follow the instruction [here](../container-registry/container-registry-dedicated-data-endpoints.md#dedicated-data-endpoints). +> In addition, if your ACR has dedicated data endpoints enabled, you will need to add all the new data-endpoints to the egress allow list. To find all the possible endpoints for your ACR follow the instruction [here](/azure/container-registry/container-registry-dedicated-data-endpoints#dedicated-data-endpoints). ### Use the proxy to reach outside of the virtual machine |
operator-service-manager | Quickstart Containerized Network Function Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-operator.md | az aks create -g ${resourceGroup} -n ${clusterName} --node-count 3 --generate-ss ## Enable Azure Arc Enable Azure Arc for the Azure Kubernetes Service (AKS) cluster. Running the commands below should be sufficient. If you would like to find out more, see-[Create and manage custom locations on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/custom-locations.md). +[Create and manage custom locations on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/custom-locations). ## Retrieve the config file for AKS cluster |
operator-service-manager | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/release-notes.md | Azure Operator Service Manager is a cloud orchestration service that enables aut * Is NFO update required: YES, UPDATE ONLY ### Release Installation-This release can be installed with as an update on top of release 2.0.2763-119. +This release can be installed with as an update on top of release 2.0.2763-119. Please see the following [learn documentation](manage-network-function-operator.md) for additional installation guidance. ### Issues Resolved in This Release Azure Operator Service Manager is a cloud orchestration service that enables aut * Dependency Versions: Go/1.22.4 Helm/3.15.2 ### Release Installation-This release can be installed with as an update on top of release 2.0.2783-134. +This release can be installed with as an update on top of release 2.0.2783-134. Please see the following [learn documentation](manage-network-function-operator.md) for additional installation guidance. ### Release Highlights #### Cluster Registry ΓÇô Garbage Collection Azure Operator Service Manager is a cloud orchestration service that enables aut * Dependency Versions: Go/1.22.4 - Helm/3.15.2 ### Release Installation-This release can be installed with as an update on top of release 2.0.2788-135. +This release can be installed with as an update on top of release 2.0.2788-135. Please see the following [learn documentation](manage-network-function-operator.md) for additional installation guidance. ### Release Highlights #### High availability for cluster registry and webhook. The following bug fixes, or other defect resolutions, are delivered with this re * CVE - A total of one CVE is addressed in this release. -* ## Release 2.0.2810-144 +## Release 2.0.2810-144 Document Revision 1.1 Azure Operator Service Manager is a cloud orchestration service that enables aut * Dependency Versions: Go/1.22.4 - Helm/3.15.2 ### Release Installation-This release can be installed with as an update on top of release 2.0.2788-144. Please see the following [learn documentation](manage-network-function-operator.md) for additional installation guidance. +This release can be installed with as an update on top of release 2.0.2804-137. Please see the following [learn documentation](manage-network-function-operator.md) for additional installation guidance. ### Issues Resolved in This Release |
orbital | Organize Stac Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/organize-stac-data.md | The following Azure services are used in this architecture. - [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/overview) is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. It has richer capabilities such as zone resilient high availability (HA), predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience suitable for your enterprise workloads. - [API Management Services](https://azure.microsoft.com/services/api-management/) offers a scalable, multicloud API management platform for securing, publishing and analyzing APIs. - [Azure Kubernetes Services](/azure/aks/intro-kubernetes) offers the quickest way to start developing and deploying cloud-native apps, with built-in code-to-cloud pipelines and guardrails.-- [Container Registry](../container-registry/container-registry-intro.md) to store and manage your container images and related artifacts.+- [Container Registry](/azure/container-registry/container-registry-intro) to store and manage your container images and related artifacts. - [Virtual Machine](/azure/virtual-machines/overview) (VM) gives you the flexibility of virtualization for a wide range of computing solutions. In a fully secured deployment, a user connects to a VM via Azure Bastion (described in the next item below) to perform a range of operations like copying files to storage accounts, running Azure CLI commands, and interacting with other services. - [Azure Bastion](../bastion/bastion-overview.md) enables you to securely and seamlessly RDP & SSH to your VMs in Azure virtual network, without the need of public IP on the VM, directly from the Azure portal, and without the need of any other client/agent or any piece of software. - [Application Insights](/azure/azure-monitor/app/app-insights-overview) provides extensible application performance management and monitoring for live web apps. |
private-5g-core | Complete Private Mobile Network Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md | Review and apply the firewall recommendations for the following :::zone pivot="ase-pro-gpu" - [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-system-requirements.md#url-patterns-for-firewall-rules)-- [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/network-requirements.md?tabs=azure-cloud)+- [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/network-requirements?tabs=azure-cloud) - [Azure Network Function Manager](../network-function-manager/requirements.md) :::zone-end :::zone pivot="ase-pro-2" - [Azure Stack Edge](../databox-online/azure-stack-edge-pro-2-system-requirements.md#url-patterns-for-firewall-rules)-- [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/network-requirements.md?tabs=azure-cloud)+- [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/network-requirements?tabs=azure-cloud) - [Azure Network Function Manager](../network-function-manager/requirements.md) :::zone-end |
private-5g-core | Enable Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-azure-active-directory.md | To support Microsoft Entra ID on Azure Private 5G Core applications, you'll need You'll need to apply your Kubernetes Secret Objects if you're enabling Microsoft Entra ID for a site, after a packet core outage, or after updating the Kubernetes Secret Object YAML file. -1. Sign in to [Azure Cloud Shell](../cloud-shell/overview.md) and select **PowerShell**. If this is your first time accessing your cluster via Azure Cloud Shell, follow [Access your cluster](../azure-arc/kubernetes/cluster-connect.md?tabs=azure-cli) to configure kubectl access. +1. Sign in to [Azure Cloud Shell](../cloud-shell/overview.md) and select **PowerShell**. If this is your first time accessing your cluster via Azure Cloud Shell, follow [Access your cluster](/azure/azure-arc/kubernetes/cluster-connect?tabs=azure-cli) to configure kubectl access. 1. Apply the Secret Object for both distributed tracing and the packet core dashboards, specifying the core kubeconfig filename. `kubectl apply -f $HOME/secret-azure-ad-local-monitoring.yaml --kubeconfig=<core kubeconfig>` |
private-5g-core | Modify Local Access Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-local-access-configuration.md | If you switched from local usernames and passwords to Microsoft Entra ID, follow If you switched from Microsoft Entra ID to local usernames and passwords: -1. Sign in to [Azure Cloud Shell](../cloud-shell/overview.md) and select **PowerShell**. If this is your first time accessing your cluster via Azure Cloud Shell, follow [Access your cluster](../azure-arc/kubernetes/cluster-connect.md?tabs=azure-cli) to configure kubectl access. +1. Sign in to [Azure Cloud Shell](../cloud-shell/overview.md) and select **PowerShell**. If this is your first time accessing your cluster via Azure Cloud Shell, follow [Access your cluster](/azure/azure-arc/kubernetes/cluster-connect?tabs=azure-cli) to configure kubectl access. 1. Delete the Kubernetes Secret Objects: `kubectl delete secrets sas-auth-secrets grafana-auth-secrets --kubeconfig=<core kubeconfig> -n core` |
private-5g-core | Open Support Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/open-support-request.md | Before creating your request, review the details and diagnostics that you'll sen ## Next steps -Learn how to [Manage an Azure support request](../azure-portal/supportability/how-to-manage-azure-support-request.md). +Learn how to [Manage an Azure support request](/azure/azure-portal/supportability/how-to-manage-azure-support-request). |
private-5g-core | Private 5G Core Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-5g-core-overview.md | You'll also need the following to deploy a private mobile network using Azure Pr Packet core instances run on a Kubernetes cluster, which is connected to Azure Arc and deployed on an Azure Stack Edge Pro with GPU device. These platforms provide security and manageability for the entire core network stack from Azure. Additionally, Azure Arc allows Microsoft to provide support at the edge. - For more information, see [Azure Arc overview](../azure-arc/overview.md) and [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/). + For more information, see [Azure Arc overview](/azure/azure-arc/overview) and [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/). - **RANs and SIMs** |
private-link | Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md | The following tables list the Private Link services and the regions where they'r |Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--|-|Azure Container Registry | All public regions<br/> All Government regions | Supported with premium tier of container registry. [Select for tiers](../container-registry/container-registry-skus.md)| GA <br/> [Learn how to create a private endpoint for Azure Container Registry.](../container-registry/container-registry-private-link.md) | +|Azure Container Registry | All public regions<br/> All Government regions | Supported with premium tier of container registry. [Select for tiers](/azure/container-registry/container-registry-skus)| GA <br/> [Learn how to create a private endpoint for Azure Container Registry.](/azure/container-registry/container-registry-private-link) | |Azure Kubernetes Service - Kubernetes API | All public regions <br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Kubernetes Service.](/azure/aks/private-clusters) | ### Databases |
quotas | How To Guide Monitoring Alerting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/how-to-guide-monitoring-alerting.md | - Title: Create alerts for quotas -description: Learn how to create alerts for quotas Previously updated : 09/04/2024----# Create alerts for quotas --You can create alerts for quotas and manage them. --## Create an alert rule --### Prerequisites --Users must have the necessary [permissions to create alerts](/azure/azure-monitor/alerts/alerts-overview#azure-role-based-access-control-for-alerts). --The [managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) must have the **Reader** role (or another role that includes read access) on the subscription. --### Create alerts in the Azure portal --The simplest way to create a quota alert is to use the Azure portal. Follow these steps to create an alert rule for your quota. --1. Sign in to the [Azure portal](https://portal.azure.com) and enter **"quotas"** in the search box, then select **Quotas**. In Quotas page, select **My quotas** and choose **Compute** Resource Provider. Once the page loads, select **Quota Name** to create a new alert rule. -- :::image type="content" source="media/monitoring-alerting/my-quotas-create-rule-navigation-inline.png" alt-text="Screenshot showing how to select Quotas to navigate to create Alert rule screen." lightbox="media/monitoring-alerting/my-quotas-create-rule-navigation-expanded.png"::: --1. When the **Create usage alert rule** page appears, populate the fields with data as shown in the table. Make sure you have the [permissions to create alerts](/azure/azure-monitor/alerts/alerts-overview#azure-role-based-access-control-for-alerts). -- :::image type="content" source="media/monitoring-alerting/quota-details-create-rule-inline.png" alt-text="Screenshot showing create Alert rule screen with required fields." lightbox="media/monitoring-alerting/quota-details-create-rule-expanded.png"::: -- | **Fields** | **Description** | - |:--|:--| - | Alert Rule Name | Alert rule name must be **distinct** and can't be duplicated, even across different resource groups | - | Alert me when the usage % reaches | **Adjust** the slider to select your desired usage percentage for **triggering** alerts. For example, at the default 80%, you receive an alert when your quota reaches 80% capacity.| - | Severity | Select the **severity** of the alert when the **ruleΓÇÖs condition** is met.| - | [Frequency of evaluation](/azure/azure-monitor/alerts/alerts-overview#stateful-alerts) | Choose how **often** the alert rule should **run**, by selecting 5, 10, or 15 minutes. If the frequency is smaller than the aggregation granularity, frequency of evaluation results in sliding window evaluation. | - | [Resource Group](../azure-resource-manager/management/manage-resource-groups-portal.md) | Resource Group is a collection of resources that share the same lifecycles, permissions, and policies. Select a resource group similar to other quotas in your subscription, or create a new resource group. | - | [Managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) | Select from the dropdown, or **Create New**. Managed Identity should have **read permissions** for the selected Subscription (to read Usage data from ARG). 
| - | Notify me by | There are three notifications methods and you can check one or all three check boxes, depending on your notification preference. | - | [Use an existing action group](/azure/azure-monitor/alerts/action-groups) | Check the box to use an existing action group. An action group **invokes** a defined set of **notifications** and actions when an alert is triggered. You can create Action Group to automatically Increase the Quota whenever possible. | - | [Dimensions](/azure/azure-monitor/alerts/alerts-types#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) | Here are the options for selecting **multiple Quotas** and **regions** within a single alert rule. Adding dimensions is a cost-effective approach compared to creating a new alert for each quota or region.| - - > [!TIP] - > Within the same subscription, we advise using the same **Resource Group** and **Managed identity** values for all alert rules. --1. After you've made your selections, select **Create Alert**. You'll see a confirmation if the rule was successfully created, or a message if any problems occurred. --### Create alerts using API --Alerts can be created programmatically using the [**Monitoring API**](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update?tabs=HTTP). This API can be used to create or update a log search rule. --`PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Insights/scheduledQueryRules/{ruleName}?api-version=2018-04-16` --For a sample request body, see the [API documentation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update?tabs=HTTP) --### Create alerts using Azure Resource Graph query --You can use the **Azure Monitor Alerts** pane to [create alerts using a query](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=log). Resource Graph Explorer lets you run and test queries before using them to create an alert. To learn more, see the [Configure Azure alerts](/training/modules/configure-azure-alerts/) training module. --For quota alerts, make sure the **Scope** is your Subscription and the **Signal type** is the customer query log. Add a sample query for quota usages. Follow the remaining steps as described in the [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=log). --The following example shows a query that creates quota alerts. --```kusto -arg("").QuotaResources -| where subscriptionId =~ '<SubscriptionId>' -| where type =~ 'microsoft.compute/locations/usages' -| where isnotempty(properties) -| mv-expand propertyJson = properties.value limit 400 -| extend - usage = propertyJson.currentValue, - quota = propertyJson.['limit'], - quotaName = tostring(propertyJson.['name'].value) -| extend usagePercent = toint(usage)*100 / toint(quota)| project-away properties| where location in~ ('westus2')| where quotaName in~ ('cores') -``` --## Manage quota alerts --Once you've created your alert rule, you can view and edit the alerts. --### View alert rules --Select **Quotas > Alert rules** to see all quota alert rules that have been created for a given subscription. You can edit, enable, or disable rules from this page. -- :::image type="content" source="media/monitoring-alerting/view-alert-rules-inline.png" alt-text="Screenshot showing how the quota alert rule screen in the Azure portal." 
lightbox="media/monitoring-alerting/view-alert-rules-expanded.png"::: --### View Fired Alerts --Select **Quotas > Fired Alert Rules** to see all the alerts that have been triggered for a given subscription. Select an alert to view its details, including the history of how many times it was triggered and the status of each occurrence. -- :::image type="content" source="media/monitoring-alerting/view-fired-alerts-inline.png" alt-text="Screenshot showing the Fired Alert screen in the Azure portal." lightbox="media/monitoring-alerting/view-fired-alerts-expanded.png"::: --### Edit, update, enable, or disable alerts --You can make changes from within an alert rule by expanding the options below the dots, then selecting an action. ---When you select **Edit**, you can add multiple quotas or locations for the same alert rule. -- :::image type="content" source="media/monitoring-alerting/edit-dimension-inline.png" alt-text="Screenshot showing how to add dimensions while editing a quota rule in the Azure portal." lightbox="media/monitoring-alerting/edit-dimension-expanded.png"::: --You can also make changes by navigating to the **Alert rules** page, then select the specific alert rule you want to change. -- :::image type="content" source="media/monitoring-alerting/alert-rule-edit-inline.png" alt-text="Screenshot showing how to edit rules from the Alert rule screen in the Azure portal." lightbox="media/monitoring-alerting/alert-rule-edit-expanded.png"::: - -## Respond to alerts --For created alerts, an action group can be established to automate quota increases. By using an existing action group, you can invoke the Quota API to automatically increase quotas wherever possible, eliminating the need for manual intervention. --You can use functions to call the Quota API and request for more quota. Use `Test_SetQuota()` code to write an Azure function to set the quota. For more information, see this [example on GitHub](https://github.com/allison-inman/azure-sdk-for-net/blob/main/sdk/quota/Microsoft.Azure.Management.Quota/tests/ScenarioTests/QuotaTests.cs). --## Query using Resource Graph Explorer --Using [Azure Resource Graph](../governance/resource-graph/overview.md), alerts can be [managed programatically](/azure/azure-monitor/alerts/alerts-manage-alert-instances#manage-your-alerts-programmatically). This allows you to query your alert instances and analyze your alerts to identify patterns and trends. --The **QuotaResources** table in [Azure Resource Graph](../governance/resource-graph/overview.md) explorer provides usage and limit/quota data for a given resource, region, and/or subscription. You can also query usage and quota data across multiple subscriptions with Azure Resource Graph queries. --You must have at least the **Reader** role for the subscription to query this data using Resource Graph Explorer. --### Sample queries --Query to view current usages, quota/limit, and usage percentage for a subscription, region, and VCM family: -->[!Note] ->Currently, Compute is the only supported resource for NRT limit/quota data. Don't rely on the below queries to pull other resource types such as Disks and/or Galleries. You can get the latest snapshot for the said resources with the current [usages API](/rest/api/compute/usage/list?tabs=HTTP). 
--```kusto -QuotaResources -| where type =~ "microsoft.compute/locations/usages" -| where location =~ "northeurope" or location =~ "westeurope" -| where subscriptionId in~ ("<Subscription1>","<Subscription2>") -| mv-expand json = properties.value limit 400 -| extend usagevCPUs = json.currentValue, QuotaLimit = json['limit'], quotaName = tostring(json['name'].localizedValue) -|where quotaName !contains "Disks" and quotaName !contains "Disk" and quotaName !contains "gallery" and quotaName !contains "Snapshots" -|where usagevCPUs > 0 -|extend usagePercent = toint(usagevCPUs)*100 / toint(QuotaLimit) -|project subscriptionId,quotaName,usagevCPUs,QuotaLimit,usagePercent,location,json -| order by ['usagePercent'] desc -``` --Query to summarize total vCPUs (On-demand, Low Priority/Spot) per subscription per region: --```kusto -QuotaResources -| where type =~ "microsoft.compute/locations/usages" -| where subscriptionId in~ ("<Subscription1>","<Subscription2>") -| mv-expand json = properties.value limit 400 -| extend usagevCPUs = json.currentValue, QuotaLimit = json['limit'], quotaName = tostring(json['name'].localizedValue) -|extend usagePercent = toint(usagevCPUs)*100 / toint(QuotaLimit) -|where quotaName =~ "Total Regional vCPUs" or quotaName =~ "Total Regional Low-priority vCPUs" -|project subscriptionId,quotaName,usagevCPUs,QuotaLimit,usagePercent,location,['json'] -| order by ['usagePercent'] desc -``` --## Provide feedback --We encourage you to use the **Feedback** button on every Azure Quotas page to share your thoughts, questions, or concerns with our team. ---If you encounter problems while creating alert rules for quotas, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). --## Next steps --- Learn about [quota monitoring and alerting](monitoring-alerting.md)-- Learn more about [quotas](quotas-overview.md) and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).-- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), [spot vCPU quotas](spot-quota.md), and [storage accounts](storage-account-quota-requests.md). |
quotas | Monitoring Alerting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/monitoring-alerting.md | - Title: Quota monitoring & alerting -description: Learn about monitoring and alerting for quota usage. Previously updated : 11/29/2023----# Quota monitoring and alerting --Monitoring and alerting in Azure provides real-time insights into resource utilization, enabling proactive issue resolution and resource optimization. Use monitoring and alerting to help detect anomalies and potential issues before they impact services. --To view the features on **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. --> [!NOTE] -> When monitoring and alerting is enabled for your account, the Quotas in **MyQuotas** will be highlighted and clickable. --## Monitoring --Monitoring for quotas lets you proactively manage your Azure resources. Azure sets predefined limits, or quotas, for various resources like **Compute**, **Azure Machine Learning**, and **HPC Cache**. This monitoring involves continuous tracking of resource usage to ensure it remains within allocated limits, including notifications when these limits are approached or reached. --## Alerting --Quota alerts in Azure are notifications triggered when the usage of a specific Azure resource nears the **predefined quota limit**. These alerts are crucial for informing Azure users and administrators about resource consumption, facilitating proactive resource management. AzureΓÇÖs alert rule capabilities allow you to create multiple alert rules for a given quota or across quotas in your subscription. --For more information, see [Create alerts for quotas](how-to-guide-monitoring-alerting.md). --> [!NOTE] -> [General Role based access control](/azure/azure-monitor/alerts/alerts-overview#azure-role-based-access-control-for-alerts) applies while creating alerts. --## Next steps --- Learn [how to create quota alerts](how-to-guide-monitoring-alerting.md).-- Learn more about [alerts](/azure/azure-monitor/alerts/alerts-overview)-- Learn about [Azure Resource Graph](../governance/resource-graph/overview.md)- |
quotas | Networking Quota Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/networking-quota-requests.md | - Title: Increase networking quotas -description: Learn how to request a networking quota increase in the Azure portal. Previously updated : 03/13/2024----# Increase networking quotas --This article shows how to request increases for networking quotas from [Azure Home](https://portal.azure.com) or from **My quotas**, a centralized location where you can view your quota usage and request quota increases. --For quick access to request an increase, select **Quotas** on the Azure Home page. ---If you don't see **Quotas** on Azure Home, type "quotas" in the search box, then select **Quotas**. The **Quotas** icon will then appear on your Home page the next time you visit. --You can also use the following options to view your network quota usage and limits: --- [Azure CLI](/cli/azure/network#az-network-list-usages)-- [Azure PowerShell](/powershell/module/azurerm.network/get-azurermnetworkusage)-- [REST API](/rest/api/virtualnetwork/virtualnetworks/listusage)-- **Usage + quotas** (in the left pane when viewing your subscription in the Azure portal) --Based on your subscription, you can typically request increases for these quotas: --- Public IP Addresses-- Public IP Addresses - Standard-- Public IPv4 Prefix Length--## Request networking quota increases --Follow these steps to request a networking quota increase from Azure Home. You must have an Azure account with the Contributor role (or another role that includes Contributor access). --1. From [Azure Home](https://portal.azure.com), select **Quotas** and then select **Microsoft.Network**. --1. Find the quota you want to increase, then select the support icon. -- :::image type="content" source="media/networking-quota-request/quota-support-icon.png" alt-text="Screenshot showing the support icon for a networking quota."::: --1. In the **New support request** form, on the **Problem description** screen, some fields will be pre-filled for you. In the **Quota type** list, select **Networking**, then select **Next**. -- :::image type="content" source="media/networking-quota-request/new-networking-quota-request.png" alt-text="Screenshot of a networking quota support request in the Azure portal."::: --1. On the **Additional details** screen, under P**rovide details for the request**, select **Enter details**. --1. In the **Quota details** pane, enter the information for your request. -- > [!IMPORTANT] - > To increase a static public IP address quota, select **Other** in the **Resources** list, then specify this information in the **Details** section. -- :::image type="content" source="media/networking-quota-request/quota-details-network.png" alt-text="Screenshot of the Quota details pane for a networking quota increase request."::: --1. Select **Save and continue**. The information you entered will appear in the **Request summary** under **Problem details**. --1. Continue to fill out the form, including your preferred contact method. When you're finished, select **Next**. -1. Review your quota increase request information, then select **Create**. --After your networking quota increase request has been submitted, a support engineer will contact you and assist you with the request. --For more information about support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). 
--## Next steps --- Review details on [networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).-- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). |
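The networking article above links to the Azure CLI for checking usage against quota. As a quick, hedged illustration (the region name is a placeholder), a call along these lines lists current networking usage and limits:

```azurecli
# List networking resource usage and limits for one region (region is a placeholder)
az network list-usages --location westeurope --output table
```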
quotas | Per Vm Quota Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/per-vm-quota-requests.md | - Title: Increase VM-family vCPU quotas -description: Learn how to request an increase in the vCPU quota limit for a VM family in the Azure portal, which increases the total regional vCPU limit by the same amount. Previously updated : 03/13/2024----# Increase VM-family vCPU quotas --Azure Resource Manager enforces two types of vCPU quotas for virtual machines: --- standard vCPU quotas-- spot vCPU quotas--Standard vCPU quotas apply to pay-as-you-go VMs and reserved VM instances. They're enforced at two tiers, for each subscription, in each region: --- The first tier is the total regional vCPU quota.-- The second tier is the VM-family vCPU quota such as D-series vCPUs.--This article shows how to request increases for VM-family vCPU quotas. You can also request increases for [vCPU quotas by region](regional-quota-requests.md) or [spot vCPU quotas](spot-quota.md). --## Adjustable and non-adjustable quotas --When requesting a quota increase, the steps differ depending on whether the quota is adjustable or non-adjustable. --- **Adjustable quotas**: Quotas for which you can request quota increases fall into this category. Each subscription has a default quota value for each quota. You can request an increase for an adjustable quota from the [Azure Home](https://portal.azure.com/#home) **My quotas** page, providing an amount or usage percentage and submitting it directly. This is the quickest way to increase quotas.-- **Non-adjustable quotas**: These are quotas which have a hard limit, usually determined by the scope of the subscription. To make changes, you must submit a support request, and the Azure support team will help provide solutions.--## Request an increase for adjustable quotas --You can submit a request for a standard vCPU quota increase per VM-family from **My quotas**, quickly accessed from [Azure Home](https://portal.azure.com/#home). You must have an Azure account with the Contributor role (or another role that includes Contributor access). --1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. -- > [!TIP] - > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal/azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. --1. On the **Overview** page, select **Compute**. -1. On the **My quotas** page, select one or more quotas that you want to increase. -- :::image type="content" source="media/per-vm-quota-requests/select-per-vm-quotas.png" alt-text="Screenshot showing per-VM quota selection in the Azure portal."::: --1. Near the top of the page, select **New Quota Request**, then select the way you'd like to increase the quota(s): **Enter a new limit** or **Adjust the usage %**. -- > [!TIP] - > For quotas with very high usage, we recommend choosing **Adjust the usage %**. This option allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. --1. If you selected **Enter a new limit**: In the **New Quota Request** pane, enter a numerical value for each new quota limit. --1. 
If you selected **Adjust the usage %**: In the **New Quota Request** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is particularly useful when the selected quotas have very high usage. --1. When you're finished, select **Submit**. --Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request isn't fulfilled, you'll see a link where you can [open a support request](../azure-portal/supportability/how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase. --> [!NOTE] -> If your request to increase your VM-family quota is approved, Azure will automatically increase the regional vCPU quota for the region where your VM is deployed. --> [!TIP] -> When creating or resizing a virtual machine and selecting your VM size, you may see some options listed under **Insufficient quota - family limit**. If so, you can request a quota increase directly from the VM creation page by selecting the **Request quota** link. --## Request an increase when a quota isn't available --At times you may see a message that a selected quota isn't available for an increase. To see which quotas are unavailable, look for the Information icon next to the quota name. ---If a quota you want to increase isn't currently available, the quickest solution may be to consider other series or regions. If you want to continue and receive assistance for your specified quota, you can submit a support request for the increase. --1. When following the steps above, if a quota isn't available, select the Information icon next to the quota. Then select **Create a support request**. -1. In the **Quota details** pane, confirm the pre-filled information is correct, then enter the desired new vCPU limit(s). -- :::image type="content" source="media/per-vm-quota-requests/quota-details.png" alt-text="Screenshot of the Quota details pane in the Azure portal."::: --1. Select **Save and continue** to open the **New support request** form. Continue to enter the required information, then select **Next**. -1. Review your request information and select **Previous** to make changes, or **Create** to submit the request. --## Request an increase for non-adjustable quotas --To request an increase for a non-adjustable quota, such as Virtual Machines or Virtual Machine Scale Sets, you must submit a support request so that a support engineer can assist you. --1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. -1. From the **Overview** page, select **Compute**. -1. Find the quota you want to increase, then select the support icon. -- :::image type="content" source="media/per-vm-quota-requests/support-icon.png" alt-text="Screenshot showing the support icon in the Azure portal."::: --1. In the **New support request** form, on the first page, confirm that the pre-filled information is correct. -1. For **Quota type**, select **Other Requests**, then select **Next**. -- :::image type="content" source="media/per-vm-quota-requests/new-per-vm-quota-request.png" alt-text="Screenshot showing a new quota increase support request in the Azure portal."::: --1. On the **Additional details** page, under **Problem details**, enter the information required for your quota increase, including the new limit requested. 
-- :::image type="content" source="media/per-vm-quota-requests/quota-request-problem-details.png" alt-text="Screenshot showing the Problem details step of a quota increase request in the Azure portal."::: --1. Scroll down and complete the form. When finished, select **Next**. -1. Review your request information and select **Previous** to make changes, or **Create** to submit the request. --For more information, see [Create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). --## Next steps --- Learn more about [vCPU quotas](/azure/virtual-machines/windows/quotas).-- Learn more in [Quotas overview](quotas-overview.md).-- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). |
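As a small companion to the portal steps above, current VM-family and regional vCPU usage can also be checked from the Azure CLI; this is only a sketch and the region name is a placeholder:

```azurecli
# Show vCPU usage and limits per VM family and for the region as a whole
az vm list-usage --location eastus --output table
```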
quotas | Quickstart Increase Quota Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/quickstart-increase-quota-portal.md | - Title: Quickstart - Request a quota increase in the Azure portal -description: This quickstart shows you how to increase a quota in the Azure portal. Previously updated : 03/13/2024----# Quickstart: Request a quota increase in the Azure portal --Get started with Azure Quotas by using the Azure portal to request a quota increase. --For more information about quotas, see [Quotas overview](quotas-overview.md). --## Prerequisites --An Azure account with the Contributor role (or another role that includes Contributor access). --If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --## Request a quota increase --You can submit a request for a quota increase directly from **My quotas**. Follow the steps below to request an increase for a quota. For this example, you can select any adjustable quota in your subscription. --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Enter "quotas" into the search box, and then select **Quotas**. -- :::image type="content" source="media/quickstart-increase-quota-portal/quotas-portal.png" alt-text="Screenshot of the Quotas service page in the Azure portal."::: --1. On the Overview page, select a provider, such as **Compute** or **AML**. -- > [!NOTE] - > For all providers other than Compute, you'll see a **Request increase** column instead of the **Adjustable** column described below. There, you can request an increase for a specific quota, or create a support request for the increase. --1. On the **My quotas** page, under **Quota name**, select the quota you want to increase. Make sure that the **Adjustable** column shows **Yes** for this quota. -1. Near the top of the page, select **New Quota Request**, then select **Enter a new limit**. -- :::image type="content" source="media/quickstart-increase-quota-portal/enter-new-quota-limit.png" alt-text="Screenshot of the Enter a new limit option in My quotas in the Azure portal."::: --1. In the **New Quota Request** pane, enter a numerical value for your new quota limit, then select **Submit**. --Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. --If your request isn't fulfilled, you'll see a link to create a support request. When you use this link, a support engineer will assist you with your increase request. --> [!TIP] -> You can request an increase for a quota that is non-adjustable by submitting a support request. For more information, see [Request an increase for non-adjustable quotas](per-vm-quota-requests.md#request-an-increase-for-non-adjustable-quotas). --## Next steps --- [Increase VM-family vCPU quotas](per-vm-quota-requests.md)-- [Increase regional vCPU quotas](regional-quota-requests.md)-- [Increase spot vCPU family quotas](spot-quota.md)-- [Increase networking quotas](networking-quota-requests.md)-- [Increase Azure Storage account quotas](storage-account-quota-requests.md) |
quotas | Quotas Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/quotas-overview.md | - Title: Quotas overview -description: Learn how to view quotas and request increases in the Azure portal. Previously updated : 08/17/2023----# Quotas overview --Many Azure services have quotas, which are the assigned number of resources for your Azure subscription. Each quota represents a specific countable resource, such as the number of virtual machines you can create, the number of storage accounts you can use concurrently, the number of networking resources you can consume, or the number of API calls to a particular service you can make. --The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings). --## Quotas or limits? --Quotas were previously referred to as limits. Quotas do have limits, but the limits are variable and dependent on many factors. Each subscription has a default value for each quota. --> [!NOTE] -> There is no cost associated with requesting a quota increase. Costs are incurred based on resource usage, not the quotas themselves. --## Usage Alerts --The Quotas page allows you to [Monitor & Create Alerts](monitoring-alerting.md) for specific Quotas, enabling you to receive notifications when the usage reaches predefined thresholds. --## Adjustable and non-adjustable quotas --Quotas can be adjustable or non-adjustable. --- **Adjustable quotas**: Quotas for which you can request quota increases fall into this category. Each subscription has a default quota value for each quota. You can request an increase for an adjustable quota from the [Azure Home](https://portal.azure.com/#home) **My quotas** page, providing an amount or usage percentage and submitting it directly. This is the quickest way to increase quotas.-- **Non-adjustable quotas**: These are quotas which have a hard limit, usually determined by the scope of the subscription. To make changes, you must submit a support request, and the Azure support team will help provide solutions.--## Work with quotas --Different entry points, data views, actions, and programming options are available, depending on your organization and administrator preferences. --| Option | Azure portal | Quota APIs | Support API | -||||| -| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota Service REST API](/rest/api/quota) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. | -| Availability | All customers | All customers | All customers with unified, premier, professional direct support plans | -| Which to choose? 
| Useful for customers desiring a central location and an efficient visual interface for viewing and managing quotas. Provides quick access to requesting quota increases. | Useful for customers who want granular and programmatic control of quota management for adjustable quotas. Intended for end to end automation of quota usage validation and quota increase requests through APIs. | Customers who want end to end automation of support request creation and management. Provides an alternative path to Azure portal for requests. | -| Providers supported | All providers | Compute, Machine Learning | All providers | --## Next steps --- Learn more about [viewing quotas in the Azure portal](view-quotas.md).-- Learn more about [Monitoring & Creating Alerts](how-to-guide-monitoring-alerting.md) for Quota usages.-- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), and [spot vCPU quotas](spot-quota.md).-- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). |
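The Quota APIs column above refers to the Azure Quota Service REST API. Purely as an illustrative sketch, not part of the article itself, a raw call through `az rest` could look like the following; the subscription ID is a placeholder and the api-version is an assumption to verify against the REST reference.

```azurecli
# Sketch: list Compute quota limits for one region through the Microsoft.Quota provider.
# Replace the subscription ID; the api-version shown is an assumption -- check the REST docs.
az rest --method get \
  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Compute/locations/eastus/providers/Microsoft.Quota/quotas?api-version=2023-02-01"
```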
quotas | Regional Quota Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/regional-quota-requests.md | - Title: Increase regional vCPU quotas -description: Learn how to request an increase in the vCPU quota limit for a region in the Azure portal. Previously updated : 03/13/2024-----# Increase regional vCPU quotas --Azure Resource Manager enforces two types of vCPU quotas for virtual machines: --- standard vCPU quotas-- spot vCPU quotas--Standard vCPU quotas apply to pay-as-you-go VMs and reserved VM instances. They're enforced at two tiers, for each subscription, in each region: --- The first tier is the total regional vCPU quota.-- The second tier is the VM-family vCPU quota such as D-series vCPUs.--This article shows how to request regional vCPU quota increases for all VMs in a given region. You can also request increases for [VM-family vCPU quotas](per-vm-quota-requests.md) or [spot vCPU quotas](spot-quota.md). --## Special considerations --When considering your vCPU needs across regions, keep in mind the following: --- Regional vCPU quotas are enforced across all VM series in a given region. As a result, decide how many vCPUs you need in each region in your subscription. If you don't have enough vCPU quota in each region, submit a request to increase the vCPU quota in that region. For example, if you need 30 vCPUs in West Europe and you don't have enough quota, specifically request a quota for 30 vCPUs in West Europe. When you do so, the vCPU quotas in your subscription in other regions aren't increased. Only the vCPU quota limit in West Europe is increased to 30 vCPUs.--- When you request an increase in the vCPU quota for a VM series, Azure increases the regional vCPU quota limit by the same amount.--- When you create a new subscription, the default value for the total number of vCPUs in a region might not be equal to the total default vCPU quota for all individual VM series. This can result in a subscription with enough quota for each individual VM series that you want to deploy, but not enough quota to accommodate the total regional vCPUs for all deployments. In this case, you must submit a request to explicitly increase the quota limit of the regional vCPU quotas.--## Request an increase for regional vCPU quotas --To request quota increases, you must have an Azure account with the Contributor role (or another role that includes Contributor access). --1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. -- > [!TIP] - > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal/azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. --1. On the **Overview** page, select **Compute**. -1. On the **My quotas** page, select **Region** and then unselect **All**. -1. In the **Region** list, select the regions you want to include for the quota increase request. -1. Filter for any other requirements, such as **Usage**, as needed. -1. Select the quota(s) that you want to increase. -- :::image type="content" source="media/regional-quota-requests/select-regional-quotas.png" alt-text="Screenshot showing regional quota selection in the Azure portal"::: --1. 
Near the top of the page, select **New Quota Request**, then select the way you'd like to increase the quota(s): **Enter a new limit** or **Adjust the usage %**. -- > [!TIP] - > For quotas with very high usage, we recommend choosing **Adjust the usage %**. This option allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. --1. If you selected **Enter a new limit**: In the **New Quota Request** pane, enter a numerical value for each new quota limit. --1. If you selected **Adjust the usage %**: In the **New Quota Request** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is particularly useful when the selected quotas have very high usage. --1. When you're finished, select **Submit**. --Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request isn't fulfilled, you'll see a link where you can [open a support request](../azure-portal/supportability/how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase. --## Next steps --- Learn more about [vCPU quotas](/azure/virtual-machines/windows/quotas).-- Learn more in [Quotas overview](quotas-overview.md).-- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).-- Review the [list of Azure regions and their locations](https://azure.microsoft.com/regions/). |
quotas | Spot Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/spot-quota.md | - Title: Increase spot vCPU family quotas -description: Learn how to request increases for spot vCPU quotas in the Azure portal. Previously updated : 03/13/2024----# Increase spot vCPU family quotas --Azure Resource Manager enforces two types of vCPU quotas for virtual machines: --- standard vCPU quotas-- spot vCPU quotas--Standard vCPU quotas apply to pay-as-you-go VMs and reserved VM instances. They're enforced at two tiers, for each subscription, in each region: --- The first tier is the total regional vCPU quota.-- The second tier is the VM-family vCPU quota such as D-series vCPUs.--Spot vCPU quotas apply to [spot virtual machines (VMs)](/azure/virtual-machines/spot-vms) across all VM families (SKUs). --This article shows you how to request quota increases for spot vCPUs. You can also request increases for [VM-family vCPU quotas](per-vm-quota-requests.md) or [vCPU quotas by region](regional-quota-requests.md). --## Special considerations --When considering your spot vCPU needs, keep in mind the following: --- When you deploy a new spot VM, the total new and existing vCPU usage for all spot VM instances must not exceed the approved spot vCPU quota limit. If the spot quota is exceeded, the spot VM can't be deployed.--- At any point in time when Azure needs the capacity back, the Azure infrastructure will evict spot VMs.--## Request an increase for spot vCPU quotas --To request quota increases, you must have an Azure account with the Contributor role (or another role that includes Contributor access). --1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. -- > [!TIP] - > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal/azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. --1. On the **Overview** page, select **Compute**. -1. On the **My quotas** page, enter "spot" in the **Search** box. -1. Filter for any other requirements, such as **Usage**, as needed. -1. Find the quota or quotas you want to increase, and select them. -- :::image type="content" source="media/spot-quota/select-spot-quotas.png" alt-text="Screenshot showing spot quota selection in the Azure portal"::: --1. Near the top of the page, select **New Quota Request**, then select the way you'd like to increase the quota(s): **Enter a new limit** or **Adjust the usage %**. -- > [!TIP] - > For quotas with very high usage, we recommend choosing **Adjust the usage %**. This option allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. --1. If you selected **Enter a new limit**: In the **New Quota Request** pane, enter a numerical value for each new quota limit. --1. If you selected **Adjust the usage %**: In the **New Quota Request** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is particularly useful when the selected quotas have very high usage. --1. When you're finished, select **Submit**. --Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. 
If your request isn't fulfilled, you'll see a link where you can [open a support request](../azure-portal/supportability/how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase. --## Next steps --- Learn more about [Azure virtual machines](/azure/virtual-machines/spot-vms).-- Learn more in [Quotas overview](quotas-overview.md).-- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). |
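As a hedged aside to the portal flow above, the same per-region usage listing used for VM-family quotas also includes the regional spot vCPU totals, so a quick shell filter can surface them. The region name is a placeholder and the row label may vary by locale.

```azurecli
# Show only the spot-related rows from the per-region vCPU usage listing
az vm list-usage --location eastus --output table | grep -i spot
```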
quotas | Storage Account Quota Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/storage-account-quota-requests.md | - Title: Increase Azure Storage account quotas -description: Learn how to request an increase in the quota limit for Azure Storage accounts within a subscription from 250 to 500 for a given region. Quota increases apply to both standard and premium account types. Previously updated : 03/13/2024----# Increase Azure Storage account quotas --This article shows how to request increases for storage account quotas from the [Azure portal](https://portal.azure.com) or from **My quotas**, a centralized location where you can view your quota usage and request quota increases. --To quickly request an increase, select **Quotas** on the Home page in the Azure portal. ---If you don't see **Quotas** in the Azure portal, type *quotas* in the search box, then select **Quotas**. The **Quotas** icon will then appear on your Home page the next time you visit. --You can also use the following tools or APIs to view your storage account quota usage and limits: --- [Azure PowerShell](/powershell/module/az.storage/get-azstorageusage)-- [Azure CLI](/cli/azure/storage/account#az-storage-account-show-usage)-- [REST API](/rest/api/storagerp/usages/list-by-location)--You can request an increase from 250 to a maximum of 500 storage accounts per region for your subscription. This quota increase applies to storage accounts with standard endpoints. --## View current quotas for a region --To view your current storage account quotas for a subscription in a given region, follow these steps: --1. From the [Azure portal](https://portal.azure.com), select **Quotas** and then select **Storage**. --1. Select your subscription from the drop-down. --1. Use the **Region** filter to specify the regions you're interested in. You can then see your storage account quotas for each of those regions. -- :::image type="content" source="media/storage-account-quota-requests/view-quotas-region-portal.png" alt-text="Screenshot showing how to filter on regions to show quotas for specific regions" lightbox="media/storage-account-quota-requests/view-quotas-region-portal.png"::: --## Request storage account quota increases --Follow these steps to request a storage account quota increase from Azure Home. To request quota increases, you must have an Azure account with the Contributor role (or another role that includes Contributor access). --1. From the [Azure portal](https://portal.azure.com), select **Quotas** and then select **Storage**. --1. Select the subscription for which you want to increase your storage account quota. --1. Locate the region where you want to increase your storage account quota, then select the pencil icon in the **Request adjustment** column. --1. In the **New Quota Request** pane, enter a number up to 500. -- :::image type="content" source="media/storage-account-quota-requests/request-quota-increase-portal.png" alt-text="Screenshot showing how to increase your storage account quota"::: --1. Select **Submit**. It may take a few minutes to process your request. 
--## See also --- [Scalability and performance targets for standard storage accounts](../storage/common/scalability-targets-standard-account.md)-- [Scalability targets for premium block blob storage accounts](../storage/blobs/scalability-targets-premium-block-blobs.md)-- [Scalability and performance targets for premium page blob storage accounts](../storage/blobs/scalability-targets-premium-page-blobs.md)-- [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md) |
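The storage article above links to the Azure CLI for checking how many storage accounts a subscription currently uses in a region. A minimal sketch, with the region name as a placeholder:

```azurecli
# Show the storage account count and limit for one region in the current subscription
az storage account show-usage --location westeurope --output table
```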
quotas | View Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/view-quotas.md | - Title: View quotas -description: Learn how to view quotas in the Azure portal. Previously updated : 05/02/2023----# View quotas --The **Quotas** page in the Azure portal is the centralized location where you can view your quotas. **My quotas** provides a comprehensive, customizable view of usage and other quota information so that you can assess quota usage. You can also request quota increases directly from **My quotas**. --To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. --> [!TIP] -> After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal/azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. --## View quota details --To view detailed information about your quotas, select **My quotas** in the left pane on the **Quotas** page. --> [!NOTE] -> You can also select a specific Azure provider from the **Quotas** overview page to view quotas and usage for that provider. If you don't see a provider, check the [Azure subscription and service limits page](../azure-resource-manager/management/azure-subscription-service-limits.md) for more information. --On the **My quotas** page, you can choose which quotas and usage data to display. The filter options at the top of the page let you filter by location, provider, subscription, and usage. You can also use the search box to look for a specific quota. Depending on the provider you select, you may see some differences in filters and columns. ---In the list of quotas, you can toggle the arrow shown next to **Quota** to expand and close categories. You can do the same next to each category to drill down and create a view of the information you need. --## Next steps --- Learn more in [Quotas overview](quotas-overview.md).-- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).-- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), [spot vCPU quotas](spot-quota.md), and [storage accounts](storage-account-quota-requests.md). |
reliability | Availability Zones Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md | The table below lists each product that offers migration guidance and/or informa | | | [Azure Application Gateway (V2)](migrate-app-gateway-v2.md) | | [Azure Backup and Azure Site Recovery](migrate-recovery-services-vault.md) | +| [Azure ExpressRoute](/azure/expressroute/expressroute-howto-gateway-migration-portal) | | [Azure Functions](migrate-functions.md)| | [Azure Load Balancer](migrate-load-balancer.md)| | [Azure Service Fabric](migrate-service-fabric.md) | The table below lists each product that offers migration guidance and/or informa | [Azure Elastic SAN](reliability-elastic-san.md#availability-zone-migration)| | [Azure Functions](reliability-functions.md#availability-zone-migration)| | [Azure HDInsight](reliability-hdinsight.md#availability-zone-migration)|-| [Azure Key Vault](/azure/key-vault/general/disaster-recovery-guidance?toc=/azure/reliability)| | [Azure Kubernetes Service](/azure/aks/availability-zones?toc=/azure/reliability)| | [Azure Logic Apps](/azure/logic-apps/set-up-zone-redundancy-availability-zones?tabs=standard&toc=/azure/reliability)| | [Azure Monitor: Log Analytics](migrate-monitor-log-analytics.md)|-| [Azure Service Bus](/azure/service-bus-messaging/service-bus-geo-dr#availability-zones?toc=/azure/reliability)| +| [Azure Service Bus](/azure/service-bus-messaging/service-bus-outages-disasters#availability-zones)| | [Azure SQL Managed Instance](migrate-sql-managed-instance.md)| |
reliability | Availability Zones Service Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md | Azure offerings are grouped into three categories that reflect their _regional_ | [Azure AI Search](/azure/search/search-reliability#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Container Apps](reliability-azure-container-apps.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Container Instances](migrate-container-instances.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |-| [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | +| [Azure Container Registry](/azure/container-registry/zone-redundancy) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Data Factory](../data-factory/concepts-data-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Database for MySQL – Flexible Server](/azure/mysql/flexible-server/concepts-high-availability) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | |
reliability | Cross Region Replication Azure No Pair | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure-no-pair.md | Title: Cross-region replication for non-paired regions -description: Learn about cross-region replication for non-paired regions + Title: Cross-region replication for nonpaired regions +description: Learn about cross-region replication for nonpaired regions Previously updated : 06/14/2024 Last updated : 09/10/2024 -# Cross-region replication solutions for non-paired regions +# Cross-region replication solutions for nonpaired regions Some Azure services support cross-region replication to ensure business continuity and protect against data loss. These services make use of another secondary region that uses *cross-region replication*. Both the primary and secondary regions together form a [region pair](./cross-region-replication-azure.md#azure-paired-regions). -However, there are some [regions that are non-paired](./cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair) and so require alternative methods to achieving geo-replication. +However, there are some [regions that are nonpaired](./cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair) and so require alternative methods to achieving geo-replication. This document lists some of the services and possible solutions that support geo-replication methods without requiring paired regions. +## Azure API Management +Azure API Management doesn't provide a real cross-region replication feature. However, you can use its [backup and restore feature](/azure/api-management/api-management-howto-disaster-recovery-backup-restore) to export the configuration of an API Management service instance in one region and import it into another region. As long as the storage account used for the backup is accessible from the target region, there's no paired region dependency. An operational guidance is provided in [this article](/azure/api-management/api-management-howto-migrate). ++ ## Azure App Service-For App Service, custom backups are stored on a selected storage account. As a result, there's a dependency for cross-region restore on GRS and paired regions. For automatic backup type, you can't backup/restore across regions. As a workaround, you can implement a custom file copy mechanism for the saved data set to manually copy across non-paired regions and different storage accounts. +For App Service, custom backups are stored on a selected storage account. As a result, there's a dependency for cross-region restore on GRS and paired regions. For automatic backup type, you can't backup/restore across regions. As a workaround, you can implement a custom file copy mechanism for the saved data set to manually copy across nonpaired regions and different storage accounts. -## Azure Backup -To achieve geo-replication in non-paired regions: +## Azure Cache for Redis +Azure Cache for Redis provide two distinct cross-region replication options that are [active geo-replication](/azure/azure-cache-for-redis/cache-how-to-active-geo-replication) and [passive geo-replication](/azure/azure-cache-for-redis/cache-how-to-geo-replication). In both cases, there's no explicit dependency on region pairs. -- Use [Azure Site Recovery](/azure/site-recovery/azure-to-azure-enable-global-disaster-recovery).  
Azure Site Recovery is the Disaster Recovery service from Azure that provides business continuity and disaster recovery by replicating workloads from the primary location to the secondary location. The secondary location can be a non-paired region if it is supported by Azure Site Recovery. You can have maximum data retention up to 15 days with Azure Site Recovery.-- Use [Zone-redundant Storage](../backup/backup-overview.md#why-use-azure-backup) to replicate your data in availability zones, guaranteeing data residency and resiliency in the same region. +## Azure Container Registry +Geo-replication enables an Azure container registry to function as a single registry, serving multiple regions with multi-primary regional registries. There's no restrictions dictated by region pairs for this feature. For more information, see [Geo-replication in Azure Container Registry](/azure/container-registry/container-registry-geo-replication). -## Azure Database for MySQL +## Azure Cosmos DB +If your solution requires continuous uptime during region outages, you can configure Azure Cosmos DB to replicate your data across [multiple regions](/azure/cosmos-db/how-to-manage-database-account#add-remove-regions-from-your-database-account) and to transparently fail over to operating regions when required. Azure Cosmos DB supports [multi-region writes](/azure/cosmos-db/multi-region-writes) and can distribute your data globally to provide low-latency access to your data from any region without any pairing restriction. +## Azure Database for MySQL Choose any [Azure Database for MySQL available Azure regions](/azure/mysql/flexible-server/overview#azure-region) to spin up your [read replicas](/azure/mysql/flexible-server/concepts-read-replicas#cross-region-replication). ## Azure Database for PostgreSQL+For geo-replication in nonpaired regions with Azure Database for PostgreSQL, you can use: -For geo-replication in non-paired regions with Azure Database for PostgreSQL, you can use: -**Managed service with geo-replication**: Azure PostgreSQL Managed service supports active [geo-replication](/azure/postgresql/flexible-server/concepts-read-replicas) to create a continuously readable secondary replica of your primary server. The readable secondary may be in the same Azure region as the primary or, more commonly, in a different region. This kind of readable secondary replica is also known as *geo-replica*. +**Managed service with geo-replication**: Azure PostgreSQL Managed service supports active [geo-replication](/azure/postgresql/flexible-server/concepts-read-replicas) to create a continuously readable secondary replica of your primary server. The readable secondary might be in the same Azure region as the primary or, more commonly, in a different region. This kind of readable secondary replica is also known as *geo-replica*. -You can also utilize any of the two customer-managed data migration methods listed below to replicate the data to a non-paired region. +You can also utilize any of the two customer-managed data migration methods listed to replicate the data to a nonpaired region. - [Copy](/azure/postgresql/migrate/how-to-migrate-using-dump-and-restore?tabs=psql). 
- [Logical Replication & Logical Decoding](/azure/postgresql/flexible-server/concepts-logical).-- -## Azure Data Factory -For geo-replication in non-paired regions, Azure Data Factory (ADF) supports Infrastructure-as-code provisioning of ADF pipelines combined with [Source Control for ADF](/azure/data-factory/concepts-data-redundancy#using-source-control-in-azure-data-factory). +## Azure Data Factory +For geo-replication in nonpaired regions, Azure Data Factory (ADF) supports Infrastructure-as-code provisioning of ADF pipelines combined with [Source Control for ADF](/azure/data-factory/concepts-data-redundancy#using-source-control-in-azure-data-factory). ## Azure Event Grid+For geo-replication of Event Grid topics in nonpaired regions, you can implement [client-side failover](/azure/event-grid/custom-disaster-recovery-client-side). -For geo-replication of Event Grid topics in non-paired regions, you can implement [client-side failover](/azure/event-grid/custom-disaster-recovery-client-side). ## Azure IoT Hub --For geo-replication in non-paired regions, use the [concierge pattern](/azure/iot-hub/iot-hub-ha-dr#achieve-cross-region-ha) for routing to a secondary IoT Hub. ---## Azure Key Vault --+For geo-replication in nonpaired regions, use the [concierge pattern](/azure/iot-hub/iot-hub-ha-dr#achieve-cross-region-ha) for routing to a secondary IoT Hub. ## Azure Kubernetes Service (AKS)- Azure Backup can provide protection for AKS clusters, including a [cross-region restore (CRR)](/azure/backup/tutorial-restore-aks-backups-across-regions) feature that's currently in preview and only supports Azure Disks. Although the CRR feature relies on GRS paired regions replicas, any dependency on CRR can be avoided if the AKS cluster stores data only in external storage and avoids using "in-cluster" solutions. ## Azure Monitor Logs- Log Analytics workspaces in Azure Monitor Logs don't use paired regions. To ensure business continuity and protect against data loss, enable cross-region workspace replication.+For more information, see [Enhance resilience by replicating your Log Analytics workspace across regions](/azure/azure-monitor/logs/workspace-replication). -For more information, see [Enhance resilience by replicating your Log Analytics workspace across regions](/azure/azure-monitor/logs/workspace-replication) +## Azure Service Bus +Azure Service Bus can provide regional resiliency, without a dependency on region pairs, by using either [Geo Replication](/azure/service-bus-messaging/service-bus-geo-replication) or [Geo-Disaster Recovery](/azure/service-bus-messaging/service-bus-geo-replication) features. -## Azure SQL Database -For geo-replication in non-paired regions with Azure SQL Database, you can use: +## Azure SQL Database +For geo-replication in nonpaired regions with Azure SQL Database, you can use: - [Failover group feature](/azure/azure-sql/database/failover-group-sql-db?view=azuresql&preserve-view=true) that replicates across any combination of Azure regions without any dependency on underlying storage GRS. -- [Active geo-replication feature](/azure/azure-sql/database/active-geo-replication-overview?view=azuresql&preserve-view=true) to create a continuously synchronized readable secondary database for a primary database. The readable secondary database may be in the same Azure region as the primary or, more commonly, in a different region. 
This kind of readable secondary database is also known as a *geo-secondary* or *geo-replica*.+- [Active geo-replication feature](/azure/azure-sql/database/active-geo-replication-overview?view=azuresql&preserve-view=true) to create a continuously synchronized readable secondary database for a primary database. The readable secondary database might be in the same Azure region as the primary or, more commonly, in a different region. This kind of readable secondary database is also known as a *geo-secondary* or *geo-replica*. -## Azure SQL Managed Instance -For geo-replication in non-paired regions with Azure SQL Managed Instance, you can use: +## Azure SQL Managed Instance +For geo-replication in nonpaired regions with Azure SQL Managed Instance, you can use: - [Failover group feature](/azure/azure-sql/managed-instance/failover-group-sql-mi?view=azuresql&preserve-view=true) that replicates across any combination of Azure regions without any dependency on underlying storage GRS. ## Azure Storage---To achieve geo-replication in non-paired regions: +To achieve geo-replication in nonpaired regions: - **For Azure Object Storage**: To achieve geo-replication in non-paired regions: >Object replication isn't supported for [Azure Data Lake Storage](../storage/blobs/data-lake-storage-best-practices.md). +- **For Azure NetApp Files (ANF)**, you can replicate to a set of nonstandard pairs besides Azure region pairs. See [Azure NetApp Files (ANF) cross-region replication](/azure/azure-netapp-files/cross-region-replication-introduction). -- **For Azure NetApp Files (ANF)**, you can replicate to a set of non-standard pairs besides Azure region pairs. See [Azure NetApp Files (ANF) cross-region replication](/azure/azure-netapp-files/cross-region-replication-introduction). - **For Azure Files:** To achieve geo-replication in non-paired regions: > You must disable cloud tiering to ensure that all data is present locally, and provision enough storage on the Azure Virtual Machine to hold the entire dataset. To ensure changes replicate quickly to the secondary region, files should only be accessed and modified on the server endpoint rather than in Azure. --+## Azure Virtual Machines +To achieve geo-replication in nonpaired regions, the [Azure Site Recovery](/azure/site-recovery/azure-to-azure-enable-global-disaster-recovery) service can be used. Azure Site Recovery is the Disaster Recovery service from Azure that provides business continuity and disaster recovery by replicating workloads from the primary location to the secondary location. The secondary location can be a nonpaired region if supported by Azure Site Recovery. |
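The Azure Container Registry section in the diff above describes geo-replication without region-pair restrictions. As a small illustrative sketch (registry, resource group, and region names are placeholders), adding a replica from the Azure CLI looks roughly like this:

```azurecli
# Add a geo-replica of a Premium container registry in another region (names are placeholders)
az acr replication create --registry myregistry --resource-group myresourcegroup --location westeurope
```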
reliability | Migrate Workload Aks Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-workload-aks-mysql.md | Using the Application Gateway Ingress Controller add-on with your AKS cluster is #### Azure Container Registry (ACR) -*Zone-redundant*: We recommend that you create a zone-redundant registry in the Premium service tier. You can also create a zone-redundant registry replica by setting the `zoneRedundancy` property for the replica. To learn how to enable zone redundancy for your ACR, see [Enable zone redundancy in Azure Container Registry for resiliency and high availability](../container-registry/zone-redundancy.md). +*Zone-redundant*: We recommend that you create a zone-redundant registry in the Premium service tier. You can also create a zone-redundant registry replica by setting the `zoneRedundancy` property for the replica. To learn how to enable zone redundancy for your ACR, see [Enable zone redundancy in Azure Container Registry for resiliency and high availability](/azure/container-registry/zone-redundancy). #### Azure Cache for Redis |
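The diff above recommends creating a zone-redundant registry in the Premium service tier. A hedged sketch of doing so from the Azure CLI follows; all names and the region are placeholders, and zone redundancy requires the Premium SKU.

```azurecli
# Sketch: create a zone-redundant container registry (Premium SKU required)
az acr create --name myregistry --resource-group myresourcegroup \
  --location eastus2 --sku Premium --zone-redundancy enabled
```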
reliability | Overview Reliability Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview-reliability-guidance.md | For a more detailed overview of reliability principles in Azure, see [Reliabilit |Azure Service Bus|[Azure Service Bus - Availability zones](../service-bus-messaging/service-bus-outages-disasters.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Azure Service Bus Geo-Disaster Recovery](../service-bus-messaging/service-bus-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) / [Azure Service Bus Geo-Replication](../service-bus-messaging/service-bus-geo-replication.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Service Fabric| [Deploy an Azure Service Fabric cluster across Availability Zones](/azure/service-fabric/service-fabric-cross-availability-zones?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Disaster recovery in Azure Service Fabric](/azure/service-fabric/service-fabric-disaster-recovery?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Site Recovery|| [Set up disaster recovery for Azure VMs](../site-recovery/azure-to-azure-tutorial-enable-replication.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |-|Azure SQL|[Azure SQL - High availability](/azure/azure-sql/database/high-availability-sla?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure SQL - Recovery using backup and restore](/azure/azure-sql/database/recovery-using-backups#geo-restore) | +|Azure SQL|[Azure SQL - High availability](/azure/azure-sql/database/high-availability-sla?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Disaster recovery guidance - Azure SQL Database](/azure/azure-sql/database/disaster-recovery-guidance) | |Azure SQL-Managed Instance|| [Azure SQL-Managed Instance](/azure/azure-sql/managed-instance/failover-group-sql-mi?view=azuresql&preserve-view=true) |-|Azure Storage-Disk Storage||[Create an incremental snapshot for managed disks](/azure/virtual-machines/disks-incremental-snapshots?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | +|Azure Storage-Disk Storage| [Best practices for achieving high availability with Azure virtual machines and managed disks](/azure/virtual-machines/disks-high-availability) | [Create an incremental snapshot for managed disks](/azure/virtual-machines/disks-incremental-snapshots?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Storage Mover| [Reliability in Azure Storage Mover](reliability-azure-storage-mover.md)|[Reliability in Azure Storage Mover](reliability-azure-storage-mover.md)| |Azure Virtual Machine Scale Sets|[Azure Virtual Machine Scale Sets](reliability-virtual-machine-scale-sets.md)|| |Azure Virtual Machines|[Reliability in Virtual Machines](reliability-virtual-machines.md)|[Reliability in Virtual Machines](reliability-virtual-machines.md)| For a more detailed overview of reliability principles in Azure, see [Reliabilit |Azure API Management|[Ensure API Management availability and reliability](../api-management/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [How to implement disaster recovery using service backup and 
restore](../api-management/api-management-howto-disaster-recovery-backup-restore.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure App Configuration|[How does App Configuration ensure high data availability?](../azure-app-configuration/faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-does-app-configuration-ensure-high-data-availability)| [Resiliency and disaster recovery](../azure-app-configuration/concept-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json&tabs=core2x)| |Azure App Service|[Azure App Service](./reliability-app-service.md)| [Azure App Service](reliability-app-service.md#cross-region-disaster-recovery-and-business-continuity)|-|Azure Application Gateway (V2)|[Autoscaling and High Availability)](../application-gateway/application-gateway-autoscaling-zone-redundant.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|| +|Azure Application Gateway (V2)|[Autoscaling and High Availability](../application-gateway/application-gateway-autoscaling-zone-redundant.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|| |Azure Backup|[Reliability in Azure Backup](reliability-backup.md)| [Reliability in Azure Backup](reliability-backup.md) | |Azure Bastion|[Reliability in Azure Bastion](reliability-bastion.md) |[Reliability in Azure Bastion](reliability-bastion.md) | |Azure Batch|[Reliability in Azure Batch](reliability-batch.md)| [Reliability in Azure Batch](reliability-batch.md#cross-region-disaster-recovery-and-business-continuity) | For a more detailed overview of reliability principles in Azure, see [Reliabilit |Azure Communications Gateway|[Reliability in Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Reliability in Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |Azure Container Apps|[Reliability in Azure Container Apps](reliability-azure-container-apps.md)|[Reliability in Azure Container Apps](reliability-azure-container-apps.md)| |Azure Container Instances|[Reliability in Azure Container Instances](reliability-containers.md)| [Reliability in Azure Container Instances](reliability-containers.md#disaster-recovery) |-|Azure Container Registry|[Enable zone redundancy in Azure Container Registry for resiliency and high availability](../container-registry/zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +|Azure Container Registry|[Enable zone redundancy in Azure Container Registry for resiliency and high availability](/azure/container-registry/zone-redundancy?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | [Geo-replication in Azure Container Registry](/azure/container-registry/container-registry-geo-replication) | |Azure Data Explorer|| [Azure Data Explorer - Business continuity](/azure/data-explorer/business-continuity-overview) |-|Azure Data Factory|[Azure Data Factory data redundancy](../data-factory/concepts-data-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|| +|Azure Data Factory|[Azure Data Factory data redundancy](../data-factory/concepts-data-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [BCDR for Azure Data 
Factory and Azure Synapse Analytics pipelines](/azure/architecture/example-scenario/analytics/pipelines-disaster-recovery) | |Azure Database for MySQL|| [Azure Database for MySQL- Business continuity](/azure/mysql/single-server/concepts-business-continuity?#recover-from-an-azure-regional-data-center-outage) | |Azure Database for MySQL - Flexible Server|[Azure Database for MySQL Flexible Server High availability](/azure/mysql/flexible-server/concepts-high-availability?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Database for MySQL Flexible Server - Restore to latest restore point](/azure/mysql/flexible-server/how-to-restore-server-portal?#geo-restore-to-latest-restore-point) | |Azure Database for PostgreSQL - Flexible Server|[Azure Database for PostgreSQL - Flexible Server](./reliability-postgresql-flexible-server.md)|[Azure Database for PostgreSQL - Flexible Server](reliability-postgre-flexible.md#cross-region-disaster-recovery-and-business-continuity) | For a more detailed overview of reliability principles in Azure, see [Reliabilit |Azure Logic Apps|[Protect logic apps from region failures with zone redundancy and availability zones](../logic-apps/set-up-zone-redundancy-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Business continuity and disaster recovery for Azure Logic Apps](../logic-apps/business-continuity-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Media Services||[High Availability with Media Services and Video on Demand (VOD)](/azure/media-services/latest/architecture-high-availability-encoding-concept) | |Azure Migrate|| [Does Azure Migrate offer Backup and Disaster Recovery?](../migrate/resources-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#does-azure-migrate-offer-backup-and-disaster-recovery) |-|Azure Monitor-Log Analytics |[Enhance data and service resilience in Azure Monitor Logs with availability zones](/azure/azure-monitor/logs/availability-zones?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Enable data export](/azure/azure-monitor/logs/logs-data-export?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#enable-data-export) | +|Azure Monitor-Log Analytics |[Enhance data and service resilience in Azure Monitor Logs with availability zones](/azure/azure-monitor/logs/availability-zones)| [Log Analytics workspace replication](/azure/azure-monitor/logs/workspace-replication) | |Azure Network Watcher|[Service availability and redundancy](../network-watcher/frequently-asked-questions.yml?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json#service-availability-and-redundancy)|| |Azure Notification Hubs|[Reliability Azure Notification Hubs](reliability-notification-hubs.md)|[Reliability Azure Notification Hubs](reliability-notification-hubs.md)| |Azure Operator Nexus|[Reliability in Azure Operator Nexus](reliability-operator-nexus.md)|[Reliability in Azure Operator Nexus](reliability-operator-nexus.md)| |Azure Private Link|[Azure Private Link availability](../private-link/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|| |Azure Route Server|[Azure Route Server FAQ](../route-server/route-server-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|-|Azure 
Storage - Blob Storage|[Choose the right redundancy option](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#choose-the-right-redundancy-option)|[Azure storage disaster recovery planning and failover](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| +|Azure Storage - Blob Storage|[Choose the right redundancy option](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#choose-the-right-redundancy-option)|[Azure storage disaster recovery planning and failover](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) / [Azure Blob backup](/azure/backup/blob-backup-overview)| |Azure Stream Analytics|| [Achieve geo-redundancy for Azure Stream Analytics jobs](../stream-analytics/geo-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |-|Azure Virtual WAN|[How are Availability Zones and resiliency handled in Virtual WAN?](../virtual-wan/virtual-wan-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-are-availability-zones-and-resiliency-handled-in-virtual-wan)| [Designing for disaster recovery with ExpressRoute private peering](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | +|Azure Virtual WAN|[How are Availability Zones and resiliency handled in Virtual WAN?](../virtual-wan/virtual-wan-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-are-availability-zones-and-resiliency-handled-in-virtual-wan)| [Disaster recovery design](/azure/virtual-wan/disaster-recovery-design) | |Azure Web Application Firewall|[Deploy an Azure Firewall with Availability Zones using Azure PowerShell](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[How do I achieve a disaster recovery scenario across datacenters by using Application Gateway?](../application-gateway/application-gateway-faq.yml?#how-do-i-achieve-a-disaster-recovery-scenario-across-datacenters-by-using-application-gateway) | ### ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic services For a more detailed overview of reliability principles in Azure, see [Reliabilit |Azure Health Insights|[Reliability in Azure Health Insights](reliability-health-insights.md)|[Reliability in Azure Health Insights](reliability-health-insights.md)| |Azure IoT Hub| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Machine Learning Service|| [Failover for business continuity and disaster recovery](/azure/machine-learning/how-to-high-availability-machine-learning?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |-|Azure NetApp Files|| [Manage disaster recovery using cross-region replication](../azure-netapp-files/cross-region-replication-manage-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | +|Azure NetApp 
Files|| [Manage disaster recovery using cross-region replication](../azure-netapp-files/cross-region-replication-manage-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) / [Azure NetApp Files backup](/azure/azure-netapp-files/backup-introduction) | |Azure Private 5G Core|[Reliability for Azure Private 5G Core](../private-5g-core/reliability-private-5g-core.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Reliability for Azure Private 5G Core](../private-5g-core/reliability-private-5g-core.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |Azure SignalR Service|| [Resiliency and disaster recovery in Azure SignalR Service](../azure-signalr/signalr-concept-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Spring Apps|[Reliability in Azure Spring Apps](reliability-spring-apps.md) |[Reliability in Azure Spring Apps](reliability-spring-apps.md)| |
reliability | Reliability Image Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-image-builder.md | When a regional disaster occurs, Microsoft is responsible for outage detection, #### Outage detection, notification, and management -Microsoft sends a notification if there's an outage in the Azure Image Builder (AIB) Service. One common outage symptom is image templates getting 500 errors when attempting to run. You can review Azure Image Builder outage notifications and status updates through [support request management.](../azure-portal/supportability/how-to-manage-azure-support-request.md) +Microsoft sends a notification if there's an outage in the Azure Image Builder (AIB) Service. One common outage symptom is image templates getting 500 errors when attempting to run. You can review Azure Image Builder outage notifications and status updates through [support request management.](/azure/azure-portal/supportability/how-to-manage-azure-support-request) #### Set up disaster recovery and outage detection |
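A quick way to check for the outage symptom described above (image templates returning 500 errors when a run is attempted) is to read the template's last run status. The following is an illustrative sketch only; the resource group and template names are placeholders, and it relies on the generic `Get-AzResource` cmdlet rather than any module specific to Image Builder.

```azurepowershell-interactive
# Illustrative check of an image template's last run status (placeholder names).
$template = Get-AzResource -ResourceGroupName 'myImageTemplateRG' `
    -ResourceType 'Microsoft.VirtualMachineImages/imageTemplates' `
    -Name 'myImageTemplate' -ExpandProperties

# A failed run with a 5xx message can point to a service-side outage rather than a template error.
$template.Properties.lastRunStatus
```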
role-based-access-control | Built In Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md | The following table provides a brief description of each built-in role. Click th > | <a name='application-insights-snapshot-debugger'></a>[Application Insights Snapshot Debugger](./built-in-roles/monitor.md#application-insights-snapshot-debugger) | Gives user permission to view and download debug snapshots collected with the Application Insights Snapshot Debugger. Note that these permissions are not included in the [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) roles. When giving users the Application Insights Snapshot Debugger role, you must grant the role directly to the user. The role is not recognized when it is added to a custom role. | 08954f03-6346-4c2e-81c0-ec3a5cfae23b | > | <a name='grafana-admin'></a>[Grafana Admin](./built-in-roles/monitor.md#grafana-admin) | Perform all Grafana operations, including the ability to manage data sources, create dashboards, and manage role assignments within Grafana. | 22926164-76b3-42b3-bc55-97df8dab3e41 | > | <a name='grafana-editor'></a>[Grafana Editor](./built-in-roles/monitor.md#grafana-editor) | View and edit a Grafana instance, including its dashboards and alerts. | a79a5197-3a5c-4973-a920-486035ffd60f |+> | <a name='grafana-limited-viewer'></a>[Grafana Limited Viewer](./built-in-roles/monitor.md#grafana-limited-viewer) | View home page. | 41e04612-9dac-4699-a02b-c82ff2cc3fb5 | > | <a name='grafana-viewer'></a>[Grafana Viewer](./built-in-roles/monitor.md#grafana-viewer) | View a Grafana instance, including its dashboards and alerts. | 60921a7e-fef1-4a43-9b16-a26c52ad4769 | > | <a name='monitoring-contributor'></a>[Monitoring Contributor](./built-in-roles/monitor.md#monitoring-contributor) | Can read all monitoring data and edit monitoring settings. See also [Get started with roles, permissions, and security with Azure Monitor](/azure/azure-monitor/roles-permissions-security#built-in-monitoring-roles). | 749f88d5-cbae-40b8-bcfc-e573ddc772fa | > | <a name='monitoring-metrics-publisher'></a>[Monitoring Metrics Publisher](./built-in-roles/monitor.md#monitoring-metrics-publisher) | Enables publishing metrics against Azure resources | 3913510d-42f4-4e42-8a64-420c390055eb | |
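The newly listed Grafana Limited Viewer role can be looked up by name to confirm the role ID shown in the table. This is a minimal sketch using Azure PowerShell and assumes the Az.Resources module is available.

```azurepowershell-interactive
# Look up the built-in role definition and confirm it matches the documented ID
# (41e04612-9dac-4699-a02b-c82ff2cc3fb5).
Get-AzRoleDefinition -Name 'Grafana Limited Viewer' |
    Select-Object Name, Id, Description
```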
role-based-access-control | Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/monitor.md | Gives user permission to view and download debug snapshots collected with the Ap Perform all Grafana operations, including the ability to manage data sources, create dashboards, and manage role assignments within Grafana. -[Learn more](/azure/managed-grafana/how-to-share-grafana-workspace) +[Learn more](/azure/managed-grafana/concept-role-based-access-control) > [!div class="mx-tableFixed"] > | Actions | Description | Perform all Grafana operations, including the ability to manage data sources, cr View and edit a Grafana instance, including its dashboards and alerts. -[Learn more](/azure/managed-grafana/how-to-share-grafana-workspace) +[Learn more](/azure/managed-grafana/concept-role-based-access-control) > [!div class="mx-tableFixed"] > | Actions | Description | View and edit a Grafana instance, including its dashboards and alerts. } ``` +## Grafana Limited Viewer ++View a Grafana home page. ++[Learn more](/azure/managed-grafana/concept-role-based-access-control) ++> [!div class="mx-tableFixed"] +> | Actions | Description | +> | | | +> | *none* | | +> | **NotActions** | | +> | *none* | | +> | **DataActions** | | +> | [Microsoft.Dashboard](../permissions/monitor.md#microsoftdashboard)/grafana/ActAsGrafanaLimitedViewer/action | Act as Grafana Limited Viewer role | +> | **NotDataActions** | | +> | *none* | | ++```json +{ + "id": "/providers/Microsoft.Authorization/roleDefinitions/41e04612-9dac-4699-a02b-c82ff2cc3fb5", + "properties": { + "roleName": "Grafana Limited Viewer", + "description": "View home page.", + "assignableScopes": [ + "/" + ], + "permissions": [ + { + "actions": [], + "notActions": [], + "dataActions": [ + "Microsoft.Dashboard/grafana/ActAsGrafanaLimitedViewer/action" + ], + "notDataActions": [] + } + ] + } +} +``` + ## Grafana Viewer View a Grafana instance, including its dashboards and alerts. -[Learn more](/azure/managed-grafana/how-to-share-grafana-workspace) +[Learn more](/azure/managed-grafana/concept-role-based-access-control) > [!div class="mx-tableFixed"] > | Actions | Description | |
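A role definition like the one above only takes effect once it's assigned at a scope. The following sketch assumes an Azure Managed Grafana workspace as the scope; the subscription ID, resource group, workspace name, and principal object ID are placeholders.

```azurepowershell-interactive
# Assign the Grafana Limited Viewer role on a Managed Grafana workspace (placeholder names).
$scope = '/subscriptions/<subscription-id>/resourceGroups/myGrafanaRG/providers/Microsoft.Dashboard/grafana/myGrafanaWorkspace'
New-AzRoleAssignment -ObjectId '<principal-object-id>' `
    -RoleDefinitionName 'Grafana Limited Viewer' `
    -Scope $scope
```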
role-based-access-control | Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/monitor.md | Azure service: [Azure Managed Grafana](/azure/managed-grafana/) > | **DataAction** | **Description** | > | Microsoft.Dashboard/grafana/ActAsGrafanaAdmin/action | Act as Grafana Admin role | > | Microsoft.Dashboard/grafana/ActAsGrafanaEditor/action | Act as Grafana Editor role |+> | Microsoft.Dashboard/grafana/ActAsGrafanaLimitedViewer/action | Act as Grafana Limited Viewer role | > | Microsoft.Dashboard/grafana/ActAsGrafanaViewer/action | Act as Grafana Viewer role | ## Microsoft.Insights |
role-based-access-control | Role Assignments List Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-rest.md | -> If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here. Similarly, users in the service provider tenant won't see role assignments for users in a customer's tenant, regardless of the role they've been assigned. +> If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](/azure/lighthouse/overview), role assignments authorized by that service provider won't be shown here. Similarly, users in the service provider tenant won't see role assignments for users in a customer's tenant, regardless of the role they've been assigned. [!INCLUDE [gdpr-dsr-and-stp-note](~/reusable-content/ce-skilling/azure/includes/gdpr-dsr-and-stp-note.md)] |
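The role assignment list that the article retrieves over REST can also be fetched from Azure PowerShell with `Invoke-AzRestMethod`. This is a hedged sketch: the subscription ID is a placeholder and the API version shown is one known version, not necessarily the latest. As the note above explains, assignments authorized by a service provider through Azure Lighthouse won't appear in the results.

```azurepowershell-interactive
# List role assignments at subscription scope through the Azure Resource Manager REST API.
$path = "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01"
$response = Invoke-AzRestMethod -Path $path -Method GET

# The payload is JSON; expand the 'value' array to see individual assignments.
($response.Content | ConvertFrom-Json).value
```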
role-based-access-control | Transfer Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md | The following are some reasons why you might want to transfer a subscription: Transferring a subscription requires downtime to complete the process. Depending on your scenario, you can consider the following alternate approaches: - Re-create the resources and copy data to the target directory and subscription.-- Adopt a multi-directory architecture and leave the subscription in the source directory. Use Azure Lighthouse to delegate resources so that users in the target directory can access the subscription in the source directory. For more information, see [Azure Lighthouse in enterprise scenarios](../lighthouse/concepts/enterprise.md).+- Adopt a multi-directory architecture and leave the subscription in the source directory. Use Azure Lighthouse to delegate resources so that users in the target directory can access the subscription in the source directory. For more information, see [Azure Lighthouse in enterprise scenarios](/azure/lighthouse/concepts/enterprise). ### Understand the impact of transferring a subscription If your intent is to remove access from users in the source directory so that th - [Transfer billing ownership of an Azure subscription to another account](../cost-management-billing/manage/billing-subscription-transfer.md) - [Transfer Azure subscriptions between subscribers and CSPs](../cost-management-billing/manage/transfer-subscriptions-subscribers-csp.yml) - [Associate or add an Azure subscription to your Microsoft Entra tenant](../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md)-- [Azure Lighthouse in enterprise scenarios](../lighthouse/concepts/enterprise.md)+- [Azure Lighthouse in enterprise scenarios](/azure/lighthouse/concepts/enterprise) |
route-server | Quickstart Configure Route Server Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md | Title: 'Quickstart: Create and configure Route Server - Azure PowerShell' -description: In this quickstart, you learn how to create and configure an Azure Route Server using Azure PowerShell. + Title: 'Quickstart: Create an Azure Route Server - PowerShell' +description: In this quickstart, you learn how to create an Azure Route Server using Azure PowerShell. Previously updated : 08/14/2024 Last updated : 09/19/2024 -# Quickstart: Create and configure Route Server using Azure PowerShell +# Quickstart: Create an Azure Route Server using PowerShell -This article helps you configure Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using Azure PowerShell. Route Server learns routes from your NVA and program them on the virtual machines in the virtual network. Azure Route Server also advertises the virtual network routes to the NVA. For more information, see [Azure Route Server](overview.md). +In this quickstart, you learn how to create an Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using Azure PowerShell. :::image type="content" source="media/quickstart-configure-route-server-portal/environment-diagram.png" alt-text="Diagram of Route Server deployment environment using the Azure PowerShell." lightbox="media/quickstart-configure-route-server-portal/environment-diagram.png"::: This article helps you configure Azure Route Server to peer with a Network Virtu ## Prerequisites -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* Make sure you have the latest PowerShell modules, or you can use Azure Cloud Shell in the portal. -* Review the [service limits for Azure Route Server](route-server-faq.md#limitations). -* If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -## Create resource group and a virtual network +- Review the [service limits for Azure Route Server](route-server-faq.md#limitations). -### Create a resource group +- Azure Cloud Shell or Azure PowerShell. -Before you can create an Azure Route Server, you have to create a resource group to host the Route Server. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). This example creates a resource group named **myRouteServerRG** in the **WestUS** location: + The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the cmdlets in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal. -```azurepowershell-interactive -$rg = @{ - Name = 'myRouteServerRG' - Location = 'WestUS' -} -New-AzResourceGroup @rg -``` --### Create a virtual network + You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. 
-Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a default virtual network named **myVirtualNetwork** in the **WestUS** location: If you already have a virtual network, you can skip to the next section. +## Create a route server -```azurepowershell-interactive -$vnet = @{ - Name = 'myVirtualNetwork' - ResourceGroupName = 'myRouteServerRG' - Location = 'WestUS' - AddressPrefix = '10.0.0.0/16' -} -$virtualNetwork = New-AzVirtualNetwork @vnet -``` +In this section, you create a route server. Prior to creating the route server, you create a resource group to host all resources including the route server. You'll also create a virtual network with a dedicated subnet for the route server. -### Add a dedicated subnet +1. Create a resource group using [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). The following example creates a resource group named **RouteServerRG** in the **WestUS** region: -Azure Route Server requires a dedicated subnet named *RouteServerSubnet*. The subnet size has to be at least /27 or shorter prefix (such as /26 or /25) or you'll receive an error message when deploying the Route Server. Create a subnet configuration named **RouteServerSubnet** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig): + ```azurepowershell-interactive + # Create a resource group. + New-AzResourceGroup -Name 'RouteServerRG' -Location 'WestUS' + ``` -```azurepowershell-interactive -$subnet = @{ - Name = 'RouteServerSubnet' - VirtualNetwork = $virtualNetwork - AddressPrefix = '10.0.0.0/24' -} -$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet +1. The route server requires a dedicated subnet named *RouteServerSubnet*. The subnet size has to be at least /27 or shorter prefix (such as /26 or /25) or you'll receive an error message when deploying the route server. Create a subnet configuration for **RouteServerSubnet** using [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig). -$virtualnetwork | Set-AzVirtualNetwork + ```azurepowershell-interactive + # Create subnet configuration. + $subnet = New-AzVirtualNetworkSubnetConfig -Name 'RouteServerSubnet' -AddressPrefix '10.0.1.0/27' + ``` -$vnetInfo = Get-AzVirtualNetwork -Name myVirtualNetwork -ResourceGroupName myRouteServerRG -$subnetId = (Get-AzVirtualNetworkSubnetConfig -Name RouteServerSubnet -VirtualNetwork $vnetInfo).Id -``` -## Create the Route Server +1. Create a virtual network using [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a default virtual network named **myRouteServerVNet** in the **WestUS** region. + ```azurepowershell-interactive + # Create a virtual network and place into a variable. + $vnet = New-AzVirtualNetwork -Name 'myRouteServerVNet' -ResourceGroupName 'RouteServerRG' -Location 'WestUS' -AddressPrefix '10.0.0.0/16' -Subnet $subnet + # Place the subnet ID into a variable. + $subnetId = (Get-AzVirtualNetworkSubnetConfig -Name 'RouteServerSubnet' -VirtualNetwork $vnet).Id + ``` -1. To ensure connectivity to the backend service that manages Route Server configuration, assigning a public IP address is required. Create a Standard Public IP named **RouteServerIP** with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress): +1. 
To ensure connectivity to the backend service that manages Route Server configuration, assigning a public IP address is required. Create a Standard Public IP named **RouteServerIP** using [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress). ```azurepowershell-interactive- $ip = @{ - Name = 'myRouteServerIP' - ResourceGroupName = 'myRouteServerRG' - Location = 'WestUS' - AllocationMethod = 'Static' - IpAddressVersion = 'Ipv4' - Sku = 'Standard' - } - $publicIp = New-AzPublicIpAddress @ip + # Create a Standard public IP and place it into a variable. + $publicIp = New-AzPublicIpAddress -ResourceGroupName 'RouteServerRG' -Name 'myRouteServerIP' -Location 'WestUS' -AllocationMethod 'Static' -Sku 'Standard' -IpAddressVersion 'Ipv4' ```- -2. Create the Azure Route Server with [New-AzRouteServer](/powershell/module/az.network/new-azrouteserver). This example creates an Azure Route Server named **myRouteServer** in the **WestUS** location. The *HostedSubnet* is the resource ID of the RouteServerSubnet created in the previous section. -- ```azurepowershell-interactive - $rs = @{ - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' - Location = 'WestUS' - HostedSubnet = $subnetId - PublicIP = $publicIp - } - New-AzRouteServer @rs ++1. Create the route server using [New-AzRouteServer](/powershell/module/az.network/new-azrouteserver). The following example creates a route server named **myRouteServer** in the **WestUS** region. The *HostedSubnet* is the resource ID of the RouteServerSubnet created in the previous section. ++ ```azurepowershell-interactive + # Create a route server. + New-AzRouteServer -RouteServerName 'myRouteServer' -ResourceGroupName 'RouteServerRG' -Location 'WestUS' -HostedSubnet $subnetId -PublicIP $publicIp ``` [!INCLUDE [Deployment note](../../includes/route-server-note-creation-time.md)] -## Create BGP peering with an NVA --To establish BGP peering from the Route Server to your NVA use [Add-AzRouteServerPeer](/powershell/module/az.network/add-azrouteserverpeer): +## Set up peering with NVA -The `your_nva_ip` is the virtual network IP assigned to the NVA. The `your_nva_asn` is the Autonomous System Number (ASN) configured in the NVA. The ASN can be any 16-bit number other than the ones in the range of 65515-65520. This range of ASNs is reserved by Microsoft. +In this section, you learn how to configure BGP peering with a network virtual appliance (NVA). Use [Add-AzRouteServerPeer](/powershell/module/az.network/add-azrouteserverpeer) to establish BGP peering from the route server to your NVA. The following example adds a peer named **myNVA** that has an IP address of **10.0.0.4** and an ASN of **65001**. For more information, see [What Autonomous System Numbers (ASNs) can I use?](route-server-faq.md#what-autonomous-system-numbers-asns-can-i-use) ```azurepowershell-interactive-$peer = @{ - PeerName = 'myNVA' - PeerIp = 'your_nva_ip' - PeerAsn = 'your_nva_asn' - RouteServerName = 'myRouteServer' - ResourceGroupName = myRouteServerRG' -} -Add-AzRouteServerPeer @peer +# Add a peer. +Add-AzRouteServerPeer -ResourceGroupName 'RouteServerRG' -RouteServerName 'myRouteServer' -PeerName 'myNVA' -PeerAsn '65001' -PeerIp '10.0.0.4' ``` -To set up peering with a different NVA or another instance of the same NVA for redundancy, use the same command as above with different *PeerName*, *PeerIp*, and *PeerAsn*. 
- ## Complete the configuration on the NVA -To complete the configuration on the NVA and enable the BGP sessions, you need the IP and the ASN of Azure Route Server. You can get this information by using [Get-AzRouteServer](/powershell/module/az.network/get-azrouteserver): +To complete the peering setup, you must configure the NVA to establish a BGP session with the route server's peer IPs and ASN. Use [Get-AzRouteServer](/powershell/module/az.network/get-azrouteserver) to get the IP and ASN of the route server. ```azurepowershell-interactive-$routeserver = @{ - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' -} -Get-AzRouteServer @routeserver +# Get the route server details. +Get-AzRouteServer -ResourceGroupName 'RouteServerRG' -RouteServerName 'myRouteServer' ``` -The output looks like the following: +The output looks like the following example: -``` -RouteServerAsn : 65515 -RouteServerIps : {10.5.10.4, 10.5.10.5} +```output +ResourceGroupName Name Location RouteServerAsn RouteServerIps ProvisioningState HubRoutingPreference AllowBranchToBranchTraffic +-- - -- -- -- -- -- -- +RouteServerRG myRouteServer westus 65515 {10.0.1.4, 10.0.1.5} Succeeded ExpressRoute False ``` [!INCLUDE [NVA peering note](../../includes/route-server-note-nva-peering.md)] -## <a name = "route-exchange"></a>Configure route exchange --If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual network, you can enable *BranchToBranchTraffic* to exchange routes between the gateway and the Route Server. ----1. To enable route exchange between Azure Route Server and the gateway(s), use [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) with the *-AllowBranchToBranchTraffic* flag: --```azurepowershell-interactive -$routeserver = @{ - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' - AllowBranchToBranchTraffic -} -Update-AzRouteServer @routeserver -``` --2. To disable route exchange between Azure Route Server and the gateway(s), use [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) without the *-AllowBranchToBranchTraffic* flag: --```azurepowershell-interactive -$routeserver = @{ - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' -} -Update-AzRouteServer @routeserver -``` --## Troubleshooting --Use the [Get-AzRouteServerPeerAdvertisedRoute](/powershell/module/az.network/get-azrouteserverpeeradvertisedroute) to view routes advertised by the Azure Route Server. --```azurepowershell-interactive -$remotepeer = @{ - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' - PeerName = 'myNVA' -} -Get-AzRouteServerPeerAdvertisedRoute @remotepeer -``` --Use the [Get-AzRouteServerPeerLearnedRoute](/powershell/module/az.network/get-azrouteserverpeerlearnedroute) to view routes learned by the Azure Route Server. --```azurepowershell-interactive -$remotepeer = @{ - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' - PeerName = 'myNVA' -} -Get-AzRouteServerPeerLearnedRoute @remotepeer -``` ## Clean up resources -If you no longer need the Azure Route Server, use the first command to remove the BGP peering, and then the second command to remove the Route Server. --1. 
Remove the BGP peering between Azure Route Server and an NVA with [Remove-AzRouteServerPeer](/powershell/module/az.network/remove-azrouteserverpeer): +When no longer needed, delete the resource group and all of the resources it contains using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup). ```azurepowershell-interactive-$remotepeer = @{ - PeerName = 'myNVA' - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' -} -Remove-AzRouteServerPeer @remotepeer +# Delete the resource group and all the resources it contains. +Remove-AzResourceGroup -Name 'RouteServerRG' -Force ``` -2. Remove the Azure Route Server with [Remove-AzRouteServer](/powershell/module/az.network/remove-azrouteserver): --```azurepowershell-interactive -$routeserver = @{ - RouteServerName = 'myRouteServer' - ResourceGroupName = 'myRouteServerRG' -} -Remove-AzRouteServer @routeserver -``` --## Next steps --After you've created the Azure Route Server, continue on to learn more about how Azure Route Server interacts with ExpressRoute and VPN Gateways: +## Next step > [!div class="nextstepaction"]-> [Azure ExpressRoute and Azure VPN support](expressroute-vpn-support.md) +> [Configure peering between a route server and NVA](peer-route-server-with-virtual-appliance.md) |
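After the quickstart's peering step, it can help to confirm the peer exists before moving on to the NVA-side configuration. This optional sketch reuses the resource names from the quickstart above.

```azurepowershell-interactive
# Verify the BGP peer created in the quickstart (names match the examples above).
Get-AzRouteServerPeer -ResourceGroupName 'RouteServerRG' `
    -RouteServerName 'myRouteServer' `
    -PeerName 'myNVA'
```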
sap | Deploy S4hana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/deploy-s4hana.md | In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azur - A **User-assigned managed** [identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) which has Contributor role access on the Subscription or atleast all resource groups (Compute, Network,Storage). If you wish to install SAP Software through the Azure Center for SAP solutions, also provide Storage Blob data Reader, Reader and Data Access roles to the identity on SAP bits storage account where you would store the SAP Media. - A [network set up for your infrastructure deployment](prepare-network.md). - Availability of minimum 4 cores of either Standard_D4ds_v4 or Standard_E4s_v3 SKUS which will be used during Infrastructure deployment and Software Installation-- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. +- [Review the quotas for your Azure subscription](/azure/quotas/view-quotas). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. - Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are: - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS. - A single or cluster of Database VMs, which make up a single Database instance in the VIS. |
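Because the prerequisites call for available quota for the Standard_D4ds_v4 or Standard_E4s_v3 SKUs, a quick quota check in the target region can avoid an **Insufficient quota** failure. The sketch below is illustrative only; the region is a placeholder and the exact family names reported by `Get-AzVMUsage` can vary.

```azurepowershell-interactive
# Compare current vCPU usage against quota in the target region (placeholder region name).
Get-AzVMUsage -Location 'westeurope' |
    Where-Object { $_.Name.LocalizedValue -match 'Regional vCPUs|DSv4|ESv3' } |
    Select-Object @{ n = 'Quota'; e = { $_.Name.LocalizedValue } }, CurrentValue, Limit
```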
sap | Prepare Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/prepare-network.md | If you have an existing network that you're ready to use with Azure Center for S ## Prerequisites - An Azure subscription.-- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error.+- [Review the quotas for your Azure subscription](/azure/quotas/view-quotas). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. - It's recommended to have multiple IP addresses in the subnet or subnets before you begin deployment. For example, it's always better to have a `/26` mask instead of `/29`. - The names including AzureFirewallSubnet, AzureFirewallManagementSubnet, AzureBastionSubnet and GatewaySubnet are reserved names within Azure. Please do not use these as the subnet names. - Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are: |
sap | Quickstart Create Distributed Non High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-create-distributed-non-high-availability.md | After you deploy infrastructure and [install SAP software](install-software.md) - A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Subscription or atleast all resource groups (Compute, Network,Storage). If you wish to install SAP Software through the Azure Center for SAP solutions, also provide **Reader and Data Access** role to the identity on SAP bits storage account where you would store the SAP Media. - A [network set up for your infrastructure deployment](prepare-network.md). - Availability of minimum 4 cores of either Standard_D4ds_v4 or Standard_E4s_v3 SKUS which will be used during Infrastructure deployment and Software Installation-- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. +- [Review the quotas for your Azure subscription](/azure/quotas/view-quotas). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. - Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are: - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS. - A single or cluster of Database VMs, which make up a single Database instance in the VIS. |
sap | Quickstart Create High Availability Namecustom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-create-high-availability-namecustom.md | After you deploy infrastructure and [install SAP software](install-software.md) - A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Subscription or atleast all resource groups (Compute, Network,Storage). If you wish to install SAP Software through the Azure Center for SAP solutions, also provide **Reader and Data Access** role to the identity on SAP bits storage account where you would store the SAP Media. - A [network set up for your infrastructure deployment](prepare-network.md). - Availability of minimum 4 cores of either Standard_D4ds_v4 or Standard_E4s_v3, SKUS which will be used during Infrastructure deployment and Software Installation-- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. +- [Review the quotas for your Azure subscription](/azure/quotas/view-quotas). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. - Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are: - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS. - A single or cluster of Database VMs, which make up a single Database instance in the VIS. |
sap | Tutorial Create High Availability Name Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/tutorial-create-high-availability-name-custom.md | This tutorial shows you how to use Azure CLI to deploy infrastructure for an SAP - A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Subscription or at least all resource groups (Compute, Network,Storage). If you wish to install SAP Software through the Azure Center for SAP solutions, also provide **Reader and Data Access** role to the identity on SAP bits storage account where you would store the SAP Media. - A [network set up for your infrastructure deployment](prepare-network.md). - Availability of minimum 4 cores of either Standard_D4ds_v4 or Standard_E4s_v3, SKUS which will be used during Infrastructure deployment and Software Installation-- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. +- [Review the quotas for your Azure subscription](/azure/quotas/view-quotas). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. - Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are: - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS. - A single or cluster of Database VMs, which make up a single Database instance in the VIS. |
sap | Deployment Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/deployment-checklist.md | Further included in same technical document(s) should be: - Security operations for Azure resources and workloads within - Security concept for protecting your SAP workload. This should include all aspects – networking and perimeter monitoring, application and database security, operating systems securing, and any infrastructure measures required, such as encryption. Identify the requirements with your compliance and security teams. - Microsoft recommends either Professional Direct, Premier or Unified Support contract. Identify your escalation paths and contacts for support with Microsoft. For SAP support requirements, see [SAP note 2015553](https://launchpad.support.sap.com/#/notes/2015553).-- The number of Azure subscriptions and core quota for the subscriptions. [Open support requests to increase quotas of Azure subscriptions](../../azure-portal/supportability/regional-quota-requests.md) as needed.+- The number of Azure subscriptions and core quota for the subscriptions. [Open support requests to increase quotas of Azure subscriptions](/azure/azure-portal/supportability/regional-quota-requests) as needed. - Data reduction and data migration plan for migrating SAP data into Azure. For SAP NetWeaver systems, SAP has guidelines on how to limit the volume of large amounts of data. See [this SAP guide](https://wiki.scn.sap.com/wiki/download/attachments/247399467/DVM_%20Guide_7.2.pdf?version=1&modificationDate=1549365516000&api=v2) about data management in SAP ERP systems. Some of the content also applies to NetWeaver and S/4HANA systems in general. - An automated deployment approach. Many customers start with scripts, using a combination of PowerShell, CLI, Ansible and Terraform. Microsoft developed solutions for SAP deployment automation are: |
security | Antimalware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md | The following code sample is available: To enable and configure Microsoft Antimalware for Azure Arc-enabled servers using PowerShell cmdlets: 1. Set up your PowerShell environment using this [documentation](https://github.com/Azure/azure-powershell) on GitHub.-2. Use the [New-AzConnectedMachineExtension](../../azure-arc/servers/manage-vm-extensions-powershell.md) cmdlet to enable and configure Microsoft Antimalware for your Arc-enabled servers. +2. Use the [New-AzConnectedMachineExtension](/azure/azure-arc/servers/manage-vm-extensions-powershell) cmdlet to enable and configure Microsoft Antimalware for your Arc-enabled servers. The following code samples are available: |
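As a complement to the steps above, the following is a minimal sketch of enabling the extension on an Arc-enabled server. The resource group, machine name, and region are placeholders, and the publisher and extension type are assumed to be the same IaaSAntimalware extension used for Azure virtual machines.

```azurepowershell-interactive
# Illustrative: enable Microsoft Antimalware on an Azure Arc-enabled server (placeholder names).
New-AzConnectedMachineExtension -ResourceGroupName 'myArcRG' `
    -MachineName 'myArcServer' `
    -Location 'westus' `
    -Name 'IaaSAntimalware' `
    -Publisher 'Microsoft.Azure.Security' `
    -ExtensionType 'IaaSAntimalware'
```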
sentinel | Automate Incident Handling With Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md | When you're configuring an automation rule and adding a **run playbook** action, #### Permissions in a multitenant architecture -Automation rules fully support cross-workspace and [multitenant deployments](extend-sentinel-across-workspaces-tenants.md#manage-workspaces-across-tenants-using-azure-lighthouse) (in the case of multitenant, using [Azure Lighthouse](../lighthouse/index.yml)). +Automation rules fully support cross-workspace and [multitenant deployments](extend-sentinel-across-workspaces-tenants.md#manage-workspaces-across-tenants-using-azure-lighthouse) (in the case of multitenant, using [Azure Lighthouse](/azure/lighthouse/)). Therefore, if your Microsoft Sentinel deployment uses a multitenant architecture, you can have an automation rule in one tenant run a playbook that lives in a different tenant, but permissions for Sentinel to run the playbooks must be defined in the tenant where the playbooks reside, not in the tenant where the automation rules are defined. |
sentinel | Connect Cef Syslog Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog-ama.md | If you're collecting messages from a log forwarder, the following prerequisites - [Create a Linux VM in the Azure portal](/azure/virtual-machines/linux/quick-create-portal). - [Supported Linux operating systems for Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview#linux). -- If your log forwarder *isn't* an Azure virtual machine, it must have the Azure Arc [Connected Machine agent](../azure-arc/servers/overview.md) installed on it.+- If your log forwarder *isn't* an Azure virtual machine, it must have the Azure Arc [Connected Machine agent](/azure/azure-arc/servers/overview) installed on it. - The Linux log forwarder VM must have Python 2.7 or 3 installed. Use the ``python --version`` or ``python3 --version`` command to check. If you're using Python 3, make sure it's set as the default command on the machine, or run scripts with the 'python3' command instead of 'python'. |
sentinel | Connect Custom Logs Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-custom-logs-ama.md | Certain custom applications are hosted on closed appliances that necessitate sen - [Create a Linux VM in the Azure portal](/azure/virtual-machines/linux/quick-create-portal). - [Supported Linux operating systems for Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview#linux). -- If your log forwarder *isn't* an Azure virtual machine, it must have the Azure Arc [Connected Machine agent](../azure-arc/servers/overview.md) installed on it.+- If your log forwarder *isn't* an Azure virtual machine, it must have the Azure Arc [Connected Machine agent](/azure/azure-arc/servers/overview) installed on it. - The Linux log forwarder VM must have Python 2.7 or 3 installed. Use the ``python --version`` or ``python3 --version`` command to check. If you're using Python 3, make sure it's set as the default command on the machine, or run scripts with the 'python3' command instead of 'python'. |
sentinel | Data Connectors Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md | Filter and ingest logs in text-file format from network or security applications ## Codeless connector platform connectors -The following connectors use the current codeless connector platform but don't have a specific documentation page generated. They're available from the content hub in Microsoft Sentinel as part of a solution. For instructions on how to configure these data connectors, review the instructions available with each data connectors within Microsoft Sentinel. +The following connectors use the current codeless connector platform but don't have a specific documentation page generated. They're available from the content hub in Microsoft Sentinel as part of a solution. For instructions on how to configure these data connectors, review the instructions available with each data connector within Microsoft Sentinel. |Codeless connector name |Azure Marketplace solution | ||| For more information about the codeless connector platform, see [Create a codele - [AbnormalSecurity (using Azure Functions)](data-connectors/abnormalsecurity.md) -## Akamai --- [[Recommended] Akamai Security Events via AMA](data-connectors/recommended-akamai-security-events-via-ama.md)- ## AliCloud - [AliCloud (using Azure Functions)](data-connectors/alicloud.md) For more information about the codeless connector platform, see [Create a codele - [Amazon Web Services](data-connectors/amazon-web-services.md) - [Amazon Web Services S3](data-connectors/amazon-web-services-s3.md) -## Apache Software Foundation --- [Apache HTTP Server](data-connectors/apache-http-server.md)- ## archTIS - [NC Protect](data-connectors/nc-protect.md) For more information about the codeless connector platform, see [Create a codele - [Armorblox (using Azure Functions)](data-connectors/armorblox.md) -## Aruba --- [[Recommended] Aruba ClearPass via AMA](data-connectors/recommended-aruba-clearpass-via-ama.md)- ## Atlassian - [Atlassian Confluence Audit (using Azure Functions)](data-connectors/atlassian-confluence-audit.md) For more information about the codeless connector platform, see [Create a codele - [Box (using Azure Functions)](data-connectors/box.md) -## Broadcom --- [[Recommended] Broadcom Symantec DLP via AMA](data-connectors/recommended-broadcom-symantec-dlp-via-ama.md)- ## Cisco - [Cisco AS) - [Cisco Duo Security (using Azure Functions)](data-connectors/cisco-duo-security.md)-- [Cisco Identity Services Engine](data-connectors/cisco-identity-services-engine.md)-- [Cisco Meraki](data-connectors/cisco-meraki.md) - [Cisco Secure Endpoint (AMP) (using Azure Functions)](data-connectors/cisco-secure-endpoint-amp.md)-- [Cisco Secure Cloud Analytics](data-connectors/cisco-secure-cloud-analytics.md)-- [Cisco UCS](data-connectors/cisco-ucs.md) - [Cisco Umbrella (using Azure Functions)](data-connectors/cisco-umbrella.md)-- [Cisco Web Security Appliance](data-connectors/cisco-web-security-appliance.md) ## Cisco Systems, Inc. 
- [Cisco Software Defined WAN](data-connectors/cisco-software-defined-wan.md) - [Cisco ETD (using Azure Functions)](data-connectors/cisco-etd.md) -## Citrix --- [Citrix ADC (former NetScaler)](data-connectors/citrix-adc-former-netscaler.md)- ## Claroty -- [[Recommended] Claroty via AMA](data-connectors/recommended-claroty-via-ama.md) - [Claroty xDome](data-connectors/claroty-xdome.md) ## Cloudflare For more information about the codeless connector platform, see [Create a codele - [Crowdstrike Falcon Data Replicator (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator.md) - [Crowdstrike Falcon Data Replicator V2 (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator-v2.md) -## Cyber Defense Group B.V. --- [ESET PROTECT](data-connectors/eset-protect.md)- ## CyberArk - [CyberArkAudit (using Azure Functions)](data-connectors/cyberarkaudit.md) For more information about the codeless connector platform, see [Create a codele - [Derdack SIGNL4](data-connectors/derdack-signl4.md) -## Digital Guardian --- [Digital Guardian Data Loss Prevention](data-connectors/digital-guardian-data-loss-prevention.md)- ## Digital Shadows - [Digital Shadows Searchlight (using Azure Functions)](data-connectors/digital-shadows-searchlight.md) For more information about the codeless connector platform, see [Create a codele - [Elastic Agent (Standalone)](data-connectors/elastic-agent-standalone.md) -## Exabeam --- [Exabeam Advanced Analytics](data-connectors/exabeam-advanced-analytics.md)- ## F5, Inc. - [F5 BIG-IP](data-connectors/f5-big-ip.md) For more information about the codeless connector platform, see [Create a codele - [Feedly](data-connectors/feedly.md) -## Fireeye --- [[Recommended] FireEye Network Security (NX) via AMA](data-connectors/recommended-fireeye-network-security-nx-via-ama.md)- ## Flare Systems - [Flare](data-connectors/flare.md) For more information about the codeless connector platform, see [Create a codele - [Gigamon AMX Data Connector](data-connectors/gigamon-amx-data-connector.md) -## GitLab --- [GitLab](data-connectors/gitlab.md)- ## Google - [Google Cloud Platform DNS (using Azure Functions)](data-connectors/google-cloud-platform-dns.md) For more information about the codeless connector platform, see [Create a codele - [Holm Security Asset Data (using Azure Functions)](data-connectors/holm-security-asset-data.md) -## Illumio --- [[Recommended] Illumio Core via AMA](data-connectors/recommended-illumio-core-via-ama.md)- ## Imperva - [Imperva Cloud WAF (using Azure Functions)](data-connectors/imperva-cloud-waf.md) For more information about the codeless connector platform, see [Create a codele - [Rapid7 Insight Platform Vulnerability Management Reports (using Azure Functions)](data-connectors/rapid7-insight-platform-vulnerability-management-reports.md) -## ISC --- [ISC Bind](data-connectors/isc-bind.md)- ## Island Technology Inc. 
- [Island Enterprise Browser Admin Audit (Polling CCP)](data-connectors/island-enterprise-browser-admin-audit-polling-ccp.md) - [Island Enterprise Browser User Activity (Polling CCP)](data-connectors/island-enterprise-browser-user-activity-polling-ccp.md) -## Ivanti --- [Ivanti Unified Endpoint Management](data-connectors/ivanti-unified-endpoint-management.md)- ## Jamf Software, LLC - [Jamf Protect](data-connectors/jamf-protect.md) -## Juniper --- [Juniper IDP](data-connectors/juniper-idp.md)-- [Juniper SRX](data-connectors/juniper-srx.md)- ## Linux - [Microsoft Sysmon For Linux](data-connectors/microsoft-sysmon-for-linux.md) For more information about the codeless connector platform, see [Create a codele - [MailGuard 365](data-connectors/mailguard-365.md) -## MarkLogic --- [MarkLogic Audit](data-connectors/marklogic-audit.md)- ## McAfee - [McAfee ePolicy Orchestrator (ePO)](data-connectors/mcafee-epolicy-orchestrator-epo.md) For more information about the codeless connector platform, see [Create a codele ## Microsoft Sentinel Community, Microsoft Corporation -- [[Recommended] Forcepoint CASB via AMA](data-connectors/recommended-forcepoint-casb-via-ama.md)-- [[Recommended] Forcepoint CSG via AMA](data-connectors/recommended-forcepoint-csg-via-ama.md)-- [[Recommended] Forcepoint NGFW via AMA](data-connectors/recommended-forcepoint-ngfw-via-ama.md)-- [Barracuda CloudGen Firewall](data-connectors/barracuda-cloudgen-firewall.md) - [Exchange Security Insights Online Collector (using Azure Functions)](data-connectors/exchange-security-insights-online-collector.md) - [Exchange Security Insights On-Premises Collector](data-connectors/exchange-security-insights-on-premises-collector.md) - [Microsoft Exchange Logs and Events](data-connectors/microsoft-exchange-logs-and-events.md) For more information about the codeless connector platform, see [Create a codele - [Qualys Vulnerability Management (using Azure Functions)](data-connectors/qualys-vulnerability-management.md) - [Qualys VM KnowledgeBase (using Azure Functions)](data-connectors/qualys-vm-knowledgebase.md) -## RedHat --- [JBoss Enterprise Application Platform](data-connectors/jboss-enterprise-application-platform.md)- ## Ridge Security Technology Inc. - [RIDGEBOT - data connector for Microsoft Sentinel](data-connectors/ridgebot-data-connector-for-microsoft-sentinel.md) |
sentinel | Apache Http Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/apache-http-server.md | - Title: "Apache HTTP Server connector for Microsoft Sentinel" -description: "Learn how to install the connector Apache HTTP Server to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Apache HTTP Server connector for Microsoft Sentinel --The Apache HTTP Server data connector provides the capability to ingest [Apache HTTP Server](http://httpd.apache.org/) events into Microsoft Sentinel. Refer to [Apache Logs documentation](https://httpd.apache.org/docs/2.4/logs.html) for more information. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | ApacheHTTPServer_CL<br/> | -| **Data collection rules support** | Not currently supported | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Clients (Source IP)** -- ```kusto -ApacheHTTPServer - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias ApacheHTTPServer and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/ApacheHTTPServer/Parsers/ApacheHTTPServer.txt). The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux or Windows --Install the agent on the Apache HTTP Server where the logs are generated. --> Logs from Apache HTTP Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----2. Configure the logs to be collected --Configure the custom log directory to be collected ----1. Select the link above to open your workspace advanced settings -2. From the left pane, select **Data**, select **Custom Logs** and click **Add+** -3. Click **Browse** to upload a sample of a Apache HTTP Server log file (e.g. access.log or error.log). Then, click **Next >** -4. Select **New line** as the record delimiter and click **Next >** -5. Select **Windows** or **Linux** and enter the path to Apache HTTP logs based on your configuration. Example: -6. After entering the path, click the '+' symbol to apply, then click **Next >** -7. Add **ApacheHTTPServer_CL** as the custom log Name and click **Done** ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-apachehttpserver?tab=Overview) in the Azure Marketplace. |
sentinel | Barracuda Cloudgen Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/barracuda-cloudgen-firewall.md | - Title: "Barracuda CloudGen Firewall connector for Microsoft Sentinel" -description: "Learn how to install the connector Barracuda CloudGen Firewall to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Barracuda CloudGen Firewall connector for Microsoft Sentinel --The Barracuda CloudGen Firewall (CGFW) connector allows you to easily connect your Barracuda CGFW logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (Barracuda)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | --## Query samples --**All logs** -- ```kusto -CGFWFirewallActivity - - | sort by TimeGenerated - ``` --**Top 10 Active Users (Last 24 Hours)** -- ```kusto -CGFWFirewallActivity - - | extend User = coalesce(User, "Unauthenticated") - - | summarize count() by User - - | take 10 - ``` --**Top 10 Applications (Last 24 Hours)** -- ```kusto -CGFWFirewallActivity - - | where isnotempty(Application) - - | summarize count() by Application - - | take 10 - ``` ----## Prerequisites --To integrate with Barracuda CloudGen Firewall make sure you have: --- **Barracuda CloudGen Firewall**: must be configured to export logs via Syslog---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CGFWFirewallActivity and load the function code or click [here](https://aka.ms/sentinel-barracudacloudfirewall-parser). The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux -- Typically, you should install the agent on a different computer from the one on which the logs are generated. -- Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. --1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. -2. Select **Apply below configuration to my machines** and select the facilities and severities. -3. Click **Save**. --Configure and connect the Barracuda CloudGen Firewall --[Follow instructions](https://aka.ms/sentinel-barracudacloudfirewall-connector) to configure syslog streaming. Use the IP address or hostname for the Linux machine with the Microsoft Sentinel agent installed for the Destination IP address. -----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-barracudacloudgenfirewall?tab=Overview) in the Azure Marketplace. |
sentinel | Cisco Identity Services Engine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-identity-services-engine.md | - Title: "Cisco Identity Services Engine connector for Microsoft Sentinel" -description: "Learn how to install the connector Cisco Identity Services Engine to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Cisco Identity Services Engine connector for Microsoft Sentinel --The Cisco Identity Services Engine (ISE) data connector provides the capability to ingest [Cisco ISE](https://www.cisco.com/c/en/us/products/security/identity-services-engine/https://docsupdatetracker.net/index.html) events into Microsoft Sentinel. It helps you gain visibility into what is happening in your network, such as who is connected, which applications are installed and running, and much more. Refer to [Cisco ISE logging mechanism documentation](https://www.cisco.com/c/en/us/td/docs/security/ise/2-7/admin_guide/b_ise_27_admin_guide/b_ISE_admin_27_maintain_monitor.html#reference_BAFBA5FA046A45938810A5DF04C00591) for more information. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Kusto function alias** | CiscoISEEvent | -| **Kusto function url** | https://aka.ms/sentinel-ciscoise-parser | -| **Log Analytics table(s)** | Syslog(CiscoISE)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Reporting Devices** -- ```kusto -CiscoISEEvent - - | summarize count() by DvcHostname - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-ciscoise-parser) to create the Kusto Functions alias, **CiscoISEEvent** --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. --1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. -2. Select **Apply below configuration to my machines** and select the facilities and severities. -3. Click **Save**. ---3. Configure Cisco ISE Remote Syslog Collection Locations --[Follow these instructions](https://www.cisco.com/c/en/us/td/docs/security/ise/2-7/admin_guide/b_ise_27_admin_guide/b_ISE_admin_27_maintain_monitor.html#ID58) to configure remote syslog collection locations in your Cisco ISE deployment. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoise?tab=Overview) in the Azure Marketplace. |
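To confirm that every Cisco ISE node keeps reporting after the remote syslog collection locations are configured, a minimal freshness check such as the following can help; it assumes only the CiscoISEEvent parser and DvcHostname field shown in the sample above.

```kusto
// Last event received per reporting ISE node (illustrative freshness check)
CiscoISEEvent
| summarize LastEvent = max(TimeGenerated), Events = count() by DvcHostname
| sort by LastEvent desc
```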
sentinel | Cisco Meraki | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-meraki.md | - Title: "Cisco Meraki connector for Microsoft Sentinel" -description: "Learn how to install the connector Cisco Meraki to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Cisco Meraki connector for Microsoft Sentinel --The [Cisco Meraki](https://meraki.cisco.com/) connector allows you to easily connect your Cisco Meraki (MX/MR/MS) logs with Microsoft Sentinel. This gives you more insight into your organization's network and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | meraki_CL<br/> | -| **Data collection rules support** | Not currently supported | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Total Events by Log Type** -- ```kusto -CiscoMeraki - - | summarize count() by LogType - ``` --**Top 10 Blocked Connections** -- ```kusto -CiscoMeraki - - | where LogType == "security_event" - - | where Action == "block" - - | summarize count() by SrcIpAddr, DstIpAddr, Action, Disposition - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with Cisco Meraki make sure you have: --- **Cisco Meraki**: must be configured to export logs via Syslog---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CiscoMeraki and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/CiscoMeraki/Parsers/CiscoMeraki.txt). The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Follow the configuration steps below to get Cisco Meraki device logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps. - For Cisco Meraki logs, we have issues while parsing the data by OMS agent data using default settings. -So we advice to capture the logs into custom table **meraki_CL** using below instructions. -1. Login to the server where you have installed OMS agent. -2. Download config file [meraki.conf](https://aka.ms/sentinel-ciscomerakioms-conf) - wget -v https://aka.ms/sentinel-ciscomerakioms-conf -O meraki.conf -3. Copy meraki.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder. - cp meraki.conf /etc/opt/microsoft/omsagent/<<workspace_id>>/conf/omsagent.d/ -4. Edit meraki.conf as follows: -- a. meraki.conf uses the port **22033** by default. Ensure this port is not being used by any other source on your server -- b. If you would like to change the default port for **meraki.conf** make sure that you dont use default Azure monitoring /log analytic agent ports I.e.(For example CEF uses TCP port **25226** or **25224**) -- c. 
replace **workspace_id** with the real value of your Workspace ID (lines 14,15,16,19) -5. Save the changes and restart the Azure Log Analytics agent for Linux service with the following command: - sudo /opt/microsoft/omsagent/bin/service_control restart -6. Modify the /etc/rsyslog.conf file - add the template below, preferably at the beginning of the file, before the directives section - $template meraki,"%timestamp% %hostname% %msg%\n" -7. Create a custom conf file in /etc/rsyslog.d/, for example 10-meraki.conf, and add the following filter conditions. -- These statements create a filter that forwards only the logs coming from Cisco Meraki to the custom table. -- Reference: [Filter Conditions - rsyslog 8.18.0.master documentation](https://rsyslog.readthedocs.io/en/latest/configuration/filters.html) -- Here is an example of the filtering that can be defined; it is not complete and will require additional testing for each installation. - if $rawmsg contains "flows" then @@127.0.0.1:22033;meraki - & stop - if $rawmsg contains "urls" then @@127.0.0.1:22033;meraki - & stop - if $rawmsg contains "ids-alerts" then @@127.0.0.1:22033;meraki - & stop - if $rawmsg contains "events" then @@127.0.0.1:22033;meraki - & stop - if $rawmsg contains "ip_flow_start" then @@127.0.0.1:22033;meraki - & stop - if $rawmsg contains "ip_flow_end" then @@127.0.0.1:22033;meraki - & stop -8. Restart rsyslog - systemctl restart rsyslog ---3. Configure and connect the Cisco Meraki device(s) --[Follow these instructions](https://documentation.meraki.com/General_Administration/Monitoring_and_Reporting/Meraki_Device_Reporting_-_Syslog%2C_SNMP_and_API) to configure the Cisco Meraki device(s) to forward syslog. Use the IP address or hostname of the Linux device with the Linux agent installed as the Destination IP address. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscomeraki?tab=Overview) in the Azure Marketplace. |
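Once the rsyslog filters above are in place, a quick way to confirm that each Meraki message type reaches the workspace is a per-LogType freshness query; this sketch assumes only the CiscoMeraki function and LogType field used in the query samples above.

```kusto
// Latest event observed per Meraki log type (flows, urls, ids-alerts, events, ...)
CiscoMeraki
| summarize LastSeen = max(TimeGenerated), Events = count() by LogType
| sort by LastSeen desc
```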
sentinel | Cisco Secure Cloud Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-secure-cloud-analytics.md | - Title: "Cisco Secure Cloud Analytics connector for Microsoft Sentinel" -description: "Learn how to install the connector Cisco Secure Cloud Analytics to connect your data source to Microsoft Sentinel." -- Previously updated : 05/30/2024------# Cisco Secure Cloud Analytics connector for Microsoft Sentinel --The [Cisco Secure Cloud Analytics](https://www.cisco.com/c/en/us/products/security/stealthwatch/https://docsupdatetracker.net/index.html) data connector provides the capability to ingest [Cisco Secure Cloud Analytics events](https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/management_console/securit_events_alarm_categories/7_4_2_Security_Events_and_Alarm_Categories_DV_2_1.pdf) into Microsoft Sentinel. Refer to [Cisco Secure Cloud Analytics documentation](https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/system_installation_configuration/7_5_0_System_Configuration_Guide_DV_1_3.pdf) for more information. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (StealthwatchEvent)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Sources** -- ```kusto -StealthwatchEvent - - | summarize count() by tostring(DvcHostname) - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**StealthwatchEvent**](https://aka.ms/sentinel-stealthwatch-parser) which is deployed with the Microsoft Sentinel Solution. - > This data connector has been developed using Cisco Secure Cloud Analytics version 7.3.2 --1. Install and onboard the agent for Linux or Windows --Install the agent on the Server where the Cisco Secure Cloud Analytics logs are forwarded. --Logs from Cisco Secure Cloud Analytics Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----2. Configure Cisco Secure Cloud Analytics event forwarding --Follow the configuration steps below to get Cisco Secure Cloud Analytics logs into Microsoft Sentinel. -1. Log in to the Stealthwatch Management Console (SMC) as an administrator. -2. In the menu bar, click **Configuration** **>** **Response Management**. -3. From the **Actions** section in the **Response Management** menu, click **Add > Syslog Message**. -4. In the Add Syslog Message Action window, configure parameters. -5. 
Enter the following custom format: -`|Lancope|Stealthwatch|7.3|{alarm_type_id}|0x7C|src={source_ip}|dst={target_ip}|dstPort={port}|proto={protocol}|msg={alarm_type_description}|fullmessage={details}|start={start_active_time}|end={end_active_time}|cat={alarm_category_name}|alarmID={alarm_id}|sourceHG={source_host_group_names}|targetHG={target_host_group_names}|sourceHostSnapshot={source_url}|targetHostSnapshot={target_url}|flowCollectorName={device_name}|flowCollectorIP={device_ip}|domain={domain_name}|exporterName={exporter_hostname}|exporterIPAddress={exporter_ip}|exporterInfo={exporter_label}|targetUser={target_username}|targetHostname={target_hostname}|sourceUser={source_username}|alarmStatus={alarm_status}|alarmSev={alarm_severity_name}` --6. Select the custom format from the list and click **OK** -7. Click **Response Management > Rules**. -8. Click **Add** and select **Host Alarm**. -9. Provide a rule name in the **Name** field. -10. Create rules by selecting values from the Type and Options menus. To add more rules, click the ellipsis icon. For a Host Alarm, combine as many possible types in a statement as possible. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscostealthwatch?tab=Overview) in the Azure Marketplace. |
sentinel | Cisco Ucs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-ucs.md | - Title: "Cisco UCS connector for Microsoft Sentinel" -description: "Learn how to install the connector Cisco UCS to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Cisco UCS connector for Microsoft Sentinel --The [Cisco Unified Computing System (UCS)](https://www.cisco.com/c/en/us/products/servers-unified-computing/https://docsupdatetracker.net/index.html) connector allows you to easily connect your Cisco UCS logs with Microsoft Sentinel This gives you more insight into your organization's network and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (CiscoUCS)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Target User Names for Audit Events** -- ```kusto -CiscoUCS - - | where Mneumonic == "AUDIT" - - | summarize count() by DstUserName - - | top 10 by DstUserName - ``` --**Top 10 Devices generating Audit Events** -- ```kusto -CiscoUCS - - | where Mneumonic == "AUDIT" - - | summarize count() by Computer - - | top 10 by Computer - ``` ----## Prerequisites --To integrate with Cisco UCS make sure you have: --- **Cisco UCS**: must be configured to export logs via Syslog---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CiscoUCS and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Cisco%20UCS/Parsers/CiscoUCS.txt). The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. - 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. - 2. Select **Apply below configuration to my machines** and select the facilities and severities. - 3. Click **Save**. ---3. Configure and connect the Cisco UCS --[Follow these instructions](https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-manager/110265-setup-syslog-for-ucs.html#configsremotesyslog) to configure the Cisco UCS to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoucs?tab=Overview) in the Azure Marketplace. |
sentinel | Cisco Web Security Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-web-security-appliance.md | - Title: "Cisco Web Security Appliance connector for Microsoft Sentinel" -description: "Learn how to install the connector Cisco Web Security Appliance to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Cisco Web Security Appliance connector for Microsoft Sentinel --[Cisco Web Security Appliance (WSA)](https://www.cisco.com/c/en/us/products/security/web-security-appliance/https://docsupdatetracker.net/index.html) data connector provides the capability to ingest [Cisco WSA Access Logs](https://www.cisco.com/c/en/us/td/docs/security/wsa/wsa_14-0/User-Guide/b_WSA_UserGuide_14_0/b_WSA_UserGuide_11_7_chapter_010101.html) into Microsoft Sentinel. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (CiscoWSAEvent)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Clients (Source IP)** -- ```kusto -CiscoWSAEvent - - | where notempty(SrcIpAddr) - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**CiscoWSAEvent**](https://aka.ms/sentinel-CiscoWSA-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using AsyncOS 14.0 for Cisco Web Security Appliance --1. Configure Cisco Web Security Appliance to forward logs via Syslog to remote server where you will install the agent. --[Follow these steps](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1134718) to configure Cisco Web Security Appliance to forward logs via Syslog -->**NOTE:** Select **Syslog Push** as a Retrieval Method. --2. Install and onboard the agent for Linux or Windows --Install the agent on the Server to which the logs will be forwarded. --> Logs on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----3. Check logs in Microsoft Sentinel --Open Log Analytics to check if the logs are received using the Syslog schema. -->**NOTE:** It may take up to 15 minutes before new logs will appear in Syslog table. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscowsa?tab=Overview) in the Azure Marketplace. |
sentinel | Citrix Adc Former Netscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/citrix-adc-former-netscaler.md | - Title: "Citrix ADC (former NetScaler) connector for Microsoft Sentinel" -description: "Learn how to install the connector Citrix ADC (former NetScaler) to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Citrix ADC (former NetScaler) connector for Microsoft Sentinel --The [Citrix ADC (former NetScaler)](https://www.citrix.com/products/citrix-adc/) data connector provides the capability to ingest Citrix ADC logs into Microsoft Sentinel. If you want to ingest Citrix WAF logs into Microsoft Sentinel, refer this [documentation](/azure/sentinel/data-connectors/citrix-waf-web-app-firewall). --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Event Types** -- ```kusto -CitrixADCEvent - - | where isnotempty(EventType) - - | summarize count() by EventType - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] -> 1. This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CitrixADCEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Citrix%20ADC/Parsers/CitrixADCEvent.txt), this function maps Citrix ADC (former NetScaler) events to Advanced Security Information Model [ASIM](/azure/sentinel/normalization). The function usually takes 10-15 minutes to activate after solution installation/update. -> 2. This parser requires a watchlist named **`Sources_by_SourceType`** --> i. If you don't have watchlist already created, please click [here](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FASIM%2Fdeploy%2FWatchlists%2FASimSourceType.json) to create. --> ii. Open watchlist **`Sources_by_SourceType`** and add entries for this data source. --> iii. The SourceType value for CitrixADC is **`CitrixADC`**. --> You can refer [this](/azure/sentinel/normalization-manage-parsers?WT.mc_id=Portal-fx#configure-the-sources-relevant-to-a-source-specific-parser) documentation for more details --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. - 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. - 2. Select **Apply below configuration to my machines** and select the facilities and severities. - 3. Click **Save**. ---3. Configure Citrix ADC to forward logs via Syslog --3.1 Navigate to **Configuration tab > System > Auditing > Syslog > Servers tab** -- 3.2 Specify **Syslog action name**. -- 3.3 Set IP address of remote Syslog server and port. 
-- 3.4 Set **Transport type** to **TCP** or **UDP** depending on your remote Syslog server configuration. -- 3.5 You can refer to the Citrix ADC (former NetScaler) [documentation](https://docs.netscaler.com/) for more details. --4. Check logs in Microsoft Sentinel --Open Log Analytics to check whether the logs are received using the Syslog schema. -->**NOTE:** It may take up to 15 minutes before new logs appear in the Syslog table. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-citrixadc?tab=Overview) in the Azure Marketplace. |
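Because the CitrixADCEvent parser requires the **Sources_by_SourceType** watchlist, it can be useful to confirm the watchlist entry exists before troubleshooting empty results. The sketch below uses the built-in _GetWatchlist() function; the SourceType column name is an assumption and may differ in your watchlist schema.

```kusto
// Check that the Sources_by_SourceType watchlist contains the CitrixADC entry (column name assumed)
_GetWatchlist('Sources_by_SourceType')
| where SourceType == "CitrixADC"
```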
sentinel | Digital Guardian Data Loss Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/digital-guardian-data-loss-prevention.md | - Title: "Digital Guardian Data Loss Prevention connector for Microsoft Sentinel" -description: "Learn how to install the connector Digital Guardian Data Loss Prevention to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Digital Guardian Data Loss Prevention connector for Microsoft Sentinel --[Digital Guardian Data Loss Prevention (DLP)](https://digitalguardian.com/platform-overview) data connector provides the capability to ingest Digital Guardian DLP logs into Microsoft Sentinel. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (DigitalGuardianDLPEvent)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Clients (Source IP)** -- ```kusto -DigitalGuardianDLPEvent - - | where notempty(SrcIpAddr) - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**DigitalGuardianDLPEvent**](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Digital%20Guardian%20Data%20Loss%20Prevention/Parsers/DigitalGuardianDLPEvent.yaml) which is deployed with the Microsoft Sentinel Solution. --1. Configure Digital Guardian to forward logs via Syslog to remote server where you will install the agent. --Follow these steps to configure Digital Guardian to forward logs via Syslog: --1.1. Log in to the Digital Guardian Management Console. --1.2. Select **Workspace** > **Data Export** > **Create Export**. --1.3. From the **Data Sources** list, select **Alerts** or **Events** as the data source. --1.4. From the **Export type** list, select **Syslog**. --1.5. From the **Type list**, select **UDP** or **TCP** as the transport protocol. --1.6. In the **Server** field, type the IP address of your Remote Syslog server. --1.7. In the **Port** field, type 514 (or other port if your Syslog server was configured to use non-default port). --1.8. From the **Severity Level** list, select a severity level. --1.9. Select the **Is Active** check box. --1.9. Click **Next**. --1.10. From the list of available fields, add Alert or Event fields for your data export. --1.11. Select a Criteria for the fields in your data export and click **Next**. --1.12. Select a group for the criteria and click **Next**. --1.13. Click **Test Query**. --1.14. Click **Next**. --1.15. Save the data export. --2. Install and onboard the agent for Linux or Windows --Install the agent on the Server to which the logs will be forwarded. --> Logs on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----3. Check logs in Microsoft Sentinel --Open Log Analytics to check if the logs are received using the Syslog schema. -->**NOTE:** It may take up to 15 minutes before new logs will appear in Syslog table. 
----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-digitalguardiandlp?tab=Overview) in the Azure Marketplace. |
sentinel | Eset Protect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/eset-protect.md | - Title: "ESET PROTECT connector for Microsoft Sentinel" -description: "Learn how to install the connector ESET PROTECT to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# ESET PROTECT connector for Microsoft Sentinel --This connector gathers all events generated by ESET software through the central management solution ESET PROTECT (formerly ESET Security Management Center). This includes Anti-Virus detections, Firewall detections but also more advanced EDR detections. For a complete list of events please refer to [the documentation](https://help.eset.com/protect_admin/latest/en-US/events-exported-to-json-format.html). --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (ESETPROTECT)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [ESET Netherlands](https://techcenter.eset.nl/en/) | --## Query samples --**ESET threat events** -- ```kusto -ESETPROTECT -- | where EventType == 'Threat_Event' -- | sort by TimeGenerated desc - ``` --**Top 10 detected threats** -- ```kusto -ESETPROTECT -- | where EventType == 'Threat_Event' -- | summarize ThreatCount = count() by tostring(ThreatName) -- | top 10 by ThreatCount - ``` --**ESET firewall events** -- ```kusto -ESETPROTECT -- | where EventType == 'FirewallAggregated_Event' -- | sort by TimeGenerated desc - ``` --**ESET threat events** -- ```kusto -ESETPROTECT -- | where EventType == 'Threat_Event' -- | sort by TimeGenerated desc - ``` --**ESET threat events from Real-time file system protection** -- ```kusto -ESETPROTECT -- | where EventType == 'Threat_Event' -- | where ScanId == 'Real-time file system protection' -- | sort by TimeGenerated desc - ``` --**Query ESET threat events from On-demand scanner** -- ```kusto -ESETPROTECT -- | where EventType == 'Threat_Event' -- | where ScanId == 'On-demand scanner' -- | sort by TimeGenerated desc - ``` --**Top hosts by number of threat events** -- ```kusto -ESETPROTECT -- | where EventType == 'Threat_Event' -- | summarize threat_events_count = count() by HostName -- | sort by threat_events_count desc - ``` --**ESET web sites filter** -- ```kusto -ESETPROTECT -- | where EventType == 'FilteredWebsites_Event' -- | sort by TimeGenerated desc - ``` --**ESET audit events** -- ```kusto -ESETPROTECT -- | where EventType == 'Audit_Event' -- | sort by TimeGenerated desc - ``` ----## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias ESETPROTECT and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/ESETPROTECT/Parsers/ESETPROTECT.txt).The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. 
Configure the logs to be collected --Configure the facilities you want to collect and their severities. --1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. -2. Select **Apply below configuration to my machines** and select the facilities and severities. The default ESET PROTECT facility is **user**. -3. Click **Save**. ---3. Configure ESET PROTECT --Configure ESET PROTECT to send all events through Syslog. --1. Follow [these instructions](https://help.eset.com/protect_admin/latest/en-US/admin_server_settings_syslog.html) to configure syslog output. Make sure to select **BSD** as the format and **TCP** as the transport. --2. Follow [these instructions](https://help.eset.com/protect_admin/latest/en-US/admin_server_settings_export_to_syslog.html) to export all logs to syslog. Select **JSON** as the output format. -->**NOTE:** Refer to the [documentation](/azure/sentinel/connect-log-forwarder?tabs=rsyslog#security-considerations) for setting up the log forwarder for both local and cloud storage. -----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberdefensegroupbv1625581149103.eset_protect?tab=Overview) in the Azure Marketplace. |
sentinel | Exabeam Advanced Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/exabeam-advanced-analytics.md | - Title: "Exabeam Advanced Analytics connector for Microsoft Sentinel" -description: "Learn how to install the connector Exabeam Advanced Analytics to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Exabeam Advanced Analytics connector for Microsoft Sentinel --The [Exabeam Advanced Analytics](https://www.exabeam.com/ueba/advanced-analytics-and-mitre-detect-and-stop-threats/) data connector provides the capability to ingest Exabeam Advanced Analytics events into Microsoft Sentinel. Refer to [Exabeam Advanced Analytics documentation](https://docs.exabeam.com/) for more information. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (Exabeam)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Clients (Source IP)** -- ```kusto -ExabeamEvent - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Exabeam Advanced Analytics and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Exabeam%20Advanced%20Analytics/Parsers/ExabeamEvent.txt), on the second line of the query, enter the hostname(s) of your Exabeam Advanced Analytics device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. ---> [!NOTE] - > This data connector has been developed using Exabeam Advanced Analytics i54 (Syslog) --1. Install and onboard the agent for Linux or Windows --Install the agent on the server where the Exabeam Advanced Analytic logs are generated or forwarded. --> Logs from Exabeam Advanced Analytic deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----2. Configure the logs to be collected --Configure the custom log directory to be collected ---3. Configure Exabeam event forwarding to Syslog --[Follow these instructions](https://docs.exabeam.com/en/advanced-analytics/i56/advanced-analytics-administration-guide/125351-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) to send Exabeam Advanced Analytics activity log data via syslog. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-exabeamadvancedanalytics?tab=Overview) in the Azure Marketplace. |
sentinel | Gitlab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/gitlab.md | - Title: "GitLab connector for Microsoft Sentinel" -description: "Learn how to install the connector GitLab to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# GitLab connector for Microsoft Sentinel --The [GitLab](https://about.gitlab.com/solutions/devops-platform/) connector allows you to easily connect your GitLab (GitLab Enterprise Edition - Standalone) logs with Microsoft Sentinel. This gives you more security insight into your organization's DevOps pipelines. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (GitlabAccess)<br/> Syslog (GitlabAudit)<br/> Syslog (GitlabApp)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**GitLab Application Logs** -- ```kusto -GitLabApp - | sort by TimeGenerated - ``` --**GitLab Audit Logs** -- ```kusto -GitLabAudit - | sort by TimeGenerated - ``` --**GitLab Access Logs** -- ```kusto -GitLabAccess - | sort by TimeGenerated - ``` ----## Vendor installation instructions --Configuration -->This data connector depends on three parsers based on a Kusto Function to work as expected [**GitLab Access Logs**](https://aka.ms/sentinel-GitLabAccess-parser), [**GitLab Audit Logs**](https://aka.ms/sentinel-GitLabAudit-parser) and [**GitLab Application Logs**](https://aka.ms/sentinel-GitLabApp-parser) which are deployed with the Microsoft Sentinel Solution. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. --1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. -2. Select **Apply below configuration to my machines** and select the facilities and severities. -3. Click **Save**. -----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-gitlab?tab=Overview) in the Azure Marketplace. |
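Beyond the simple sorts in the GitLab samples above, a small illustrative aggregation can show audit activity over time; it assumes only the GitLabAudit function referenced in this article.

```kusto
// Daily GitLab audit event volume (illustrative sketch)
GitLabAudit
| summarize Events = count() by bin(TimeGenerated, 1d)
| sort by TimeGenerated asc
```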
sentinel | Isc Bind | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/isc-bind.md | - Title: "ISC Bind connector for Microsoft Sentinel" -description: "Learn how to install the connector ISC Bind to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# ISC Bind connector for Microsoft Sentinel --The [ISC Bind](https://www.isc.org/bind/) connector allows you to easily connect your ISC Bind logs with Microsoft Sentinel. This gives you more insight into your organization's network traffic data, DNS query data, traffic statistics and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (ISCBind)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | --## Query samples --**Top 10 Domains Queried** -- ```kusto -ISCBind -- | where EventSubType == "request" -- | summarize count() by DnsQuery -- | top 10 by count_ - ``` --**Top 10 clients by Source IP Address** -- ```kusto -ISCBind -- | where EventSubType == "request" -- | summarize count() by SrcIpAddr -- | top 10 by count_ - ``` ----## Prerequisites --To integrate with ISC Bind make sure you have: --- **ISC Bind**: must be configured to export logs via Syslog---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias ISCBind and load the function code or click [here](https://aka.ms/sentinel-iscbind-parser).The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. - 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. - 2. Select **Apply below configuration to my machines** and select the facilities and severities. - 3. Click **Save**. ---3. Configure and connect the ISC Bind --1. Follow these instructions to configure the ISC Bind to forward syslog: -2. Configure Syslog to send the Syslog traffic to Agent. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-iscbind?tab=Overview) in the Azure Marketplace. |
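As an illustrative extension of the ISC Bind samples above, the following sketch highlights clients issuing requests for many distinct domains; it assumes only the ISCBind function and the EventSubType, DnsQuery, and SrcIpAddr fields already used in this article.

```kusto
// Clients querying the largest number of distinct domains (illustrative sketch)
ISCBind
| where EventSubType == "request"
| summarize Requests = count(), DistinctDomains = dcount(DnsQuery) by SrcIpAddr
| top 10 by DistinctDomains
```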
sentinel | Ivanti Unified Endpoint Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ivanti-unified-endpoint-management.md | - Title: "Ivanti Unified Endpoint Management connector for Microsoft Sentinel" -description: "Learn how to install the connector Ivanti Unified Endpoint Management to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Ivanti Unified Endpoint Management connector for Microsoft Sentinel --The [Ivanti Unified Endpoint Management](https://www.ivanti.com/products/unified-endpoint-manager) data connector provides the capability to ingest [Ivanti UEM Alerts](https://help.ivanti.com/ld/help/en_US/LDMS/11.0/Windows/alert-c-monitoring-overview.htm) into Microsoft Sentinel. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (IvantiUEMEvent)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Sources** -- ```kusto -IvantiUEMEvent - - | summarize count() by tostring(SrcHostname) - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**IvantiUEMEvent**](https://aka.ms/sentinel-ivantiuem-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using Ivanti Unified Endpoint Management Release 2021.1 Version 11.0.3.374 --1. Install and onboard the agent for Linux or Windows --Install the agent on the Server where the Ivanti Unified Endpoint Management Alerts are forwarded. --> Logs from Ivanti Unified Endpoint Management Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----2. Configure Ivanti Unified Endpoint Management alert forwarding. --[Follow the instructions](https://help.ivanti.com/ld/help/en_US/LDMS/11.0/Windows/alert-t-define-action.htm) to set up Alert Actions to send logs to syslog server. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ivantiuem?tab=Overview) in the Azure Marketplace. |
sentinel | Jboss Enterprise Application Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/jboss-enterprise-application-platform.md | - Title: "JBoss Enterprise Application Platform connector for Microsoft Sentinel" -description: "Learn how to install the connector JBoss Enterprise Application Platform to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# JBoss Enterprise Application Platform connector for Microsoft Sentinel --The JBoss Enterprise Application Platform data connector provides the capability to ingest [JBoss](https://www.redhat.com/en/technologies/jboss-middleware/application-platform) events into Microsoft Sentinel. Refer to [Red Hat documentation](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/logging_with_jboss_eap) for more information. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | JBossLogs_CL<br/> | -| **Data collection rules support** | Not currently supported | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Processes** -- ```kusto -JBossEvent - - | summarize count() by ActingProcessName - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**JBossEvent**](https://aka.ms/sentinel-jbosseap-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using JBoss Enterprise Application Platform 7.4.0. --1. Install and onboard the agent for Linux or Windows --Install the agent on the JBoss server where the logs are generated. --> Logs from JBoss Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents. - -----2. Configure the logs to be collected --Configure the custom log directory to be collected ----1. Select the link above to open your workspace advanced settings -2. Click **+Add custom** -3. Click **Browse** to upload a sample of a JBoss log file (e.g. server.log). Then, click **Next >** -4. Select **Timestamp** as the record delimiter and select Timestamp format **YYYY-MM-DD HH:MM:SS** from the dropdown list then click **Next >** -5. Select **Windows** or **Linux** and enter the path to JBoss logs based on your configuration. Example: -->Standalone server: EAP_HOME/standalone/log/server.log -->Managed domain: EAP_HOME/domain/servers/SERVER_NAME/log/server.log --6. After entering the path, click the '+' symbol to apply, then click **Next >** -7. Add **JBossLogs** as the custom log Name and click **Done** --3. Check logs in Microsoft Sentinel --Open Log Analytics to check if the logs are received using the JBossLogs_CL Custom log table. -->**NOTE:** It may take up to 30 minutes before new logs will appear in JBossLogs_CL table. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-jboss?tab=Overview) in the Azure Marketplace. |
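For the "Check logs in Microsoft Sentinel" step above, a minimal verification query against the custom table looks like the following; it assumes only the JBossLogs_CL table named in this article.

```kusto
// Confirm that JBoss log records are arriving in the custom table
JBossLogs_CL
| sort by TimeGenerated desc
| take 10
```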
sentinel | Juniper Idp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/juniper-idp.md | - Title: "Juniper IDP connector for Microsoft Sentinel" -description: "Learn how to install the connector Juniper IDP to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Juniper IDP connector for Microsoft Sentinel --The [Juniper](https://www.juniper.net/) IDP data connector provides the capability to ingest [Juniper IDP](https://www.juniper.net/documentation/us/en/software/junos/idp-policy/topics/topic-map/security-idp-overview.html) events into Microsoft Sentinel. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | JuniperIDP_CL<br/> | -| **Data collection rules support** | Not currently supported | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Clients (Source IP)** -- ```kusto -JuniperIDP - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on Kusto Function to work as expected [**JuniperIDP**](https://aka.ms/sentinel-JuniperIDP-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > IDP OS 5.1 and above is supported by this data connector. --1. Install and onboard the agent for Linux or Windows --Install the agent on the Server. -----2. Configure the logs to be collected --Follow the configuration steps below to get Juniper IDP logs into Microsoft Sentinel. This configuration enriches events generated by Juniper IDP module to provide visibility on log source information for Juniper IDP logs. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps. -1. Download config file [juniper_idp.conf](https://aka.ms/sentinel-JuniperIDP-conf). -2. Login to the server where you have installed Azure Log Analytics agent. -3. Copy juniper_idp.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder. -4. Edit juniper_idp.conf as follows: -- i. change the listen port for receiving logs based on your configuration (line 3) -- ii. replace **workspace_id** with real value of your Workspace ID (lines 58,59,60,63) -5. Save changes and restart the Azure Log Analytics agent for Linux service with the following command: - sudo /opt/microsoft/omsagent/bin/service_control restart -6. To configure a remote syslog destination, please reference the [SRX Getting Started - Configure System Logging](https://kb.juniper.net/InfoCenter/index?page=content&id=kb16502). -----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-juniperidp?tab=Overview) in the Azure Marketplace. |
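After restarting the agent and configuring the remote syslog destination as described above, a simple ingestion check against the JuniperIDP_CL table can confirm events are flowing; this is an illustrative sketch, not part of the vendor instructions.

```kusto
// Hourly count of Juniper IDP records landing in the custom table over the last day
JuniperIDP_CL
| where TimeGenerated > ago(1d)
| summarize Records = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```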
sentinel | Juniper Srx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/juniper-srx.md | - Title: "Juniper SRX connector for Microsoft Sentinel" -description: "Learn how to install the connector Juniper SRX to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Juniper SRX connector for Microsoft Sentinel --The [Juniper SRX](https://www.juniper.net/us/en/products-services/security/srx-series/) connector allows you to easily connect your Juniper SRX logs with Microsoft Sentinel. This gives you more insight into your organization's network and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (JuniperSRX)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Users with Failed Passwords** -- ```kusto -JuniperSRX -- | where EventType == "sshd" -- | where EventName == "Failed password" -- | summarize count() by UserName -- | top 10 by count_ - ``` --**Top 10 IDS Detections by Source IP Address** -- ```kusto -JuniperSRX -- | where EventType == "RT_IDS" -- | summarize count() by SrcIpAddr -- | top 10 by count_ - ``` ----## Prerequisites --To integrate with Juniper SRX make sure you have: --- **Juniper SRX**: must be configured to export logs via Syslog---## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias JuniperSRX and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Juniper%20SRX/Parsers/JuniperSRX.txt), on the second line of the query, enter the hostname(s) of your JuniperSRX device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. - 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. - 2. Select **Apply below configuration to my machines** and select the facilities and severities. - 3. Click **Save**. ---3. Configure and connect the Juniper SRX --1. Follow these instructions to configure the Juniper SRX to forward syslog: -2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-junipersrx?tab=Overview) in the Azure Marketplace. |
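Building on the failed-password sample above, the sketch below adds a time dimension so bursts of failures stand out; it assumes only the JuniperSRX function and the EventType, EventName, and UserName fields already shown.

```kusto
// Failed SSH password attempts per user, bucketed by hour (illustrative sketch)
JuniperSRX
| where EventType == "sshd" and EventName == "Failed password"
| summarize Failures = count() by UserName, bin(TimeGenerated, 1h)
| where Failures > 5
| sort by Failures desc
```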
sentinel | Marklogic Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/marklogic-audit.md | - Title: "MarkLogic Audit connector for Microsoft Sentinel" -description: "Learn how to install the connector MarkLogic Audit to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# MarkLogic Audit connector for Microsoft Sentinel --MarkLogic data connector provides the capability to ingest [MarkLogicAudit](https://www.marklogic.com/) logs into Microsoft Sentinel. Refer to [MarkLogic documentation](https://docs.marklogic.com/guide/getting-started) for more information. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | MarkLogicAudit_CL<br/> | -| **Data collection rules support** | Not currently supported | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**MarkLogicAudit - All Activities.** -- ```kusto -MarkLogicAudit_CL - - | sort by TimeGenerated desc - ``` ----## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias MarkLogicAudit and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/MarkLogicAudit/Parsers/MarkLogicAudit.txt) on the second line of the query, enter the hostname(s) of your MarkLogicAudit device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux or Windows --Install the agent on the Tomcat Server where the logs are generated. --> Logs from MarkLogic Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----2. Configure MarkLogicAudit to enable auditing --Perform the following steps to enable auditing for a group: -->Access the Admin Interface with a browser; -->Open the Audit Configuration screen (Groups > group_name > Auditing); -->Select True for the Audit Enabled radio button; -->Configure any audit events and/or audit restrictions you want; -->Click OK. -- Refer to the [MarkLogic documentation for more details](https://docs.marklogic.com/guide/admin/auditing) --3. Configure the logs to be collected --Configure the custom log directory to be collected ----1. Select the link above to open your workspace advanced settings -2. From the left pane, select **Settings**, select **Custom Logs** and click **+Add custom log** -3. Click **Browse** to upload a sample of a MarkLogicAudit log file. Then, click **Next >** -4. Select **Timestamp** as the record delimiter and click **Next >** -5. Select **Windows** or **Linux** and enter the path to MarkLogicAudit logs based on your configuration -6. After entering the path, click the '+' symbol to apply, then click **Next >** -7. Add **MarkLogicAudit** as the custom log Name (the '_CL' suffix will be added automatically) and click **Done**. --Validate connectivity --It may take upwards of 20 minutes until your logs start to appear in Microsoft Sentinel. 
----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-marklogicaudit?tab=Overview) in the Azure Marketplace. |
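For the "Validate connectivity" step above, a minimal check against the custom table is enough; this sketch assumes only the MarkLogicAudit_CL table named in this article.

```kusto
// Count MarkLogic audit records received in the last hour (illustrative connectivity check)
MarkLogicAudit_CL
| where TimeGenerated > ago(1h)
| summarize Records = count()
```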
sentinel | Recommended Akamai Security Events Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-akamai-security-events-via-ama.md | - Title: "[Recommended] Akamai Security Events via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Akamai Security Events via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Akamai Security Events via AMA connector for Microsoft Sentinel --Akamai Solution for Microsoft Sentinel provides the capability to ingest [Akamai Security Events](https://www.akamai.com/us/en/products/security/) into Microsoft Sentinel. Refer to [Akamai SIEM Integration documentation](https://developer.akamai.com/tools/integrations/siem) for more information. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (AkamaiSecurityEvents)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Countries** - ```kusto -AkamaiSIEMEvent - - | summarize count() by SrcGeoCountry - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] Akamai Security Events via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Akamai Security Events and load the function code or click [here](https://aka.ms/sentinel-akamaisecurityevents-parser), on the second line of the query, enter the hostname(s) of your Akamai Security Events device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-akamai?tab=Overview) in the Azure Marketplace. |
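As an illustrative companion to the country sample above, the following sketch shows overall Akamai event volume and country spread per hour; it assumes only the AkamaiSIEMEvent function and SrcGeoCountry field already referenced.

```kusto
// Hourly Akamai security event volume and number of distinct source countries
AkamaiSIEMEvent
| where TimeGenerated > ago(1d)
| summarize Events = count(), Countries = dcount(SrcGeoCountry) by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```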
sentinel | Recommended Aruba Clearpass Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-aruba-clearpass-via-ama.md | - Title: "[Recommended] Aruba ClearPass via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Aruba ClearPass via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Aruba ClearPass via AMA connector for Microsoft Sentinel --The [Aruba ClearPass](https://www.arubanetworks.com/products/security/network-access-control/secure-access/) connector allows you to easily connect your Aruba ClearPass with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (ArubaClearPass)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | --## Query samples --**Top 10 Events by Username** - ```kusto -ArubaClearPass - - | summarize count() by UserName -- | top 10 by count_ - ``` --**Top 10 Error Codes** - ```kusto -ArubaClearPass - - | summarize count() by ErrorCode -- | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] Aruba ClearPass via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias ArubaClearPass and load the function code or click [here](https://aka.ms/sentinel-arubaclearpass-parser). The function usually takes 10-15 minutes to activate after solution installation/update. ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Broadcom Symantec Dlp Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-broadcom-symantec-dlp-via-ama.md | - Title: "[Recommended] Broadcom Symantec DLP via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Broadcom Symantec DLP via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Broadcom Symantec DLP via AMA connector for Microsoft Sentinel --The [Broadcom Symantec Data Loss Prevention (DLP)](https://www.broadcom.com/products/cyber-security/information-protection/data-loss-prevention) connector allows you to easily connect your Symantec DLP with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's information, where it travels, and improves your security operation capabilities. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (SymantecDLP)<br/> | -| **Data collection rules support** |[Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Triggered Activities** - ```kusto -SymantecDLP - - | summarize count() by Activity -- | top 10 by count_ - ``` --**Top 10 Filenames** - ```kusto -SymantecDLP - - | summarize count() by FileName -- | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] Broadcom Symantec DLP via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias SymantecDLP and load the function code or click [here](https://aka.ms/sentinel-symantecdlp-parser). The function usually takes 10-15 minutes to activate after solution installation/update. ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-broadcomsymantecdlp?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Cisco Secure Email Gateway Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-cisco-secure-email-gateway-via-ama.md | - Title: "[Recommended] Cisco Secure Email Gateway via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Cisco Secure Email Gateway via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Cisco Secure Email Gateway via AMA connector for Microsoft Sentinel --The [Cisco Secure Email Gateway (SEG)](https://www.cisco.com/c/en/us/products/security/email-security/index.html) data connector provides the capability to ingest [Cisco SEG Consolidated Event Logs](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1061902) into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (CiscoSEG)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Senders** - ```kusto -CiscoSEGEvent - - | where isnotempty(SrcUserName) - - | summarize count() by SrcUserName - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] Cisco Secure Email Gateway via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**CiscoSEGEvent**](https://aka.ms/sentinel-CiscoSEG-parser) which is deployed with the Microsoft Sentinel Solution. ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoseg?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Claroty Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-claroty-via-ama.md | - Title: "[Recommended] Claroty via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Claroty via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Claroty via AMA connector for Microsoft Sentinel --The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/industrial-cybersecurity/sra) events into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (Claroty)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Destinations** - ```kusto -ClarotyEvent - - | where isnotempty(DstIpAddr) - - | summarize count() by DstIpAddr - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] Claroty via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**ClarotyEvent**](https://aka.ms/sentinel-claroty-parser) which is deployed with the Microsoft Sentinel Solution. ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-claroty?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Fireeye Network Security Nx Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-fireeye-network-security-nx-via-ama.md | - Title: "[Recommended] FireEye Network Security (NX) via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] FireEye Network Security (NX) via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] FireEye Network Security (NX) via AMA connector for Microsoft Sentinel --The [FireEye Network Security (NX)](https://www.fireeye.com/products/network-security.html) data connector provides the capability to ingest FireEye Network Security logs into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (FireEyeNX)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Sources** - ```kusto -FireEyeNXEvent - - | where isnotempty(SrcIpAddr) - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] FireEye Network Security (NX) via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**FireEyeNXEvent**](https://aka.ms/sentinel-FireEyeNX-parser) which is deployed with the Microsoft Sentinel Solution. ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fireeyenx?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Forcepoint Casb Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-forcepoint-casb-via-ama.md | - Title: "[Recommended] Forcepoint CASB via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Forcepoint CASB via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Forcepoint CASB via AMA connector for Microsoft Sentinel --The Forcepoint CASB (Cloud Access Security Broker) Connector allows you to automatically export CASB logs and events into Microsoft Sentinel in real-time. This enriches visibility into user activities across locations and cloud applications, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (ForcepointCASB)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | --## Query samples --**Top 5 Users With The Highest Number Of Logs** - ```kusto -CommonSecurityLog -- | summarize Count = count() by DestinationUserName -- | top 5 by DestinationUserName -- | render barchart - ``` --**Top 5 Users by Number of Failed Attempts ** - ```kusto -CommonSecurityLog -- | extend outcome = coalesce(column_ifexists("EventOutcome", ""), tostring(split(split(AdditionalExtensions, ";", 2)[0], "=", 1)[0]), "") -- | extend reason = coalesce(column_ifexists("Reason", ""), tostring(split(split(AdditionalExtensions, ";", 3)[0], "=", 1)[0]), "") -- | where outcome =="Failure" -- | summarize Count= count() by DestinationUserName -- | render barchart - ``` ----## Prerequisites --To integrate with [Recommended] Forcepoint CASB via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) --3. Forcepoint integration installation guide --To complete the installation of this Forcepoint product integration, follow the guide linked below. --[Installation Guide >](https://frcpnt.com/casb-sentinel) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-casb?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Forcepoint Csg Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-forcepoint-csg-via-ama.md | - Title: "[Recommended] Forcepoint CSG via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Forcepoint CSG via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Forcepoint CSG via AMA connector for Microsoft Sentinel --Forcepoint Cloud Security Gateway is a converged cloud security service that provides visibility, control, and threat protection for users and data, wherever they are. For more information visit: https://www.forcepoint.com/product/cloud-security-gateway --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (Forcepoint CSG)<br/> CommonSecurityLog (Forcepoint CSG)<br/> | -| **Data collection rules support** |[Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent)| -| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | --## Query samples --**Top 5 Web requested Domains with log severity equal to 6 (Medium)** - ```kusto -CommonSecurityLog -- | where TimeGenerated <= ago(0m) -- | where DeviceVendor == "Forcepoint CSG" -- | where DeviceProduct == "Web" -- | where LogSeverity == 6 -- | where DeviceCustomString2 != "" -- | summarize Count=count() by DeviceCustomString2 -- | top 5 by Count -- | render piechart - ``` --**Top 5 Web Users with 'Action' equal to 'Blocked'** - ```kusto -CommonSecurityLog -- | where TimeGenerated <= ago(0m) -- | where DeviceVendor == "Forcepoint CSG" -- | where DeviceProduct == "Web" -- | where Activity == "Blocked" -- | where SourceUserID != "Not available" -- | summarize Count=count() by SourceUserID -- | top 5 by Count -- | render piechart - ``` --**Top 5 Sender Email Addresses Where Spam Score Greater Than 10.0** - ```kusto -CommonSecurityLog -- | where TimeGenerated <= ago(0m) -- | where DeviceVendor == "Forcepoint CSG" -- | where DeviceProduct == "Email" -- | where DeviceCustomFloatingPoint1 > 10.0 -- | summarize Count=count() by SourceUserName -- | top 5 by Count -- | render barchart - ``` ----## Prerequisites --To integrate with [Recommended] Forcepoint CSG via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ----2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF). ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-csg?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Forcepoint Ngfw Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-forcepoint-ngfw-via-ama.md | - Title: "[Recommended] Forcepoint NGFW via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Forcepoint NGFW via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Forcepoint NGFW via AMA connector for Microsoft Sentinel --The Forcepoint NGFW (Next Generation Firewall) connector allows you to automatically export user-defined Forcepoint NGFW logs into Microsoft Sentinel in real-time. This enriches visibility into user activities recorded by NGFW, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (ForcePointNGFW)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | --## Query samples --**Show all terminated actions from the Forcepoint NGFW** - ```kusto --CommonSecurityLog -- | where DeviceVendor == "Forcepoint" -- | where DeviceProduct == "NGFW" -- | where DeviceAction == "Terminate" -- ``` --**Show all Forcepoint NGFW with suspected compromise behaviour** - ```kusto --CommonSecurityLog -- | where DeviceVendor == "Forcepoint" -- | where DeviceProduct == "NGFW" -- | where Activity contains "compromise" -- ``` --**Show chart grouping all Forcepoint NGFW events by Activity type** - ```kusto --CommonSecurityLog -- | where DeviceVendor == "Forcepoint" -- | where DeviceProduct == "NGFW" -- | summarize count=count() by Activity - - | render barchart -- ``` ----## Prerequisites --To integrate with [Recommended] Forcepoint NGFW via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) --3. Forcepoint integration installation guide --To complete the installation of this Forcepoint product integration, follow the guide linked below. --[Installation Guide >](https://frcpnt.com/ngfw-sentinel) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-ngfw?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Illumio Core Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-illumio-core-via-ama.md | - Title: "[Recommended] Illumio Core via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Illumio Core via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Illumio Core via AMA connector for Microsoft Sentinel --The [Illumio Core](https://www.illumio.com/products/) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (IllumioCore)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft](https://support.microsoft.com) | --## Query samples --**Top 10 Event Types** - ```kusto -IllumioCoreEvent - - | where isnotempty(EventType) - - | summarize count() by EventType - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] Illumio Core via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias IllumioCoreEvent and load the function code or click [here](https://aka.ms/sentinel-IllumioCore-parser). The function usually takes 10-15 minutes to activate after solution installation/update and maps Illumio Core events to Microsoft Sentinel Information Model (ASIM). ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-illumiocore?tab=Overview) in the Azure Marketplace. |
sentinel | Extend Sentinel Across Workspaces Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md | To configure and manage multiple Log Analytics workspaces enabled for Microsoft ## Manage workspaces across tenants using Azure Lighthouse -As mentioned above, in many scenarios, the different Log Analytics workspaces enabled for Microsoft Sentinels can be located in different Microsoft Entra tenants. You can use [Azure Lighthouse](../lighthouse/overview.md) to extend all cross-workspace activities across tenant boundaries, allowing users in your managing tenant to work on workspaces across all tenants. +As mentioned above, in many scenarios, the different Log Analytics workspaces enabled for Microsoft Sentinels can be located in different Microsoft Entra tenants. You can use [Azure Lighthouse](/azure/lighthouse/overview) to extend all cross-workspace activities across tenant boundaries, allowing users in your managing tenant to work on workspaces across all tenants. -Once Azure Lighthouse is [onboarded](../lighthouse/how-to/onboard-customer.md), use the [directory + subscription selector](./multiple-tenants-service-providers.md#how-to-access-microsoft-sentinel-in-managed-tenants) on the Azure portal to select all the subscriptions containing workspaces you want to manage, in order to ensure that they'll all be available in the different workspace selectors in the portal. +Once Azure Lighthouse is [onboarded](/azure/lighthouse/how-to/onboard-customer), use the [directory + subscription selector](./multiple-tenants-service-providers.md#how-to-access-microsoft-sentinel-in-managed-tenants) on the Azure portal to select all the subscriptions containing workspaces you want to manage, in order to ensure that they'll all be available in the different workspace selectors in the portal. When using Azure Lighthouse, it's recommended to create a group for each Microsoft Sentinel role and delegate permissions from each tenant to those groups. |
sentinel | Forward Syslog Monitor Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md | To complete the steps in this tutorial, you must have the following resources an - A Log Analytics workspace. - A Linux server that's running an operating system that supports Azure Monitor Agent. - [Supported Linux operating systems for Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview#linux).- - [Create a Linux VM in the Azure portal](/azure/virtual-machines/linux/quick-create-portal) or [add an on-premises Linux server to Azure Arc](../azure-arc/servers/learn/quick-enable-hybrid-vm.md). + - [Create a Linux VM in the Azure portal](/azure/virtual-machines/linux/quick-create-portal) or [add an on-premises Linux server to Azure Arc](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm). - A Linux-based device that generates event log data like a firewall network device. ## Configure Azure Monitor Agent to collect Syslog data |
sentinel | Mssp Protect Intellectual Property | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/mssp-protect-intellectual-property.md | In this image: This allows MSSPs to hide Microsoft Sentinel components as needed, like Analytics Rules and Hunting Queries. -For more information, also see the [Azure Lighthouse documentation](../lighthouse/concepts/cloud-solution-provider.md). +For more information, also see the [Azure Lighthouse documentation](/azure/lighthouse/concepts/cloud-solution-provider). ## Enterprise Agreements (EA) / Pay-as-you-go (PAYG) |
sentinel | Multiple Tenants Service Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/multiple-tenants-service-providers.md | -If you're a managed security service provider (MSSP) and you're using [Azure Lighthouse](../lighthouse/overview.md) to offer security operations center (SOC) services to your customers, you can manage your customers' Microsoft Sentinel resources directly from your own Azure tenant, without having to connect to the customer's tenant. +If you're a managed security service provider (MSSP) and you're using [Azure Lighthouse](/azure/lighthouse/overview) to offer security operations center (SOC) services to your customers, you can manage your customers' Microsoft Sentinel resources directly from your own Azure tenant, without having to connect to the customer's tenant. ## Prerequisites -- [Onboard Azure Lighthouse](../lighthouse/how-to/onboard-customer.md)+- [Onboard Azure Lighthouse](/azure/lighthouse/how-to/onboard-customer) - For this to work properly, your tenant (the MSSP tenant) must have the Microsoft Sentinel resource providers registered on at least one subscription. In addition, each of your customers' tenants must have the resource providers registered. If you have registered Microsoft Sentinel in your tenant, and your customers in theirs, you are ready to get started. To verify registration, take the following steps: |
sentinel | Prepare Multiple Workspaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prepare-multiple-workspaces.md | In case of an MSSP, many if not all of the above requirements apply, making mult - [Partner data connectors](data-connectors-reference.md) are often based on API or agent collections, and therefore are not attached to a specific Microsoft Entra tenant. -Use [Azure Lighthouse](../lighthouse/how-to/onboard-customer.md) to help manage multiple Microsoft Sentinel instances in different tenants. +Use [Azure Lighthouse](/azure/lighthouse/how-to/onboard-customer) to help manage multiple Microsoft Sentinel instances in different tenants. ## Microsoft Sentinel multiple workspace architecture |
sentinel | Resource Context Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resource-context-rbac.md | If you have multiple teams, make sure that you have separate log forwarding VMs For example, separating your VMs ensures that Syslog events that belong to Team A are collected using the collector VM A. > [!TIP]-> - When using an on-premises VM or another cloud VM, such as AWS, as your log forwarder, ensure that it has a resource ID by implementing [Azure Arc](../azure-arc/servers/overview.md). +> - When using an on-premises VM or another cloud VM, such as AWS, as your log forwarder, ensure that it has a resource ID by implementing [Azure Arc](/azure/azure-arc/servers/overview). > - To scale your log forwarding VM environment, consider creating a [VM scale set](https://techcommunity.microsoft.com/t5/azure-sentinel/scaling-up-syslog-cef-collection/ba-p/1185854) to collect your CEF and Sylog logs. |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | For more information, see: - [Migrate to Azure Monitor Agent from Log Analytics agent](/azure/azure-monitor/agents/azure-monitor-agent-migration) - [AMA migration for Microsoft Sentinel](ama-migrate.md) - Blogs:- - [Revolutionizing log collection with Azure Monitor Agent](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/revolutionizing-log-collection-with-azure-monitor-agent/ba-p/4218129) - [The power of Data Collection Rules: Collecting events for advanced use cases in Microsoft USOP](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/the-power-of-data-collection-rules-collecting-events-for/ba-p/4236486) |
service-bus-messaging | Service Bus Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-insights.md | By default, the **Time Range** field displays data from the **Last 4 hours**. Yo ## Pin and export -You can pin any one of the metric sections to an [Azure Dashboard](../azure-portal/azure-portal-dashboards.md) by selecting the pushpin icon at the top right of the section. +You can pin any one of the metric sections to an [Azure Dashboard](/azure/azure-portal/azure-portal-dashboards) by selecting the pushpin icon at the top right of the section. :::image type="content" source="./media/service-bus-insights/pin.png" alt-text="Screenshot that shows the Pin button at the top of the section."::: |
service-bus-messaging | Service Bus Partitioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-partitioning.md | Service Bus supports automatic message forwarding from, to, or between partition ## Partitioned entities limitations Currently Service Bus imposes the following limitations on partitioned queues and topics: -- For partitioned premium namespaces, the message size is limited to 1 MB when the messages are sent individually, and the batch size is limited to 1 MB when the messages are sent in a batch. If [Large message support](/azure/service-bus-messaging/service-bus-premium-messaging) is enabled the size limit can be up to 100MB.+- For partitioned premium namespaces, the message size is limited to 1 MB when the messages are sent individually, and the batch size is limited to 1 MB when the messages are sent in a batch. - Partitioned queues and topics don't support sending messages that belong to different sessions in a single transaction. - Service Bus currently allows up to 100 partitioned queues or topics per namespace for the Basic and Standard SKU. Each partitioned queue or topic counts towards the quota of 10,000 entities per namespace. |
service-connector | How To Use Service Connector In Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-use-service-connector-in-aks.md | If an error happens and couldn't be mitigated by retrying when creating a servic ### Check Service Connector kubernetes extension -Service Connector kubernetes extension is built on top of [Azure Arc-enabled Kubernetes cluster extensions](../azure-arc/kubernetes/extensions.md). Use the following commands to investigate if there are any errors during the extension installation or updating. +Service Connector kubernetes extension is built on top of [Azure Arc-enabled Kubernetes cluster extensions](/azure/azure-arc/kubernetes/extensions). Use the following commands to investigate if there are any errors during the extension installation or updating. 1. Install the `k8s-extension` Azure CLI extension. Check the permissions on the Azure resources specified in the error message. Obt `The subscription is not registered to use namespace 'Microsoft.KubernetesConfiguration'` **Reason:**-Service Connector requires the subscription to be registered for `Microsoft.KubernetesConfiguration`, which is the resource provider for [Azure Arc-enabled Kubernetes cluster extensions](../azure-arc/kubernetes/extensions.md). +Service Connector requires the subscription to be registered for `Microsoft.KubernetesConfiguration`, which is the resource provider for [Azure Arc-enabled Kubernetes cluster extensions](/azure/azure-arc/kubernetes/extensions). **Mitigation:** Register the `Microsoft.KubernetesConfiguration` resource provider by running the following command. For more information on resource provider registration errors, please refer to this [tutorial](../azure-resource-manager/troubleshooting/error-register-resource-provider.md). |
service-connector | Tutorial Python Aks Openai Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-openai-connection-string.md | In this tutorial, you learn how to create a pod in an Azure Kubernetes (AKS) clu --capacity 1 ``` -1. Create an Azure Container Registry (ACR) resource with the [az acr create](/cli/azure/acr#az-acr-create) command, or referring to [this tutorial](../container-registry/container-registry-get-started-portal.md). The registry hosts the container image of the sample application, which the AKS pod definition consumes. +1. Create an Azure Container Registry (ACR) resource with the [az acr create](/cli/azure/acr#az-acr-create) command, or referring to [this tutorial](/azure/container-registry/container-registry-get-started-portal). The registry hosts the container image of the sample application, which the AKS pod definition consumes. ```azurecli-interactive az acr create \ |
service-connector | Tutorial Python Aks Openai Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-openai-workload-identity.md | You start this tutorial by creating several Azure resources. --capacity 1 ``` -1. Create an Azure Container Registry (ACR) resource with the [az acr create](/cli/azure/acr#az-acr-create) command, or referring to [this tutorial](../container-registry/container-registry-get-started-portal.md). The registry hosts the container image of the sample application, which the AKS pod definition consumes. +1. Create an Azure Container Registry (ACR) resource with the [az acr create](/cli/azure/acr#az-acr-create) command, or referring to [this tutorial](/azure/container-registry/container-registry-get-started-portal). The registry hosts the container image of the sample application, which the AKS pod definition consumes. ```azurecli-interactive az acr create \ |
service-connector | Tutorial Python Aks Storage Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-storage-workload-identity.md | Learn how to create a pod in an AKS cluster, which talks to an Azure storage acc --sku Standard_LRS ``` -1. Create an Azure container registry with the following command, or referring to the [tutorial](../container-registry/container-registry-get-started-portal.md). The registry hosts the container image of the sample application, which will be consumed by the AKS pod definition. +1. Create an Azure container registry with the following command, or referring to the [tutorial](/azure/container-registry/container-registry-get-started-portal). The registry hosts the container image of the sample application, which will be consumed by the AKS pod definition. ```azurecli az acr create \ |
site-recovery | Azure To Azure Troubleshoot Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md | Replication couldn't be enabled for the virtual machine <VmName>. ### Fix the problem -Contact [Azure billing support](../azure-portal/supportability/regional-quota-requests.md) to enable your subscription to create VMs of the required sizes in the target location. Then retry the failed operation. +Contact [Azure billing support](/azure/azure-portal/supportability/regional-quota-requests) to enable your subscription to create VMs of the required sizes in the target location. Then retry the failed operation. If the target location has a capacity constraint, disable replication to that location. Then, enable replication to a different location where your subscription has sufficient quota to create VMs of the required sizes. |
site-recovery | Vmware Physical Large Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-large-deployment.md | We want to make sure that available quotas in the target subscription are suffic **Task** | **Details** | **Action** | | -**Check cores** | If cores in the available quota don't equal or exceed the total target count at the time of failover, failovers will fail. | For VMware VMs, check you have enough cores in the target subscription to meet the Deployment Planner core recommendation.<br/><br/> For physical servers, check that Azure cores meet your manual estimations.<br/><br/> To check quotas, in the Azure portal > **Subscription**, click **Usage + quotas**.<br/><br/> [Learn more](../azure-portal/supportability/regional-quota-requests.md) about increasing quotas. +**Check cores** | If cores in the available quota don't equal or exceed the total target count at the time of failover, failovers will fail. | For VMware VMs, check you have enough cores in the target subscription to meet the Deployment Planner core recommendation.<br/><br/> For physical servers, check that Azure cores meet your manual estimations.<br/><br/> To check quotas, in the Azure portal > **Subscription**, click **Usage + quotas**.<br/><br/> [Learn more](/azure/azure-portal/supportability/regional-quota-requests) about increasing quotas. **Check failover limits** | The number of failovers mustn't exceed Site Recovery failover limits. | If failovers exceed the limits, you can add subscriptions, and fail over to multiple subscriptions, or increase quota for a subscription. |
spring-apps | Concept Security Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-security-controls.md | A security control is a quality or feature of an Azure service that contributes | Security control | Yes/No | Notes | Documentation | |:-|:-|:-|:-|-| Server-side encryption at rest: Microsoft-managed keys | Yes | User uploaded source and artifacts, config server settings, app settings, and data in persistent storage are stored in Azure Storage, which automatically encrypts the content at rest.<br><br>Config server cache, runtime binaries built from uploaded source, and application logs during the application lifetime are saved to Azure managed disk, which automatically encrypts the content at rest.<br><br>Container images built from user uploaded source are saved in Azure Container Registry, which automatically encrypts the image content at rest. | [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md)<br><br>[Server-side encryption of Azure managed disks](/azure/virtual-machines/disk-encryption)<br><br>[Container image storage in Azure Container Registry](../../container-registry/container-registry-storage.md) | +| Server-side encryption at rest: Microsoft-managed keys | Yes | User uploaded source and artifacts, config server settings, app settings, and data in persistent storage are stored in Azure Storage, which automatically encrypts the content at rest.<br><br>Config server cache, runtime binaries built from uploaded source, and application logs during the application lifetime are saved to Azure managed disk, which automatically encrypts the content at rest.<br><br>Container images built from user uploaded source are saved in Azure Container Registry, which automatically encrypts the image content at rest. | [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md)<br><br>[Server-side encryption of Azure managed disks](/azure/virtual-machines/disk-encryption)<br><br>[Container image storage in Azure Container Registry](/azure/container-registry/container-registry-storage) | | Encryption in transient | Yes | User app public endpoints use HTTPS for inbound traffic by default. | | | API calls encrypted | Yes | Management calls to configure Azure Spring Apps service occur via Azure Resource Manager calls over HTTPS. | [Azure Resource Manager](../../azure-resource-manager/index.yml) | | Customer Lockbox | Yes | Provide Microsoft with access to relevant customer data during support scenarios. | [Customer Lockbox for Microsoft Azure](../../security/fundamentals/customer-lockbox-overview.md) |
spring-apps | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/faq.md | We're not actively developing more capabilities for Service Binding. Instead, th ### How can I provide feedback and report issues? -If you encounter any issues with Azure Spring Apps, create an [Azure Support Request](../../azure-portal/supportability/how-to-create-azure-support-request.md). To submit a feature request or provide feedback, go to [Azure Feedback](https://feedback.azure.com/d365community/forum/79b1327d-d925-ec11-b6e6-000d3a4f06a4). +If you encounter any issues with Azure Spring Apps, create an [Azure Support Request](/azure/azure-portal/supportability/how-to-create-azure-support-request). To submit a feature request or provide feedback, go to [Azure Feedback](https://feedback.azure.com/d365community/forum/79b1327d-d925-ec11-b6e6-000d3a4f06a4). ### How do I get VMware Spring Runtime support (Enterprise plan only) |
spring-apps | How To Prepare App Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-prepare-app-deployment.md | This article shows how to prepare an existing Steeltoe application for deploymen This article explains the dependencies, configuration, and code that are required to run a .NET Core Steeltoe app in Azure Spring Apps. For information about how to deploy an application to Azure Spring Apps, see [Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md). > [!NOTE]-> Steeltoe support for Azure Spring Apps is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services are not meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). +> Steeltoe support for Azure Spring Apps is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services are not meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Supported versions |
spring-apps | Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quotas.md | The following table defines limits for the pricing plans in Azure Spring Apps. ## Next steps -Some default limits can be increased. For more information, see [create a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). +Some default limits can be increased. For more information, see [create a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). |
static-web-apps | Get Started Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-cli.md | ms.devlang: azurecli # Quickstart: Building your first static site using the Azure CLI -[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://go.microsoft.com/fwlink/?linkid=2262845) +[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://go.microsoft.com/fwlink/?linkid=2286315) Azure Static Web Apps publishes websites to production by building apps from a code repository. |
storage-actions | Storage Task Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-known-issues.md | During the public preview, you can target only storage accounts that are in the | Storage tasks per subscription | 100 | | Storage task assignments per storage task | 50 | | Storage task assignments per storage account | 50 |-| Storage task definition versions | 50 | +| Storage task nested grouping of clauses per condition | 10 | Azure Storage Actions autoscales its processing tasks based on the volume of data in a storage account, subject to internal limits. The duration of execution depends on the number of blobs in the storage account, as well as their hierarchy in Azure Data Lake Storage Gen2. The first execution of a task over a path prefix might take longer than subsequent executions. Azure Storage Actions are also designed to be self-regulating and to allow application workloads on the storage account to take precedence. As a result, the scale and the duration of execution also depend on the available transaction capacity given the storage account's maximum request limit. The following are typical processing scales, which might be higher if you have more transaction capacity available, or might be lower if there's less spare transaction capacity on the storage account. When you apply storage task assignments to storage accounts that have IP or netw ## Storage Tasks won't be triggered on regionally migrated GRS / GZRS accounts -If you migrate your storage account from a GRS or GZRS primary region to a secondary region or vice versa, then any storage tasks that target the storage account won't be triggered and any existing task executions might fail.  +If you migrate your storage account from a GRS or GZRS primary region to a secondary region or vice versa, then any storage tasks that target the storage account won't be triggered and any existing task executions might fail. + ## See Also |
storage-mover | Agent Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md | You can reference this Azure Resource Manager (ARM) resource when you want to as ### Azure Arc service -The agent is also registered with the [Azure Arc service](../azure-arc/overview.md). Arc is used to assign and maintain an [Microsoft Entra managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent. +The agent is also registered with the [Azure Arc service](/azure/azure-arc/overview). Arc is used to assign and maintain an [Microsoft Entra managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent. Azure Storage Mover uses a system-assigned managed identity. A managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is also automatically removed. |
storage | Storage Account Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md | The following table describes the fields on the **Networking** tab. | Network connectivity | Endpoint type | Required | Azure Storage supports two types of endpoints: [standard endpoints](storage-account-overview.md#standard-endpoints) (the default) and [Azure DNS zone endpoints](storage-account-overview.md#azure-dns-zone-endpoints-preview) (preview). Within a given subscription, you can create up to 250<sup>1</sup> accounts with standard endpoints per region, and up to 5000 accounts with Azure DNS zone endpoints per region, for a total of 5250 storage accounts. To register for the preview, see [About the preview](storage-account-overview.md#about-the-preview). | | Network routing | Routing preference | Required | The network routing preference specifies how network traffic is routed to the public endpoint of your storage account from clients over the internet. By default, a new storage account uses Microsoft network routing. You can also choose to route network traffic through the POP closest to the storage account, which might lower networking costs. For more information, see [Network routing preference for Azure Storage](network-routing-preference.md). | -<sup>1</sup> With a quota increase, you can create up to 500 storage accounts with standard endpoints per region in a given subscription, for a total of 5500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](../../quotas/storage-account-quota-requests.md). +<sup>1</sup> With a quota increase, you can create up to 500 storage accounts with standard endpoints per region in a given subscription, for a total of 5500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](/azure/quotas/storage-account-quota-requests). The following image shows a standard configuration of the networking properties for a new storage account. |
storage | Storage Account Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md | A storage account provides a unique namespace in Azure for your data. Every obje There are two types of service endpoints available for a storage account: -- [Standard endpoints](#standard-endpoints) (recommended). By default, you can create up to 250 storage accounts per region with standard endpoints in a given subscription. With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see [Increase Azure Storage account quotas](../../quotas/storage-account-quota-requests.md).+- [Standard endpoints](#standard-endpoints) (recommended). By default, you can create up to 250 storage accounts per region with standard endpoints in a given subscription. With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see [Increase Azure Storage account quotas](/azure/quotas/storage-account-quota-requests). - [Azure DNS zone endpoints](#azure-dns-zone-endpoints-preview) (preview). You can create up to 5000 storage accounts per region with Azure DNS zone endpoints in a given subscription. Within a single subscription, you can create accounts with either standard or Azure DNS Zone endpoints, for a maximum of 5250 accounts per region per subscription. With a quota increase, you can create up to 5500 storage accounts per region per subscription. The following table describes the legacy storage account types. These account ty | Type of legacy storage account | Supported storage services | Redundancy options | Deployment model | Usage | |--|--|--|--|--|-| Standard general-purpose v1 | Blob Storage, Queue Storage, Table Storage, and Azure Files | LRS/GRS/RA-GRS | Resource Manager, classic<sup>1</sup> | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using it for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md)<sup>1</sup>.</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but donΓÇÖt require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you canΓÇÖt upgrade your application.</li><li>You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> | +| Standard general-purpose v1 | Blob Storage, Queue Storage, Table Storage, and Azure Files | LRS/GRS/RA-GRS | Resource Manager, classic<sup>1</sup> | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. 
Consider using it for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](/azure/azure-portal/supportability/classic-deployment-model-quota-increase-requests)<sup>1</sup>.</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you can't upgrade your application.</li><li>You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> | | Blob Storage | Blob Storage (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. | <sup>1</sup> Beginning August 1, 2022, you'll no longer be able to create new storage accounts with the classic deployment model. Resources created prior to that date will continue to be supported through August 31, 2024. For more information, see [Azure classic storage accounts will be retired on 31 August 2024](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024). |
storage | Files Smb Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md | Azure Files exposes the following settings: - **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are AES-256 (recommended) and RC4-HMAC. - **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM. If you select only AES-256-GCM, you'll need to tell connecting clients to use it by opening a PowerShell terminal as administrator on each client and running `Set-SmbClientConfiguration -EncryptionCiphers "AES_256_GCM" -Confirm:$false`. Using AES-256-GCM isn't supported on Windows clients older than Windows 11/Windows Server 2022. -You can view and change the SMB security settings using the Azure portal, PowerShell, or CLI. Select the desired tab to see the steps on how to get and set the SMB security settings. +You can view and change the SMB security settings using the Azure portal, PowerShell, or CLI. Select the desired tab to see the steps on how to get and set the SMB security settings. These settings are checked when an SMB session is established; if they aren't met, the SMB session setup fails with the error "STATUS_ACCESS_DENIED". # [Portal](#tab/azure-portal) To view or change the SMB security settings using the Azure portal, follow these steps: |
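As a rough illustration of the client-side step described in the row above, the following PowerShell sketch checks the SMB client's current cipher configuration and then restricts it to AES-256-GCM. It assumes an elevated session on a Windows 11 or Windows Server 2022 client (older clients don't support AES-256-GCM), and is only needed when the share is limited to that cipher.

```powershell
# Inspect the current SMB client encryption cipher order (run in an elevated session).
Get-SmbClientConfiguration | Select-Object EncryptionCiphers

# Restrict the client to AES-256-GCM so it can connect to shares that only allow that cipher.
Set-SmbClientConfiguration -EncryptionCiphers "AES_256_GCM" -Confirm:$false
```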
storage | Storage Files Migration Nas Cloud Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md | Remember that an Azure file share is deployed in the cloud in an Azure storage a As a general rule, you can pool multiple Azure file shares into the same storage account if you have archival shares or you expect low day-to-day activity in them. However, if you have highly active shares (shares used by many users and/or applications), you'll want to deploy storage accounts with one file share each. These limitations don't apply to FileStorage (premium) storage accounts, where performance is explicitly provisioned and guaranteed for each share. > [!NOTE]-> There's a limit of 250 storage accounts per subscription per Azure region. With a quota increase, you can create up to 500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](../../quotas/storage-account-quota-requests.md). +> There's a limit of 250 storage accounts per subscription per Azure region. With a quota increase, you can create up to 500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](/azure/quotas/storage-account-quota-requests). Another consideration when you're deploying a storage account is redundancy. See [Azure Files redundancy](files-redundancy.md). |
storage | Storage Files Migration Robocopy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md | Remember that an Azure file share is deployed in the cloud in an Azure storage a As a general rule, you can pool multiple Azure file shares into the same storage account if you have archival shares or you expect low day-to-day activity in them. However, if you have highly active shares (shares used by many users and/or applications), you'll want to deploy storage accounts with one file share each. These limitations don't apply to FileStorage (premium) storage accounts, where performance is explicitly provisioned and guaranteed for each share. > [!NOTE]-> There's a limit of 250 storage accounts per subscription per Azure region. With a quota increase, you can create up to 500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](../../quotas/storage-account-quota-requests.md). +> There's a limit of 250 storage accounts per subscription per Azure region. With a quota increase, you can create up to 500 storage accounts per region. For more information, see [Increase Azure Storage account quotas](/azure/quotas/storage-account-quota-requests). Another consideration when you're deploying a storage account is redundancy. See [Azure Files redundancy](files-redundancy.md). |
storage | Storage Files Scale Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md | Storage account scale targets apply at the storage account level. There are two | Management write operations | 10 per second/1200 per hour | 10 per second/1200 per hour | | Management list operations | 100 per 5 minutes | 100 per 5 minutes | -<sup>1</sup> With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see [Increase Azure Storage account quotas](../../quotas/storage-account-quota-requests.md). +<sup>1</sup> With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see [Increase Azure Storage account quotas](/azure/quotas/storage-account-quota-requests). <sup>2</sup> General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by request. To request an increase in account limits, contact [Azure Support](https://azure.microsoft.com/support/faq/). ### Azure file share scale targets |
synapse-analytics | Proof Of Concept Playbook Spark Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md | resources identified in the POC plan. - Record your results in a consumable and readily understandable format. 1. Monitor for troubleshooting and performance. For more information, see: - [Monitor Apache Spark activities](../get-started-monitor.md#apache-spark-activities)- - [Monitor with web user interfaces - Spark's history server](https://spark.apache.org/docs/3.0.0-preview/web-ui.html) + - [Monitor with web user interfaces - Spark's history server](https://archive.apache.org/dist/spark/docs/3.0.0-preview/web-ui.html) - [Monitoring resource utilization and query activity in Azure Synapse Analytics](../../sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md) 1. Monitor data skewness, time skewness and executor usage percentage by opening the **Diagnostic** tab of Spark's history server. |
synapse-analytics | Apache Spark Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-concepts.md | The following article describes how to request an increase in workspace vCore qu - Select "Azure Synapse Analytics" as the service type. - In the Quota details window, select Apache Spark (vCore) per workspace -[Request a capacity increase via the Azure portal](../../azure-portal/supportability/per-vm-quota-requests.md) +[Request a capacity increase via the Azure portal](/azure/azure-portal/supportability/per-vm-quota-requests) ### Spark pool level |
synapse-analytics | Sql Data Warehouse Monitor Workload Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md | AzureDiagnostics ## Next steps -- Now that you have set up and configured Azure monitor logs, [customize Azure dashboards](../../azure-portal/azure-portal-dashboards.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) to share across your team.+- Now that you have set up and configured Azure monitor logs, [customize Azure dashboards](/azure/azure-portal/azure-portal-dashboards?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) to share across your team. |
synapse-analytics | Resources Self Help Sql On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md | Your query might fail with the error message `Websocket connection was closed un - To resolve this issue, rerun your query. - Try [Azure Data Studio](/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) for the same queries instead of Synapse Studio for further investigation. - If this message occurs often in your environment, get help from your network administrator. You can also check firewall settings, and check the [Troubleshooting guide](../troubleshoot/troubleshoot-synapse-studio.md).-- If the issue continues, create a [support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md) through the Azure portal. +- If the issue continues, create a [support ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request) through the Azure portal. ### Serverless databases aren't shown in Synapse Studio Make sure that your Delta Lake dataset isn't corrupted. Verify that you can read Try to create a checkpoint on the Delta Lake dataset by using Apache Spark pool and rerun the query. The checkpoint aggregates transactional JSON log files and might solve the issue. -If the dataset is valid, [create a support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md#create-a-support-request) and provide more information: +If the dataset is valid, [create a support ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request#create-a-support-request) and provide more information: - Don't make any changes like adding or removing the columns or optimizing the table because this operation might change the state of the Delta Lake transaction log files. - Copy the content of the `_delta_log` folder into a new empty folder. *Do not* copy the `.parquet data` files. |
synapse-analytics | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md | This section is an archive of features and capabilities of [Apache Spark for Azu | April 2022 | **Apache Spark notebook snapshot** | You can access a snapshot of the Notebook when there's a Pipeline Notebook run failure or when there's a long-running Notebook job. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1). | | March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). | | March 2022 | **Performance optimization for Synapse Spark dedicated SQL pool connector** | New improvements to the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) reduce data movement and leverage `COPY INTO`. Performance tests indicated at least ~5x improvement over the previous version. No action is required from the user to leverage these enhancements. For more information, see [Blog: Synapse Spark Dedicated SQL Pool (DW) Connector: Performance Improvements](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_10).|-| March 2022 | **Support for all Spark Dataframe SaveMode choices** | The [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) now supports all four Spark Dataframe SaveMode choices: Append, Overwrite, ErrorIfExists, Ignore. For more information on Spark SaveMode, read the [official Apache Spark documentation](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/SaveMode.html?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). | +| March 2022 | **Support for all Spark Dataframe SaveMode choices** | The [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) now supports all four Spark Dataframe SaveMode choices: Append, Overwrite, ErrorIfExists, Ignore. For more information on Spark SaveMode, read the [official Apache Spark documentation](https://archive.apache.org/dist/spark/docs/1.6.0/api/java/org/apache/spark/sql/SaveMode.html?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). | | March 2022 | **Apache Spark in Azure Synapse Analytics Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more on this preview feature, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12).| ## Data integration |
trusted-signing | How To Signing Integrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-signing-integrations.md | To sign by using Trusted Signing, you need to provide the details of your Truste } ``` - The `"Endpoint"` URI value must be a URI that aligns with the region where you created your Trusted Signing account and certificate profile when you set up these resources. The table shows regions and their corresponding URIs. + The `"Endpoint"` URI value must be a URI that aligns with the region where you created your Trusted Signing account and certificate profile when you set up these resources. The table shows regions and their corresponding URIs. | Region | Region class fields | Endpoint URI value | |--|--|| To invoke SignTool to sign a file: 1. Replace the placeholders in the following path with the specific values that you noted in step 1: ```console- & "<Path to SDK bin folder>\x64\signtool.exe" sign /v /debug /fd SHA256 /tr "http://timestamp.acs.microsoft.com" /td SHA256 /dlib "<Path to Trusted Signing dlib bin folder>\x64\Azure.CodeSigning.Dlib.dll" /dmdf "<Path to metadata file>\metadata.json" <File to sign> + & "<Path to SDK bin folder>\x64\signtool.exe" sign /v /debug /fd SHA256 /tr "http://timestamp.acs.microsoft.com" /td SHA256 /dlib "<Path to Trusted Signing dlib bin folder>\x64\Azure.CodeSigning.Dlib.dll" /dmdf "<Path to metadata file>\metadata.json" <File to sign> ``` - Both the x86 and the x64 version of SignTool are included in the Windows SDK. Be sure to reference the corresponding version of *Azure.CodeSigning.Dlib.dll*. The preceding example is for the x64 version of SignTool. |
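To make the placeholders in the row above more concrete, here's a hedged PowerShell sketch that writes a minimal metadata file and then calls SignTool with the Trusted Signing dlib. The endpoint URI, account name, certificate profile name, file to sign, and all paths are illustrative assumptions, not values from the source article; substitute the region endpoint, resource names, and installation folders from your own Trusted Signing setup.

```powershell
# Illustrative values only -- replace with your own region endpoint, account,
# certificate profile, and local installation paths.
$metadata = @{
    Endpoint               = "https://eus.codesigning.azure.net"   # example region endpoint
    CodeSigningAccountName = "contoso-signing-account"             # placeholder account name
    CertificateProfileName = "contoso-cert-profile"                # placeholder profile name
} | ConvertTo-Json

Set-Content -Path "C:\signing\metadata.json" -Value $metadata -Encoding utf8

# Invoke the x64 SignTool with the matching x64 Trusted Signing dlib.
& "C:\signing\SDK\bin\x64\signtool.exe" sign /v /debug /fd SHA256 `
    /tr "http://timestamp.acs.microsoft.com" /td SHA256 `
    /dlib "C:\signing\Dlib\bin\x64\Azure.CodeSigning.Dlib.dll" `
    /dmdf "C:\signing\metadata.json" "C:\build\app.exe"
```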
update-manager | Guidance Migration Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md | Azure Update Manager can be used on-premises by using Azure Arc. Azure Arc is a - [Check update compliance](view-updates.md) - [Deploy updates now (on-demand) for single machine](deploy-updates.md) - [Schedule recurring updates](scheduled-patching.md)-- [An overview of Azure Arc-enabled servers](../azure-arc/servers/overview.md)+- [An overview of Azure Arc-enabled servers](/azure/azure-arc/servers/overview) |
update-manager | Migration Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-manual.md | The article provides the guidance to move various resources when you migrate man **S.No** | **Capability** | **Automation Update Management** | **Azure Update Manager** | **Steps using Azure portal** | **Steps using API/script** | | | | | | | -1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../app-service/quickstart-php.md#1get-the-sample-repository) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | +1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../app-service/quickstart-php.md#1get-the-sample-repository) </br> 2. [Generate installation script](/azure/azure-arc/servers/onboard-service-principal#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](/azure/azure-arc/servers/onboard-service-principal#install-the-agent-and-connect-to-azure) | 1. [Create service principal](/azure/azure-arc/servers/onboard-service-principal#azure-powershell) <br> 2. [Generate installation script](/azure/azure-arc/servers/onboard-service-principal#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](/azure/azure-arc/servers/onboard-service-principal#install-the-agent-and-connect-to-azure) | 2 | Enable periodic assessment to check for latest updates automatically every few hours. | Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, the Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. [At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. [For Azure VM](/azure/virtual-machines/automatic-vm-guest-patching#azure-powershell-when-updating-a-windows-vm) </br> 2. [For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine) | 3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update management had its own schedules. | Azure Update Manager creates a [maintenance configuration](/azure/virtual-machines/maintenance-configurations) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) </br> 2. 
[At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy) | [Create a static scope](manage-vms-programmatically.md) | 4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. that is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) | |
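For the "create service principal" step referenced in the row above, a common pattern is to create the service principal and then scope it to the built-in Azure Connected Machine Onboarding role so the generated installation script can connect machines to Azure Arc. This PowerShell sketch assumes the Az.Resources module; the subscription ID, resource group, and display name are placeholders, and it's one reasonable approach rather than the exact script from the linked articles.

```powershell
# Create a service principal for onboarding machines to Azure Arc (illustrative name).
$sp = New-AzADServicePrincipal -DisplayName "arc-onboarding-sp"

# Grant it the built-in role that allows connecting machines, scoped to a resource group.
New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName "Azure Connected Machine Onboarding" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/arc-servers-rg"

# The application (client) ID is what the generated installation script authenticates with.
$sp.AppId
```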
update-manager | Migration Using Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-using-portal.md | After you review the resources that must be moved, you can proceed with the migr This includes two steps: - a. **Onboard non-Azure non-Arc-enabled machines to Arc** - This is because Arc connectivity is a prerequisite for Azure Update Manager. Onboarding your machines to Azure Arc is free of cost, and once you do so, you can use all management services just as you would for any Azure machine. For more information, see [Azure Arc documentation](../azure-arc/servers/onboard-service-principal.md) + a. **Onboard non-Azure non-Arc-enabled machines to Arc** - This is because Arc connectivity is a prerequisite for Azure Update Manager. Onboarding your machines to Azure Arc is free of cost, and once you do so, you can use all management services just as you would for any Azure machine. For more information, see [Azure Arc documentation](/azure/azure-arc/servers/onboard-service-principal) on how to onboard your machines. b. **Download and run PowerShell script locally** - This is required for the creation of a user identity and appropriate role assignments so that the migration can take place. This script grants the user identity the appropriate RBAC on the subscription that contains the Automation account, on machines onboarded to Automation Update Management, and on scopes that are part of dynamic queries, so that the configuration can be assigned to the machines, MRP configurations can be created, and the Updates solution can be removed. |
update-manager | Migration Using Runbook Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-using-runbook-scripts.md | The information mentioned in each of the above steps is explained in detail belo **What to do** -Migration automation runbook ignores resources that aren't onboarded to Arc. It's therefore a prerequisite to onboard all non-Azure machines on to Azure Arc before running the migration runbook. Follow the steps to [onboard machines on to Azure Arc](../azure-arc/servers/onboard-service-principal.md). +Migration automation runbook ignores resources that aren't onboarded to Arc. It's therefore a prerequisite to onboard all non-Azure machines on to Azure Arc before running the migration runbook. Follow the steps to [onboard machines on to Azure Arc](/azure/azure-arc/servers/onboard-service-principal). #### Prerequisite 2: Create User Identity and Role Assignments by running PowerShell script |
update-manager | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequisites.md | Azure VM extensions and Azure Arc-enabled VM extensions are required to run on t ### Network planning -To prepare your network to support Update Manager, you might need to configure some infrastructure components. For more information, see the [network requirements for Arc-enabled servers](../azure-arc/servers/network-requirements.md). +To prepare your network to support Update Manager, you might need to configure some infrastructure components. For more information, see the [network requirements for Arc-enabled servers](/azure/azure-arc/servers/network-requirements). For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [issues related to HTTP Proxy](/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting?toc=%2Fwindows%2Fdeployment%2Ftoc.json&bc=%2Fwindows%2Fdeployment%2Fbreadcrumb%2Ftoc.json#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry). |
update-manager | Roles Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/roles-permissions.md | The built-in roles provide blanket permissions on a virtual machine, which inclu | **Resource** | **Role** | ||| | **Azure VM** | Azure Virtual Machine Contributor or Azure [Owner](../role-based-access-control/built-in-roles.md)|-| **Azure Arc-enabled server** | [Azure Connected Machine Resource Administrator](../azure-arc/servers/security-overview.md)| +| **Azure Arc-enabled server** | [Azure Connected Machine Resource Administrator](/azure/azure-arc/servers/security-overview)| ## Permissions |
update-manager | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md | We support VMs created from customized images (including images uploaded to [Azu # [Azure Arc-enabled servers](#tab/azurearc-os) -The following table lists the operating systems supported on [Azure Arc-enabled servers](../azure-arc/servers/overview.md). +The following table lists the operating systems supported on [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). |**Operating system**| |-| |
update-manager | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md | To review the logs related to all actions performed by the extension, check for #### [Arc-enabled Servers](#tab/azure-arc) -For Azure Arc-enabled servers, see [Troubleshoot VM extensions](../azure-arc/servers/troubleshoot-vm-extensions.md) for general troubleshooting steps. +For Azure Arc-enabled servers, see [Troubleshoot VM extensions](/azure/azure-arc/servers/troubleshoot-vm-extensions) for general troubleshooting steps. To review the logs related to all actions performed by the extension, on Windows, check for more information in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest: The Windows/Linux OS Update extension must be successfully installed on Arc mach Trigger an on-demand assessment or patching to install the extension on the machine. You can also attach the machine to a maintenance configuration schedule which will install the extension when patching is performed as per the schedule. -If the extension is already present on the machine but the extension status is not **Succeeded**, ensure that you [remove the extension](../azure-arc/servers/manage-vm-extensions-portal.md#remove-extensions) and trigger an on-demand operation so that it is installed again. +If the extension is already present on the machine but the extension status is not **Succeeded**, ensure that you [remove the extension](/azure/azure-arc/servers/manage-vm-extensions-portal#remove-extensions) and trigger an on-demand operation so that it is installed again. ### Windows/Linux patch update extension isn't installed The Windows/Linux patch update extension must be successfully installed on Azure #### Resolution Trigger an on-demand assessment or patching to install the extension on the machine. You can also attach the machine to a maintenance configuration schedule which will install the extension when patching is performed as per the schedule. -If the extension is already present on the machine but the extension status is not **Succeeded**, ensure that you [remove the extension](../azure-arc/servers/manage-vm-extensions-portal.md#remove-extensions) and trigger an on-demand operation which will install it again. +If the extension is already present on the machine but the extension status is not **Succeeded**, ensure that you [remove the extension](/azure/azure-arc/servers/manage-vm-extensions-portal#remove-extensions) and trigger an on-demand operation which will install it again. ### Allow Extension Operations check failed |
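Where the troubleshooting steps in the row above say to remove a failed extension from an Arc-enabled server and retrigger the operation, one way to do that from PowerShell is sketched below. It assumes the Az.ConnectedMachine module; the resource group, machine, and extension names are placeholders, so list the extensions first to find the actual name on your machine.

```powershell
# List extensions on the Arc-enabled server to find the one that isn't in the Succeeded state.
Get-AzConnectedMachineExtension -ResourceGroupName "rg-arc-servers" -MachineName "arc-server-01" |
    Select-Object Name, ProvisioningState

# Remove the failed extension; the next on-demand assessment or patch run reinstalls it.
Remove-AzConnectedMachineExtension -ResourceGroupName "rg-arc-servers" `
    -MachineName "arc-server-01" -Name "<extension name from the previous step>"
```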
update-manager | Update Manager Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/update-manager-faq.md | Azure Update Manager provides a SaaS solution to manage and govern software upda Following are the benefits of using Azure Update - Oversee update compliance for your entire fleet of machines in Azure (Azure VMs), on premises, and multicloud environments (Arc-enabled Servers). - View and deploy pending updates to secure your machines [instantly](updates-maintenance-schedules.md#update-nowone-time-update).-- Manage [extended security updates (ESUs)](../azure-arc/servers/prepare-extended-security-updates.md) for your Azure Arc-enabled Windows Server 2012/2012 R2 machines. Get consistent experience for deployment of ESUs and other updates.+- Manage [extended security updates (ESUs)](/azure/azure-arc/servers/prepare-extended-security-updates) for your Azure Arc-enabled Windows Server 2012/2012 R2 machines. Get consistent experience for deployment of ESUs and other updates. - Define recurring time windows during which your machines receive updates and might undergo reboots using [scheduled patching](scheduled-patching.md). Enforce machines grouped together based on standard Azure constructs (Subscriptions, Location, Resource Group, Tags etc.) to have common patch schedules using [dynamic scoping](dynamic-scope-overview.md). Sync patch schedules for Windows machines in relation to Patch Tuesday, the unofficial name for Microsoft's monthly security update release. - Enable incremental rollout of updates to Azure VMs in off-peak hours using [automatic VM guest patching](/azure/virtual-machines/automatic-vm-guest-patching) and reduce reboots by enabling [hotpatching](updates-maintenance-schedules.md#hotpatching). - Automatically [assess](assessment-options.md#periodic-assessment) machines for pending updates every 24 hours, and flag machines that are out of compliance. Enforce enabling periodic assessments on multiple machines at scale using [Azure Policy](periodic-assessment-at-scale.md). |
update-manager | Workflow Update Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/workflow-update-manager.md | Update Manager assesses and applies updates to all Azure machines and Azure Arc- ## Update Manager VM extensions -When an Azure Update Manager (AUM) operation is enabled or triggered on your Azure or Arc-enabled server, AUM installs an [Azure extension](/azure/virtual-machines/extensions/overview) or [Arc-enabled servers extensions](../azure-arc/servers/manage-vm-extensions.md) respectively on your machine to manage the updates. +When an Azure Update Manager (AUM) operation is enabled or triggered on your Azure or Arc-enabled server, AUM installs an [Azure extension](/azure/virtual-machines/extensions/overview) or [Arc-enabled servers extensions](/azure/azure-arc/servers/manage-vm-extensions) respectively on your machine to manage the updates. The extension is automatically installed on your machine when you initiate any Update Manager operation on your machine for the first time, such as Check for updates, Install one-time update, Periodic Assessment or when scheduled update deployment runs on your machine for the first time. Customers don't have to explicitly install the extension; its lifecycle, including installation and configuration, is managed by Azure Update Manager. The Update Manager extension is installed and managed by using the below agents, which are required for Update Manager to work on your machines: - [Azure VM Windows agent](/azure/virtual-machines/extensions/agent-windows) or the [Azure VM Linux agent](/azure/virtual-machines/extensions/agent-linux) for Azure VMs.-- [Azure Arc-enabled servers agent](../azure-arc/servers/agent-overview.md) +- [Azure Arc-enabled servers agent](/azure/azure-arc/servers/agent-overview) >[!NOTE] > Arc connectivity is a prerequisite for Update Manager on non-Azure machines, including Arc-enabled VMware, SCVMM, etc. |
virtual-desktop | Administrative Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/administrative-template.md | You can configure the following features with the administrative template: - [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks) - [Screen capture protection](screen-capture-protection.md) - [Watermarking](watermarking.md)+- [High Efficiency Video Coding (H.265) hardware acceleration](graphics-enable-gpu-acceleration.md) ## Prerequisites To add the administrative template to Group Policy, select a tab for your scenar - [Watermarking](watermarking.md) ++## Related content ++Learn how to use the administrative template with the following features: ++- [Graphics related data logging](connection-latency.md#connection-graphics-data-preview) +- [Screen capture protection](screen-capture-protection.md) +- [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks) +- [Watermarking](watermarking.md) +- [High Efficiency Video Coding (H.265) hardware acceleration](graphics-enable-gpu-acceleration.md) |
virtual-desktop | Azure Stack Hci Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md | To use session hosts on Azure Stack HCI with Azure Virtual Desktop, you also nee - License and activate the virtual machines. For activating Windows 10 and Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition, use [Azure verification for VMs](/azure-stack/hci/deploy/azure-verification). For all other OS images (such as Windows 10 and Windows 11 Enterprise, and other editions of Windows Server), you should continue to use existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate). -- Install the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) on the virtual machines so they can communicate with [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service), which is a [required endpoint for Azure Virtual Desktop](../virtual-desktop/required-fqdn-endpoint.md). The Azure Connected Machine agent is automatically installed when you add session hosts using the Azure portal as part of the process to [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md) or [Add session hosts to a host pool](add-session-hosts-host-pool.md).+- Install the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) on the virtual machines so they can communicate with [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service), which is a [required endpoint for Azure Virtual Desktop](../virtual-desktop/required-fqdn-endpoint.md). The Azure Connected Machine agent is automatically installed when you add session hosts using the Azure portal as part of the process to [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md) or [Add session hosts to a host pool](add-session-hosts-host-pool.md). Finally, users can connect using the same [Remote Desktop clients](users/remote-desktop-clients-overview.md) as Azure Virtual Desktop. |
virtual-desktop | Client Device Redirection Intune | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/client-device-redirection-intune.md | description: Learn how to configure redirection settings for Windows App and the Previously updated : 05/29/2024 Last updated : 08/21/2024 # Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune -> [!IMPORTANT] -> Configure redirection settings for Windows App and the Remote Desktop app using Microsoft Intune is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - > [!TIP] > This article contains information for multiple products that use the Remote Desktop Protocol (RDP) to provide remote access to Windows desktops and applications. For Windows App: | Device platform | Managed devices | Unmanaged devices | |--|:--:|:--:|-| iOS and iPadOS | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | +| iOS and iPadOS | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | For the Remote Desktop app: You need to create an [app protection policy](/mem/intune/apps/app-protection-po To create and apply an app protection policy, follow the steps in [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies) and use the following settings. You need to create an app protection policy for each platform you want to target. -- On the **Apps** tab, do the following, depending on whether you're targeting Windows App or the Remote Desktop app+- On the **Apps** tab, do the following, depending on whether you're targeting Windows App or the Remote Desktop app. - For Windows App on iOS/iPadOS, select **Select custom apps**, then for **Bundle or Package ID**, enter `com.microsoft.rdc.apple`. |
virtual-desktop | Graphics Chroma Value Increase 4 4 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/graphics-chroma-value-increase-4-4-4.md | + + Title: Increase the chroma value to 4:4:4 for Azure Virtual Desktop +description: Learn how to increase the chroma value from 4:2:0 to 4:4:4. +++ Last updated : 05/21/2024+++# Increase the chroma value to 4:4:4 for Azure Virtual Desktop using the Advanced Video Coding (AVC) video codec ++The chroma value determines the color space used for encoding. By default, the chroma value is set to 4:2:0, which provides a good balance between image quality and network bandwidth. When you use the Advanced Video Coding (AVC) video codec, you can increase the chroma value to 4:4:4 to improve image quality. You don't need to use GPU acceleration to change the chroma value. ++This article shows you how to set the chroma value. You can use Microsoft Intune or Group Policy to configure your session hosts. ++## Prerequisites ++Before you can configure the chroma value, you need: ++- An existing host pool with session hosts. ++- To configure Microsoft Intune, you need: ++ - Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role. ++ - A group containing the devices you want to configure. ++- To configure Group Policy, you need: ++ - A domain account that is a member of the **Domain Admins** security group. ++ - A security group or organizational unit (OU) containing the devices you want to configure. ++## Increase the chroma value to 4:4:4 ++By default, the chroma value is set to 4:2:0. You can increase the chroma value to 4:4:4 using Microsoft Intune or Group Policy. ++Select the relevant tab for your scenario. ++# [Microsoft Intune](#tab/intune) ++To increase the chroma value to 4:4:4 using Microsoft Intune: ++1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/). ++1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type. ++1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**. ++ :::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-intune.png" alt-text="A screenshot showing the redirection options in the Microsoft Intune portal." lightbox="media/enable-gpu-acceleration/remote-session-environment-intune.png"::: ++1. Check the box for the following settings, then close the settings picker: ++ 1. **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** ++ 1. **Configure image quality for RemoteFX Adaptive Graphics** ++1. Expand the **Administrative templates** category, then set each setting as follows: ++ 1. Toggle the switch for **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to **Enabled**. ++ 1. Toggle the switch for **Configure image quality for RemoteFX Adaptive Graphics** to **Enabled**, then for **Image quality: (Device)**, select **High**. ++1. Select **Next**. ++1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags). ++1. 
On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**. ++1. On the **Review + create** tab, review the settings, then select **Create**. ++1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect. ++# [Group Policy](#tab/group-policy) ++To increase the chroma value to 4:4:4 using Group Policy: ++1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain. ++1. Create or edit a policy that targets the computers providing a remote session you want to configure. ++1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**. ++ :::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-group-policy.png" alt-text="A screenshot showing the redirection options in the Group Policy editor." lightbox="media/enable-gpu-acceleration/remote-session-environment-group-policy.png"::: ++1. Configure the following settings: ++ 1. Double-click the policy setting **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to open it. Select **Enabled**, then select **OK**. ++ 1. Double-click the policy setting **Configure image quality for RemoteFX Adaptive Graphics** to open it. Select **Enabled**, then for **Image quality**, select **High**. Select **OK**. ++1. Ensure the policy is applied to your session hosts, then restart them for the settings to take effect. ++++## Verify a remote session is using a chroma value of 4:4:4 ++To verify that a remote session is using a chroma value of 4:4:4, you need to [open an Azure support request](https://azure.microsoft.com/support/create-ticket/) with Microsoft Support, who can verify the chroma value from telemetry. ++## Related content ++- [Configure GPU acceleration](enable-gpu-acceleration.md) |
virtual-desktop | Graphics Enable Gpu Acceleration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/graphics-enable-gpu-acceleration.md | + + Title: Enable GPU acceleration for Azure Virtual Desktop +description: Learn how to enable GPU-accelerated rendering and encoding, including HEVC/H.265 and AVC/H.264 support, in Azure Virtual Desktop. +++ Last updated : 09/19/2024+++# Enable GPU acceleration for Azure Virtual Desktop ++> [!IMPORTANT] +> High Efficiency Video Coding (H.265) hardware acceleration is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Azure Virtual Desktop supports graphics processing unit (GPU) acceleration in rendering and encoding for improved app performance and scalability using the Remote Desktop Protocol (RDP). GPU acceleration is crucial for graphics-intensive applications, such as those used by graphic designers, video editors, 3D modelers, data analysts, or visualization specialists. ++There are three components to GPU acceleration in Azure Virtual Desktop that work together to improve the user experience: ++- **GPU-accelerated application rendering**: Use the GPU to render graphics in a remote session. ++- **GPU-accelerated frame encoding**: The Remote Desktop Protocol encodes all graphics rendered for transmission to the local device. When part of the screen is frequently updated, it's encoded with the Advanced Video Coding (AVC) video codec, also known as H.264. ++- **Full-screen video encoding**: A full-screen video profile provides a higher frame rate and better user experience, but uses more network bandwidth and both session host and client resources. It benefits applications such as 3D modeling, CAD/CAM, or video playback and editing. You can choose to encode it with: + - AVC/H.264. + - High Efficiency Video Coding (HEVC), also known as H.265. This allows for 25-50% data compression compared to AVC/H.264, at the same video quality or improved quality at the same bitrate. ++> [!NOTE] +> - If you enable both HEVC/H.265 and AVC/H.264 hardware acceleration, but HEVC/H.265 isn't available on the local device, AVC/H.264 is used instead. +> +> - You can enable full-screen video encoding even without GPU acceleration. +> +> - You can also increase the [default chroma value](configure-default-chroma-value.md) to improve the image quality. ++This article shows you which Azure VM sizes you can use as a session host with GPU acceleration, and how to enable GPU acceleration for rendering and encoding. 
++## Supported GPU-optimized Azure VM sizes ++The following table lists which Azure VM sizes are optimized for GPU acceleration and supported as session hosts in Azure Virtual Desktop: ++| Azure VM size | GPU-accelerated application rendering | GPU-accelerated frame encoding | Full-screen video encoding | +|--|--|--|--| +| [NVv3-series](/azure/virtual-machines/nvv3-series) | Supported | AVC/H.264 | HEVC/H.265<br />AVC/H.264 | +| [NVv4-series](/azure/virtual-machines/nvv4-series) | Supported | Not available | Supported | +| [NVadsA10 v5-series](/azure/virtual-machines/nva10v5-series) | Supported | AVC/H.264 | HEVC/H.265<br />AVC/H.264 | +| [NCasT4_v3-series](/azure/virtual-machines/nct4-v3-series) | Supported | AVC/H.264 | HEVC/H.265<br />AVC/H.264 | ++The right choice of VM size depends on many factors, including your particular application workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density. Smaller and fractional GPU sizes allow more fine-grained control over cost and quality. ++VM sizes with an NVIDIA GPU come with a GRID license that supports 25 concurrent users. ++> [!IMPORTANT] +> Azure NC, NCv2, NCv3, ND, and NDv2 series VMs aren't generally appropriate as session hosts. These VM sizes are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They don't support GPU acceleration for most applications or the Windows user interface. ++## Prerequisites ++Before you can enable GPU acceleration, you need: ++- An existing host pool with session hosts using a [supported GPU-optimized Azure VM size](#supported-gpu-optimized-azure-vm-sizes) for the graphics features you want to enable. Supported graphics drivers are listed in [Install supported graphics drivers in your session hosts](#install-supported-graphics-drivers-in-your-session-hosts). ++- To configure Microsoft Intune, you need: ++ - Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role. + - A group containing the devices you want to configure. ++- To configure Group Policy, you need: ++ - A domain account that has permission to create or edit Group Policy objects. + - A security group or organizational unit (OU) containing the devices you want to configure. ++In addition, for HEVC/H.265 hardware acceleration you also need: ++- Session hosts must be running [Windows 10 or Windows 11](prerequisites.md#operating-systems-and-licenses). ++- A desktop application group. RemoteApp isn't supported. ++- If you [increased the chroma value to 4:4:4](graphics-chroma-value-increase-4-4-4.md), the chroma value falls back to 4:2:0 when using HEVC hardware acceleration. ++- Disable [multimedia redirection](multimedia-redirection.md) on your session hosts by uninstalling the host component. ++- The [Administrative template for Azure Virtual Desktop](administrative-template.md) available in Group Policy to configure your session hosts. ++- A local Windows device you use to connect to a remote session must have: ++ - A GPU that has HEVC (H.265) 4K YUV 4:2:0 decode support. For more information, see the manufacturer's documentation. 
Here are some links to documentation for some manufacturers: + - [NVIDIA](https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new) + - [AMD](https://www.amd.com/en/products/specifications/graphics) + - [Intel](https://www.intel.com/content/www/us/en/docs/onevpl/developer-reference-media-intel-hardware/1-0/overview.html#DECODE-SUPPORT) ++ - Microsoft HEVC codec installed. The Microsoft HEVC codec is included in clean installs of Windows 11 22H2 or later. You can also [purchase the Microsoft HEVC codec from the Microsoft Store](https://www.microsoft.com/store/productid/9NMZLZ57R3T7?ocid=pdpshare). ++ - One of the following apps to connect to a remote session. Other platforms and versions aren't supported. + - Windows App on Windows, version 1.3.278.0 or later. + - Remote Desktop app on Windows, version 1.2.4671.0 or later. ++## Install supported graphics drivers in your session hosts ++To take advantage of the GPU capabilities of Azure N-series VMs in Azure Virtual Desktop, you must install the appropriate graphics drivers. Follow the instructions at [Supported operating systems and drivers](/azure/virtual-machines/sizes-gpu#supported-operating-systems-and-drivers) to learn how to install drivers. ++> [!IMPORTANT] +> Only Azure-distributed drivers are supported. ++When installing drivers, here are some important guidelines: ++- For VM sizes with an NVIDIA GPU, only NVIDIA *GRID* drivers support GPU acceleration for most applications and the Windows user interface. NVIDIA *CUDA* drivers don't support GPU acceleration for these VM sizes. To download and learn how to install the driver, see [Install NVIDIA GPU drivers on N-series VMs running Windows](/azure/virtual-machines/windows/n-series-driver-setup) and be sure to install the GRID driver. If you install the driver by using the [NVIDIA GPU Driver Extension](/azure/virtual-machines/extensions/hpccompute-gpu-windows), the GRID driver is automatically installed for these VM sizes. ++ - For HEVC/H.265 hardware acceleration, you must use NVIDIA GPU driver GRID 16.2 (537.13) or later. ++- For VM sizes with an AMD GPU, install the AMD drivers that Azure provides. To download and learn how to install the driver, see [Install AMD GPU drivers on N-series VMs running Windows](/azure/virtual-machines/windows/n-series-amd-driver-setup). ++## Enable GPU-accelerated application rendering, frame encoding, and full-screen video encoding ++By default, remote sessions are rendered with the CPU and don't use available GPUs. You can enable GPU-accelerated application rendering, frame encoding, and full-screen video encoding using Microsoft Intune or Group Policy. ++Select the relevant tab for your scenario. ++# [Microsoft Intune](#tab/intune) ++> [!IMPORTANT] +> HEVC/H.265 hardware acceleration isn't available in the Intune Settings Catalog yet. ++To enable GPU-accelerated application rendering using Intune: ++1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/). ++1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type. ++1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**. 
:::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-intune.png" alt-text="A screenshot showing the redirection options in the Microsoft Intune portal." lightbox="media/enable-gpu-acceleration/remote-session-environment-intune.png"::: ++1. Select the following settings, then close the settings picker: ++ 1. For GPU-accelerated application rendering, check the box for **Use hardware graphics adapters for all Remote Desktop Services sessions**. ++ 1. For GPU-accelerated frame encoding, check the box for **Configure H.264/AVC hardware encoding for Remote Desktop connections**. ++ 1. For full-screen video encoding, check the box for **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections**. ++1. Expand the **Administrative templates** category, then toggle the switch for each setting as follows: ++ 1. For GPU-accelerated application rendering, set **Use hardware graphics adapters for all Remote Desktop Services sessions** to **Enabled**. ++ 1. For GPU-accelerated frame encoding, set **Configure H.264/AVC hardware encoding for Remote Desktop connections** to **Enabled**. ++ 1. For full-screen video encoding, set **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to **Enabled**. ++1. Select **Next**. ++1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags). ++1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**. ++1. On the **Review + create** tab, review the settings, then select **Create**. ++1. After the policy applies to the computers providing a remote session, restart them for the settings to take effect. ++# [Group Policy](#tab/group-policy) ++To enable GPU-accelerated application rendering using Group Policy: ++1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain. ++1. Create or edit a policy that targets the computers providing a remote session you want to configure. ++1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**. ++ :::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-group-policy.png" alt-text="A screenshot showing the redirection options in the Group Policy editor." lightbox="media/enable-gpu-acceleration/remote-session-environment-group-policy.png"::: ++1. Configure the following settings: ++ 1. For GPU-accelerated application rendering, double-click the policy setting **Use hardware graphics adapters for all Remote Desktop Services sessions** to open it. Select **Enabled**, then select **OK**. ++ 1. For GPU-accelerated frame encoding, double-click the policy setting **Configure H.264/AVC hardware encoding for Remote Desktop Connections** to open it. Select **Enabled**, then select **OK**. If you're using Windows Server 2016, you see an extra drop-down menu in the setting; set **Prefer AVC Hardware Encoding** to **Always attempt**. ++ 1. For full-screen video encoding using AVC/H.264 only, double-click the policy setting **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to open it. Select **Enabled**, then select **OK**. ++1. 
For full-screen video encoding using HEVC/H.265 only, navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. ++ :::image type="content" source="media/administrative-template/azure-virtual-desktop-gpo.png" alt-text="A screenshot showing the Azure Virtual Desktop options in Group Policy." lightbox="media/administrative-template/azure-virtual-desktop-gpo.png"::: ++1. Double-click the policy setting **Configure H.265/HEVC hardware encoding for Remote Desktop Connections** to open it. Select **Enabled**, then select **OK**. ++1. Ensure the policy is applied to your session hosts, then restart them for the settings to take effect. ++++## Verify GPU acceleration ++To verify that a remote session is using GPU acceleration (GPU-accelerated application rendering, frame encoding, or full-screen video encoding): ++1. If you want to verify HEVC/H.265 hardware acceleration, complete the following extra steps: ++ 1. Make sure the local Windows device has the Microsoft HEVC codec installed by opening a PowerShell prompt and running the following command: ++ ```powershell + Get-AppxPackage -Name "Microsoft.HEVCVideoExtension" | FT Name, Version + ``` + + The output should be similar to the following: ++ ```output + Name Version + ---- ------- + Microsoft.HEVCVideoExtension 2.1.1161.0 + ``` ++ 1. Make sure [multimedia redirection](multimedia-redirection.md) is disabled on the session host if you're using it. ++1. Connect to one of the session hosts you configured, either through Azure Virtual Desktop or a direct RDP connection. ++1. Open an application that uses GPU acceleration and generate some load for the GPU. ++1. Open Task Manager and go to the **Performance** tab. Select the GPU to see whether the GPU is being utilized by the application. ++ :::image type="content" source="media/enable-gpu-acceleration/task-manager-rdp-gpu.png" alt-text="A screenshot showing the GPU usage in Task Manager when in a Remote Desktop session." lightbox="media/enable-gpu-acceleration/task-manager-rdp-gpu.png"::: ++ > [!TIP] + > For NVIDIA GPUs, you can also use the `nvidia-smi` utility to check for GPU utilization when running your application. For more information, see [Verify driver installation](/azure/virtual-machines/windows/n-series-driver-setup#verify-driver-installation). ++1. Open Event Viewer from the Start menu, or run `eventvwr.msc` from the command line. ++1. Navigate to one of the following locations: ++ 1. For connections through Azure Virtual Desktop, go to **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**. ++ 1. For connections through a direct RDP connection, go to **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational**. ++1. Look for the following event IDs: ++ - **Event ID 170**: If you see **AVC hardware encoder enabled: 1** in the event text, GPU-accelerated frame encoding is in use. ++ - **Event ID 162**: + - If you see **AVC available: 1, Initial Profile: 2048** in the event text, GPU-accelerated frame encoding with AVC/H.264 and full-screen video encoding is in use. + - If you see **AVC available: 1, Initial Profile: 32768** in the event text, GPU-accelerated frame encoding with HEVC/H.265 is in use. ++## Related content ++Increase the [default chroma value](configure-default-chroma-value.md) to improve the image quality. |
virtual-desktop | Screen Capture Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md | To configure screen capture protection using Microsoft Intune: To configure screen capture protection using Group Policy: -1. Follow the steps to make the [Administrative template for Azure Virtual Desktop](administrative-template.md) available to Group Policy. +1. Follow the steps to make the [Administrative template for Azure Virtual Desktop](administrative-template.md) available in Group Policy. 1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain. |
virtual-desktop | Troubleshoot Multimedia Redirection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-multimedia-redirection.md | The following issues are ones we're already aware of, so you won't need to repor If you can start a call with multimedia redirection enabled and can see the green phone icon on the extension icon while calling, but the call quality is low, you should contact the app provider for help. -If calls aren't going through, certain features don't work as expected while multimedia redirection is enabled, or multimedia redirection won't enable at all, you must submit a [Microsoft support ticket](../azure-portal/supportability/how-to-create-azure-support-request.md). +If calls aren't going through, certain features don't work as expected while multimedia redirection is enabled, or multimedia redirection won't enable at all, you must submit a [Microsoft support ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request). -If you encounter any video playback issues that this guide doesn't address or resolve, submit a [Microsoft support ticket](../azure-portal/supportability/how-to-create-azure-support-request.md). +If you encounter any video playback issues that this guide doesn't address or resolve, submit a [Microsoft support ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Log collection |
virtual-desktop | Watermarking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/watermarking.md | To enable watermarking using Microsoft Intune: To enable watermarking using Group Policy: -1. Follow the steps to make the [Administrative template for Azure Virtual Desktop](administrative-template.md?tabs=group-policy-domain) available. +1. Follow the steps to make the [Administrative template for Azure Virtual Desktop](administrative-template.md?tabs=group-policy-domain) available in Group Policy. 1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain, then create or edit a policy that targets the computers providing a remote session you want to configure. |
virtual-desktop | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md | Make sure to check back here often to keep up with new updates. > [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop. +## September 2024 ++Here's what changed in September 2024: ++### Windows App is now available ++Windows App is now generally available on Windows, macOS, iOS, iPadOS, and web browsers, and in preview on Android. You can use it to connect to Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and remote PCs, securely connecting you to Windows devices and apps. To learn more about what each platform supports, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features?toc=admins%2Ftoc.json&pivots=azure-virtual-desktop). Windows App is now available through the appropriate store for each client platform, ensuring a smooth update process. ++For more information, see [What is Windows App?](/windows-app/overview) and [Get started with Windows App](/windows-app/get-started-connect-devices-desktops-apps?tabs=windows-avd%2Cwindows-w365%2Cwindows-devbox%2Cmacos-rds%2Cmacos-pc&pivots=azure-virtual-desktop). +++### Enabling HEVC GPU acceleration for Azure Virtual Desktop is now in preview ++High Efficiency Video Coding (HEVC/H.265) hardware acceleration is currently in preview. Azure Virtual Desktop supports graphics processing unit (GPU) acceleration for frame encoding, which results in an improved graphical experience when using the Remote Desktop Protocol (RDP) with a GPU-enabled virtual machine. GPU acceleration is crucial for delivering high-fidelity graphical experiences in graphics-intensive applications, such as those used by graphic designers, video editors, and 3D modelers. ++For more information, see [Enable GPU acceleration for Azure Virtual Desktop](graphics-enable-gpu-acceleration.md). ++ ## August 2024 Here's what changed in August 2024: |
virtual-network | Public Ip Address Prefix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md | Resource|Scenario|Steps| - You can't specify the set of IP addresses for the prefix (though you can [specify which IP you want from the prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix)). Azure gives the IP addresses for the prefix, based on the size that you specify. Additionally, all public IP addresses created from the prefix must exist in the same Azure region and subscription as the prefix. Addresses must be assigned to resources in the same region and subscription. -- You can create a prefix of up to 16 IP addresses for Microsoft-owned prefixes. Review [Network limits increase requests](../../azure-portal/supportability/networking-quota-requests.md) and [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) for more information if larger prefixes are required. Also note that there is no limit on the number of Public IP Prefixes per region, but the overall number of Public IP addresses per region is limited (each public IP prefix consumes that number of IPs from the public IP address quota for that region).+- You can create a prefix of up to 16 IP addresses for Microsoft-owned prefixes. Review [Network limits increase requests](/azure/azure-portal/supportability/networking-quota-requests) and [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) for more information if larger prefixes are required. Also note that there is no limit on the number of Public IP Prefixes per region, but the overall number of Public IP addresses per region is limited (each public IP prefix consumes that number of IPs from the public IP address quota for that region). - The size of the range can't be modified after the prefix has been created. |
virtual-network | Virtual Network Service Endpoints Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md | Service endpoints are available for the following Azure services and regions. Th **Public Preview** -- **[Azure Container Registry](../container-registry/container-registry-vnet.md)** (*Microsoft.ContainerRegistry*): Preview available in limited Azure regions where Azure Container Registry is available.+- **[Azure Container Registry](/azure/container-registry/container-registry-vnet)** (*Microsoft.ContainerRegistry*): Preview available in limited Azure regions where Azure Container Registry is available. For the most up-to-date notifications, check the [Azure Virtual Network updates](https://azure.microsoft.com/updates/?product=virtual-network) page. |
virtual-network | Virtual Networks Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md | Integrating Azure services with an Azure virtual network enables private access ## Limits -There are limits to the number of Azure resources that you can deploy. Most Azure networking limits are at the maximum values. However, you can [increase certain networking limits](../azure-portal/supportability/networking-quota-requests.md). For more information, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits). +There are limits to the number of Azure resources that you can deploy. Most Azure networking limits are at the maximum values. However, you can [increase certain networking limits](/azure/azure-portal/supportability/networking-quota-requests). For more information, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits). ## Virtual networks and availability zones |
virtual-wan | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md | The following features are currently in gated public preview. After working with |4| ExpressRoute ECMP Support | Today, ExpressRoute ECMP is not enabled by default for virtual hub deployments. When multiple ExpressRoute circuits are connected to a Virtual WAN hub, ECMP enables traffic from spoke virtual networks to on-premises over ExpressRoute to be distributed across all ExpressRoute circuits advertising the same on-premises routes. | | To enable ECMP for your Virtual WAN hub, please reach out to virtual-wan-ecmp@microsoft.com. | | 5| Virtual WAN hub address prefixes are not advertised to other Virtual WAN hubs in the same Virtual WAN.| You can't leverage Virtual WAN hub-to-hub full mesh routing capabilities to provide connectivity between NVA orchestration software deployed in a VNET or on-premises connected to a Virtual WAN hub to an Integrated NVA or SaaS solution deployed in a different Virtual WAN hub. | | If your NVA or SaaS orchestrator is deployed on-premises, connect that on-premises site to all Virtual WAN hubs with NVAs or SaaS solutions deployed in them. If your orchestrator is in an Azure VNET, manage NVAs or SaaS solutions using public IP. Support for Azure VNET orchestrators is on the roadmap.| |6| Configuring routing intent to route between connectivity and firewall NVAs in the same Virtual WAN Hub| Virtual WAN routing intent private routing policy does not support routing between a SD-WAN NVA and a Firewall NVA (or SaaS solution) deployed in the same Virtual hub.| | Deploy the connectivity and firewall integrated NVAs in two different hubs in the same Azure region. Alternatively, deploy the connectivity NVA to a spoke Virtual Network connected to your Virtual WAN Hub and leverage the [BGP peering](scenario-bgp-peering-hub.md).|-| 7| BGP between the Virtual WAN hub router and NVAs deployed in the Virtual WAN hub does not come up if the ASN used for BGP peering is updated post-deployment.|Delete and recreate the NVA with the correct ASN. | +| 7| BGP between the Virtual WAN hub router and NVAs deployed in the Virtual WAN hub does not come up if the ASN used for BGP peering is updated post-deployment.|Virtual Hub router expects NVA in the hub to use the ASN that was configured on the router when the NVA was first deployed. Updating the ASN associated with the NVA on the NVA resource does not properly register the new ASN with the Virtual Hub router so the router rejects BGP sessions from the NVA if the NVA OS is configured to use the new ASN. | |Delete and recreate the NVA in the Virtual WAN hub with the correct ASN.| +|8| Advertising default route (0.0.0.0/0) from on-premises (VPN, ExpressRoute, BGP endpoint) or statically configured on a Virtual Network connection is not supported for forced tunneling use cases.| The 0.0.0.0/0 route advertised from on-premises (or statically configured on a Virtual Network connection) is not applied to the Azure Firewall or other security solutions deployed in the Virtual WAN hub. Packets inspected by the security solution in the hub are routed directly to the internet, bypassing the route learnt from on-premises||Publish the default route from on-premises only in non-secure hub scenarios.| + ## Next steps |