Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md | Watch this video to learn how to configure monitoring for Azure AD B2C using Azu ## Deployment overview Azure AD B2C uses [Microsoft Entra monitoring](../active-directory/reports-monitoring/overview-monitoring-health.md). Unlike Microsoft Entra tenants, an Azure AD B2C tenant can't have a subscription associated with it. So, we need to take extra steps to enable the integration between Azure AD B2C and Log Analytics, which is where we send the logs.-To enable _Diagnostic settings_ in Microsoft Entra ID within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/overview.md) to [delegate a resource](../lighthouse/concepts/architecture.md), which allows your Azure AD B2C (the **Service Provider**) to manage a Microsoft Entra ID (the **Customer**) resource. +To enable _Diagnostic settings_ in Microsoft Entra ID within your Azure AD B2C tenant, you use [Azure Lighthouse](/azure/lighthouse/overview) to [delegate a resource](/azure/lighthouse/concepts/architecture), which allows your Azure AD B2C (the **Service Provider**) to manage a Microsoft Entra ID (the **Customer**) resource. > [!TIP] > Azure Lighthouse is typically used to manage resources for multiple customers. However, it can also be used to manage resources **within an enterprise that has multiple Microsoft Entra tenants of its own**, which is what we are doing here, except that we are only delegating the management of single resource group. To create the custom authorization and delegation in Azure Lighthouse, we use an 1. Sign in to the [Azure portal](https://portal.azure.com). 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra ID tenant from the **Directories + subscriptions** menu.-1. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](../lighthouse/how-to/onboard-customer.md#create-an-azure-resource-manager-template). +1. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](/azure/lighthouse/how-to/onboard-customer#create-an-azure-resource-manager-template). [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-ad-b2c%2Fsiem%2Fmaster%2Ftemplates%2FrgDelegatedResourceManagement.json) To create the custom authorization and delegation in Azure Lighthouse, we use an ] ``` -After you deploy the template, it can take a few minutes (typically no more than five) for the resource projection to complete. You can verify the deployment in your Microsoft Entra tenant and get the details of the resource projection. For more information, see [View and manage service providers](../lighthouse/how-to/view-manage-service-providers.md). +After you deploy the template, it can take a few minutes (typically no more than five) for the resource projection to complete. You can verify the deployment in your Microsoft Entra tenant and get the details of the resource projection. For more information, see [View and manage service providers](/azure/lighthouse/how-to/view-manage-service-providers). ## 4. Select your subscription |
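For context on the row above: the article deploys the Azure Lighthouse delegation template through the **Deploy to Azure** button, but the same template can also be deployed from the command line. The following is a minimal sketch only; the tenant ID, location, and parameter file are placeholders (assumptions), not values taken from the article.

```azurecli
# Illustrative only: deploy the Lighthouse delegation template at subscription
# scope in the tenant that holds the subscription you want to delegate from.
# The parameter file name and location are placeholders.
az login --tenant <entra-tenant-id>
az deployment sub create \
  --location westus2 \
  --template-uri https://raw.githubusercontent.com/azure-ad-b2c/siem/master/templates/rgDelegatedResourceManagement.json \
  --parameters @rgDelegatedResourceManagement.parameters.json
```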
api-management | Api Management Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md | API Management is offered in a variety of pricing tiers to meet the needs of dif API Management integrates with many complementary Azure services to create enterprise solutions, including: * **[Azure API Center](../api-center/overview.md)** to build a complete inventory of APIs in the organization - regardless of their type, lifecycle stage, or deployment location - for API discovery, reuse, and governance-* **[Copilot in Azure](../copilot/overview.md)** to help author API Management policies or explain already configured policies +* **[Copilot in Azure](/azure/copilot/overview)** to help author API Management policies or explain already configured policies * **[Azure Key Vault](/azure/key-vault/general/overview)** for secure safekeeping and management of [client certificates](api-management-howto-mutual-certificates.md) and [secrets](api-management-howto-properties.md) * **[Azure Monitor](api-management-howto-use-azure-monitor.md)** for logging, reporting, and alerting on management operations, systems events, and API requests * **[Application Insights](api-management-howto-app-insights.md)** for live metrics, end-to-end tracing, and troubleshooting |
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: + [Policy overview](api-management-howto-policies.md) + [Set or edit policies](set-edit-policies.md) + [Policy expressions](api-management-policy-expressions.md)-+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) > [!IMPORTANT] > [Limit call rate by subscription](rate-limit-policy.md) and [Set usage quota by subscription](quota-policy.md) have a dependency on the subscription key. A subscription key isn't required when other policies are applied. |
api-management | Api Management Policy Expressions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md | For more information working with policies, see: + [Tutorial: Transform and protect APIs](transform-api.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings + [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) -+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) For more information: |
api-management | How To Configure Local Metrics Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md | The self-hosted gateway supports [StatsD](https://github.com/statsd/statsd), whi The following sample YAML configuration deploys StatsD and Prometheus to the Kubernetes cluster where a self-hosted gateway is deployed. It also creates a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) for each. The self-hosted gateway then publishes metrics to the StatsD Service. We'll access the Prometheus dashboard via its Service. > [!NOTE]-> The following example pulls public container images from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the images in a private Azure container registry. [Learn more about working with public images.](../container-registry/buffer-gate-public-content.md) +> The following example pulls public container images from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the images in a private Azure container registry. [Learn more about working with public images.](/azure/container-registry/buffer-gate-public-content) ```yaml apiVersion: v1 |
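As a companion to the sample YAML referenced in the row above, here is a hedged sketch of applying such a manifest and reaching the Prometheus dashboard through its Service. The file name, namespace, and Service name/port are assumptions and depend on the manifest you actually use.

```bash
# Illustrative only: apply the combined StatsD + Prometheus manifest
# (file name and namespace are placeholders).
kubectl apply -f statsd-prometheus.yaml --namespace apim-gateway

# Confirm the Services the manifest creates; names depend on the manifest.
kubectl get services --namespace apim-gateway

# Forward the Prometheus Service to localhost so the dashboard is reachable
# at http://localhost:9090 (adjust the Service name and port to match).
kubectl port-forward service/prometheus 9090:9090 --namespace apim-gateway
```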
api-management | How To Deploy Self Hosted Gateway Azure Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md | Last updated 06/12/2023 [!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)] -With the integration between Azure API Management and [Azure Arc on Kubernetes](../azure-arc/kubernetes/overview.md), you can deploy the API Management gateway component as an [extension in an Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/extensions.md). +With the integration between Azure API Management and [Azure Arc on Kubernetes](/azure/azure-arc/kubernetes/overview), you can deploy the API Management gateway component as an [extension in an Azure Arc-enabled Kubernetes cluster](/azure/azure-arc/kubernetes/extensions). Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster expands API Management support for hybrid and multicloud environments. Enable the deployment using a cluster extension to make managing and applying policies to your Azure Arc-enabled cluster a consistent experience. Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster ## Prerequisites -* [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within a supported Azure Arc region. +* [Connect your Kubernetes cluster](/azure/azure-arc/kubernetes/quickstart-connect-cluster) within a supported Azure Arc region. * Install the `k8s-extension` Azure CLI extension: ```azurecli To enable monitoring of the self-hosted gateway, configure the following Log Ana * To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).-* Discover all [Azure Arc-enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md). -* Learn more about [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). +* Discover all [Azure Arc-enabled Kubernetes extensions](/azure/azure-arc/kubernetes/extensions). +* Learn more about [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview). * Learn more about guidance to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md). * For configuration options, see the self-hosted gateway extension [reference](self-hosted-gateway-arc-reference.md). |
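Building on the `k8s-extension` prerequisite mentioned in the row above, the following is a sketch, not the article's exact command, of installing the gateway extension on a connected cluster. The extension type and the configuration setting names shown here are assumptions; verify them against the self-hosted gateway extension reference linked in the row.

```azurecli
# Illustrative only: deploy the API Management self-hosted gateway as a
# cluster extension on an Azure Arc-enabled (connected) cluster.
# Setting names and values are placeholders to confirm against the reference.
az k8s-extension create \
  --resource-group <rg-name> \
  --cluster-name <connected-cluster-name> \
  --cluster-type connectedClusters \
  --name apim-gateway \
  --extension-type Microsoft.ApiManagement.Gateway \
  --scope namespace \
  --target-namespace apim-gateway \
  --configuration-settings gateway.endpoint='<gateway-configuration-url>' service.type='LoadBalancer' \
  --configuration-protected-settings gateway.authKey='<gateway-token>'
```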
api-management | How To Deploy Self Hosted Gateway Azure Kubernetes Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md | This article provides the steps for deploying self-hosted gateway component of A [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] > [!NOTE]-> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). +> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](/azure/azure-arc/kubernetes/extensions). ## Prerequisites |
api-management | How To Deploy Self Hosted Gateway Kubernetes Helm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md | This article provides the steps for deploying self-hosted gateway component of A [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] > [!NOTE]-> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). +> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](/azure/azure-arc/kubernetes/extensions). ## Prerequisites |
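A hedged Helm sketch for the row above: the chart repository URL, chart name, and value keys are assumptions and should be checked against the article's actual steps before use.

```bash
# Illustrative only: install the self-hosted gateway with Helm.
# Repo URL, chart name, and value keys are assumptions - confirm them
# against the deployment guidance referenced in this row.
helm repo add azure-apim-gateway https://azure.github.io/api-management-self-hosted-gateway/helm-charts/
helm repo update
helm install apim-gateway azure-apim-gateway/azure-api-management-gateway \
  --set gateway.configuration.uri='<gateway-configuration-url>' \
  --set gateway.auth.key='<gateway-token>'
```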
api-management | How To Deploy Self Hosted Gateway Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md | This article describes the steps for deploying the self-hosted gateway component [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] > [!NOTE]-> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). +> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](/azure/azure-arc/kubernetes/extensions). ## Prerequisites |
api-management | Policy Fragments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md | For more information about working with policies, see: + [Set or edit policies](set-edit-policies.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements + [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) -+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) |
api-management | Set Edit Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md | For more information about working with policies, see: + [Set or edit policies](set-edit-policies.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings + [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) -+ [Author policies using Microsoft Copilot in Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) ++ [Author policies using Microsoft Copilot in Azure](/azure/copilot/author-api-management-policies?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) |
api-management | Upgrade And Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md | If you're scaling from or to the **Developer** tier, there will be downtime. Oth ## Compute isolation -If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, [create a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). ## Related content |
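To make the scaling discussion in the row above concrete, here is a hedged Azure CLI sketch of creating an instance in a chosen tier and then adding units. Names and values are placeholders, and tier availability (including **Isolated**) depends on your subscription and the support request described in the article.

```azurecli
# Illustrative only: create an API Management instance in a chosen tier.
az apim create \
  --name <apim-name> \
  --resource-group <rg-name> \
  --publisher-name Contoso \
  --publisher-email admin@contoso.com \
  --sku-name Premium \
  --sku-capacity 1

# Scale out to two units using a generic property update (assumed to be
# supported here; the property path follows the standard ARM sku shape).
az apim update --name <apim-name> --resource-group <rg-name> --set sku.capacity=2
```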
app-service | App Service Sql Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md | To run the create Azure resources workflow: ## Build, push, and deploy your image -The build, push, and deploy workflow builds a container with the latest app changes, pushes the container to [Azure Container Registry](../container-registry/index.yml), and updates the web application staging slot to point to the latest container pushed. The workflow contains a build job and a deploy job: +The build, push, and deploy workflow builds a container with the latest app changes, pushes the container to [Azure Container Registry](/azure/container-registry/), and updates the web application staging slot to point to the latest container pushed. The workflow contains a build job and a deploy job: - The build job checks out source code with the [Checkout action](https://github.com/marketplace/actions/checkout). The job then uses the [Docker login action](https://github.com/marketplace/actions/docker-login) and a custom script to authenticate with Azure Container Registry, build a container image, and deploy it to Azure Container Registry. - The deployment job logs into Azure with the [Azure Login action](https://github.com/marketplace/actions/azure-login) and gathers environment and Azure resource information. The job then updates Web App Settings with the [Azure App Service Settings action](https://github.com/marketplace/actions/azure-app-service-settings) and deploys to an App Service staging slot with the [Azure Web Deploy action](https://github.com/marketplace/actions/azure-webapp). Last, the job runs a custom script to update the SQL database and swaps the staging slot to production. |
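The deployment flow described above ends with a slot swap. For reference, a minimal CLI equivalent of that final step looks like the following; the app, resource group, and slot names are placeholders.

```azurecli
# Illustrative only: swap the staging slot into production after the
# container deployment has been verified on the staging slot.
az webapp deployment slot swap \
  --name <app-name> \
  --resource-group <rg-name> \
  --slot staging \
  --target-slot production
```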
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | description: Create a free certificate, import an App Service certificate, impor tags: buy-ssl-certificates Previously updated : 07/28/2023 Last updated : 09/19/2024 -You can add digital security certificates to [use in your application code](configure-ssl-certificate-in-code.md) or to [secure custom DNS names](configure-ssl-bindings.md) in [Azure App Service](overview.md), which provides a highly scalable, self-patching web hosting service. Currently called Transport Layer Security (TLS) certificates, also previously known as Secure Socket Layer (SSL) certificates, these private or public certificates help you secure internet connections by encrypting data sent between your browser, websites that you visit, and the website server. +You can add digital security certificates to [use in your application code](configure-ssl-certificate-in-code.md) or to [help secure custom DNS names](configure-ssl-bindings.md) in [Azure App Service](overview.md), which provides a highly scalable, self-patching web hosting service. Currently called Transport Layer Security (TLS) certificates, also previously known as Secure Socket Layer (SSL) certificates, these private or public certificates help you secure internet connections by encrypting data sent between your browser, websites that you visit, and the website server. The following table lists the options for you to add certificates in App Service: |Option|Description| |-|-|-| Create a free App Service managed certificate | A private certificate that's free of charge and easy to use if you just need to secure your [custom domain](app-service-web-tutorial-custom-domain.md) in App Service. | +| Create a free App Service managed certificate | A private certificate that's free of charge and easy to use if you just need to improve security for your [custom domain](app-service-web-tutorial-custom-domain.md) in App Service. | | Import an App Service certificate | A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options. | | Import a certificate from Key Vault | Useful if you use [Azure Key Vault](/azure/key-vault/) to manage your [PKCS12 certificates](https://wikipedia.org/wiki/PKCS_12). See [Private certificate requirements](#private-certificate-requirements). | | Upload a private certificate | If you already have a private certificate from a third-party provider, you can upload it. See [Private certificate requirements](#private-certificate-requirements). | The following table lists the options for you to add certificates in App Service - Map the domain where you want the certificate to App Service. For information, see [Tutorial: Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md). - - For a root domain (like contoso.com), make sure your app doesn't have any [IP restrictions](app-service-ip-restrictions.md) configured. Both certificate creation and its periodic renewal for a root domain depends on your app being reachable from the internet. + - For a root domain (like contoso.com), make sure your app doesn't have any [IP restrictions](app-service-ip-restrictions.md) configured. Both certificate creation and its periodic renewal for a root domain depend on your app being reachable from the internet. 
## Private certificate requirements The [free App Service managed certificate](#create-a-free-managed-certificate) and the [App Service certificate](configure-ssl-app-service-certificate.md) already satisfy the requirements of App Service. If you choose to upload or import a private certificate to App Service, your certificate must meet the following requirements: -* Exported as a [password-protected PFX file](https://en.wikipedia.org/w/index.php?title=X.509§ion=4#Certificate_filename_extensions), encrypted using triple DES. +* Exported as a [password-protected PFX file](https://en.wikipedia.org/w/index.php?title=X.509§ion=4#Certificate_filename_extensions), encrypted using triple DES * Contains private key at least 2048 bits long-* Contains all intermediate certificates and the root certificate in the certificate chain. +* Contains all intermediate certificates and the root certificate in the certificate chain -To secure a custom domain in a TLS binding, the certificate has more requirements: +If you want to help secure a custom domain in a TLS binding, the certificate must meet these additional requirements: * Contains an [Extended Key Usage](https://en.wikipedia.org/w/index.php?title=X.509§ion=4#Extensions_informing_a_specific_usage_of_a_certificate) for server authentication (OID = 1.3.6.1.5.5.7.3.1) * Signed by a trusted certificate authority To secure a custom domain in a TLS binding, the certificate has more requirement ## Create a free managed certificate -The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. Without any action from you, this TLS/SSL server certificate is fully managed by App Service and is automatically renewed continuously in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest. +The free App Service managed certificate is a turn-key solution for helping to secure your custom DNS name in App Service. Without any action from you, this TLS/SSL server certificate is fully managed by App Service and is automatically renewed continuously in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest. > [!IMPORTANT] > Before you create a free managed certificate, make sure you have [met the prerequisites](#prerequisites) for your app. The free certificate comes with the following limitations: - Doesn't support usage as a client certificate by using certificate thumbprint, which is planned for deprecation and removal. - Doesn't support private DNS. - Isn't exportable.-- Isn't supported in an App Service Environment (ASE).+- Isn't supported in an App Service Environment. - Only supports alphanumeric characters, dashes (-), and periods (.). - Only custom domains of length up to 64 characters are supported. The free certificate comes with the following limitations: - Must have an A record pointing to your web app's IP address. - Must be on apps that are publicly accessible. 
- Isn't supported with root domains that are integrated with Traffic Manager.-- Must meet all the above for successful certificate issuances and renewals.+- Must meet all of the above for successful certificate issuances and renewals. ### [Subdomain](#tab/subdomain) - Must have CNAME mapped _directly_ to `<app-name>.azurewebsites.net` or [trafficmanager.net](configure-domain-traffic-manager.md#enable-custom-domain). Mapping to an intermediate CNAME value blocks certificate issuance and renewal. The free certificate comes with the following limitations: When the operation completes, the certificate appears in the **Managed certificates** list. - :::image type="content" source="media/configure-ssl-certificate/create-free-cert-finished.png" alt-text="Screenshot of 'Managed certificates' pane with newly created certificate listed."::: + :::image type="content" source="media/configure-ssl-certificate/create-free-cert-finished.png" alt-text="Screenshot of the Managed certificates pane with the new certificate listed."::: -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To provide security for a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Import an App Service certificate -To import an App Service certificate, first [buy and configure an App Service certificate](configure-ssl-app-service-certificate.md#buy-and-configure-an-app-service-certificate), then follow the steps here. +To import an App Service certificate, first [buy and configure an App Service certificate](configure-ssl-app-service-certificate.md#buy-and-configure-an-app-service-certificate), and then follow the steps here. 1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**. To import an App Service certificate, first [buy and configure an App Service ce :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of 'Bring your own certificates (.pfx)' pane with purchased certificate listed."::: -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To help secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Import a certificate from Key Vault By default, the App Service resource provider doesn't have access to your key va |--|--|--| | **Microsoft Azure App Service** or **Microsoft.Azure.WebSites** | - `abfa0a7c-a6b6-4736-8310-5855508787cd` for public Azure cloud environment <br><br>- `6a02c803-dafd-4136-b4c3-5a6f318b4714` for Azure Government cloud environment | Certificate User | -The service principal app ID or assignee value is the ID for App Service resource provider. 
To learn how to authorize key vault permissions for App Service resource provider using access policy refer to the [provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control documentation](/azure/key-vault/general/rbac-guide?tabs=azure-portal#key-vault-scope-role-assignment). +The service principal app ID or assignee value is the ID for the App Service resource provider. To learn how to authorize key vault permissions for the App Service resource provider using an access policy, see the [provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control documentation](/azure/key-vault/general/rbac-guide?tabs=azure-portal#key-vault-scope-role-assignment). > [!NOTE]-> Do not delete these RBAC permissions from key vault, otherwise App Service will not be able to sync your web app with the latest key vault certificate version. +> Do not delete these RBAC permissions from key vault. If you do, App Service will not be able to sync your web app with the latest key vault certificate version. ### [Access policy permissions](#tab/accesspolicy) The service principal app ID or assignee value is the ID for App Service resourc |--|--|--|--| | **Microsoft Azure App Service** or **Microsoft.Azure.WebSites** | - `abfa0a7c-a6b6-4736-8310-5855508787cd` for public Azure cloud environment <br><br>- `6a02c803-dafd-4136-b4c3-5a6f318b4714` for Azure Government cloud environment | Get | Get | -The service principal app ID or assignee value is the ID for App Service resource provider. To learn how to authorize key vault permissions for App Service resource provider using access policy refer to the [assign a Key Vault access policy documentation](/azure/key-vault/general/assign-access-policy?tabs=azure-portal). +The service principal app ID or assignee value is the ID for the App Service resource provider. To learn how to authorize key vault permissions for the App Service resource provider using an access policy, see the [assign a Key Vault access policy documentation](/azure/key-vault/general/assign-access-policy?tabs=azure-portal). > [!NOTE]-> Do not delete these access policy permissions from key vault, otherwise App Service will not be able to sync your web app with the latest key vault certificate version. +> Do not delete these access policy permissions from key vault. If you do, App Service will not be able to sync your web app with the latest key vault certificate version. The service principal app ID or assignee value is the ID for App Service resourc 1. Select **Select key vault certificate**. - :::image type="content" source="media/configure-ssl-certificate/import-key-vault-cert.png" alt-text="Screenshot of app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Import from Key Vault' selected"::: + :::image type="content" source="media/configure-ssl-certificate/import-key-vault-cert.png" alt-text="Screenshot of the app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Import from Key Vault' selected."::: 1. To help you select the certificate, use the following table: The service principal app ID or assignee value is the ID for App Service resourc | **Key vault** | The key vault that has the certificate you want to import. | | **Certificate** | From this list, select a PKCS12 certificate that's in the vault. All PKCS12 certificates in the vault are listed with their thumbprints, but not all are supported in App Service. | -1. 
When finished with your selection, select **Select**, **Validate**, then **Add**. +1. When finished with your selection, select **Select**, **Validate**, and then **Add**. When the operation completes, the certificate appears in the **Bring your own certificates** list. If the import fails with an error, the certificate doesn't meet the [requirements for App Service](#private-certificate-requirements). The service principal app ID or assignee value is the ID for App Service resourc > [!NOTE] > If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24 hours. -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To helps secure custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Upload a private certificate If your certificate authority gives you multiple certificates in the certificate --END CERTIFICATE-- ``` -#### Export merged private certificate to PFX +#### Export the merged private certificate to PFX Now, export your merged TLS/SSL certificate with the private key that was used to generate your certificate request. If you generated your certificate request using OpenSSL, then you created a private key file. > [!NOTE]-> OpenSSL v3 changed default cipher from 3DES to AES256, but this can be overridden on the command line -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg SHA1. -> OpenSSL v1 uses 3DES as default, so the PFX files generated are supported without any special modifications. +> OpenSSL v3 changed the default cipher from 3DES to AES256, but this can be overridden on the command line: -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg SHA1. +> OpenSSL v1 uses 3DES as the default, so the PFX files generated are supported without any special modifications. 1. To export your certificate to a PFX file, run the following command, but replace the placeholders _<private-key-file>_ and _<merged-certificate-file>_ with the paths to your private key and your merged certificate file. Now, export your merged TLS/SSL certificate with the private key that was used t 1. If you used IIS or _Certreq.exe_ to generate your certificate request, install the certificate to your local computer, and then [export the certificate to a PFX file](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754329(v=ws.11)). -#### Upload certificate to App Service +#### Upload the certificate to App Service You're now ready upload the certificate to App Service. You're now ready upload the certificate to App Service. 1. From your app's navigation menu, select **Certificates** > **Bring your own certificates (.pfx)** > **Upload Certificate**. - :::image type="content" source="media/configure-ssl-certificate/upload-private-cert.png" alt-text="Screenshot of 'Certificates', 'Bring your own certificates (.pfx)', 'Upload Certificate' selected."::: + :::image type="content" source="media/configure-ssl-certificate/upload-private-cert.png" alt-text="Screenshot of the app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Upload Certificate' selected."::: 1. To help you upload the .pfx certificate, use the following table: You're now ready upload the certificate to App Service. 
| **Certificate password** | Enter the password that you created when you exported the PFX file. | | **Certificate friendly name** | The certificate name that will be shown in your web app. | -1. When finished with your selection, select **Select**, **Validate**, then **Add**. +1. When finished with your selection, select **Select**, **Validate**, and then **Add**. When the operation completes, the certificate appears in the **Bring your own certificates** list. - :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of 'Bring your own certificates' pane with uploaded certificate listed."::: + :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of the 'Bring your own certificates' pane with the uploaded certificate listed."::: -1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). +1. To provide security for a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). ## Upload a public certificate Public certificates are supported in the *.cer* format. > [!NOTE]-> After you upload a public certificate to an app, it is only accessible by the app it is uploaded to. Public certificates must be uploaded to each individual web app that needs access. For App Service Environment specific scenarios, refer to [the documentation for certificates and the App Service Environment](../app-service/environment/overview-certificates.md) +> After you upload a public certificate to an app, it's only accessible by the app it's uploaded to. Public certificates must be uploaded to each individual web app that needs access. For App Service Environment specific scenarios, refer to [the documentation for certificates and the App Service Environment](../app-service/environment/overview-certificates.md). > > You can upload up to 1000 public certificates per App Service Plan. Public certificates are supported in the *.cer* format. 1. When you're done, select **Add**. - :::image type="content" source="media/configure-ssl-certificate/upload-public-cert.png" alt-text="Screenshot of name and public key certificate to upload."::: + :::image type="content" source="media/configure-ssl-certificate/upload-public-cert.png" alt-text="Screenshot of the app management page. It shows the public key certificate to upload and its name."::: 1. After the certificate is uploaded, copy the certificate thumbprint, and then review [Make the certificate accessible](configure-ssl-certificate-in-code.md#make-the-certificate-accessible). Public certificates are supported in the *.cer* format. Before a certificate expires, make sure to add the renewed certificate to App Service, and update any certificate bindings where the process depends on the certificate type. For example, a [certificate imported from Key Vault](#import-a-certificate-from-key-vault), including an [App Service certificate](configure-ssl-app-service-certificate.md), automatically syncs to App Service every 24 hours and updates the TLS/SSL binding when you renew the certificate. For an [uploaded certificate](#upload-a-private-certificate), there's no automatic binding update. 
Based on your scenario, review the corresponding section: -- [Renew an uploaded certificate](#renew-uploaded-certificate)+- [Renew an uploaded certificate](#renew-an-uploaded-certificate) - [Renew an App Service certificate](configure-ssl-app-service-certificate.md#renew-an-app-service-certificate) - [Renew a certificate imported from Key Vault](#renew-a-certificate-imported-from-key-vault) -#### Renew uploaded certificate +#### Renew an uploaded certificate -When you replace an expiring certificate, the way you update the certificate binding with the new certificate might adversely affect user experience. For example, your inbound IP address might change when you delete a binding, even if that binding is IP-based. This result is especially impactful when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in the specified sequence: +When you replace an expiring certificate, the way you update the certificate binding with the new certificate might adversely affect the user experience. For example, your inbound IP address might change when you delete a binding, even if that binding is IP-based. This result is especially impactful when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in the specified sequence: 1. [Upload the new certificate](#upload-a-private-certificate). -1. Go to the **Custom domains** page for your app, select the **...** actions button, and select **Update binding**. +1. Go to the **Custom domains** page for your app, select the **...** button, and then select **Update binding**. -1. Select the new certificate and select **Update**. +1. Select the new certificate and then select **Update**. 1. Delete the existing certificate. When you replace an expiring certificate, the way you update the certificate bin To renew a certificate that you imported into App Service from Key Vault, review [Renew your Azure Key Vault certificate](/azure/key-vault/certificates/overview-renew-certificate). -After the certificate renews inside your key vault, App Service automatically syncs the new certificate, and updates any applicable certificate binding within 24 hours. To sync manually, follow these steps: +After the certificate renews in your key vault, App Service automatically syncs the new certificate and updates any applicable certificate binding within 24 hours. To sync manually, follow these steps: 1. Go to your app's **Certificate** page. -1. Under **Bring your own certificates (.pfx)**, select the **...** details button for the imported key vault certificate, and then select **Sync**. +1. Under **Bring your own certificates (.pfx)**, select the **...** button for the imported key vault certificate, and then select **Sync**. ## Frequently asked questions -### How can I automate adding a bring-your-owncertificate to an app? +### How can I automate adding a bring-your-own certificate to an app? 
- [Azure CLI: Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md)-- [Azure PowerShell Bind a custom TLS/SSL certificate to a web app using PowerShell](scripts/powershell-configure-ssl-certificate.md)+- [Azure PowerShell: Bind a custom TLS/SSL certificate to a web app using PowerShell](scripts/powershell-configure-ssl-certificate.md) ### Can I use a private CA (certificate authority) certificate for inbound TLS on my app?-You can use a private CA certificate for inbound TLS in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). This isn't possible in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). +You can use a private CA certificate for inbound TLS in [App Service Environment version 3](./environment/overview-certificates.md). This isn't possible in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). -### Can I make outbound calls using a private CA (certificate authority) client certificate from my app? -This is only supported for Windows container apps in multi-tenant App Service. In addition, you can make outbound calls using a private CA client certificate with both code-based and container-based apps in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). +### Can I make outbound calls using a private CA client certificate from my app? +This is only supported for Windows container apps in multi-tenant App Service. In addition, you can make outbound calls using a private CA client certificate with both code-based and container-based apps in [App Service Environment version 3](./environment/overview-certificates.md). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). -### Can I load a private CA (certificate authority) certificate in my App Service Trusted Root Store? -You can load your own CA certificate into the Trusted Root Store in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). You can't modify the list of Trusted Root Certificates in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md). +### Can I load a private CA certificate in my App Service Trusted Root Store? +You can load your own CA certificate into the Trusted Root Store in [App Service Environment version 3](./environment/overview-certificates.md). You can't modify the list of Trusted Root Certificates in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md).
## More resources You can load your own CA certificate into the Trusted Root Store in an [App Serv * [Enforce HTTPS](configure-ssl-bindings.md#enforce-https) * [Enforce TLS 1.1/1.2](configure-ssl-bindings.md#enforce-tls-versions) * [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md)-* [FAQ : App Service Certificates](./faq-configuration-and-management.yml) +* [FAQ: App Service Certificates](./faq-configuration-and-management.yml) |
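Tying together the export, upload, and binding steps covered in the row above, the following is a hedged end-to-end sketch using OpenSSL and the Azure CLI. File names, the password, and the app and resource group names are placeholders; the domain must already be mapped to the app as the article requires.

```bash
# Illustrative only: export a merged certificate plus private key to PFX,
# upload it to the app, and bind it to the custom domain with SNI.
# With OpenSSL v3, the article notes you can force 3DES with:
#   -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg SHA1
openssl pkcs12 -export \
  -out appservice.pfx \
  -inkey private-key.pem \
  -in merged-certificate.pem

# Upload the PFX and capture the thumbprint of the stored certificate.
thumbprint=$(az webapp config ssl upload \
  --name <app-name> \
  --resource-group <rg-name> \
  --certificate-file appservice.pfx \
  --certificate-password '<pfx-password>' \
  --query thumbprint --output tsv)

# Create the TLS binding for the custom domain already mapped to the app.
az webapp config ssl bind \
  --name <app-name> \
  --resource-group <rg-name> \
  --certificate-thumbprint "$thumbprint" \
  --ssl-type SNI
```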
app-service | Deploy Ci Cd Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ci-cd-custom-container.md | When you enable this option, App Service adds a webhook to your repository in Az ::: zone pivot="container-linux" > [!NOTE] > Support for multi-container (Docker Compose) apps is limited: -> - For Azure Container Registry, App Service creates a webhook in the selected registry with the registry as the scope. A `docker push` to any repository in the registry (including the ones not referenced by your Docker Compose file) triggers an app restart. You may want to [modify the webhook](../container-registry/container-registry-webhook.md) to a narrower scope. +> - For Azure Container Registry, App Service creates a webhook in the selected registry with the registry as the scope. A `docker push` to any repository in the registry (including the ones not referenced by your Docker Compose file) triggers an app restart. You may want to [modify the webhook](/azure/container-registry/container-registry-webhook) to a narrower scope. > - Docker Hub doesn't support webhooks at the registry level. You must **add** the webhooks manually to the images specified in your Docker Compose file. ::: zone-end |
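Related to the webhook-scope tip in the row above, here is a hedged sketch of narrowing an Azure Container Registry webhook to a single repository so pushes to unrelated repositories don't restart the multi-container app. The registry, webhook, and repository names are placeholders.

```azurecli
# Illustrative only: scope an existing registry webhook to one repository:tag.
az acr webhook update \
  --registry <acr-name> \
  --name <webhook-name> \
  --scope 'myapp/web:latest'

# Review the webhook's current actions and scope.
az acr webhook show --registry <acr-name> --name <webhook-name>
```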
app-service | Manage Create Arc Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md | Last updated 03/24/2023 # Set up an Azure Arc-enabled Kubernetes cluster to run App Service, Functions, and Logic Apps (Preview) -If you have an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md), you can use it to create an [App Service enabled custom location](overview-arc-integration.md) and deploy web apps, function apps, and logic apps to it. +If you have an [Azure Arc-enabled Kubernetes cluster](/azure/azure-arc/kubernetes/overview), you can use it to create an [App Service enabled custom location](overview-arc-integration.md) and deploy web apps, function apps, and logic apps to it. Azure Arc-enabled Kubernetes lets you make your on-premises or cloud Kubernetes cluster visible to App Service, Functions, and Logic Apps in Azure. You can create an app and deploy to it just like another Azure region. az extension add --upgrade --yes --name appservice-kube ## Create a connected cluster > [!NOTE]-> This tutorial uses [Azure Kubernetes Service (AKS)](/azure/aks/) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you will likely not want to enable Azure Arc on an AKS cluster as it is already managed in Azure. The steps below will help you get started understanding the service, but for production deployments, they should be viewed as illustrative, not prescriptive. See [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) for general instructions on creating an Azure Arc-enabled Kubernetes cluster. +> This tutorial uses [Azure Kubernetes Service (AKS)](/azure/aks/) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you will likely not want to enable Azure Arc on an AKS cluster as it is already managed in Azure. The steps below will help you get started understanding the service, but for production deployments, they should be viewed as illustrative, not prescriptive. See [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](/azure/azure-arc/kubernetes/quickstart-connect-cluster) for general instructions on creating an Azure Arc-enabled Kubernetes cluster. 1. Create a cluster in Azure Kubernetes Service with a public IP address. Replace `<group-name>` with the resource group name you want. You can learn more about these pods and their role in the system from [Pods crea ## Create a custom location -The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is used to assign the App Service Kubernetes environment. +The [custom location](/azure/azure-arc/kubernetes/custom-locations) in Azure is used to assign the App Service Kubernetes environment. <!-- https://github.com/MicrosoftDocs/azure-docs-pr/pull/156618 --> The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is u <!-- --kubeconfig ~/.kube/config # needed for non-Azure --> > [!NOTE]- > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](../azure-arc/kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster). This is required if logged into the CLI using a Service Principal or if you are logged in with a Microsoft Entra user with restricted permissions on the cluster resource. 
+ > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](/azure/azure-arc/kubernetes/custom-locations#enable-custom-locations-on-your-cluster). This is required if logged into the CLI using a Service Principal or if you are logged in with a Microsoft Entra user with restricted permissions on the cluster resource. > 3. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, run it again after a minute. |
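Following the custom-location step described above, here is a condensed sketch of the commands involved. Names and resource IDs are placeholders, and the article's full sequence includes additional steps (such as installing the App Service extension) between these two commands.

```azurecli
# Illustrative only: connect a cluster to Azure Arc, then create the custom
# location that the App Service Kubernetes environment is assigned to.
az connectedk8s connect --name <cluster-name> --resource-group <rg-name>

az customlocation create \
  --name <custom-location-name> \
  --resource-group <rg-name> \
  --namespace <kube-namespace> \
  --host-resource-id <connected-cluster-resource-id> \
  --cluster-extension-ids <app-service-extension-resource-id>
```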
app-service | Overview Arc Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md | You can run App Service, Functions, and Logic Apps on an Azure Arc-enabled Kuber In most cases, app developers need to know nothing more than how to deploy to the correct Azure region that represents the deployed Kubernetes environment. For operators who provide the environment and maintain the underlying Kubernetes infrastructure, you must be aware of the following Azure resources: -- The connected cluster, which is an Azure projection of your Kubernetes infrastructure. For more information, see [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md).-- A cluster extension, which is a subresource of the connected cluster resource. The App Service extension [installs the required pods into your connected cluster](#pods-created-by-the-app-service-extension). For more information about cluster extensions, see [Cluster extensions on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-extensions.md).-- A custom location, which bundles together a group of extensions and maps them to a namespace for created resources. For more information, see [Custom locations on top of Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/conceptual-custom-locations.md).+- The connected cluster, which is an Azure projection of your Kubernetes infrastructure. For more information, see [What is Azure Arc-enabled Kubernetes?](/azure/azure-arc/kubernetes/overview). +- A cluster extension, which is a subresource of the connected cluster resource. The App Service extension [installs the required pods into your connected cluster](#pods-created-by-the-app-service-extension). For more information about cluster extensions, see [Cluster extensions on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-extensions). +- A custom location, which bundles together a group of extensions and maps them to a namespace for created resources. For more information, see [Custom locations on top of Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/conceptual-custom-locations). - An App Service Kubernetes environment, which enables configuration common across apps but not related to cluster operations. Conceptually, it's deployed into the custom location resource, and app developers create apps into this environment. This resource is described in greater detail in [App Service Kubernetes environment](#app-service-kubernetes-environment). ## Public preview limitations |
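To inspect the three resource types called out in the row above from the CLI, a brief hedged example follows; resource names are placeholders.

```azurecli
# Illustrative only: view the connected cluster, its installed cluster
# extensions, and the custom location that apps are deployed into.
az connectedk8s show --name <cluster-name> --resource-group <rg-name>
az k8s-extension list --cluster-name <cluster-name> --cluster-type connectedClusters --resource-group <rg-name>
az customlocation show --name <custom-location-name> --resource-group <rg-name>
```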
app-service | Routine Maintenance Downtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/routine-maintenance-downtime.md | + + Title: Routine maintenance, restarts, and downtime +description: Learn about common reasons for restarts and downtime during Routine Maintenance and options to minimize disruptions. ++++ Last updated : 09/10/2024+++# Routine maintenance for Azure App Service, restarts, and downtime +++Azure App Service is a Platform as a Service (PaaS) for hosting web applications, REST APIs, and mobile back ends. One of the benefits of the offering is that planned maintenance is performed behind the scenes. Our customers can focus on deploying, running, and maintaining their application code instead of worrying about maintenance activities for the underlying infrastructure. Azure App Service maintenance is a robust process designed to avoid or minimize downtime to hosted applications. This process remains largely invisible to the users of hosted applications. However, our customers are often curious if downtime that they experience is a result of our planned maintenance, especially if they seem to coincide in time. ++## Background ++Our planned maintenance mechanism revolves around the architecture of the scale units that host the servers on which deployed applications run. Any given scale unit contains several different types of roles that all work together. The two roles that are most relevant to our planned maintenance update mechanism are the Worker and File Server roles. For a more detailed description of all the different roles and other details about the App Service architecture, review [Inside the Azure App Service Architecture](/archive/msdn-magazine/2017/february/azure-inside-the-azure-app-service-architecture) + +There are different ways that an update strategy could be designed and those different designs would each have their own benefits and downsides. One of the strategies that we use for major updates is that these updates don't run on servers / roles that are currently used by our customers. Instead, our update process updates instances in waves and the instances undergoing updates aren't used by applications. Instances being used by applications are gradually swapped out and replaced by updated instances. The resulting effect on an application is that the application experiences a start, or restart. From a statistical perspective and from empirical observations, applications restarts are much less disruptive than performing maintenance on servers that are actively being used by applications. ++## Instance update details + +There are two slightly different scenarios that play out during every Planned Maintenance cycle. These two scenarios are related to the updates performed on the Worker and File Server roles. At a high level, both these scenarios appear similar from an end-user perspective but there are some important differences that can sometimes cause some unexpected behavior. + +When a File Server role needs to be updated, the storage volume used by the application needs to be migrated from one File Server instance to another. During this change, an updated File Server role is added to the application. This causes a worker process restart simultaneously on all worker instances in that App Service Plan. The worker process restart is overlapped - the update mechanism starts the new worker process first, lets it complete its start-up, sends new requests to the new worker process. 
Once the new worker process is responding, existing requests have 30 seconds by default to complete in the old worker process, and then the old worker process is stopped. + +When a Worker role is updated, the update mechanism similarly swaps in a new, updated Worker role. The worker is swapped as follows: an updated Worker is added to the App Service plan (ASP), the application is started on the new Worker, our infrastructure waits for the application to start up, new requests are sent to the new worker instance, requests are allowed to complete on the old instance, and then the old worker instance is removed from the ASP. This sequence usually occurs once for each worker instance in the ASP and is spread out over minutes or hours depending on the size of the plan and scale unit. + +The main differences between these two scenarios are: + +- A File Server role change results in a simultaneous overlapped worker process restart on all instances, whereas a Worker change results in an application start on a single instance. +- A File Server role change means that the application restarts on the same instance as it was running before, whereas a Worker change results in the application running on a different instance after start-up. + +The overlapped restart mechanism results in zero downtime for most applications, and planned maintenance isn't even noticed. If the application takes some time to start, the application can experience some minimal downtime associated with application slowness or failures during or shortly after the process starts. Our platform keeps attempting to start the application until successful, but if the application fails to start altogether, a longer downtime can occur. The downtime persists until some corrective action is taken, such as manually restarting the application on that instance. + +## Unexpected failure handling + +While this article focuses largely on planned maintenance activities, it's worth mentioning that similar behavior can occur as a result of the platform recovering from unexpected failures. If an unexpected hardware failure occurs that affects a Worker role, the platform similarly replaces it with a new Worker. The application starts on this new Worker role. When a failure or latency affects a File Server role that is associated with the application, a new File Server role replaces it, and a worker process restart occurs on all the Worker roles. This fact is important to consider when evaluating strategies for improving uptime for your applications. + +## Strategies for increased uptime + +Most of our hosted applications experience limited or no downtime during planned maintenance. However, this fact isn't helpful if your specific applications have more complicated start-up behavior and are therefore susceptible to downtime when restarted. If applications are experiencing downtime every time they're restarted, addressing the downtime is even more pressing. There are several features available in our App Service product offering that are designed to further minimize downtime in these scenarios. Broadly speaking, there are two categories of strategies that can be employed: + +- Improving application start-up consistency +- Minimizing application restarts + +Improving application start-up speed and ensuring that it succeeds consistently is statistically the more effective approach. We recommend reviewing the options that are available in this area first. Some of them are fairly easy to implement and can yield large improvements. 
Start-up consistency strategies utilize both App Service features and techniques related to application code or configuration. Minimizing restarts is a group of options that can be used if we can't improve application start-up to be consistent enough. These options are typically more expensive and less reliable because they usually protect against only a subset of restarts. Avoiding all restarts isn't possible. Using both types of strategies together is highly effective. +++### Strategies for start-up consistency + +#### Application Initialization (AppInit) + +When an application starts on a Windows Worker, the Azure App Service infrastructure tries to determine when the application is ready to serve requests before external requests are routed to this worker. By default, a successful request to the root (/) of the application is a signal that the application is ready to serve requests. For some applications, this default behavior isn't sufficient to ensure that the application is fully warmed up. Typically that happens if the root of the application has limited dependencies but other paths rely on more libraries or external dependencies to work. The [IIS Application Initialization Module](/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization) works well to fine-tune warm-up behavior. At a high level, it allows the application owner to define which path or paths serve as indicators that the application is in fact ready to serve requests. For a detailed discussion of how to implement this mechanism, review the following article: [App Service Warm-Up Demystified](https://michaelcandido.com/app-service-warm-up-demystified/). When correctly implemented, this feature can result in zero downtime even if the application start-up is more complex. A minimal web.config sketch is included after this entry. ++Linux applications can utilize a similar mechanism by using the WEBSITE_WARMUP_PATH application setting. ++#### Health Check + +[Health Check](monitor-instances-health-check.md) is a feature that is designed to handle unexpected code and platform failures during normal application execution but can also be helpful to augment start-up resiliency. Health Check performs two different healing functions - removing a failing instance from the load balancer, and replacing an entire instance. We can utilize the removal of an instance from the load balancer to handle intermittent start-up failures. If an instance returns failures after start-up despite employing all other strategies, health check can remove that instance from the load balancer until that instance starts returning a 200 status code to health check requests again. This feature therefore acts as a fail-safe to minimize any post-start-up downtime that occurs. This feature can be useful if the post-start-up failures are transient and don't require a process restart. ++#### Auto-Heal + +Auto-Heal for [Windows](https://azure.github.io/AppService/2018/09/10/Announcing-the-New-Auto-Healing-Experience-in-App-Service-Diagnostics.html) and [Linux](https://azure.github.io/AppService/2021/04/21/Announcing-Autoheal-for-Azure-App-Service-Linux.html) is another feature that is designed for normal application execution but can be used for improving start-up behavior as well. If we know that the application sometimes enters an unrecoverable state after start-up, Health Check won't be suitable. However, auto-heal can automatically restart the worker process, which can be useful in that scenario. 
We can configure an auto-heal rule that monitors failed requests and triggers a process restart on a single instance. ++#### Application start-up testing + +Exhaustive testing of application start-up is easy to overlook. Start-up testing in combination with other factors, such as dependency failures, library load failures, and network issues, poses a bigger challenge. A relatively small failure rate for start-up can go unnoticed but can result in a high failure rate when multiple instances are restarted every update cycle. There are usually about three application restarts per instance in each update cycle (one instance move plus two File Server-related restarts), so a plan with 20 instances sees roughly 60 application starts per cycle. With a five-percent start-up failure rate, that works out to about three failed starts on average every update cycle. + +We recommend testing several scenarios: + +- General start-up testing (one instance at a time) to establish the individual instance start-up success rate. This simplest scenario should approach 100 percent before moving on to other, more complicated scenarios. +- Simulate start-up dependency failure. If the app has any dependency on other Azure or non-Azure services, simulate downtime in those dependencies to reveal application behavior under those conditions. +- Simultaneous start-up of many instances - preferably more instances than in production. Testing with many instances often reveals failures in dependencies that are used during start-up only, such as Key Vault references, App Configuration, databases, and so on. These dependencies should be tested for the burst volume of requests that a simultaneous instance restart generates. +- Adding an instance under full load - making sure AppInit is configured correctly and the application can be initialized fully before requests are sent to the new instance. Manually scaling out is an easy way to replicate an instance move during maintenance. +- Overlapped worker process restart - again testing whether AppInit is configured correctly and whether requests can complete successfully as the old worker process completes its requests and the new worker process starts up. Changing an environment variable under load can simulate what a File Server change does. +- Multiple apps in a plan - if there are multiple apps in the same plan, perform all these tests simultaneously across all apps. +++#### Start-up logging + +Having the ability to retroactively troubleshoot start-up failures in production is a consideration that is separate from using testing to improve start-up consistency. However, it's equally or more important because, despite all our efforts, we might not be able to simulate all types of real-world failures in a test or QA environment. It's also commonly the weakest area for logging, because initializing the logging infrastructure is itself another start-up activity that must be performed. The order of operations for initializing the application is an important consideration for this reason and can become a chicken-and-egg problem. For example, if we need to configure logging based on a Key Vault reference and we fail to obtain the Key Vault value, how do we log this failure? We might want to consider duplicating start-up logging using a separate logging mechanism that doesn't depend on any other external factors. For example, by logging these types of start-up failures to the local disk. 
Simply turning on a general logging feature, such as [.NET Core stdout logging](/aspnet/core/test/troubleshoot-azure-iis#aspnet-core-module-stdout-log-azure-app-service), can be counterproductive, because this logging keeps generating log data even after start-up and can fill up the disk over time. This feature can be used strategically for troubleshooting reproducible start-up failures. ++### Strategies for minimizing restarts + +The following strategies can significantly reduce the number of restarts that an application experiences during planned maintenance. Some of the strategies in this section can also give more control over when these restarts occur. In general, these strategies, while effective, can't avoid restarts altogether. The main reason is that some restarts occur due to unexpected failures rather than planned maintenance. ++> [!IMPORTANT] +> Completely avoiding restarts is not possible. The following strategies can help reduce the number of restarts. + +#### Local Cache + +[Local Cache](overview-local-cache.md) is a feature that is designed to improve resiliency against external storage failures. At a high level, it creates a copy of the application content on the local disk of the instance on which it runs. This isolates the application from unexpected storage failures but also prevents restarts due to File Server changes. Utilizing this feature can vastly reduce the number of restarts during planned maintenance - typically it can remove about two-thirds of those restarts. Since it primarily avoids simultaneous worker process restarts, the observed improvement in application start-up consistency can be even bigger. Local Cache does have some design implications and changes to application behavior, so it's important to fully test the application to ensure that it's compatible with this feature. A sample configuration that enables Local Cache follows this entry. ++#### Planned maintenance notifications and paired regions + +If we want to reduce the risk of update-related restarts in production, we can utilize [Planned Maintenance Notifications](https://azure.github.io/AppService/2022/02/01/App-Service-Planned-Notification-Feature.html) to find out when any given application will be updated. We can then set up a copy of the application in a [Paired Region](https://azure.github.io/AppService/2022/02/01/App-Service-Planned-Notification-Feature.html) and route traffic to our secondary application copy during maintenance in the primary copy. This option can be costly because the window for this maintenance is fairly wide, so the secondary application copy needs to run on sufficient instances for at least several days. This option can be less costly if we already have a secondary application set up for general resiliency. This option can reduce the number of restarts but, like other options in this category, can't eliminate all restarts. + +#### Controlling planned maintenance window in ASE v3 + +Controlling the window for maintenance is only available in our isolated App Service Environment (ASE) v3 environments. If we're using an ASE already, or it's feasible to use one, doing so allows our customers to [Control Planned Maintenance](https://azure.github.io/AppService/2022/09/15/Configure-automation-for-upgrade-preferences-in-App-Service-Environment.html) behavior to a high degree. It isn't possible to control the time of the planned maintenance in a multitenant environment. |
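To make the AppInit guidance in the entry above concrete, here is a minimal web.config sketch. It assumes the application exposes a warm-up endpoint; `/warmup` is a placeholder path, not something App Service provides, and it should exercise the dependencies the app needs before it can serve real traffic.

```xml
<!-- Minimal sketch: /warmup is a placeholder path that the application must implement. -->
<configuration>
  <system.webServer>
    <applicationInitialization>
      <!-- App Service requests this path on a newly started worker and waits for a
           response before routing external traffic to it. -->
      <add initializationPage="/warmup" />
    </applicationInitialization>
  </system.webServer>
</configuration>
```

For Linux apps, the entry above notes that the `WEBSITE_WARMUP_PATH` application setting plays the equivalent role.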
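The Local Cache option described in the same entry is driven by app settings. The following Az PowerShell sketch shows one way to enable it; the resource group and app names are placeholders, the values shown are the commonly documented ones, and the (Windows) app should be validated for Local Cache compatibility before this is enabled in production.

```powershell
# Sketch only: placeholder names; validate Local Cache compatibility before enabling in production.
$rg  = '<resourceGroupName>'
$app = '<appName>'

# Read the current app settings first, because Set-AzWebApp -AppSettings
# replaces the entire app settings collection rather than merging into it.
$site     = Get-AzWebApp -ResourceGroupName $rg -Name $app
$settings = @{}
foreach ($s in $site.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }

# Serve content from a local copy so File Server changes no longer restart the worker process.
$settings['WEBSITE_LOCAL_CACHE_OPTION']   = 'Always'
$settings['WEBSITE_LOCAL_CACHE_SIZEINMB'] = '1000'   # optional; the cache defaults to 1 GB

Set-AzWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings
```

Merging the existing settings into the hashtable first is the important design choice here; pushing only the two new keys would silently drop every other app setting.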
app-service | Tutorial Custom Container Sidecar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container-sidecar.md | First you create the resources that the tutorial uses (for more information, see > `azd provision` uses the included templates to create the following Azure resources: > > - A resource group- > - A [container registry](../container-registry/container-registry-intro.md) with two images deployed: + > - A [container registry](/azure/container-registry/container-registry-intro) with two images deployed: > - An Nginx image with the OpenTelemetry module. > - An OpenTelemetry collector image, configured to export to [Azure Monitor](/azure/azure-monitor/overview). > - A [log analytics workspace](/azure/azure-monitor/logs/log-analytics-overview) |
app-service | Tutorial Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md | zone_pivot_groups: app-service-containers-windows-linux For more information, see [Operating system functionality on Azure App Service](operating-system-functionality.md). -You can deploy a custom-configured Windows image from Visual Studio to make OS changes that your app needs. This makes it easy to migrate an on-premises app that requires a custom OS and software configuration. This tutorial demonstrates how to migrate to App Service an ASP.NET app that uses custom fonts installed in the Windows font library. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](../container-registry/index.yml) and then run it in App Service. +You can deploy a custom-configured Windows image from Visual Studio to make OS changes that your app needs. This makes it easy to migrate an on-premises app that requires a custom OS and software configuration. This tutorial demonstrates how to migrate to App Service an ASP.NET app that uses custom fonts installed in the Windows font library. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](/azure/container-registry/) and then run it in App Service. :::image type="content" source="media/tutorial-custom-container/app-running-newupdate.png" alt-text="Shows the web app running in a Windows container."::: You can find *InstallFont.ps1* in the **CustomFontSample** project. It's a simpl ## Publish to Azure Container Registry -[Azure Container Registry](../container-registry/index.yml) can store your images for container deployments. You can configure App Service to use images that are hosted in Azure Container Registry. +[Azure Container Registry](/azure/container-registry/) can store your images for container deployments. You can configure App Service to use images that are hosted in Azure Container Registry. ### Open the publish wizard |
application-gateway | Http Response Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md | An HTTP 499 response is presented if a client request that is sent to applicatio #### 500 – Internal Server Error -Azure Application Gateway shouldn't exhibit 500 response codes. Open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +Azure Application Gateway shouldn't exhibit 500 response codes. Open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). #### 502 – Bad Gateway |
automation | Automation Dsc Cd Chocolatey | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-cd-chocolatey.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). In a DevOps world, there are many tools to assist with various points in the continuous integration pipeline. Azure Automation [State Configuration](automation-dsc-overview.md) is a welcome new addition to the options that DevOps teams can employ. |
automation | Automation Dsc Compile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-compile.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). You can compile Desired State Configuration (DSC) configurations in Azure Automation State Configuration in the following ways: |
automation | Automation Dsc Config Data At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-data-at-scale.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, and not from Microsoft. |
automation | Automation Dsc Config From Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-from-server.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). > [!IMPORTANT] > The article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, not from Microsoft. |
automation | Automation Dsc Configuration Based On Stig | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-configuration-based-on-stig.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). Creating configuration content for the first time can be challenging. In many cases, the goal is to automate configuration of servers following a "baseline" that hopefully aligns to an industry recommendation. |
automation | Automation Dsc Create Composite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-create-composite.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community and support is only available in the form of GitHub collaboration, not from Microsoft. |
automation | Automation Dsc Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-diagnostics.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Automation State Configuration retains node status data for 30 days. You can send node status data to [Azure Monitor Logs](/azure/azure-monitor/logs/data-platform-logs) if you prefer to retain this data for a longer period. Compliance status is visible in the Azure portal or with PowerShell, for nodes and for individual DSC resources in node configurations. |
automation | Automation Dsc Extension History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-extension-history.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). The Azure Desired State Configuration (DSC) VM [extension](/azure/virtual-machines/extensions/dsc-overview) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell. |
automation | Automation Dsc Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-getting-started.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview). |
automation | Automation Dsc Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-onboarding.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). This topic describes how you can set up your machines for management with Azure Automation State Configuration. For details of this service, see [Azure Automation State Configuration overview](automation-dsc-overview.md). |
automation | Automation Dsc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-overview.md | -> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Automation State Configuration is an Azure configuration management service that allows you to write, manage, and compile PowerShell Desired State Configuration (DSC) [configurations](/powershell/dsc/configurations/configurations) for nodes in any cloud or on-premises datacenter. The service also imports [DSC Resources](/powershell/dsc/resources/resources), and assigns configurations to target nodes, all in the cloud. You can access Azure Automation State Configuration in the Azure portal by selecting **State configuration (DSC)** under **Configuration Management**. |
automation | Automation Dsc Remediate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-remediate.md | Last updated 07/17/2019 # Remediate noncompliant Azure Automation State Configuration servers > [!NOTE]-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md). +> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). When servers are registered with Azure Automation State Configuration, the configuration mode is set to `ApplyOnly`, `ApplyAndMonitor`, or `ApplyAndAutoCorrect`. If the mode isn't set to `ApplyAndAutoCorrect`, |
automation | Automation Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md | -Azure Automation provides native integration of the Hybrid Runbook Worker role through the Azure virtual machine (VM) extension framework. The Azure VM agent is responsible for management of the extension on Azure Windows and Linux VMs, and [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) on non-Azure machines, including [Azure Arc-enabled Servers](../azure-arc/servers/overview.md) and [Azure Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/overview.md). Now there are two Hybrid Runbook Worker installation platforms supported by Azure Automation. +Azure Automation provides native integration of the Hybrid Runbook Worker role through the Azure virtual machine (VM) extension framework. The Azure VM agent is responsible for management of the extension on Azure Windows and Linux VMs, and [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) on non-Azure machines, including [Azure Arc-enabled Servers](/azure/azure-arc/servers/overview) and [Azure Arc-enabled VMware vSphere (preview)](/azure/azure-arc/vmware-vsphere/overview). Now there are two Hybrid Runbook Worker installation platforms supported by Azure Automation. | Platform | Description | ||| After the Update Management feature is enabled on Windows or Linux machines, you If you have more than 2,000 hybrid workers, to get a list of all of them, you can run the following PowerShell script: ```powershell-"Get-AzSubscription -SubscriptionName "<subscriptionName>" | Set-AzContext +Get-AzSubscription -SubscriptionName "<subscriptionName>" | Set-AzContext $workersList = (Get-AzAutomationHybridWorkerGroup -ResourceGroupName "<resourceGroupName>" -AutomationAccountName "<automationAccountName>").Runbookworker-$workersList | export-csv -Path "<Path>\output.csv" -NoClobber -NoTypeInformation" +$workersList | export-csv -Path "<Path>\output.csv" -NoClobber -NoTypeInformation ``` ## Next steps |
automation | Automation Linux Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md | -You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources. +You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources. The Linux Hybrid Runbook Worker executes runbooks as a special user that can be elevated for running commands that need elevation. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. This article describes how to: install the Hybrid Runbook Worker on a Linux machine, remove the worker, and remove a Hybrid Runbook Worker group. For User Hybrid Runbook Workers, see also [Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Automation](./extension-based-hybrid-runbook-worker-install.md) If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Mo ### Log Analytics agent -The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Linux operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). The agent is installed with certain service accounts that execute commands requiring root permissions. For more information, see [Service accounts](./automation-hrw-run-runbooks.md#service-accounts). +The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Linux operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). The agent is installed with certain service accounts that execute commands requiring root permissions. For more information, see [Service accounts](./automation-hrw-run-runbooks.md#service-accounts). ### Supported Linux operating systems To install and configure a Linux Hybrid Runbook Worker, perform the following st - For Azure VMs, install the Log Analytics agent for Linux using the [virtual machine extension for Linux](/azure/virtual-machines/extensions/oms-linux). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, the Azure CLI, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account. - - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). 
Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: + - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: - Using the VM extensions framework. This feature in Azure Arc-enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows and/or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Azure Arc-enabled servers: - - The [Azure portal](../azure-arc/servers/manage-vm-extensions-portal.md) - - The [Azure CLI](../azure-arc/servers/manage-vm-extensions-cli.md) - - [Azure PowerShell](../azure-arc/servers/manage-vm-extensions-powershell.md) - - Azure [Resource Manager templates](../azure-arc/servers/manage-vm-extensions-template.md) + - The [Azure portal](/azure/azure-arc/servers/manage-vm-extensions-portal) + - The [Azure CLI](/azure/azure-arc/servers/manage-vm-extensions-cli) + - [Azure PowerShell](/azure/azure-arc/servers/manage-vm-extensions-powershell) + - Azure [Resource Manager templates](/azure/azure-arc/servers/manage-vm-extensions-template) - Using Azure Policy. |
automation | Automation Manage Send Joblogs Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md | Azure Automation can send runbook job status and job streams to your Log Analyti - Trigger an email or alert based on your runbook job status (for example, failed or suspended). - Write advanced queries across your job streams. - Correlate jobs across Automation accounts.- - Use customized views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics through an [Azure dashboard](../azure-portal/azure-portal-dashboards.md). + - Use customized views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics through an [Azure dashboard](/azure/azure-portal/azure-portal-dashboards). - Get the audit logs related to Automation accounts, runbooks, and other asset create, modify and delete operations. Using Azure Monitor logs, you can consolidate logs from different resources in the same workspace where it can be analyzed with [queries](/azure/azure-monitor/logs/log-query-overview) to quickly retrieve, consolidate, and analyze the collected data. You can create and test queries using [Log Analytics](/azure/azure-monitor/logs/log-query-overview) in the Azure portal and then either directly analyze the data using these tools or save queries for use with [visualization](/azure/azure-monitor/best-practices-analysis) or [alert rules](/azure/azure-monitor/alerts/alerts-overview). |
automation | Automation Windows Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md | -You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. +You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. This article describes how to deploy a user Hybrid Runbook Worker on a Windows machine, how to remove the worker, and how to remove a Hybrid Runbook Worker group. For user Hybrid Runbook Workers, see also [Deploy an extension-based Windows or Linux user Hybrid Runbook Worker in Automation](./extension-based-hybrid-runbook-worker-install.md) If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Mo ### Log Analytics agent -The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Windows operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). +The Hybrid Runbook Worker role requires the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for the supported Windows operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). ### Supported Windows operating system To install and configure a Windows Hybrid Runbook Worker, perform the following - For Azure VMs, install the Log Analytics agent for Windows using the [virtual machine extension for Windows](/azure/virtual-machines/extensions/oms-windows). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, PowerShell, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account. - - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: + - For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). Azure Arc-enabled servers support deploying the Log Analytics agent using the following methods: - Using the VM extensions framework. 
This feature in Azure Arc-enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers: - - The [Azure portal](../azure-arc/servers/manage-vm-extensions-portal.md) - - The [Azure CLI](../azure-arc/servers/manage-vm-extensions-cli.md) - - [Azure PowerShell](../azure-arc/servers/manage-vm-extensions-powershell.md) - - Azure [Resource Manager templates](../azure-arc/servers/manage-vm-extensions-template.md) + - The [Azure portal](/azure/azure-arc/servers/manage-vm-extensions-portal) + - The [Azure CLI](/azure/azure-arc/servers/manage-vm-extensions-cli) + - [Azure PowerShell](/azure/azure-arc/servers/manage-vm-extensions-powershell) + - Azure [Resource Manager templates](/azure/azure-arc/servers/manage-vm-extensions-template) - Using Azure Policy. |
automation | Enable From Automation Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-automation-account.md | Sign in to the [Azure portal](https://portal.azure.com). ## Enable non-Azure VMs -Machines not in Azure need to be added manually. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you also plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +Machines not in Azure need to be added manually. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you also plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. 1. From your Automation account select **Inventory** or **Change tracking** under **Configuration Management**. |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md | Machines connected to the Log Analytics workspace use the [Log Analytics agent]( > [!NOTE] > Change Tracking and Inventory requires linking a Log Analytics workspace to your Automation account. For a definitive list of supported regions, see [Azure Workspace mappings](../how-to/region-mappings.md). The region mappings don't affect the ability to manage VMs in a separate region from your Automation account. -As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../../lighthouse/overview.md). Azure Lighthouse allows you to perform operations at scale across several Microsoft Entra tenants at once, making management tasks like Change Tracking and Inventory more efficient across those tenants you're responsible for. Change Tracking and Inventory can manage machines in multiple subscriptions in the same tenant, or across tenants using [Azure delegated resource management](../../lighthouse/concepts/architecture.md). +As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](/azure/lighthouse/overview). Azure Lighthouse allows you to perform operations at scale across several Microsoft Entra tenants at once, making management tasks like Change Tracking and Inventory more efficient across those tenants you're responsible for. Change Tracking and Inventory can manage machines in multiple subscriptions in the same tenant, or across tenants using [Azure delegated resource management](/azure/lighthouse/concepts/architecture). ## Current limitations You can enable Change Tracking and Inventory in the following ways: - From your [Automation account](enable-from-automation-account.md) for one or more Azure and non-Azure machines. -- Manually for non-Azure machines, including machines or servers registered with [Azure Arc-enabled servers](../../azure-arc/servers/overview.md). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.+- Manually for non-Azure machines, including machines or servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. - For a single Azure VM from the [Virtual machine page](enable-from-vm.md) in the Azure portal. This scenario is available for Linux and Windows VMs. |
automation | Extension Based Hybrid Runbook Worker Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md | The extension-based onboarding is only for **User** Hybrid Runbook Workers. This For **System** Hybrid Runbook Worker onboarding, see [Deploy an agent-based Windows Hybrid Runbook Worker in Automation](./automation-windows-hrw-install.md) or [Deploy an agent-based Linux Hybrid Runbook Worker in Automation](./automation-linux-hrw-install.md). -You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including [Azure Arc-enabled servers](../azure-arc/servers/overview.md), [Arc-enabled VMware vSphere](../azure-arc/vmware-vsphere/overview.md), and [Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. +You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), [Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview), and [Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/overview). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources. Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment. Azure Automation stores and manages runbooks and then delivers them to one or mo - Two cores - 4 GB of RAM-- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMware VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled VMware vSphere VMs and install [Arc agent for Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled SCVMM VMs.+- **Non-Azure machines** must have the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMware VMs](/azure/azure-arc/vmware-vsphere/enable-guest-management-at-scale) to enable guest management for Arc-enabled VMware vSphere VMs and install [Arc agent for Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale) to enable guest management for Arc-enabled SCVMM VMs. - The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server, Arc-enabled VMware vSphere VM or Arc-enabled SCVMM VM. 
If the system-assigned managed identity isn't enabled, it will be enabled as part of the adding process. ### Supported operating systems You can also add machines to an existing hybrid worker group. 1. Select the checkbox next to the machine(s) you want to add to the hybrid worker group. - If you don't see your non-Azure machine listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent` see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled VMware vSphere and [Install Arc agent for Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled SCVMM VMs. + If you don't see your non-Azure machine listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent` see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMs](/azure/azure-arc/vmware-vsphere/enable-guest-management-at-scale) to enable guest management for Arc-enabled VMware vSphere and [Install Arc agent for Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale) to enable guest management for Arc-enabled SCVMM VMs. 1. Select **Add** to add the machine to the group. Review the parameters used in this template. **Prerequisites** -You would require an Azure VM or Arc-enabled server. You can follow the steps [here](../azure-arc/servers/onboard-portal.md) to create an Arc connected machine. +You would require an Azure VM or Arc-enabled server. You can follow the steps [here](/azure/azure-arc/servers/onboard-portal) to create an Arc connected machine. **Install and use Hybrid Worker extension** To check the version of the extension-based Hybrid Runbook Worker: Using [VM insights](/azure/azure-monitor/vm/vminsights-overview), you can monitor the performance of Azure VMs and Arc-enabled Servers deployed as Hybrid Runbook workers. Among multiple elements that are considered during performances, the VM insights monitors the key operating system performance indicators related to processor, memory, network adapter, and disk utilization. - For Azure VMs, see [How to chart performance with VM insights](/azure/azure-monitor/vm/vminsights-performance).-- For Arc-enabled servers, see [Tutorial: Monitor a hybrid machine with VM insights](../azure-arc/servers/learn/tutorial-enable-vm-insights.md).+- For Arc-enabled servers, see [Tutorial: Monitor a hybrid machine with VM insights](/azure/azure-arc/servers/learn/tutorial-enable-vm-insights). ## Next steps Using [VM insights](/azure/azure-monitor/vm/vminsights-overview), you can monito - To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](/azure/virtual-machines/extensions/features-windows) and [Azure VM extensions and features for Linux](/azure/virtual-machines/extensions/features-linux). 
-- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).+- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](/azure/azure-arc/servers/manage-vm-extensions). -- To learn about Azure management services for Arc-enabled VMware VMs, see [Install Arc agents at scale for your VMware VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md).+- To learn about Azure management services for Arc-enabled VMware VMs, see [Install Arc agents at scale for your VMware VMs](/azure/azure-arc/vmware-vsphere/enable-guest-management-at-scale). -- To learn about Azure management services for Arc-enabled SCVMM VMs, see [Install Arc agents at scale for Arc-enabled SCVMM VMs](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md).+- To learn about Azure management services for Arc-enabled SCVMM VMs, see [Install Arc agents at scale for Arc-enabled SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale). |
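Because the links above deal with VM extension management on Arc-enabled servers, a quick way to confirm which extensions (including the Hybrid Worker extension) are installed on a given Arc-enabled server is sketched below; it assumes the Azure CLI `connectedmachine` extension and placeholder names.

```bash
# One-time: add the Azure CLI extension for Arc-enabled servers.
az extension add --name connectedmachine

# List the VM extensions installed on an Arc-enabled server (names are placeholders).
az connectedmachine extension list \
  --machine-name "<arc-server-name>" \
  --resource-group "<resource-group>" \
  --output table
```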
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | The purpose of the Extension-based approach is to simplify the installation and - Two cores - 4 GB of RAM-- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs.+- **Non-Azure machines** must have the [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](/azure/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. - The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the installation process through the Azure portal. ### Supported operating systems To install Hybrid worker extension on an existing agent based hybrid worker, ens 1. Under **Process Automation**, select **Hybrid worker groups**, and then select your existing hybrid worker group to go to the **Hybrid worker group** page. 1. Under **Hybrid worker group**, select **Hybrid Workers** > **+ Add** to go to the **Add machines as hybrid worker** page.-1. Select the checkbox next to the existing Agent based (V1) Hybrid worker. If you don't see your agent-based Hybrid Worker listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers, or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. +1. Select the checkbox next to the existing Agent based (V1) Hybrid worker. If you don't see your agent-based Hybrid Worker listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) for Arc-enabled servers, or see [Manage VMware virtual machines Azure Arc](/azure/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. :::image type="content" source="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/add-machines-hybrid-worker-inline.png" alt-text="Screenshot of adding machines as hybrid worker." lightbox="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/add-machines-hybrid-worker-expanded.png"::: Review the parameters used in this template. 
**Prerequisites** -You would require an Azure VM or Arc-enabled server. You can follow the steps [here](../azure-arc/servers/onboard-portal.md) to create an Arc connected machine. +You would require an Azure VM or Arc-enabled server. You can follow the steps [here](/azure/azure-arc/servers/onboard-portal) to create an Arc connected machine. **Install and use Hybrid Worker extension** |
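The prerequisite above links to the portal steps for creating an Arc connected machine. As a rough sketch, once the Azure Connected Machine agent is installed, its own CLI can finish the connection; every value below is a placeholder.

```bash
# Run on the machine after installing the Azure Connected Machine agent (values are placeholders).
azcmagent connect \
  --resource-group "<resource-group>" \
  --tenant-id "<tenant-id>" \
  --location "<azure-region>" \
  --subscription-id "<subscription-id>"

# Verify the agent status and the Azure resource it is bound to.
azcmagent show
```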
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md | Azure Automation supports [source control integration](source-control-integratio Automation is designed to work across Windows and Linux physical servers and virtual machines outside of Azure, on your corporate network, or other cloud provider. It delivers a consistent way to automate and configure deployed workloads and the operating systems that run them. The Hybrid Runbook Worker feature of Azure Automation enables running runbooks directly on the non-Azure physical server or virtual machine hosting the role, and against resources in the environment to manage those local resources. -Through [Arc-enabled servers](../azure-arc/servers/overview.md), it provides a consistent deployment and management experience for your non-Azure machines. It enables integration with the Automation service using the VM extension framework to deploy the Hybrid Runbook Worker role, and simplify onboarding to Update Management and Change Tracking and Inventory. +Through [Arc-enabled servers](/azure/azure-arc/servers/overview), it provides a consistent deployment and management experience for your non-Azure machines. It enables integration with the Automation service using the VM extension framework to deploy the Hybrid Runbook Worker role, and simplify onboarding to Update Management and Change Tracking and Inventory. ## Common scenarios Azure Automation supports management throughout the lifecycle of your infrastruc Depending on your requirements, one or more of the following Azure services integrate with or complement Azure Automation to help fulfill them: -* [Azure Arc-enabled servers](../azure-arc/servers/overview.md) enables simplified onboarding of hybrid machines to Update Management, Change Tracking and Inventory, and the Hybrid Runbook Worker role. +* [Azure Arc-enabled servers](/azure/azure-arc/servers/overview) enables simplified onboarding of hybrid machines to Update Management, Change Tracking and Inventory, and the Hybrid Runbook Worker role. * [Azure Alerts action groups](/azure/azure-monitor/alerts/action-groups) can initiate an Automation runbook when an alert is raised. * [Azure Monitor](/azure/azure-monitor/overview) to collect metrics and log data from your Automation account for further analysis and take action on the telemetry. Automation features such as Update Management and Change Tracking and Inventory rely on the Log Analytics workspace to deliver elements of their functionality. * [Azure Policy](../governance/policy/samples/built-in-policies.md) includes initiative definitions to help establish and maintain compliance with different security standards for your Automation account. |
automation | Dsc Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md | -> Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md). +> Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](/azure/azure-arc/servers/overview). By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure VM and deploying a LAMP stack using Azure Automation State Configuration. |
automation | Install Hybrid Worker Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/install-hybrid-worker-extension.md | -The Azure Automation User Hybrid Worker enables the execution of PowerShell and Python scripts directly on machines for managing guest workloads or as a gateway to environments that aren't accessible from Azure. You can configure Windows and Linux Azure Virtual Machine. [Azure Arc-enabled Server](../../azure-arc/servers/overview.md), [Arc-enabled VMware vSphere VM](../../azure-arc/vmware-vsphere/overview.md), and [Azure Arc-enabled SCVMM](../../azure-arc/system-center-virtual-machine-manager/overview.md) as User Hybrid Worker by installing Hybrid Worker extension. +The Azure Automation User Hybrid Worker enables the execution of PowerShell and Python scripts directly on machines for managing guest workloads or as a gateway to environments that aren't accessible from Azure. You can configure Windows and Linux Azure Virtual Machines, [Azure Arc-enabled Server](/azure/azure-arc/servers/overview), [Arc-enabled VMware vSphere VM](/azure/azure-arc/vmware-vsphere/overview), and [Azure Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/overview) machines as User Hybrid Workers by installing the Hybrid Worker extension. This quickstart shows you how to install Azure Automation Hybrid Worker extension on an Azure Virtual Machine through the Extensions blade on Azure portal. |
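The quickstart above installs the Hybrid Worker extension through the portal's Extensions blade. A hedged CLI sketch of the same idea is shown below; the extension name, publisher, and settings key are assumptions and should be verified against the current extension documentation before use.

```bash
# Sketch: install the Hybrid Worker extension on a Windows Azure VM.
# The --name/--publisher strings and the AutomationAccountURL settings key are assumptions, not confirmed values.
az vm extension set \
  --resource-group "<resource-group>" \
  --vm-name "<vm-name>" \
  --name "HybridWorkerForWindows" \
  --publisher "Microsoft.Azure.Automation.HybridWorker" \
  --settings '{"AutomationAccountURL": "<automation-account-registration-url>"}'
```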
automation | Extension Based Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md | You are deploying an extension-based Hybrid Runbook Worker on a VM and it fails You are deploying the extension-based Hybrid Worker on a non-Azure VM that does not have Arc connected machine agent installed on it. ### Resolution-Non-Azure machines must have the Arc connected machine agent installed on it, before deploying it as an extension-based Hybrid Runbook worker. To install the `AzureConnectedMachineAgent`, see [connect hybrid machines to Azure from the Azure portal](../../azure-arc/servers/onboard-portal.md) -for Arc-enabled servers or [Manage VMware virtual machines Azure Arc](../../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware VM. +Non-Azure machines must have the Arc connected machine agent installed before you deploy them as extension-based Hybrid Runbook workers. To install the `AzureConnectedMachineAgent`, see [connect hybrid machines to Azure from the Azure portal](/azure/azure-arc/servers/onboard-portal) +for Arc-enabled servers or [Manage VMware virtual machines Azure Arc](/azure/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure#enable-guest-management) to enable guest management for Arc-enabled VMware VMs. ### Scenario: Hybrid Worker deployment fails due to System assigned identity not enabled |
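For the troubleshooting scenario above, two quick checks can confirm whether the Connected Machine agent is installed and connected before the Hybrid Worker deployment is retried. This sketch assumes the Azure CLI `connectedmachine` extension and placeholder names.

```bash
# On the non-Azure machine: confirm the Connected Machine agent is installed and reports a connected status.
azcmagent show

# From Azure: confirm the Arc-enabled server resource exists; inspect the status and identity fields in the output.
az connectedmachine show \
  --name "<arc-server-name>" \
  --resource-group "<resource-group>" \
  --output json
```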
automation | Enable From Automation Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-automation-account.md | -This article describes how you can use your Automation account to enable the [Update Management](overview.md) feature for VMs in your environment, including machines or servers registered with [Azure Arc-enabled servers](../../azure-arc/servers/overview.md). To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management. +This article describes how you can use your Automation account to enable the [Update Management](overview.md) feature for VMs in your environment, including machines or servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management. > [!NOTE] > When enabling Update Management, only certain regions are supported for linking a Log Analytics workspace and an Automation account. For a list of the supported mapping pairs, see [Region mapping for Automation account and Log Analytics workspace](../how-to/region-mappings.md). This article describes how you can use your Automation account to enable the [Up * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [Automation account](../automation-security-overview.md) to manage machines.-* An [Azure virtual machine](/azure/virtual-machines/windows/quick-create-portal), or VM or server registered with Azure Arc-enabled servers. Non-Azure VMs or servers need to have the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for Windows or Linux installed and reporting to the workspace linked to the Automation account where Update Management is enabled. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +* An [Azure virtual machine](/azure/virtual-machines/windows/quick-create-portal), or VM or server registered with Azure Arc-enabled servers. Non-Azure VMs or servers need to have the [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for Windows or Linux installed and reporting to the workspace linked to the Automation account where Update Management is enabled. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. ## Sign in to Azure |
automation | Operating System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md | The section describes operating system-specific requirements. For additional gui - Windows PowerShell 5.1 is required ([Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).) - The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-windows-hrw-install.md#prerequisites). -Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Microsoft Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to Microsoft Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. You can use Update Management with Microsoft Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](/azure/azure-monitor/agents/agent-windows) is required for Windows servers managed by sites in your Configuration Manager environment. By default, Windows VMs that are deployed from Azure Marketplace are set to rece > Update assessment of Linux machines is supported in certain regions only. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-log-analytics-and-azure-automation). -For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, to monitor the machines use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) instead of Azure Monitor for VMs. 
+For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, to monitor the machines, use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative instead. ## Next steps |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md | -As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../../lighthouse/overview.md). Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Microsoft Entra tenant, or across tenants using Azure Lighthouse. +As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](/azure/lighthouse/overview). Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Microsoft Entra tenant, or across tenants using Azure Lighthouse. Microsoft offers other capabilities to help you manage updates for your Azure VMs or Azure virtual machine scale sets that you should consider as part of your overall update management strategy. |
automation | Plan Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md | The [Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) for W On Azure VMs, if the Log Analytics agent isn't already installed, when you enable Update Management for the VM it is automatically installed using the Log Analytics VM extension for [Windows](/azure/virtual-machines/extensions/oms-windows) or [Linux](/azure/virtual-machines/extensions/oms-linux). The agent is configured to report to the Log Analytics workspace linked to the Automation account Update Management is enabled in. -Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with [VM insights](/azure/azure-monitor/vm/vminsights-overview), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. +Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with [VM insights](/azure/azure-monitor/vm/vminsights-overview), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. If you're enabling a machine that's currently managed by Operations Manager, a new agent isn't required. The workspace information is added to the agents configuration when you connect the management group to the Log Analytics workspace. |
automation | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md | Start/Stop VM runbooks have been updated to use Az modules in place of Azure Res **Type:** New feature -Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc enabled servers DSC VM extension. For more information, read [Arc enabled servers VM extensions overview](../azure-arc/servers/manage-vm-extensions.md). +Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc enabled servers DSC VM extension. For more information, read [Arc enabled servers VM extensions overview](/azure/azure-arc/servers/manage-vm-extensions). ### July 2020 |
automation | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md | On **31 August 2024**, Azure Automation will [retire](https://azure.microsoft.co ### General Availability: Azure Automation User Hybrid Runbook Worker Extension -User Hybrid Worker enables execution of the scripts directly on the machines for managing guest workloads or as a gateway to environments that are not accessible from Azure. Azure Automation announces **General Availability of User Hybrid Worker extension**, that is based on Virtual Machine extensions framework and provides a **seamless and integrated** installation experience. It is supported for Windows & Linux Azure VMs and [Azure Arc-enabled Servers](../azure-arc/servers/overview.md). It is also available for [Azure Arc-enabled VMware vSphere VMs](../azure-arc/vmware-vsphere/overview.md) in preview. +User Hybrid Worker enables execution of the scripts directly on the machines for managing guest workloads or as a gateway to environments that are not accessible from Azure. Azure Automation announces **General Availability of User Hybrid Worker extension**, that is based on Virtual Machine extensions framework and provides a **seamless and integrated** installation experience. It is supported for Windows & Linux Azure VMs and [Azure Arc-enabled Servers](/azure/azure-arc/servers/overview). It is also available for [Azure Arc-enabled VMware vSphere VMs](/azure/azure-arc/vmware-vsphere/overview) in preview. ## October 2022 |
avere-vfxt | Avere Vfxt Open Ticket | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-open-ticket.md | Follow these steps to make sure that your support ticket is tagged with a resour ## Request a quota increase -Read [Quota for the vFXT cluster](avere-vfxt-prereqs.md#quota-for-the-vfxt-cluster) to learn what components are needed to deploy the Avere vFXT for Azure. You can [request a quota increase](../azure-portal/supportability/regional-quota-requests.md) from the Azure portal. +Read [Quota for the vFXT cluster](avere-vfxt-prereqs.md#quota-for-the-vfxt-cluster) to learn what components are needed to deploy the Avere vFXT for Azure. You can [request a quota increase](/azure/azure-portal/supportability/regional-quota-requests) from the Azure portal. |
avere-vfxt | Avere Vfxt Prereqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-prereqs.md | There are some workarounds to allow a non-owner to create an Avere vFXT for Azur ## Quota for the vFXT cluster -Check that you have sufficient quota for the following Azure components. If needed, [request a quota increase](../azure-portal/supportability/regional-quota-requests.md). +Check that you have sufficient quota for the following Azure components. If needed, [request a quota increase](/azure/azure-portal/supportability/regional-quota-requests). > [!NOTE] > The virtual machines and SSD components listed here are for the vFXT cluster itself. Remember that you also need quota for the VMs and SSDs you will use for your compute farm. |
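Before requesting the quota increase mentioned above, it can help to compare current regional usage against the limits. A minimal sketch, assuming a placeholder region:

```bash
# Show current vCPU usage and limits for the region where the vFXT cluster will be deployed.
az vm list-usage --location "<azure-region>" --output table
```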
azure-app-configuration | Integrate Ci Cd Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md | This article explains how to use data from Azure App Configuration in a continuo ## Use App Configuration in your Azure DevOps Pipeline -If you have an Azure DevOps Pipeline, you can fetch key-values from App Configuration and set them as task variables. The [Azure App Configuration DevOps extension](https://go.microsoft.com/fwlink/?linkid=2091063) is an add-on module that provides this functionality. Follow its instructions to use the extension in a build or release task sequence. +If you have an Azure DevOps Pipeline, you can fetch key-values from App Configuration and set them as task variables. The Azure App Configuration DevOps extension is an add-on module that provides this functionality. [Get this module](https://go.microsoft.com/fwlink/?linkid=2091063) and refer to [Pull settings from App Configuration with Azure Pipelines](./pull-key-value-devops-pipeline.md) for instructions to use it in your Azure Pipelines. ## Deploy App Configuration data with your application |
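The article above relies on the Azure App Configuration DevOps extension to surface key-values as task variables. As an alternative sketch, a pipeline script step could pull key-values with the Azure CLI; the store name, key filter, and label below are placeholders.

```bash
# Fetch key-values from an App Configuration store and save them for later pipeline steps.
az appconfig kv list \
  --name "<app-config-store-name>" \
  --key "MyApp:*" \
  --label "Production" \
  --output json > appconfig-settings.json
```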
azure-app-configuration | Quickstart Container Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-container-apps.md | Create an Azure Container Registry (ACR). ACR enables you to build, store, and m #### [Portal](#tab/azure-portal) -1. To create the container registry, follow the [Azure Container Registry quickstart](../container-registry/container-registry-get-started-portal.md). +1. To create the container registry, follow the [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-portal). 1. Once the deployment is complete, open your ACR instance and from the left menu, select **Settings > Access keys**. 1. Take note of the **Login server** value listed on this page. You'll use this information in a later step. 1. Switch **Admin user** to *Enabled*. This option lets you connect the ACR to Azure Container Apps using admin user credentials. Alternatively, you can leave it disabled and configure the container app to [pull images from the registry with a managed identity](../container-apps/managed-identity-image-pull.md). #### [Azure CLI](#tab/azure-cli) -1. Create an ACR instance using the following command. It creates a basic tier registry named *myregistry* with admin user enabled that allows the container app to connect to the registry using admin user credentials. For more information, see [Azure Container Registry quickstart](../container-registry/container-registry-get-started-azure-cli.md). +1. Create an ACR instance using the following command. It creates a basic tier registry named *myregistry* with admin user enabled that allows the container app to connect to the registry using admin user credentials. For more information, see [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-azure-cli). ```azurecli az acr create In this quickstart, you: - Added the container image to Azure Container Apps - Browsed to the URL of the Azure Container Apps instance updated with the settings you configured in your App Configuration store. -The managed identity enables one Azure resource to access another without you maintaining secrets. You can streamline access from Container Apps to other Azure resources. For more information, see how to [access App Configuration using the managed identity](howto-integrate-azure-managed-service-identity.md) and how to [[access Container Registry using the managed identity](../container-registry/container-registry-authentication-managed-identity.md)]. +The managed identity enables one Azure resource to access another without you maintaining secrets. You can streamline access from Container Apps to other Azure resources. For more information, see how to [access App Configuration using the managed identity](howto-integrate-azure-managed-service-identity.md) and how to [access Container Registry using the managed identity](/azure/container-registry/container-registry-authentication-managed-identity). To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial. |
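The quickstart above creates a Basic-tier registry with the admin user enabled (its `az acr create` command is truncated in the row). A minimal sketch of the CLI path, assuming placeholder names:

```bash
# Create a Basic-tier registry with the admin user enabled (names are placeholders).
az acr create \
  --resource-group "<resource-group>" \
  --name "<registry-name>" \
  --sku Basic \
  --admin-enabled true

# Retrieve the login server and admin credentials referenced later in the quickstart.
az acr show --name "<registry-name>" --query loginServer --output tsv
az acr credential show --name "<registry-name>"
```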
azure-app-configuration | Quickstart Feature Flag Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md | Follow the documents to create an ASP.NET Core app with dynamic configuration. ## Create a feature flag -Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./manage-feature-flags.md#create-a-feature-flag). +Add a feature flag called *Beta* to the App Configuration store (created in the [Prerequisites](./quickstart-feature-flag-aspnet-core.md#prerequisites) steps), and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./manage-feature-flags.md#create-a-feature-flag). > [!div class="mx-imgBorder"] > ![Enable feature flag named Beta](./media/add-beta-feature-flag.png) ## Use a feature flag -1. Navigate into the project's directory, and run the following command to add a reference to the [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet package. +1. Navigate into the project's directory (created in the [Prerequisites](./quickstart-feature-flag-aspnet-core.md#prerequisites) steps), and run the following command to add a reference to the [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet package. ```dotnetcli dotnet add package Microsoft.FeatureManagement.AspNetCore |
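The quickstart above adds the *Beta* feature flag through the portal. The same flag can be created and enabled from the CLI, as sketched below with a placeholder store name.

```bash
# Create and enable a feature flag named 'Beta' (store name is a placeholder).
az appconfig feature set --name "<app-config-store-name>" --feature Beta --yes
az appconfig feature enable --name "<app-config-store-name>" --feature Beta --yes
```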
azure-arc | Choose Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/choose-service.md | - Title: Choosing the right Azure Arc service for machines -description: Learn about the different services offered by Azure Arc and how to choose the right one for your machines. Previously updated : 06/19/2024----# Choosing the right Azure Arc service for machines --Azure Arc offers different services based on your existing IT infrastructure and management needs. Before onboarding your resources to Azure Arc-enabled servers, you should investigate the different Azure Arc offerings to determine which best suits your requirements. Choosing the right Azure Arc service provides the best possible inventorying and management of your resources. --There are several different ways you can connect your existing Windows and Linux machines to Azure Arc: --- Azure Arc-enabled servers-- Azure Arc-enabled VMware vSphere-- Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)-- Azure Stack HCI--Each of these services extends the Azure control plane to your existing infrastructure and enables the use of [Azure security, governance, and management capabilities using the Connected Machine agent](/azure/azure-arc/servers/overview). Other services besides Azure Arc-enabled servers also use an [Azure Arc resource bridge](/azure/azure-arc/resource-bridge/overview), a part of the core Azure Arc platform that provides self-servicing and additional management capabilities. --General recommendations about the right service to use are as follows: --|If your machine is a... |...connect to Azure with... | -||| -|VMware VM (not running on AVS) |[Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) | -|Azure VMware Solution (AVS) VM |[Azure Arc-enabled VMware vSphere for Azure VMware Solution](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) | -|VM managed by System Center Virtual Machine Manager |[Azure Arc-enabled SCVMM](system-center-virtual-machine-manager/overview.md) | -|Azure Stack HCI VM |[Azure Stack HCI](/azure-stack/hci/overview) | -|Physical server |[Azure Arc-enabled servers](servers/overview.md) | -|VM on another hypervisor |[Azure Arc-enabled servers](servers/overview.md) | -|VM on another cloud provider |[Azure Arc-enabled servers](servers/overview.md) | --If you're unsure about which of these services to use, you can start with Azure Arc-enabled servers and add a resource bridge for additional management capabilities later. Azure Arc-enabled servers allows you to connect servers containing all of the types of VMs supported by the other services and provides a wide range of capabilities such as Azure Policy and monitoring, while adding resource bridge can extend additional capabilities. --Region availability also varies between Azure Arc services, so you may need to use Azure Arc-enabled servers if a more specialized version of Azure Arc is unavailable in your preferred region. See [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc®ions=all&rar=true) to learn more about region availability for Azure Arc services. --Where your machine runs determines the best Azure Arc service to use. Organizations with diverse infrastructure may end up using more than one Azure Arc service; this is alright. The core set of features remains the same no matter which Azure Arc service you use. 
--## Azure Arc-enabled servers --[Azure Arc-enabled servers](servers/overview.md) lets you manage Windows and Linux physical servers and virtual machines hosted outside of Azure, on your corporate network, or other cloud provider. When connecting your machine to Azure Arc-enabled servers, you can perform various operational functions similar to native Azure virtual machines. --### Capabilities --- Govern: Assign Azure Automanage machine configurations to audit settings within the machine. Utilize Azure Policy pricing guide for cost understanding.--- Protect: Safeguard non-Azure servers with Microsoft Defender for Endpoint, integrated through Microsoft Defender for Cloud. This includes threat detection, vulnerability management, and proactive security monitoring. Utilize Microsoft Sentinel for collecting security events and correlating them with other data sources.--- Configure: Employ Azure Automation for managing tasks using PowerShell and Python runbooks. Use Change Tracking and Inventory for assessing configuration changes. Utilize Update Management for handling OS updates. Perform post-deployment configuration and automation tasks using supported Azure Arc-enabled servers VM extensions.--- Monitor: Utilize VM insights for monitoring OS performance and discovering application components. Collect log data, such as performance data and events, through the Log Analytics agent, storing it in a Log Analytics workspace.--- Procure Extended Security Updates (ESUs) at scale for your Windows Server 2012 and 2012R2 machines running on vCenter managed estate.--> [!IMPORTANT] -> Azure Arc-enabled VMware vSphere and Azure Arc-enabled SCVMM have all the capabilities of Azure Arc-enabled servers, but also provide specific, additional capabilities. -> -## Azure Arc-enabled VMware vSphere --[Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) simplifies the management of hybrid IT resources distributed across VMware vSphere and Azure. --Running software in Azure VMware Solution, as a private cloud in Azure, offers some benefits not realized by operating your environment outside of Azure. For software running in a VM, such as SQL Server and Windows Server, running in Azure VMware Solution provides additional value such as free Extended Security Updates (ESUs). --To take advantage of these benefits if you're running in an Azure VMware Solution, it's important to follow respective [onboarding](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) processes to fully integrate the experience with the AVS private cloud. --Additionally, when a VM in Azure VMware Solution private cloud is Azure Arc-enabled using a method distinct from the one outlined in the AVS public document, the steps are provided in the [document](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) to refresh the integration between the Azure Arc-enabled VMs and Azure VMware Solution. 
--### Capabilities --- Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Azure Arc at scale.--- Perform various virtual machine (VM) operations directly from Azure, such as create, resize, delete, and power cycle operations such as start/stop/restart on VMware VMs consistently with Azure.--- Empower developers and application teams to self-serve VM operations on-demand using Azure role-based access control (RBAC).--- Install the Azure Arc-connected machine agent at scale on VMware VMs to govern, protect, configure, and monitor them.--- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.--## Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) --[Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md) (SCVMM) empowers System Center customers to connect their VMM environment to Azure and perform VM self-service operations from Azure portal. --Azure Arc-enabled System Center Virtual Machine Manager also allows you to manage your hybrid environment consistently and perform self-service VM operations through Azure portal. For Microsoft Azure Pack customers, this solution is intended as an alternative to perform VM self-service operations. --### Capabilities --- Discover and onboard existing SCVMM managed VMs to Azure.--- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on SCVMM managed VMs directly from Azure.--- Empower developers and application teams to self-serve VM operations on demand using Azure role-based access control (RBAC).--- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.--- Install the Azure Arc-connected machine agents at scale on SCVMM VMs to govern, protect, configure, and monitor them.--## Azure Stack HCI --[Azure Stack HCI](/azure-stack/hci/overview) is a hyperconverged infrastructure operating system delivered as an Azure service. This is a hybrid solution that is designed to host virtualized Windows and Linux VM or containerized workloads and their storage. Azure Stack HCI is a hybrid product that is offered on validated hardware and connects on-premises estates to Azure, enabling cloud-based services, monitoring and management. This helps customers manage their infrastructure from Azure and run virtualized workloads on-premises, making it easy for them to consolidate aging infrastructure and connect to Azure. --> [!NOTE] -> Azure Stack HCI comes with Azure resource bridge installed and uses the Azure Arc control plane for infrastructure and workload management, allowing you to monitor, update, and secure your HCI infrastructure from the Azure portal. 
-> --### Capabilities --- Deploy and manage workloads, including VMs and Kubernetes clusters from Azure through the Azure Arc resource bridge.--- Manage VM lifecycle operations such as start, stop, delete from Azure control plane.--- Manage Kubernetes lifecycle operations such as scale, update, upgrade, and delete clusters from Azure control plane.--- Install Azure connected machine agent and Azure Arc-enabled Kubernetes agent on your VM and Kubernetes clusters to use Azure services (i.e., Azure Monitor, Azure Defender for cloud, etc.).--- Leverage Azure Virtual Desktop for Azure Stack HCI to deploy session hosts on to your on-premises infrastructure to better meet your performance or data locality requirements.--- Empower developers and application teams to self-serve VM and Kubernetes cluster operations on demand using Azure role-based access control (RBAC).--- Monitor, update, and secure your Azure Stack HCI infrastructure and workloads across fleets of locations directly from the Azure portal.--- Deploy and manage static and DHCP-based logical networks on-premises to host your workloads.--- VM image management with Azure Marketplace integration and ability to bring your own images from Azure storage account and cluster shared volumes.--- Create and manage storage paths to store your VM disks and config files.--## Capabilities at a glance --The following table provides a quick way to see the major capabilities of the three Azure Arc services that connect your existing Windows and Linux machines to Azure Arc. --| _ |Arc-enabled servers |Arc-enabled VMware vSphere |Arc-enabled SCVMM |Azure Stack HCI | -||||||| -|Microsoft Defender for Cloud |✓ |✓ |✓ |✓ | -|Microsoft Sentinel | ✓ |✓ |✓ |✓ | -|Azure Automation |✓ |✓ |✓ |✓ | -|Azure Update Manager |✓ |✓ |✓ |✓ | -|VM extensions |✓ |✓ |✓ |✓ | -|Azure Monitor |✓ |✓ |✓ |✓ | -|Extended Security Updates for Windows Server 2012/2012R2 and SQL Server 2012 (11.x) |✓ |✓ |✓ |✓ | -|Discover & onboard VMs to Azure | |✓ |✓ |✗ | -|Lifecycle operations (start/stop VMs, etc.) | |✓ |✓ |✓ | -|Self-serve VM provisioning | |✓ |✓ |✓ | -|SQL Server enabled by Azure Arc |✓ |✓ |✓ |✓ | --## Switching from Arc-enabled servers to another service --If you currently use Azure Arc-enabled servers, you can get the additional capabilities that come with Arc-enabled VMware vSphere or Arc-enabled SCVMM: --- [Enable virtual hardware and VM CRUD capabilities in a VMware machine with Azure Arc agent installed](/azure/azure-arc/vmware-vsphere/enable-virtual-hardware)--- [Enable virtual hardware and VM CRUD capabilities in an SCVMM machine with Azure Arc agent installed](/azure/azure-arc/system-center-virtual-machine-manager/enable-virtual-hardware-scvmm)- |
azure-arc | Alternate Key Based | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/alternate-key-based.md | - Title: Alternate key-based configuration for Cloud Ingest Edge Volumes -description: Learn about an alternate key-based configuration for Cloud Ingest Edge Volumes. ---- Previously updated : 08/26/2024---# Alternate: Key-based authentication configuration for Cloud Ingest Edge Volumes --This article describes an alternate configuration for [Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) (blob upload with local purge) with key-based authentication. --This configuration is an alternative option for use with key-based authentication methods. You should review the recommended configuration using system-assigned managed identities in [Cloud Ingest Edge Volumes configuration](cloud-ingest-edge-volume-configuration.md). --## Prerequisites --1. Create a storage account [following these instructions](/azure/storage/common/storage-account-create?tabs=azure-portal). -- > [!NOTE] - > When you create a storage account, it's recommended that you create it under the same resource group and region/location as your Kubernetes cluster. --1. Create a container in the storage account that you created in the previous step, [following these instructions](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container). --## Create a Kubernetes secret --Edge Volumes supports the following three authentication methods: --- Shared Access Signature (SAS) Authentication (recommended)-- Connection String Authentication-- Storage Key Authentication--After you complete authentication for one of these methods, proceed to the [Create a Cloud Ingest Persistent Volume Claim (PVC)](#create-a-cloud-ingest-persistent-volume-claim-pvc) section. --### [Shared Access Signature (SAS) authentication](#tab/sas) --### Create a Kubernetes secret using Shared Access Signature (SAS) authentication --You can configure SAS authentication using YAML and `kubectl`, or by using the Azure CLI. --To find your `storageaccountsas`, perform the following procedure: --1. Navigate to your storage account in the Azure portal. -1. Expand **Security + networking** on the left blade and then select **Shared access signature**. -1. Under **Allowed resource types**, select **Service > Container > Object**. -1. Under **Allowed permissions**, unselect **Immutable storage** and **Permanent delete**. -1. Under **Start and expiry date/time**, choose your desired end date and time. -1. At the bottom, select **Generate SAS and connection string**. -1. The values listed under **SAS token** are used for the `storageaccountsas` variables in the next section. --#### Shared Access Signature (SAS) authentication using YAML and `kubectl` --1. Create a file named `sas.yaml` with the following contents. Replace `metadata::name`, `metadata::namespace`, and `storageaccountconnectionstring` with your own values. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: v1 - kind: Secret - metadata: - ### This name should look similar to "kharrisStorageAccount-secret" where "kharrisStorageAccount" is replaced with your storage account name - name: <your-storage-acct-name-secret> - # Use a namespace that matches your intended consuming pod, or "default" - namespace: <your-intended-consuming-pod-or-default> - stringData: - authType: SAS - # Container level SAS (must have ? prefixed) - storageaccountsas: "?..." - type: Opaque - ``` --1. 
To apply `sas.yaml`, run: -- ```bash - kubectl apply -f "sas.yaml" - ``` --#### Shared Access Signature (SAS) authentication using CLI --- If you want to scope SAS authentication at the container level, use the following commands. You must update `YOUR_CONTAINER_NAME` from the first command and `YOUR_NAMESPACE`, `YOUR_STORAGE_ACCT_NAME`, and `YOUR_SECRET` from the second command:-- ```bash - az storage container generate-sas [OPTIONAL auth via --connection-string "..."] --name YOUR_CONTAINER_NAME --permissions acdrw --expiry '2025-02-02T01:01:01Z' - kubectl create secret generic -n "YOUR_NAMESPACE" "YOUR_STORAGE_ACCT_NAME"-secret --from-literal=storageaccountsas="YOUR_SAS" - ``` --### [Connection string authentication](#tab/connectionstring) --### Create a Kubernetes secret using connection string authentication --You can configure connection string authentication using YAML and `kubectl`, or by using Azure CLI. --To find your `storageaccountconnectionstring`, perform the following procedure: --1. Navigate to your storage account in the Azure portal. -1. Expand **Security + networking** on the left blade and then select **Shared access signature**. -1. Under **Allowed resource types**, select **Service > Container > Object**. -1. Under **Allowed permissions**, unselect **Immutable storage** and **Permanent delete**. -1. Under **Start and expiry date/time**, choose your desired end date and time. -1. At the bottom, select **Generate SAS and connection string**. -1. The values listed under **Connection string** are used for the `storageaccountconnectionstring` variables in the next section.. --For more information, see [Create a connection string using a shared access signature](/azure/storage/common/storage-configure-connection-string?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#create-a-connection-string-using-a-shared-access-signature). --#### Connection string authentication using YAML and `kubectl` --1. Create a file named `connectionString.yaml` with the following contents. Replace `metadata::name`, `metadata::namespace`, and `storageaccountconnectionstring` with your own values. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: v1 - kind: Secret - metadata: - ### This name should look similar to "kharrisStorageAccount-secret" where "kharrisStorageAccount" is replaced with your storage account name - name: <your-storage-acct-name-secret> - # Use a namespace that matches your intended consuming pod or "default" - namespace: <your-intended-consuming-pod-or-default> - stringData: - authType: CONNECTION_STRING - # Connection string which can contain a storage key or SAS. - # Depending on your decision on using storage key or SAS, comment out the undesired storageaccoutnconnectionstring. - # - Storage key example - - storageaccountconnectionstring: "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCT_NAME_HERE;AccountKey=YOUR_ACCT_KEY_HERE;EndpointSuffix=core.windows.net" - # - SAS example - - storageaccountconnectionstring: "BlobEndpoint=https://YOUR_BLOB_ENDPOINT_HERE;SharedAccessSignature=YOUR_SHARED_ACCESS_SIG_HERE" - type: Opaque - ``` --1. To apply `connectionString.yaml`, run: -- ```bash - kubectl apply -f "connectionString.yaml" - ``` --#### Connection string authentication using CLI --A connection string can contain a storage key or SAS. --- For a storage key connection string, run the following commands. 
You must update the `your_storage_acct_name` value from the first command, and the `your_namespace`, `your_storage_acct_name`, and `your_secret` values from the second command:-- ```bash - az storage account show-connection-string --name YOUR_STORAGE_ACCT_NAME --output tsv - kubectl create secret generic -n "your_namespace" "your_storage_acct_name"-secret --from-literal=storageaccountconnectionstring="your_secret" - ``` --- For a SAS connection string, run the following commands. You must update the `your_storage_acct_name` and `your_sas_token` values from the first command, and the `your_namespace`, `your_storage_acct_name`, and `your_secret` values from the second command:-- ```bash - az storage account show-connection-string --name your_storage_acct_name --sas-token "your_sas_token" -output tsv - kubectl create secret generic -n "your_namespace" "your_storage_acct_name"-secret --from-literal=storageaccountconnectionstring="your_secret" - ``` --### [Storage key authentication](#tab/storagekey) --### Create a Kubernetes secret using storage key authentication --1. Create a file named `add-key.sh` with the following contents. No edits to the contents are necessary: -- ```bash - #!/usr/bin/env bash - - while getopts g:n:s: flag - do - case "${flag}" in - g) RESOURCE_GROUP=${OPTARG};; - s) STORAGE_ACCOUNT=${OPTARG};; - n) NAMESPACE=${OPTARG};; - esac - done - - SECRET=$(az storage account keys list -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query [0].value --output tsv) - - kubectl create secret generic -n "${NAMESPACE}" "${STORAGE_ACCOUNT}"-secret --from-literal=storageaccountkey="${SECRET}" --from-literal=storageaccountname="${STORAGE_ACCOUNT}" - ``` --1. Once you create the file, change the write permissions on the file and execute the shell script using the following commands. Running these commands creates a secret named `{your_storage_account}-secret`. This secret name is used for the `secretName` value when you configure the Persistent Volume (PV). -- ```bash - chmod +x add-key.sh - ./add-key.sh -g "$your_resource_group_name" -s "$your_storage_account_name" -n "$your_kubernetes_namespace" - ``` ----## Create a Cloud Ingest Persistent Volume Claim (PVC) --1. Create a file named `cloudIngestPVC.yaml` with the following contents. You must edit the `metadata::name` value, and add a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. You must also update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a name for the PVC ### - name: <your-storage-acct-name-secret> - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: <your-intended-consuming-pod-or-default> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: cloud-backed-sc - ``` --2. To apply `cloudIngestPVC.yaml`, run: -- ```bash - kubectl apply -f "cloudIngestPVC.yaml" - ``` --## Attach sub-volume to Edge Volume --1. Get the name of your Edge Volume using the following command: -- ```bash - kubectl get edgevolumes - ``` --1. Create a file named `edgeSubvolume.yaml` and copy the following contents. 
Update the variables with your information: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your sub-volume. - - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`. - - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash. - - `spec::auth::authType`: Depends on what authentication method you used in the previous steps. Accepted inputs include `sas`, `connection_string`, and `key`. - - `spec::auth::secretName`: If you used storage key authentication, your `secretName` is `{your_storage_account_name}-secret`. If you used connection string or SAS authentication, your `secretName` was specified by you. - - `spec::auth::secretNamespace`: Matches your intended consuming pod, or `default`. - - `spec::container`: The container name in your storage account. - - `spec::storageaccountendpoint`: Navigate to your storage account in the Azure portal. On the **Overview** page, near the top right of the screen, select **JSON View**. You can find the `storageaccountendpoint` link under **properties::primaryEndpoints::blob**. Copy the entire link (for example, `https://mytest.blob.core.windows.net/`). -- ```yaml - apiVersion: "arccontainerstorage.azure.net/v1" - kind: EdgeSubvolume - metadata: - name: <create-a-subvolume-name-here> - spec: - edgevolume: <your-edge-volume-name-here> - path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash. - auth: - authType: MANAGED_IDENTITY - secretName: <your-secret-name> - secretNamespace: <your_namespace> - storageaccountendpoint: <your_storage_account_endpoint> - container: <your-blob-storage-account-container-name> - ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration - ``` --2. To apply `edgeSubvolume.yaml`, run: -- ```bash - kubectl apply -f "edgeSubvolume.yaml" - ``` --### Optional: Modify the `ingestPolicy` from the default --1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. Update the following variables with your preferences. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your **ingestPolicy**. This name must be updated and referenced in the spec::ingestPolicy section of your `edgeSubvolume.yaml`. - - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to **oldest-first**). Options for order are: **oldest-first** or **newest-first**. - - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000. - - `spec::eviction::order`: How files are evicted (defaults to **unordered**). Options for eviction order are: **unordered** or **never**. - - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000. 
-- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeIngestPolicy - metadata: - name: <create-a-policy-name-here> # This will need to be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml - spec: - ingest: - order: <your-ingest-order> - minDelaySec: <your-min-delay-sec> - eviction: - order: <your-eviction-order> - minDelaySec: <your-min-delay-sec> - ``` --1. To apply `myedgeingest-policy.yaml`, run: -- ```bash - kubectl apply -f "myedgeingest-policy.yaml" - ``` --## Attach your app (Kubernetes native application) --1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Replace `containers::name` and `volumes::persistentVolumeClaim::claimName` with your values. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: cloudingestedgevol-deployment ### This will need to be unique for every volume you choose to create - spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the busy box. ### - - name: <create-a-container-name-here> - image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09 - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data &> || break; sleep 1; done" - volumeMounts: - ### This name must match the following volumes::name attribute ### - - name: wyvern-volume - ### This mountPath is where the PVC will be attached to the pod's filesystem ### - mountPath: "/data" - volumes: - ### User-defined 'name' that is used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. To apply `deploymentExample.yaml`, run: -- ```bash - kubectl apply -f "deploymentExample.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name; you use it in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods will appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the last step: -- ```bash - kubectl exec -it pod_name_here -- sh - ``` --1. Change directories (`cd`) into the `/data` mount path as specified in your `deploymentExample.yaml`. --1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Now, `cd` into `/your_path_name_here`, and replace `your_path_name_here` with your respective details. --1. 
As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --1. In the Azure portal, navigate to your storage account and find the container specified in Step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should see `file1.txt` populated within the container. If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading. --## Next steps --After completing these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana. --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
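Relating to the container-scoped SAS commands earlier in this article, the two CLI steps can also be combined so the generated token is captured in a shell variable instead of being pasted by hand. This is a minimal sketch that reuses the placeholder names from those commands; authenticating with a connection string is an assumption, and any supported `az storage` authentication method works.

```bash
# Minimal sketch: generate a container-scoped SAS and store it in the Kubernetes
# secret in one pass. Replace the YOUR_* placeholders with your own values.
SAS=$(az storage container generate-sas \
  --connection-string "YOUR_STORAGE_ACCOUNT_CONNECTION_STRING" \
  --name "YOUR_CONTAINER_NAME" \
  --permissions acdrw \
  --expiry '2025-02-02T01:01:01Z' \
  --output tsv)

kubectl create secret generic -n "YOUR_NAMESPACE" "YOUR_STORAGE_ACCT_NAME"-secret \
  --from-literal=storageaccountsas="$SAS"
```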
azure-arc | Alternate Onelake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/alternate-onelake.md | - Title: Alternate OneLake configuration for Cloud Ingest Edge Volumes -description: Learn about an alternate Cloud Ingest Edge Volumes configuration. ---- Previously updated : 08/26/2024---# Alternate: OneLake configuration for Cloud Ingest Edge Volumes --This article describes an alternate configuration for [Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) (blob upload with local purge) for OneLake Lakehouses. --This configuration is an alternative option that you can use with key-based authentication methods. You should review the recommended configuration using the system-assigned managed identities described in [Cloud Ingest Edge Volumes configuration](cloud-ingest-edge-volume-configuration.md). --## Configure OneLake for Extension Identity --### Add Extension Identity to OneLake workspace --1. Navigate to your OneLake portal; for example, `https://youraccount.powerbi.com`. -1. Create or navigate to your workspace. - :::image type="content" source="media/onelake-workspace.png" alt-text="Screenshot showing workspace ribbon in portal." lightbox="media/onelake-workspace.png"::: -1. Select **Manage Access**. - :::image type="content" source="media/onelake-manage-access.png" alt-text="Screenshot showing manage access screen in portal." lightbox="media/onelake-manage-access.png"::: -1. Select **Add people or groups**. -1. Enter your extension name from your Azure Container Storage enabled by Azure Arc installation. This must be unique within your tenant. - :::image type="content" source="media/add-extension-name.png" alt-text="Screenshot showing add extension name screen." lightbox="media/add-extension-name.png"::: -1. Change the drop-down for permissions from **Viewer** to **Contributor**. - :::image type="content" source="media/onelake-set-contributor.png" alt-text="Screenshot showing set contributor screen." lightbox="media/onelake-set-contributor.png"::: -1. Select **Add**. --### Create a Cloud Ingest Persistent Volume Claim (PVC) --1. Create a file named `cloudIngestPVC.yaml` with the following contents. Modify the `metadata::name` value with a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. You must also update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a nane for your PVC ### - name: <create-a-pvc-name-here> - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: <intended-consuming-pod-or-default-here> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: cloud-backed-sc - ``` --1. To apply `cloudIngestPVC.yaml`, run: -- ```bash - kubectl apply -f "cloudIngestPVC.yaml" - ``` --### Attach sub-volume to Edge Volume --You can use the following process to create a sub-volume using Extension Identity to connect to your OneLake LakeHouse. --1. Get the name of your Edge Volume using the following command: -- ```bash - kubectl get edgevolumes - ``` --1. Create a file named `edgeSubvolume.yaml` and copy/paste the following contents. 
The following variables must be updated with your information: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your sub-volume. - - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`. - - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash. - - `spec::container`: Details of your One Lake Data Lake Lakehouse (for example, `<WORKSPACE>/<DATA_LAKE>/Files`). - - `spec::storageaccountendpoint`: Your storage account endpoint is the prefix of your Power BI web link. For example, if your OneLake page is `https://contoso-motors.powerbi.com/`, then your endpoint is `https://contoso-motors.dfs.fabric.microsoft.com`. -- ```yaml - apiVersion: "arccontainerstorage.azure.net/v1" - kind: EdgeSubvolume - metadata: - name: <create-a-subvolume-name-here> - spec: - edgevolume: <your-edge-volume-name-here> - path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must to be updated. Don't use a preceding slash. - auth: - authType: MANAGED_IDENTITY - storageaccountendpoint: "https://<Your AZ Site>.dfs.fabric.microsoft.com/" # Your AZ site is the root of your Power BI OneLake interface URI, such as https://contoso-motors.powerbi.com - container: "<WORKSPACE>/<DATA_LAKE>/Files" # Details of your One Lake Data Lake Lakehouse - ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration - ``` --2. To apply `edgeSubvolume.yaml`, run: -- ```bash - kubectl apply -f "edgeSubvolume.yaml" - ``` --#### Optional: Modify the `ingestPolicy` from the default --1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. The following variables must be updated with your preferences: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your `ingestPolicy`. This name must be updated and referenced in the `spec::ingestPolicy` section of your `edgeSubvolume.yaml`. - - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to `oldest-first`). Options for order are: `oldest-first` or `newest-first`. - - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000. - - `spec::eviction::order`: How files are evicted (defaults to `unordered`). Options for eviction order are: `unordered` or `never`. - - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000. -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeIngestPolicy - metadata: - name: <create-a-policy-name-here> # This will need to be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml - spec: - ingest: - order: <your-ingest-order> - minDelaySec: <your-min-delay-sec> - eviction: - order: <your-eviction-order> - minDelaySec: <your-min-delay-sec> - ``` --1. 
To apply `myedgeingest-policy.yaml`, run: -- ```bash - kubectl apply -f "myedgeingest-policy.yaml" - ``` --## Attach your app (Kubernetes native application) --1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Replace the values for `containers::name` and `volumes::persistentVolumeClaim::claimName` with your own. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: cloudingestedgevol-deployment ### This must be unique for each deployment you choose to create. - spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the busy box. ### - - name: <create-a-container-name-here> - image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09 - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data &> || break; sleep 1; done" - volumeMounts: - ### This name must match the following volumes::name attribute ### - - name: wyvern-volume - ### This mountPath is where the PVC is attached to the pod's filesystem ### - mountPath: "/data" - volumes: - ### User-defined name that's used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. To apply `deploymentExample.yaml`, run: -- ```bash - kubectl apply -f "deploymentExample.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name, as you need it in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step: -- ```bash - kubectl exec -it POD_NAME_HERE -- sh - ``` --1. Change directories into the `/data` mount path as specified in `deploymentExample.yaml`. --1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Now, `cd` into `/YOUR_PATH_NAME_HERE`, replacing `YOUR_PATH_NAME_HERE` with your details. --1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --1. In the Azure portal, navigate to your storage account and find the container specified from step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should find `file1.txt` populated within the container. 
If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading. --## Next steps --After you complete these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana. --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
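If the test file doesn't show up in OneLake, it can help to confirm that the sub-volume from this article actually attached before retrying. The following is a minimal sketch; the plural `edgesubvolumes` resource name is assumed from the `EdgeSubvolume` kind used above, and the pod label comes from the sample deployment.

```bash
# Confirm the Edge Volume and sub-volume resources exist (the plural resource
# names are assumed from the EdgeVolume/EdgeSubvolume kinds used in this article).
kubectl get edgevolumes
kubectl get edgesubvolumes

# From inside one of the sample deployment pods, confirm the sub-volume path is
# mounted under /data before writing more test files.
POD=$(kubectl get pods -l name=wyvern-testclientdeployment -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- ls /data
```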
azure-arc | Attach App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/attach-app.md | - Title: Attach your application using the Azure IoT Operations data processor or Kubernetes native application (preview) -description: Learn how to attach your app using the Azure IoT Operations data processor or Kubernetes native application in Azure Container Storage enabled by Azure Arc Cache Volumes. --- Previously updated : 08/26/2024-zone_pivot_groups: attach-app ---# Attach your application (preview) --This article assumes you created a Persistent Volume (PV) and a Persistent Volume Claim (PVC). For information about creating a PV, see [Create a persistent volume](create-persistent-volume.md). For information about creating a PVC, see [Create a Persistent Volume Claim](create-persistent-volume-claim.md). --## Configure the Azure IoT Operations data processor --When you use Azure IoT Operations (AIO), the Data Processor is spawned without any mounts for Cache Volumes. You can perform the following tasks: --- Add a mount for the Cache Volumes PVC you created previously.-- Reconfigure all pipelines' output stage to output to the Cache Volumes mount you just created. --## Add Cache Volumes to your aio-dp-runner-worker-0 pods --These pods are part of a **statefulSet**. You can't edit the statefulSet in place to add mount points. Instead, follow this procedure: --1. Dump the statefulSet to yaml: -- ```bash - kubectl get statefulset -o yaml -n azure-iot-operations aio-dp-runner-worker > stateful_worker.yaml - ``` --1. Edit the statefulSet to include the new mounts for Cache Volumes in volumeMounts and volumes: -- ```yaml - volumeMounts: - - mountPath: /etc/bluefin/config - name: config-volume - readOnly: true - - mountPath: /var/lib/bluefin/registry - name: nfs-volume - - mountPath: /var/lib/bluefin/local - name: runner-local - ### Add the next 2 lines ### - - mountPath: /mnt/esa - name: esa4 - - volumes: - - configMap: - defaultMode: 420 - name: file-config - name: config-volume - - name: nfs-volume - persistentVolumeClaim: - claimName: nfs-provisioner - ### Add the next 3 lines ### - - name: esa4 - persistentVolumeClaim: - claimName: esa4 - ``` --1. Delete the existing statefulSet: -- ```bash - kubectl delete statefulset -n azure-iot-operations aio-dp-runner-worker - ``` -- This deletes all `aio-dp-runner-worker-n` pods. This is an outage-level event. --1. Create a new statefulSet of aio-dp-runner-worker(s) with the Cache Volumes mounts: -- ```bash - kubectl apply -f stateful_worker.yaml -n azure-iot-operations - ``` -- When the `aio-dp-runner-worker-n` pods start, they include mounts to Cache Volumes. The PVC should convey this in the state. --1. Once you reconfigure your Data Processor workers to have access to the Cache Volumes, you must manually update the pipeline configuration to use a local path that corresponds to the mounted location of your Cache Volume on the worker PODs. -- In order to modify the pipeline, use `kubectl edit pipeline <name of your pipeline>`. In that pipeline, replace your output stage with the following YAML: -- ```yaml - output: - batch: - path: .payload - time: 60s - description: An example file output stage - displayName: Sample File output - filePath: '{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}' - format: - type: jsonStream - rootDirectory: /mnt/esa - type: output/file@v1 - ``` ---## Configure a Kubernetes native application --1. 
To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `configPod.yaml` with the following contents: -- ```yaml - kind: Deployment - apiVersion: apps/v1 - metadata: - name: example-static - labels: - app: example-static - ### Uncomment the next line and add your namespace only if you are not using the default namespace (if you are using azure-iot-operations) as specified from Line 6 of your pvc.yaml. If you are not using the default namespace, all future kubectl commands require "-n YOUR_NAMESPACE" to be added to the end of your command. - # namespace: YOUR_NAMESPACE - spec: - replicas: 1 - selector: - matchLabels: - app: example-static - template: - metadata: - labels: - app: example-static - spec: - containers: - - image: mcr.microsoft.com/cbl-mariner/base/core:2.0 - name: mariner - command: - - sleep - - infinity - volumeMounts: - ### This name must match the 'volumes.name' attribute in the next section. ### - - name: blob - ### This mountPath is where the PVC is attached to the pod's filesystem. ### - mountPath: "/mnt/blob" - volumes: - ### User-defined 'name' that's used to link the volumeMounts. This name must match 'volumeMounts.name' as specified in the previous section. ### - - name: blob - persistentVolumeClaim: - ### This claimName must refer to the PVC resource 'name' as defined in the PVC config. This name must match what your PVC resource was actually named. ### - claimName: YOUR_CLAIM_NAME_FROM_YOUR_PVC - ``` -- > [!NOTE] - > If you are using your own namespace, all future `kubectl` commands require `-n YOUR_NAMESPACE` to be appended to the command. For example, you must use `kubectl get pods -n YOUR_NAMESPACE` instead of the standard `kubectl get pods`. --1. To apply this .yaml file, run the following command: -- ```bash - kubectl apply -f "configPod.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name, as you need it for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step: -- ```bash - kubectl exec -it POD_NAME_HERE -- bash - ``` --1. Change directories into the `/mnt/blob` mount path as specified from your `configPod.yaml`. --1. As an example, to write a file, run `touch file.txt`. --1. In the Azure portal, navigate to your storage account and find the container. This is the same container you specified in your `pv.yaml` file. When you select your container, you see `file.txt` populated within the container. ---## Next steps --After you complete these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring or third-party monitoring with Prometheus and Grafana: --[Third-party monitoring](third-party-monitoring.md) |
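Before writing test files through `configPod.yaml`, you can optionally confirm that the claim bound and that the volume mounted. This is a minimal sketch under the assumptions of this article; append `-n YOUR_NAMESPACE` to each command if you aren't using the default namespace, as noted above.

```bash
# Verify the claim is Bound and the pod is Running.
kubectl get pvc
kubectl get pods -l app=example-static

# Confirm the blob-backed mount is visible inside the pod at /mnt/blob.
POD=$(kubectl get pods -l app=example-static -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- df -h /mnt/blob
```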
azure-arc | Azure Monitor Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/azure-monitor-kubernetes.md | - Title: Azure Monitor and Kubernetes monitoring (preview) -description: Learn how to monitor your deployment using Azure Monitor and Kubernetes monitoring in Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Azure Monitor and Kubernetes monitoring (preview) --This article describes how to monitor your deployment using Azure Monitor and Kubernetes monitoring. --## Azure Monitor --[Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) is a full-stack monitoring service that you can use to monitor Azure resources for their availability, performance, and operation. --## Azure Monitor metrics --[Azure Monitor metrics](/azure/azure-monitor/essentials/data-platform-metrics) is a feature of Azure Monitor that collects data from monitored resources into a time-series database. --These metrics can originate from a number of different sources, including native platform metrics, native custom metrics via [Azure Monitor agent Application Insights](/azure/azure-monitor/insights/insights-overview), and [Azure Managed Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview). --Prometheus metrics can be stored in an [Azure Monitor workspace](/azure/azure-monitor/essentials/azure-monitor-workspace-overview) for subsequent visualization via [Azure Managed Grafana](/azure/managed-grafana/overview). --### Metrics configuration --To configure the scraping of Prometheus metrics data into Azure Monitor, see the [Azure Monitor managed service for Prometheus scrape configuration](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration#enable-pod-annotation-based-scraping) article, which builds upon [this configmap](https://aka.ms/azureprometheus-addon-settings-configmap). Azure Container Storage enabled by Azure Arc specifies the `prometheus.io/scrape:true` and `prometheus.io/port` values, and relies on the default of `prometheus.io/path: '/metrics'`. You must specify the Azure Container Storage enabled by Azure Arc installation namespace under `pod-annotation-based-scraping` to properly scope your metrics' ingestion. --Once the Prometheus configuration has been completed, follow the [Azure Managed Grafana instructions](/azure/managed-grafana/overview) to create an [Azure Managed Grafana instance](/azure/managed-grafana/quickstart-managed-grafana-portal). --## Azure Monitor logs --[Azure Monitor logs](/azure/azure-monitor/logs/data-platform-logs) is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources, and can be used to [analyze this data in many ways](/azure/azure-monitor/logs/data-platform-logs#what-can-you-do-with-azure-monitor-logs). --### Logs configuration --If you want to access log data via Azure Monitor, you must enable [Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-overview) on your Arc-enabled Kubernetes cluster, and then analyze the collected data with [a collection of views](/azure/azure-monitor/containers/container-insights-analyze) and [workbooks](/azure/azure-monitor/containers/container-insights-reports). --Additionally, you can use [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-tutorial) to query collected log data. --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
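As a rough illustration of the metrics scoping described above, the following sketch edits the Prometheus addon settings configmap so that pod-annotation-based scraping covers the Azure Container Storage installation namespace. The configmap name, namespace, and key shown here are assumptions based on the configmap linked above; verify them against that file before applying.

```bash
# Hedged sketch: export the addon settings configmap, add the extension namespace
# to the pod-annotation-based scraping scope, then re-apply it.
kubectl get configmap ama-metrics-settings-configmap -n kube-system -o yaml > ama-metrics-settings.yaml

# In ama-metrics-settings.yaml, set the namespace regex under
# pod-annotation-based-scraping, for example:
#   pod-annotation-based-scraping: |-
#     podannotationnamespaceregex = "azure-arc-containerstorage"

kubectl apply -f ama-metrics-settings.yaml
```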
azure-arc | Blob Index Metadata Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/blob-index-metadata-tags.md | - Title: Blob index and metadata tags -description: Learn about blob index and metadata tags in Edge Volumes. ---- Previously updated : 08/26/2024---# Blob index and metadata tags --Cloud Ingest Edge Volumes now supports the ability to generate blob index tags and blob metadata tags directly from Azure Container Storage enabled by Azure Arc. This process involves incorporating extended attributes to the files within your Cloud Ingest Edge Volume, where Edge Volumes translates that into your selected index or metadata tag. --## Blob index tags --To generate a blob index tag, create an extended attribute using the prefix `azindex`, followed by the desired key and its corresponding value for the index tag. Edge Volumes subsequently propagates these values to the blob, appearing as the key matching the value. --> [!NOTE] -> Index tags are only supported for non-hierarchical namespace (HNS) accounts. --### Example 1: index tags --The following example creates the blob index tag `location=chicagoplant2` on `logfile1`: --```bash -$ attr -s azindex.location -V chicagoplant2 logfile1 -Attribute "azindex.location" set to a 13 byte value for logfile1: -chicagoplant2 -``` --### Example 2: index tags --The following example creates the blob index tag `datecreated=1705523841` on `logfile2`: --```bash -$ attr -s azindex.datecreated -V $(date +%s) logfile2 -Attribute " azindex.datecreated " set to a 10 byte value for logfile2: -1705523841 -``` --## Blob metadata tags --To generate a blob metadata tag, create an extended attribute using the prefix `azmeta`, followed by the desired key and its corresponding value for the metadata tag. Edge Volumes subsequently propagates these values to the blob, appearing as the key matching the value. --> [!NOTE] -> Metadata tags are supported for HNS and non-HNS accounts. --> [!NOTE] -> HNS blobs also receive `x-ms-meta-is_adls=true` to indicate that the blob was created with Datalake APIs. --### Example 1: metadata tags --The following example creates the blob metadata tag `x-ms-meta-location=chicagoplant2` on `logfile1`: --```bash -$ attr -s azmeta.location -V chicagoplant2 logfile1 -Attribute "azmeta.location" set to a 13 byte value for logfile1: -chicagoplant2 -``` --### Example 2: metadata tags --The following example creates the blob metadata tag `x-ms-meta-datecreated=1705523841` on `logfile2`: --```bash -$ attr -s azmeta.datecreated -V $(date +%s) logfile2 -Attribute " azmeta.datecreated " set to a 10 byte value for logfile2: -1705523841 -``` --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
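To confirm that the extended attributes were set the way you expect before the files upload, you can read them back with the same `attr` utility. This is a small sketch that assumes both an index tag and a metadata tag were applied to `logfile1`, as in the examples above.

```bash
# List every extended attribute currently set on the file.
attr -l logfile1

# Read back individual values to confirm the key/value pairs that Edge Volumes
# will translate into blob index and metadata tags.
attr -g azindex.location logfile1
attr -g azmeta.location logfile1
```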
azure-arc | Cache Volumes Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/cache-volumes-overview.md | - Title: Cache Volumes overview -description: Learn about the Cache Volumes offering from Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Overview of Cache Volumes --This article describes the Cache Volumes offering from Azure Container Storage enabled by Azure Arc. --## How does Cache Volumes work? ---Cache Volumes works by performing the following operations: --- **Write** - Your file is processed locally and saved in the cache. If the file doesn't change within 3 seconds, Cache Volumes automatically uploads it to your chosen blob destination.-- **Read** - If the file is already in the cache, the file is served from the cache memory. If it isn't available in the cache, the file is pulled from your chosen blob storage target.--## Next steps --- [Prepare Linux](prepare-linux.md)-- [How to install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md)-- [Create a persistent volume](create-persistent-volume.md)-- [Monitor your deployment](azure-monitor-kubernetes.md) |
azure-arc | Cloud Ingest Edge Volume Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/cloud-ingest-edge-volume-configuration.md | - Title: Cloud Ingest Edge Volumes configuration -description: Learn about Cloud Ingest Edge Volumes configuration for Edge Volumes. ---- Previously updated : 08/26/2024---# Cloud Ingest Edge Volumes configuration --This article describes the configuration for *Cloud Ingest Edge Volumes* (blob upload with local purge). --## What is Cloud Ingest Edge Volumes? --*Cloud Ingest Edge Volumes* facilitates limitless data ingestion from edge to blob, including ADLSgen2. Files written to this storage type are seamlessly transferred to blob storage and once confirmed uploaded, are subsequently purged locally. This removal ensures space availability for new data. Moreover, this storage option supports data integrity in disconnected environments, which enables local storage and synchronization upon reconnection to the network. --For example, you can write a file to your cloud ingest PVC, and a process runs a scan to check for new files every minute. Once identified, the file is sent for uploading to your designated blob destination. Following confirmation of a successful upload, Cloud Ingest Edge Volume waits for five minutes, and then deletes the local version of your file. --## Prerequisites --1. Create a storage account [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal). -- > [!NOTE] - > When you create your storage account, it's recommended that you create it under the same resource group and region/location as your Kubernetes cluster. --1. Create a container in the storage account that you created previously, [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container). --## Configure Extension Identity --Edge Volumes allows the use of a system-assigned extension identity for access to blob storage. This section describes how to use the system-assigned extension identity to grant access to your storage account, allowing you to upload cloud ingest volumes to these storage systems. --It's recommended that you use Extension Identity. If your final destination is blob storage or ADLSgen2, see the following instructions. If your final destination is OneLake, follow the instructions in [Configure OneLake for Extension Identity](alternate-onelake.md). --While it's not recommended, if you prefer to use key-based authentication, follow the instructions in [Key-based authentication](alternate-key-based.md). --### Obtain Extension Identity --#### [Azure portal](#tab/portal) --#### Azure portal --1. Navigate to your Arc-connected cluster. -1. Select **Extensions**. -1. Select your Azure Container Storage enabled by Azure Arc extension. -1. Note the Principal ID under **Cluster Extension Details**. 
- -#### [Azure CLI](#tab/cli) --#### Azure CLI --In Azure CLI, enter your values for the exports (`CLUSTER_NAME`, `RESOURCE_GROUP`) and run the following command: --```bash -export CLUSTER_NAME = <your-cluster-name-here> -export RESOURCE_GROUP = <your-resource-group-here> -export EXTENSION_TYPE=${1:-"microsoft.arc.containerstorage"} -az k8s-extension list --cluster-name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --cluster-type connectedClusters | jq --arg extType ${EXTENSION_TYPE} 'map(select(.extensionType == $extType)) | .[] | .identity.principalId' -r -``` ----### Configure blob storage account for Extension Identity --#### Add Extension Identity permissions to a storage account --1. Navigate to storage account in the Azure portal. -1. Select **Access Control (IAM)**. -1. Select **Add+ -> Add role assignment**. -1. Select **Storage Blob Data Owner**, then select **Next**. -1. Select **+Select Members**. -1. To add your principal ID to the **Selected Members:** list, paste the ID and select **+** next to the identity. -1. Click **Select**. -1. To review and assign permissions, select **Next**, then select **Review + Assign**. --## Create a Cloud Ingest Persistent Volume Claim (PVC) --1. Create a file named `cloudIngestPVC.yaml` with the following contents. Edit the `metadata::name` line and create a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. Also, update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a name for your PVC ### - name: <create-persistent-volume-claim-name-here> - ### Use a namespace that matched your intended consuming pod, or "default" ### - namespace: <intended-consuming-pod-or-default-here> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: cloud-backed-sc - ``` --1. To apply `cloudIngestPVC.yaml`, run: -- ```bash - kubectl apply -f "cloudIngestPVC.yaml" - ``` --## Attach sub-volume to Edge Volume --To create a sub-volume using extension identity to connect to your storage account container, use the following process: --1. Get the name of your Edge Volume using the following command: -- ```bash - kubectl get edgevolumes - ``` --1. Create a file named `edgeSubvolume.yaml` and copy the following contents. These variables must be updated with your information: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your sub-volume. - - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`. - - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash. - - `spec::container`: The container name in your storage account. - - `spec::storageaccountendpoint`: Navigate to your storage account in the Azure portal. On the **Overview** page, near the top right of the screen, select **JSON View**. You can find the `storageaccountendpoint` link under **properties::primaryEndpoints::blob**. 
Copy the entire link (for example, `https://mytest.blob.core.windows.net/`). -- ```yaml - apiVersion: "arccontainerstorage.azure.net/v1" - kind: EdgeSubvolume - metadata: - name: <create-a-subvolume-name-here> - spec: - edgevolume: <your-edge-volume-name-here> - path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash. - auth: - authType: MANAGED_IDENTITY - storageaccountendpoint: "https://<STORAGE ACCOUNT NAME>.blob.core.windows.net/" - container: <your-blob-storage-account-container-name> - ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration - ``` --2. To apply `edgeSubvolume.yaml`, run: -- ```bash - kubectl apply -f "edgeSubvolume.yaml" - ``` --### Optional: Modify the `ingestPolicy` from the default --1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. The following variables must be updated with your preferences: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- - `metadata::name`: Create a name for your **ingestPolicy**. This name must be updated and referenced in the `spec::ingestPolicy` section of your `edgeSubvolume.yaml`. - - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to **oldest-first**). Options for order are: **oldest-first** or **newest-first**. - - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000. - - `spec::eviction::order`: How files are evicted (defaults to **unordered**). Options for eviction order are: **unordered** or **never**. - - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000. -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeIngestPolicy - metadata: - name: <create-a-policy-name-here> # This must be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml - spec: - ingest: - order: <your-ingest-order> - minDelaySec: <your-min-delay-sec> - eviction: - order: <your-eviction-order> - minDelaySec: <your-min-delay-sec> - ``` --1. To apply `myedgeingest-policy.yaml`, run: -- ```bash - kubectl apply -f "myedgeingest-policy.yaml" - ``` --## Attach your app (Kubernetes native application) --1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Modify the `containers::name` and `volumes::persistentVolumeClaim::claimName` values. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: cloudingestedgevol-deployment ### This must be unique for each deployment you choose to create. 
- spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the busy box. ### - - name: <create-a-container-name-here> - image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09 - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data &> || break; sleep 1; done" - volumeMounts: - ### This name must match the volumes::name attribute below ### - - name: wyvern-volume - ### This mountPath is where the PVC is attached to the pod's filesystem ### - mountPath: "/data" - volumes: - ### User-defined 'name' that's used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name (Line 5) - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. To apply `deploymentExample.yaml`, run: -- ```bash - kubectl apply -f "deploymentExample.yaml" - ``` --1. Use `kubectl get pods` to find the name of your pod. Copy this name to use in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the last step: -- ```bash - kubectl exec -it POD_NAME_HERE -- sh - ``` --1. Change directories into the `/data` mount path as specified from your `deploymentExample.yaml`. --1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Change directories into `/YOUR_PATH_NAME_HERE`, replacing the `YOUR_PATH_NAME_HERE` value with your details. --1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --1. In the Azure portal, navigate to your storage account and find the container specified from Step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should find `file1.txt` populated within the container. If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading. --## Next steps --After you complete these steps, you can begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring or 3rd-party monitoring with Prometheus and Grafana. --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
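If you prefer to grant the extension identity access from the command line instead of the portal steps in this article, a sketch along the following lines can work with Azure CLI. The placeholder values are assumptions; substitute the principal ID you noted earlier and your own storage account details.

```bash
# Hedged sketch: assign "Storage Blob Data Owner" on the storage account to the
# extension's principal ID obtained earlier in this article.
PRINCIPAL_ID="<extension-principal-id>"
STORAGE_ID=$(az storage account show \
  --name "<your-storage-account-name>" \
  --resource-group "<your-resource-group>" \
  --query id --output tsv)

az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Owner" \
  --scope "$STORAGE_ID"
```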
azure-arc | Create Persistent Volume Claim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/create-persistent-volume-claim.md | - Title: Create a Persistent Volume Claim (PVC) (preview) -description: Learn how to create a Persistent Volume Claim (PVC) in Cache Volumes. --- Previously updated : 08/26/2024----# Create a Persistent Volume Claim (PVC) (preview) --A Persistent Volume Claim (PVC) is a claim against the persistent volume (PV) that a Kubernetes pod can mount. --The storage size requested in the PVC doesn't affect the ceiling of blob storage used in the cloud to support this local cache. Make a note of the name of this PVC, as you need it when you create your application pod. --## Create PVC --1. Create a file named **pvc.yaml** with the following contents: -- ```yaml - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - ### Create a name for your PVC ### - name: CREATE_A_NAME_HERE - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: INTENDED_CONSUMING_POD_OR_DEFAULT_HERE - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 5Gi - storageClassName: esa - volumeMode: Filesystem - ### This name references your PV name in your PV config ### - volumeName: INSERT_YOUR_PV_NAME - ``` -- > [!NOTE] - > If you intend to use your PVC with the Azure IoT Operations Data Processor, use `azure-iot-operations` as the `namespace` on line 7. --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "pvc.yaml" - ``` --## Next steps --After you create a Persistent Volume Claim (PVC), attach your app (Azure IoT Operations Data Processor or Kubernetes Native Application): --[Attach your app](attach-app.md) |
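Before attaching your app, you can optionally confirm that the claim was created and bound to the PV named in `spec::volumeName`. This is a minimal check that assumes the `azure-iot-operations` namespace suggested in the note above; substitute your own namespace or `default`.

```bash
# Confirm the PVC exists and reports a Bound status against your PV.
kubectl get pvc -n azure-iot-operations
kubectl describe pvc YOUR_PVC_NAME -n azure-iot-operations
```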
azure-arc | Create Persistent Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/create-persistent-volume.md | - Title: Create a persistent volume (preview) -description: Learn about creating persistent volumes in Cache Volumes. --- Previously updated : 08/26/2024----# Create a persistent volume (preview) --This article describes how to create a persistent volume using storage key authentication. --## Prerequisites --This section describes the prerequisites for creating a persistent volume (PV). --1. Create a storage account [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal). -- > [!NOTE] - > When you create your storage account, create it under the same resource group as your Kubernetes cluster. It is recommended that you also create it under the same region/location as your Kubernetes cluster. --1. Create a container in the storage account that you created in the previous step, [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container). --## Storage key authentication configuration --1. Create a file named **add-key.sh** with the following contents. No edits or changes are necessary: -- ```bash - #!/usr/bin/env bash - - while getopts g:n:s: flag - do - case "${flag}" in - g) RESOURCE_GROUP=${OPTARG};; - s) STORAGE_ACCOUNT=${OPTARG};; - n) NAMESPACE=${OPTARG};; - esac - done - - SECRET=$(az storage account keys list -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query [0].value --output tsv) - - kubectl create secret generic -n "${NAMESPACE}" "${STORAGE_ACCOUNT}"-secret --from-literal=azurestorageaccountkey="${SECRET}" --from-literal=azurestorageaccountname="${STORAGE_ACCOUNT}" - ``` --1. After you create the file, change the write permissions on the file and execute the shell script using the following commands. Running these commands creates a secret named `{YOUR_STORAGE_ACCOUNT}-secret`. This secret name is used for the `secretName` value when configuring your PV: -- ```bash - chmod +x add-key.sh - ./add-key.sh -g "$YOUR_RESOURCE_GROUP_NAME" -s "$YOUR_STORAGE_ACCOUNT_NAME" -n "$YOUR_KUBERNETES_NAMESPACE" - ``` --## Create Persistent Volume (PV) --You must create a Persistent Volume (PV) for Cache Volumes to create a local instance and bind to a remote BLOB storage account. --Make a note of the `metadata: name:` as you must specify it in the `spec: volumeName` of the PVC that binds to it. Use your storage account and container that you created as part of the [prerequisites](#prerequisites). --1. Create a file named **pv.yaml**: -- ```yaml - apiVersion: v1 - kind: PersistentVolume - metadata: - ### Create a name here ### - name: CREATE_A_NAME_HERE - spec: - capacity: - ### This storage capacity value is not enforced at this layer. ### - storage: 10Gi - accessModes: - - ReadWriteMany - persistentVolumeReclaimPolicy: Retain - storageClassName: esa - csi: - driver: edgecache.csi.azure.com - readOnly: false - ### Make sure this volumeid is unique in the cluster. You must specify it in the spec:volumeName of the PVC. ### - volumeHandle: YOUR_NAME_FROM_METADATA_NAME_IN_LINE_4_HERE - volumeAttributes: - protocol: edgecache - edgecache-storage-auth: AccountKey - ### Fill in the next two/three values with your information. ### - secretName: YOUR_SECRET_NAME_HERE ### From the previous step, this name is "{YOUR_STORAGE_ACCOUNT}-secret" ### - ### If you use a non-default namespace, uncomment the following line and add your namespace. 
### - ### secretNamespace: YOUR_NAMESPACE_HERE - containerName: YOUR_CONTAINER_NAME_HERE - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "pv.yaml" - ``` --## Next steps --- [Create a persistent volume claim](create-persistent-volume-claim.md)-- [Azure Container Storage enabled by Azure Arc overview](overview.md) |
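Once `pv.yaml` is applied, a quick check that the storage key secret and the PV are in place can save troubleshooting later. This is a minimal sketch that reuses the naming convention from `add-key.sh` above; substitute your own values.

```bash
# Confirm the secret created by add-key.sh exists in your namespace.
kubectl get secret "YOUR_STORAGE_ACCOUNT-secret" -n "YOUR_KUBERNETES_NAMESPACE"

# Confirm the PV was created; it should show the esa storage class and report
# an Available status until a PVC binds to it.
kubectl get pv
kubectl describe pv YOUR_PV_NAME
```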
azure-arc | Install Cache Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/install-cache-volumes.md | - Title: Install Cache Volumes (preview) -description: Learn how to install the Cache Volumes offering from Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Install Azure Container Storage enabled by Azure Arc Cache Volumes (preview) --This article describes the steps to install the Azure Container Storage enabled by Azure Arc extension. --## Optional: increase cache disk size --Currently, the cache disk size defaults to 8 GB. If you're satisfied with the cache disk size, see the next section, [Install the Azure Container Storage enabled by Azure Arc extension](#install-the-azure-container-storage-enabled-by-azure-arc-extension). --If you use Edge Essentials, require a larger cache disk size, and already created a **config.json** file, append the key and value pair (`"cachedStorageSize": "20Gi"`) to your existing **config.json**. Don't erase the previous contents of **config.json**. --If you require a larger cache disk size, create **config.json** with the following contents: --```json -{ - "cachedStorageSize": "20Gi" -} -``` --## Prepare the `azure-arc-containerstorage` namespace --In this step, you prepare a namespace in Kubernetes for `azure-arc-containerstorage` and add it to your Open Service Mesh (OSM) configuration for link security. If you want to use a namespace other than `azure-arc-containerstorage`, substitute it in the `export extension_namespace`: --```bash -export extension_namespace=azure-arc-containerstorage -kubectl create namespace "${extension_namespace}" -kubectl label namespace "${extension_namespace}" openservicemesh.io/monitored-by=osm -kubectl annotate namespace "${extension_namespace}" openservicemesh.io/sidecar-injection=enabled -# Disable OSM permissive mode. -kubectl patch meshconfig osm-mesh-config \ - -n "arc-osm-system" \ - -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":'"false"'}}}' \ - --type=merge -``` --## Install the Azure Container Storage enabled by Azure Arc extension --Install the Azure Container Storage enabled by Azure Arc extension using the following command: --> [!NOTE] -> If you created a **config.json** file from the previous steps in [Prepare Linux](prepare-linux.md), append `--config-file "config.json"` to the following `az k8s-extension create` command. Any values set at installation time persist throughout the installation lifetime (including manual and auto-upgrades). --```bash -az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name hydraext --extension-type microsoft.arc.containerstorage -``` --## Next steps --Once you complete these prerequisites, you can begin to [create a Persistent Volume (PV) with Storage Key Authentication](create-persistent-volume.md). |
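After running `az k8s-extension create`, you can confirm that the extension provisioned and that its pods started. The following sketch assumes the `hydraext` extension name used above and that the output of `az k8s-extension show` includes a `provisioningState` field; adjust names to match your deployment.

```bash
# Check the extension's provisioning state (expect "Succeeded").
az k8s-extension show \
  --resource-group "<your-resource-group>" \
  --cluster-name "<your-cluster-name>" \
  --cluster-type connectedClusters \
  --name hydraext \
  --query provisioningState --output tsv

# Confirm the extension pods are running in the namespace prepared earlier.
kubectl get pods -n azure-arc-containerstorage
```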
azure-arc | Install Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/install-edge-volumes.md | - Title: Install Edge Volumes (preview) -description: Learn how to install the Edge Volumes offering from Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024---# Install Azure Container Storage enabled by Azure Arc Edge Volumes (preview) --This article describes the steps to install the Azure Container Storage enabled by Azure Arc extension. --## Prepare the `azure-arc-containerstorage` namespace --In this step, you prepare a namespace in Kubernetes for `azure-arc-containerstorage` and add it to your Open Service Mesh (OSM) configuration for link security. If you want to use a namespace other than `azure-arc-containerstorage`, substitute it in the `export extension_namespace`: --```bash -export extension_namespace=azure-arc-containerstorage -kubectl create namespace "${extension_namespace}" -kubectl label namespace "${extension_namespace}" openservicemesh.io/monitored-by=osm -kubectl annotate namespace "${extension_namespace}" openservicemesh.io/sidecar-injection=enabled -# Disable OSM permissive mode. -kubectl patch meshconfig osm-mesh-config \ - -n "arc-osm-system" \ - -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":'"false"'}}}' \ - --type=merge -``` --## Install the Azure Container Storage enabled by Azure Arc extension --Install the Azure Container Storage enabled by Azure Arc extension using the following command: --```azurecli -az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name azure-arc-containerstorage --extension-type microsoft.arc.containerstorage -``` --> [!NOTE] -> By default, the `--release-namespace` parameter is set to `azure-arc-containerstorage`. If you want to override this setting, add the `--release-namespace` flag to the following command and populate it with your details. Any values set at installation time persist throughout the installation lifetime (including manual and auto-upgrades). --> [!IMPORTANT] -> If you use OneLake, you must use a unique extension name for the `--name` variable in the `az k8s-extension create` command. --## Configuration operator --### Configuration CRD --The Azure Container Storage enabled by Azure Arc extension uses a Custom Resource Definition (CRD) in Kubernetes to configure the storage service. Before you publish this CRD on your Kubernetes cluster, the Azure Container Storage enabled by Azure Arc extension is dormant and uses minimal resources. Once your CRD is applied with the configuration options, the appropriate storage classes, CSI driver, and service PODs are deployed to provide services. In this way, you can customize Azure Container Storage enabled by Azure Arc to meet your needs, and it can be reconfigured without reinstalling the Arc Kubernetes Extension. Common configurations are contained here, however this CRD offers the capability to configure non-standard configurations for Kubernetes clusters with differing storage capabilities. --#### [Single node or 2-node cluster](#tab/single) --#### Single node or 2-node cluster with Ubuntu or Edge Essentials --If you run a single node or 2-node cluster with **Ubuntu** or **Edge Essentials**, follow these instructions: --1. 
Create a file named **edgeConfig.yaml** with the following contents: -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeStorageConfiguration - metadata: - name: edge-storage-configuration - spec: - defaultDiskStorageClasses: - - "default" - - "local-path" - serviceMesh: "osm" - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "edgeConfig.yaml" - ``` --#### [Multi-node cluster](#tab/multi) --#### Multi-node cluster with Ubuntu or Edge Essentials --If you run a 3 or more node Kubernetes cluster with **Ubuntu** or **Edge Essentials**, follow these instructions. This configuration installs the ACStor storage subsystem to provide fault-tolerant, replicated storage for Kubernetes clusters with 3 or more nodes: --1. Create a file named **edgeConfig.yaml** with the following contents: -- > [!NOTE] - > To relocate storage to a different location on disk, update `diskMountPoint` with your desired path. -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeStorageConfiguration - metadata: - name: edge-storage-configuration - spec: - defaultDiskStorageClasses: - - acstor-arccontainerstorage-storage-pool - serviceMesh: "osm" - - apiVersion: arccontainerstorage.azure.net/v1 - kind: ACStorConfiguration - metadata: - name: acstor-configuration - spec: - diskMountPoint: /mnt - diskCapacity: 10Gi - createStoragePool: - enabled: true - replicas: 3 - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "edgeConfig.yaml" - ``` --#### [Arc-connected AKS/AKS Arc](#tab/arc) --#### Arc-connected AKS or AKS Arc --If you run a single-node or multi-node cluster with **Arc-connected AKS** or **AKS enabled by Arc**, follow these instructions: --1. Create a file named **edgeConfig.yaml** with the following contents: -- ```yaml - apiVersion: arccontainerstorage.azure.net/v1 - kind: EdgeStorageConfiguration - metadata: - name: edge-storage-configuration - spec: - defaultDiskStorageClasses: - - "default" - - "local-path" - serviceMesh: "osm" - ``` --1. To apply this .yaml file, run: -- ```bash - kubectl apply -f "edgeConfig.yaml" - ``` ----## Next steps --- [Configure your Local Shared Edge volumes](local-shared-edge-volumes.md)-- [Configure your Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) |
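After applying `edgeConfig.yaml`, a quick verification pass can confirm that the configuration was accepted and that the storage classes used by later articles (such as `cloud-backed-sc`) were deployed. This is a minimal sketch; the plural `edgestorageconfigurations` resource name is assumed from the `EdgeStorageConfiguration` kind above.

```bash
# Confirm the configuration resource was accepted by the cluster.
kubectl get edgestorageconfigurations

# Confirm the extension pods are running and the Edge Volumes storage classes exist.
kubectl get pods -n azure-arc-containerstorage
kubectl get storageclass
```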
azure-arc | Jumpstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/jumpstart.md | - Title: Azure Container Storage enabled by Azure Arc using Azure Arc Jumpstart (preview) -description: Learn about Azure Arc Jumpstart and Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Azure Arc Jumpstart and Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc partnered with [Azure Arc Jumpstart](https://azurearcjumpstart.com/) to produce both a new Arc Jumpstart scenario and Azure Arc Jumpstart Drops, furthering the capabilities of edge computing solutions. This partnership led to an innovative scenario in which a computer vision AI model detects defects in bolts from real-time video streams, with the identified defects securely stored using Azure Container Storage enabled by Azure Arc on an AKS Edge Essentials instance. This scenario showcases the powerful integration of Azure Arc with AI and edge storage technologies. --Additionally, Azure Container Storage enabled by Azure Arc contributed to Azure Arc Jumpstart Drops, a curated collection of resources that simplify deployment and management for developers and IT professionals. These tools, including Kubernetes files and scripts, are designed to streamline edge storage solutions and demonstrate the practical applications of Microsoft's cutting-edge technology. --## Azure Arc Jumpstart scenario using Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc collaborated with the [Azure Arc Jumpstart](https://azurearcjumpstart.com/) team to implement a scenario in which a computer vision AI model detects defects in bolts by analyzing video from a supply line video feed streamed over Real-Time Streaming Protocol (RTSP). The identified defects are then stored in a container within a storage account using Azure Container Storage enabled by Azure Arc. --In this automated setup, Azure Container Storage enabled by Azure Arc is deployed on an [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview) single-node instance, running in an Azure virtual machine. An Azure Resource Manager template is provided to create the necessary Azure resources and configure the **LogonScript.ps1** custom script extension. This extension handles AKS Edge Essentials cluster creation, Azure Arc onboarding for the Azure VM and AKS Edge Essentials cluster, and Azure Container Storage enabled by Azure Arc deployment. Once AKS Edge Essentials is deployed, Azure Container Storage enabled by Azure Arc is installed as a Kubernetes service that exposes a CSI driven storage class for use by applications in the Edge Essentials Kubernetes cluster. --For more information, see the following articles: --- [Watch the Jumpstart scenario on YouTube](https://youtu.be/Qnh2UH1g6Q4).-- [See the Jumpstart documentation](https://aka.ms/esajumpstart).-- [See the Jumpstart architecture diagrams](https://aka.ms/arcposters).--## Azure Arc Jumpstart Drops for Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc created Jumpstart Drops as part of another collaboration with [Azure Arc Jumpstart](https://azurearcjumpstart.com/). --[Jumpstart Drops](https://aka.ms/jumpstartdrops) is a curated online collection of tools, scripts, and other assets that simplify the daily tasks of developers, IT, OT, and day-2 operations professionals. 
Jumpstart Drops is designed to showcase the power of Microsoft's products and services and promote mutual support and knowledge sharing among community members. --For more information, see the article [Create an Azure Container Storage enabled by Azure Arc instance on a Single Node Ubuntu K3s system](https://arcjumpstart.com/create_an_edge_storage_accelerator_(esa)_instance_on_a_single_node_ubuntu_k3s_system). --This Jumpstart Drop provides Kubernetes files to create an Azure Container Storage enabled by Azure Arc Cache Volumes instance on an install on Ubuntu with K3s. --## Next steps --- [Azure Container Storage enabled by Azure Arc overview](overview.md)-- [AKS Edge Essentials overview](/azure/aks/hybrid/aks-edge-overview) |
azure-arc | Local Shared Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/local-shared-edge-volumes.md | - Title: Local Shared Edge Volume configuration for Edge Volumes -description: Learn about Local Shared Edge Volume configuration for Edge Volumes. ---- Previously updated : 08/26/2024---# Local Shared Edge Volumes --This article describes the configuration for Local Shared Edge Volumes (highly available, durable local storage). --## What is a Local Shared Edge Volume? --The *Local Shared Edge Volumes* feature provides highly available, failover-capable storage, local to your Kubernetes cluster. This shared storage type remains independent of cloud infrastructure, making it ideal for scratch space, temporary storage, and locally persistent data that might be unsuitable for cloud destinations. --## Create a Local Shared Edge Volumes Persistent Volume Claim (PVC) and configure a pod against the PVC --1. Create a file named `localSharedPVC.yaml` with the following contents. Modify the `metadata::name` value with a name for your Persistent Volume Claim. Then, for the `namespace` value, specify a namespace that matches your intended consuming pod. The `metadata::name` value is referenced on the last line of `deploymentExample.yaml` in the next step. -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - ### Create a name for your PVC ### - name: <create-a-pvc-name-here> - ### Use a namespace that matches your intended consuming pod, or "default" ### - namespace: <intended-consuming-pod-or-default-here> - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - storageClassName: unbacked-sc - ``` --1. Create a file named `deploymentExample.yaml` with the following contents. Add the values for `containers::name` and `volumes::persistentVolumeClaim::claimName`: -- [!INCLUDE [lowercase-note](includes/lowercase-note.md)] -- ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: localsharededgevol-deployment ### This will need to be unique for every volume you choose to create - spec: - replicas: 2 - selector: - matchLabels: - name: wyvern-testclientdeployment - template: - metadata: - name: wyvern-testclientdeployment - labels: - name: wyvern-testclientdeployment - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - wyvern-testclientdeployment - topologyKey: kubernetes.io/hostname - containers: - ### Specify the container in which to launch the BusyBox image. ### - - name: <create-a-container-name-here> - image: 'mcr.microsoft.com/mirror/docker/library/busybox:1.35' - command: - - "/bin/sh" - - "-c" - - "dd if=/dev/urandom of=/data/esalocalsharedtestfile count=16 bs=1M && while true; do ls /data &> /dev/null || break; sleep 1; done" - volumeMounts: - ### This name must match the following volumes::name attribute ### - - name: wyvern-volume - ### This mountPath is where the PVC will be attached to the pod's filesystem ### - mountPath: /data - volumes: - ### User-defined name that is used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ### - - name: wyvern-volume - persistentVolumeClaim: - ### This claimName must refer to your PVC metadata::name from localSharedPVC.yaml. - claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml> - ``` --1. 
To apply these YAML files, run: -- ```bash - kubectl apply -f "localSharedPVC.yaml" - kubectl apply -f "deploymentExample.yaml" - ``` --1. Run `kubectl get pods` to find the name of your pod. Copy this name, as it's needed in the next step. -- > [!NOTE] - > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step. --1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step: -- ```bash - kubectl exec -it pod_name_here -- sh - ``` --1. Change directories to the `/data` mount path, as specified in `deploymentExample.yaml`. --1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`. --After you complete the previous steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana. --## Next steps --[Monitor your deployment](monitor-deployment-edge-volumes.md) |
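As a quick check after the steps above, the sketch below assumes the namespace and PVC name you chose in `localSharedPVC.yaml` and the `wyvern-testclientdeployment` labels from `deploymentExample.yaml`; it lists the bound claim and confirms both replicas see the same `/data` contents.

```bash
# Assumed placeholders - the namespace and PVC name you set in localSharedPVC.yaml.
NAMESPACE="<your-namespace>"
PVC_NAME="<your-pvc-name>"

# The claim should report STATUS "Bound" with the unbacked-sc storage class.
kubectl get pvc "${PVC_NAME}" -n "${NAMESPACE}"

# Both replicas mount the same ReadWriteMany volume, so a file written from one pod
# (for example file1.txt) should be visible from the other.
for POD in $(kubectl get pods -n "${NAMESPACE}" -l name=wyvern-testclientdeployment -o name); do
  kubectl exec -n "${NAMESPACE}" "${POD}" -- ls -l /data
done
```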
azure-arc | Monitor Deployment Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/monitor-deployment-edge-volumes.md | - Title: Monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment (preview) -description: Learn how to monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment. --- Previously updated : 08/26/2024----# Monitor your Edge Volumes deployment (preview) --This article describes how to monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment. --## Deployment monitoring overviews --For information about how to monitor your Edge Volumes deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana, see the following Azure Container Storage enabled by Azure Arc articles: --- [Third-party monitoring with Prometheus and Grafana](third-party-monitoring.md)-- [Azure Monitor and Kubernetes Monitoring](azure-monitor-kubernetes.md)--## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
azure-arc | Multi Node Cluster Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/multi-node-cluster-edge-volumes.md | - Title: Prepare Linux for Edge Volumes using a multi-node cluster (preview) -description: Learn how to prepare Linux for Edge Volumes with a multi-node cluster using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux for Edge Volumes using a multi-node cluster (preview) --This article describes how to prepare Linux using a multi-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --Install and configure Open Service Mesh (OSM) using the following commands: --```azurecli -az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm -``` -----## Prepare Linux with Ubuntu --This section describes how to prepare Linux with Ubuntu if you run a multi-node cluster. --First, install and configure Open Service Mesh (OSM) using the following command: --```azurecli -az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm -``` ---## Next steps --[Install Extension](install-edge-volumes.md) |
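A small sketch to confirm the OSM extension is healthy before continuing to the Edge Volumes installation; the resource group and cluster name are placeholders for the values you passed above.

```bash
# Assumed placeholders - same values passed to `az k8s-extension create` above.
RESOURCE_GROUP="<your-resource-group>"
CLUSTER_NAME="<your-cluster-name>"

# The OSM extension should reach the "Succeeded" provisioning state.
az k8s-extension show \
  --resource-group "${RESOURCE_GROUP}" \
  --cluster-name "${CLUSTER_NAME}" \
  --cluster-type connectedClusters \
  --name osm \
  --query "provisioningState" -o tsv

# The OSM control plane runs in the arc-osm-system namespace.
kubectl get pods -n arc-osm-system
```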
azure-arc | Multi Node Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/multi-node-cluster.md | - Title: Prepare Linux for Cache Volumes using a multi-node cluster (preview) -description: Learn how to prepare Linux for Cache Volumes with a multi-node cluster using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux using a multi-node cluster (preview) --This article describes how to prepare Linux using a multi-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --Install and configure Open Service Mesh (OSM) using the following commands: --```azurecli -az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm -kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge -``` ----5. Create a file named **config.json** with the following contents: -- ```json - { - "acstor.capacityProvisioner.tempDiskMountPoint": "/var" - } - ``` -- > [!NOTE] - > The location/path of this file is referenced later, when you install the Cache Volumes Arc extension. ---## Prepare Linux with Ubuntu --This section describes how to prepare Linux with Ubuntu if you run a multi-node cluster. --1. Install and configure Open Service Mesh (OSM) using the following command: -- ```azurecli - az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm - kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge - ``` ----## Next steps --[Install Azure Container Storage enabled by Azure Arc](install-cache-volumes.md) |
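To confirm the mesh configuration patch took effect and that **config.json** is well formed before you reference it during the Cache Volumes installation, a minimal check might look like the following; it only assumes `kubectl` access to the cluster and a local Python 3 interpreter.

```bash
# Read back the patched OSM mesh configuration.
kubectl get meshconfig osm-mesh-config -n arc-osm-system \
  -o jsonpath='{.spec.featureFlags.enableWASMStats}{"\n"}{.spec.traffic.outboundPortExclusionList}{"\n"}'

# Sanity-check that config.json parses as JSON (string values such as "/var" must be quoted).
python3 -m json.tool config.json
```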
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/overview.md | - Title: What is Azure Container Storage enabled by Azure Arc? (preview) -description: Learn about Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024-----# What is Azure Container Storage enabled by Azure Arc? (preview) --> [!IMPORTANT] -> Azure Container Storage enabled by Azure Arc is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. --Azure Container Storage enabled by Azure Arc is a first-party storage system designed for Arc-connected Kubernetes clusters. Azure Container Storage enabled by Azure Arc can be deployed to write files to a "ReadWriteMany" persistent volume claim (PVC) where they are then transferred to Azure Blob Storage. Azure Container Storage enabled by Azure Arc offers a range of features to support Azure IoT Operations and other Arc services. Azure Container Storage enabled by Azure Arc with high availability and fault-tolerance will be fully supported and generally available (GA) in the second half of 2024. --## What does Azure Container Storage enabled by Azure Arc do? --Azure Container Storage enabled by Azure Arc serves as a native persistent storage system for Arc-connected Kubernetes clusters. Its primary role is to provide a reliable, fault-tolerant file system that allows data to be tiered to Azure. For Azure IoT Operations (AIO) and other Arc Services, Azure Container Storage enabled by Azure Arc is crucial in making Kubernetes clusters stateful. Key features of Azure Container Storage enabled by Azure Arc for Arc-connected K8s clusters include: --- **Tolerance to node failures:** When configured as a 3 node cluster, Azure Container Storage enabled by Azure Arc replicates data between nodes (triplication) to ensure high availability and tolerance to single node failures.-- **Data synchronization to Azure:** Azure Container Storage enabled by Azure Arc is configured with a storage target, so data written to volumes is automatically tiered to Azure Blob (block blob, ADLSgen-2 or OneLake) in the cloud.-- **Low latency operations:** Arc services, such as AIO, can expect low latency for read and write operations.-- **Simple connection:** Customers can easily connect to an Azure Container Storage enabled by Azure Arc volume using a CSI driver to start making Persistent Volume Claims against their storage.-- **Flexibility in deployment:** Azure Container Storage enabled by Azure Arc can be deployed as part of AIO or as a standalone solution.-- **Observable:** Azure Container Storage enabled by Azure Arc supports industry standard Kubernetes monitoring logs and metrics facilities, and supports Azure Monitor Agent observability.-- **Designed with integration in mind:** Azure Container Storage enabled by Azure Arc integrates seamlessly with AIO's Data Processor to ease the shuttling of data from your edge to Azure. -- **Platform neutrality:** Azure Container Storage enabled by Azure Arc is a Kubernetes storage system that can run on any Arc Kubernetes supported platform. 
Validation was done for specific platforms, including Ubuntu + CNCF K3s/K8s, Windows IoT + AKS-EE, and Azure Stack HCI + AKS-HCI.--## What are the different Azure Container Storage enabled by Azure Arc offerings? --The original Azure Container Storage enabled by Azure Arc offering is [*Cache Volumes*](cache-volumes-overview.md). The newest offering is [*Edge Volumes*](install-edge-volumes.md). --## What are Azure Container Storage enabled by Azure Arc Edge Volumes? --The first addition to the Edge Volumes offering is *Local Shared Edge Volumes*, providing highly available, failover-capable storage, local to your Kubernetes cluster. This shared storage type remains independent of cloud infrastructure, making it ideal for scratch space, temporary storage, and locally persistent data unsuitable for cloud destinations. --The second new offering is *Cloud Ingest Edge Volumes*, which facilitates limitless data ingestion from edge to Blob, including ADLSgen2 and OneLake. Files written to this storage type are seamlessly transferred to Blob storage and subsequently purged from the local cache once confirmed uploaded, ensuring space availability for new data. Moreover, this storage option supports data integrity in disconnected environments, enabling local storage and synchronization upon reconnection to the network. --Tailored for IoT applications, Edge Volumes not only eliminates local storage concerns and ingest limitations, but also optimizes local resource utilization and reduces storage requirements. --### How does Edge Volumes work? --You write to Edge Volumes as if it was your local file system. For a Local Shared Edge Volume, your data is stored and left untouched. For a Cloud Ingest Edge Volume, the volume checks for new data to mark for upload every minute, and then uploads that new data to your specified cloud destination. Five minutes after the confirmed upload to the cloud, the local copy is purged, allowing you to keep your local volume clear of old data and continue to receive new data. --Get started with [Edge Volumes](prepare-linux-edge-volumes.md). --### Supported Azure regions for Azure Container Storage enabled by Azure Arc --Azure Container Storage enabled by Azure Arc is only available in the following Azure regions: --- East US-- East US 2-- West US-- West US 2-- West US 3-- North Europe-- West Europe--## Next steps --- [Prepare Linux](prepare-linux-edge-volumes.md)-- [How to install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md) |
azure-arc | Prepare Linux Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/prepare-linux-edge-volumes.md | - Title: Prepare Linux for Edge Volumes (preview) -description: Learn how to prepare Linux in Azure Container Storage enabled by Azure Arc Edge Volumes using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/30/2024----# Prepare Linux for Edge Volumes (preview) --The article describes how to prepare Linux for Edge Volumes using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. --> [!NOTE] -> The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2. --## Prerequisites --> [!NOTE] -> Azure Container Storage enabled by Azure Arc is only available in the following regions: East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe. --### Uninstall previous instance of Azure Container Storage enabled by Azure Arc extension --If you previously installed a version of Azure Container Storage enabled by Azure Arc earlier than **2.1.0-preview**, you must uninstall that previous instance in order to install the newer version. If you installed the **1.2.0-preview** release or earlier, [use these instructions](release-notes.md#if-i-installed-the-120-preview-or-any-earlier-release-how-do-i-uninstall-the-extension). Versions after **2.1.0-preview** are upgradeable and do not require this uninstall. --1. In order to delete the old version of the extension, the Kubernetes resources holding references to old version of the extension must be cleaned up. Any pending resources can delay the clean-up of the extension. There are at least two ways to clean up these resources: either using `kubectl delete <resource_type> <resource_name>`, or by "unapplying" the YAML files used to create the resources. The resources that need to be deleted are typically the pods, the PVC referenced, and the subvolume CRD (if Cloud Ingest Edge Volume was configured). Alternatively, the following four YAML files can be passed to `kubectl delete -f` using the following commands in the specified order. These variables must be updated with your information: -- - `YOUR_DEPLOYMENT_FILE_NAME_HERE`: Add your deployment file names. In the example in this article, the file name used was `deploymentExample.yaml`. If you created multiple deployments, each one must be deleted on a separate line. - - `YOUR_PVC_FILE_NAME_HERE`: Add your Persistent Volume Claim file names. In the example in this article, if you used the Cloud Ingest Edge Volume, the file name used was `cloudIngestPVC.yaml`. If you used the Local Shared Edge Volume, the file name used was `localSharedPVC.yaml`. If you created multiple PVCs, each one must be deleted on a separate line. - - `YOUR_EDGE_SUBVOLUME_FILE_NAME_HERE`: Add your Edge subvolume file names. In the example in this article, the file name used was `edgeSubvolume.yaml`. If you created multiple subvolumes, each one must be deleted on a separate line. - - `YOUR_EDGE_STORAGE_CONFIGURATION_FILE_NAME_HERE`: Add your Edge storage configuration file name here. In the example in this article, the file name used was `edgeConfig.yaml`. -- ```bash - kubectl delete -f "<YOUR_DEPLOYMENT_FILE_NAME_HERE.yaml>" - kubectl delete -f "<YOUR_PVC_FILE_NAME_HERE.yaml>" - kubectl delete -f "<YOUR_EDGE_SUBVOLUME_FILE_NAME_HERE.yaml>" - kubectl delete -f "<YOUR_EDGE_STORAGE_CONFIGURATION_FILE_NAME_HERE.yaml>" - ``` --1. 
After you delete the files for your deployments, PVCs, Edge subvolumes, and Edge storage configuration from the previous step, you can uninstall the extension using the following command. Replace `YOUR_RESOURCE_GROUP_NAME_HERE`, `YOUR_CLUSTER_NAME_HERE`, and `YOUR_EXTENSION_NAME_HERE` with your respective information: -- ```azurecli - az k8s-extension delete --resource-group YOUR_RESOURCE_GROUP_NAME_HERE --cluster-name YOUR_CLUSTER_NAME_HERE --cluster-type connectedClusters --name YOUR_EXTENSION_NAME_HERE - ``` ---## Next steps --- [Prepare Linux using a single-node cluster](single-node-cluster-edge-volumes.md)-- [Prepare Linux using a multi-node cluster](multi-node-cluster-edge-volumes.md) |
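Before installing the newer release, you can confirm the cleanup completed. This is a sketch with placeholder names; the exact resources worth checking depend on what you previously deployed.

```bash
# Assumed placeholders.
RESOURCE_GROUP="<your-resource-group>"
CLUSTER_NAME="<your-cluster-name>"

# The old Azure Container Storage extension should no longer be listed.
az k8s-extension list \
  --resource-group "${RESOURCE_GROUP}" \
  --cluster-name "${CLUSTER_NAME}" \
  --cluster-type connectedClusters \
  -o table

# Confirm no leftover claims from the previous deployment remain.
kubectl get pvc --all-namespaces
```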
azure-arc | Prepare Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/prepare-linux.md | - Title: Prepare Linux (preview) -description: Learn how to prepare Linux in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024----# Prepare Linux (preview) --The article describes how to prepare Linux using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. --> [!NOTE] -> The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2. --## Prerequisites --> [!NOTE] -> Azure Container Storage enabled by Azure Arc is only available in the following regions: East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe. --### Arc-connected Kubernetes cluster --These instructions assume that you already have an Arc-connected Kubernetes cluster. To connect an existing Kubernetes cluster to Azure Arc, [see these instructions](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli). --If you want to use Azure Container Storage enabled by Azure Arc with Azure IoT Operations, follow the [instructions to create a cluster for Azure IoT Operations](/azure/iot-operations/get-started/quickstart-deploy?tabs=linux). --Use Ubuntu 22.04 on Standard D8s v3 machines with three SSDs attached for more storage. --## Single-node and multi-node clusters --A single-node cluster is commonly used for development or testing purposes due to its simplicity in setup and minimal resource requirements. These clusters offer a lightweight and straightforward environment for developers to experiment with Kubernetes without the complexity of a multi-node setup. Additionally, in situations where resources such as CPU, memory, and storage are limited, a single-node cluster is more practical. Its ease of setup and minimal resource requirements make it a suitable choice in resource-constrained environments. --However, single-node clusters come with limitations, mostly in the form of missing features, including their lack of high availability, fault tolerance, scalability, and performance. --A multi-node Kubernetes configuration is typically used for production, staging, or large-scale scenarios because of features such as high availability, fault tolerance, scalability, and performance. A multi-node cluster also introduces challenges and trade-offs, including complexity, overhead, cost, and efficiency considerations. For example, setting up and maintaining a multi-node cluster requires extra knowledge, skills, tools, and resources (network, storage, compute). The cluster must handle coordination and communication among nodes, leading to potential latency and errors. Additionally, running a multi-node cluster is more resource-intensive and is costlier than a single-node cluster. Optimization of resource usage among nodes is crucial for maintaining cluster and application efficiency and performance. --In summary, a [single-node Kubernetes cluster](single-node-cluster.md) might be suitable for development, testing, and resource-constrained environments. A [multi-node cluster](multi-node-cluster.md) is more appropriate for production deployments, high availability, scalability, and scenarios in which distributed applications are a requirement. This choice ultimately depends on your specific needs and goals for your deployment. 
--## Minimum hardware requirements --### Single-node or 2-node cluster --- Standard_D8ds_v5 VM recommended-- Equivalent specifications per node:- - 4 CPUs - - 16 GB RAM --### Multi-node cluster --- Standard_D8as_v5 VM recommended-- Equivalent specifications per node:- - 8 CPUs - - 32 GB RAM --32 GB RAM serves as a buffer; however, 16 GB RAM should suffice. Edge Essentials configurations require 8 CPUs with 10 GB RAM per node, making 16 GB RAM the minimum requirement. --## Minimum storage requirements --### Edge Volumes requirements --When you use the fault tolerant storage option, Edge Volumes allocates disk space out of a fault tolerant storage pool, which is made up of the storage exported by each node in the cluster. --The storage pool is configured to use 3-way replication to ensure fault tolerance. When an Edge Volume is provisioned, it allocates disk space from the storage pool, and allocates storage on 3 of the replicas. --For example, in a 3-node cluster with 20 GB of disk space per node, the cluster has a storage pool of 60 GB. However, due to replication, it has an effective storage size of 20 GB. --When an Edge Volume is provisioned with a requested size of 10 GB, it allocates a reserved system volume (statically sized to 1 GB) and a data volume (sized to the requested volume size, for example 10 GB). The reserved system volume consumes 3 GB (3 x 1 GB) of disk space in the storage pool, and the data volume will consume 30 GB (3 x 10 GB) of disk space in the storage pool, for a total of 33 GB. --### Cache Volumes requirements --Cache Volumes requires at least 4 GB per node of storage. For example, if you have a 3-node cluster, you need at least 12 GB of storage. --## Next steps --To continue preparing Linux, see the following instructions for single-node or multi-node clusters: --- [Single-node clusters](single-node-cluster.md)-- [Multi-node clusters](multi-node-cluster.md) |
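The replication arithmetic in the Edge Volumes requirements can be written out as a short calculation. The numbers below are the example values from this section; substitute your own node count, per-node disk space, and requested volume size.

```bash
# Worked example of the 3-way replication math described above.
NODES=3                  # nodes contributing storage to the pool
DISK_PER_NODE_GB=20      # disk space exported by each node
REQUESTED_VOLUME_GB=10   # requested Edge Volume size
SYSTEM_VOLUME_GB=1       # statically sized reserved system volume
REPLICAS=3               # replication factor of the storage pool

RAW_POOL_GB=$((NODES * DISK_PER_NODE_GB))                              # 60 GB raw pool
EFFECTIVE_POOL_GB=$((RAW_POOL_GB / REPLICAS))                          # 20 GB effective capacity
CONSUMED_GB=$((REPLICAS * (REQUESTED_VOLUME_GB + SYSTEM_VOLUME_GB)))   # 33 GB consumed from the pool

echo "Raw pool: ${RAW_POOL_GB} GB; effective: ${EFFECTIVE_POOL_GB} GB; consumed by one ${REQUESTED_VOLUME_GB} GB volume: ${CONSUMED_GB} GB"
```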
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/release-notes.md | - Title: Azure Container Storage enabled by Azure Arc FAQ and release notes (preview) -description: Learn about new features and known issues in Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/30/2024----# Azure Container Storage enabled by Azure Arc FAQ and release notes (preview) --This article provides information about new features and known issues in Azure Container Storage enabled by Azure Arc, and answers some frequently asked questions. --## Release notes --### Version 2.1.0-preview --- CRD operator-- Cloud Ingest Tunable Timers-- Uninstall during version updates-- Added regions: West US, West US 2, North Europe--### Version 1.2.0-preview --- Extension identity and OneLake support: Azure Container Storage enabled by Azure Arc now allows use of a system-assigned extension identity for access to blob storage or OneLake lake houses.-- Security fixes: security maintenance (package/module version updates).--### Version 1.1.0-preview --- Kernel versions: the minimum supported Linux kernel version is 5.1. Currently there are known issues with 6.4 and 6.2.--## FAQ --### Uninstall previous instance of the Azure Container Storage enabled by Azure Arc extension --#### If I installed the 1.2.0-preview or any earlier release, how do I uninstall the extension? --If you previously installed a version of Azure Container Storage enabled by Azure Arc earlier than **2.1.0-preview**, you must uninstall that previous instance in order to install the newer version. --> [!NOTE] -> The extension name for Azure Container Storage enabled by Azure Arc was previously **Edge Storage Accelerator**. If you still have this instance installed, the extension is referred to as **microsoft.edgestorageaccelerator** in the Azure portal. --1. Before you can delete the extension, you must delete your configPods, Persistent Volume Claims, and Persistent Volumes using the following commands in this order. Replace `YOUR_POD_FILE_NAME_HERE`, `YOUR_PVC_FILE_NAME_HERE`, and `YOUR_PV_FILE_NAME_HERE` with your respective file names. If you have more than one of each type, add one line per instance: -- ```bash - kubectl delete -f "YOUR_POD_FILE_NAME_HERE.yaml" - kubectl delete -f "YOUR_PVC_FILE_NAME_HERE.yaml" - kubectl delete -f "YOUR_PV_FILE_NAME_HERE.yaml" - ``` --1. After you delete your configPods, PVCs, and PVs in the previous step, you can uninstall the extension using the following command. Replace `YOUR_RESOURCE_GROUP_NAME_HERE`, `YOUR_CLUSTER_NAME_HERE`, and `YOUR_EXTENSION_NAME_HERE` with your respective information: -- ```azurecli - az k8s-extension delete --resource-group YOUR_RESOURCE_GROUP_NAME_HERE --cluster-name YOUR_CLUSTER_NAME_HERE --cluster-type connectedClusters --name YOUR_EXTENSION_NAME_HERE - ``` --1. If you installed the extension before the **1.1.0-preview** release (released on 4/19/24) and have a pre-existing `config.json` file, the `config.json` schema changed. Remove the old `config.json` file using `rm config.json`. --### Encryption --#### What types of encryption are used by Azure Container Storage enabled by Azure Arc? --There are three types of encryption that might be interesting for an Azure Container Storage enabled by Azure Arc customer: --- **Cluster to Blob Encryption**: Data in transit from the cluster to blob is encrypted using standard HTTPS protocols. 
Data is decrypted once it reaches the cloud.-- **Encryption Between Nodes**: This encryption is covered by Open Service Mesh (OSM), which is installed as part of setting up your Azure Container Storage enabled by Azure Arc cluster. It uses standard TLS encryption protocols.-- **On Disk Encryption**: Encryption at rest. Not currently supported by Azure Container Storage enabled by Azure Arc.--#### Is data encrypted in transit? --Yes, data in transit is encrypted using standard HTTPS protocols. Data is decrypted once it reaches the cloud. --#### Is data encrypted at rest? --Data persisted by the Azure Container Storage enabled by Azure Arc extension is encrypted at rest if the underlying platform provides encrypted disks. --### ACStor Triplication --#### What is ACStor triplication? --ACStor triplication stores data across three different nodes, each with its own hard drive. This intended behavior ensures data redundancy and reliability. --#### Can ACStor triplication occur on a single physical device? --No, ACStor triplication isn't designed to operate on a single physical device with three attached hard drives. --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
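As a sketch of the uninstall checks described in the FAQ above (placeholder names, not values defined by this article), you can confirm the old resources are gone and locate the older extension entry before running `az k8s-extension delete`:

```bash
# Confirm the configPods, PVCs, and PVs deleted in step 1 no longer exist.
kubectl get pods,pvc,pv --all-namespaces

# The pre-rename release shows up with the extension type microsoft.edgestorageaccelerator.
az k8s-extension list \
  --resource-group "<your-resource-group>" \
  --cluster-name "<your-cluster-name>" \
  --cluster-type connectedClusters \
  --query "[?extensionType=='microsoft.edgestorageaccelerator'].name" -o tsv
```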
azure-arc | Single Node Cluster Edge Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/single-node-cluster-edge-volumes.md | - Title: Prepare Linux for Edge Volumes using a single-node or 2-node cluster (preview) -description: Learn how to prepare Linux for Edge Volumes with a single-node or 2-node cluster in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux for Edge Volumes using a single-node or two-node cluster (preview) --This article describes how to prepare Linux using a single-node or two-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux-edge-volumes.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --This section describes how to prepare Linux with AKS enabled by Azure Arc if you run a single-node or two-node cluster. --1. Install Open Service Mesh (OSM) using the following command: -- ```azurecli - az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm - ``` -----## Next steps --[Install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md) |
azure-arc | Single Node Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/single-node-cluster.md | - Title: Prepare Linux for Cache Volumes using a single-node or 2-node cluster (preview) -description: Learn how to prepare Linux for Cache Volumes with a single-node or 2-node cluster in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu. ---- Previously updated : 08/26/2024-zone_pivot_groups: platform-select ---# Prepare Linux for Cache Volumes using a single-node or 2-node cluster (preview) --This article describes how to prepare Linux using a single-node or 2-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites). --## Prepare Linux with AKS enabled by Azure Arc --This section describes how to prepare Linux with AKS enabled by Azure Arc if you run a single-node or 2-node cluster. --1. Install Open Service Mesh (OSM) using the following command: -- ```azurecli - az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm - ``` --1. Disable **ACStor** by creating a file named **config.json** with the following contents: -- ```json - { - "feature.diskStorageClass": "default", - "acstorController.enabled": false - } - ``` ----5. Disable **ACStor** by creating a file named **config.json** with the following contents: -- ```json - { - "acstorController.enabled": false, - "feature.diskStorageClass": "local-path" - } - ``` ----3. Disable **ACStor** by creating a file named **config.json** with the following contents: -- ```json - { - "acstorController.enabled": false, - "feature.diskStorageClass": "local-path" - } - ``` ---## Next steps --[Install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md) |
azure-arc | Support Feedback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/support-feedback.md | - Title: Support and feedback for Azure Container Storage enabled by Azure Arc (preview) -description: Learn how to get support and provide feedback on Azure Container Storage enabled by Azure Arc. --- Previously updated : 08/26/2024----# Support and feedback for Azure Container Storage enabled by Azure Arc (preview) --If you experience an issue or need support during the preview, see the following video and steps to request support for Azure Container Storage enabled by Azure Arc in the Azure portal: --> [!VIDEO f477de99-2036-41a3-979a-586a39b1854f] --1. Navigate to the desired Arc-connected Kubernetes cluster with the Azure Container Storage enabled by Azure Arc extension that you are experiencing issues with. -1. To expand the menu, select **Settings** on the left blade. -1. Select **Extensions**. -1. Select the name for **Type**: `microsoft.arc.containerstorage`. In this example, the name is `hydraext`. -1. Select **Help** on the left blade to expand the menu. -1. Select **Support + Troubleshooting**. -1. In the search text box, describe the issue you are facing in a few words. -1. Select "Go" to the right of the search text box. -1. For **Which service you are having an issue with**, make sure that **Edge Storage Accelerator - Preview** is selected. If not, you might need to search for **Edge Storage Accelerator - Preview** in the drop-down. -1. Select **Next** after you select **Edge Storage Accelerator - Preview**. -1. **Subscription** should already be populated with the subscription that you used to set up your Kubernetes cluster. If not, select the subscription to which your Arc-connected Kubernetes cluster is linked. -1. For **Resource**, select **General question** from the drop-down menu. -1. Select **Next**. -1. For **Problem type**, from the drop-down menu, select the problem type that best describes your issue. -1. For **Problem subtype**, from the drop-down menu, select the subtype that best describes your issue. The subtype options vary based on your selected **Problem type**. -1. Select **Next**. -1. Based on the issue, there might be documentation available to help you triage your issue. If these articles are not relevant or don't solve the issue, select **Create a support request** at the top. -1. After you select **Create a support request at the top**, the fields in the **Problem description** section should already be populated with the details that you provided earlier. If you want to change anything, you can do so in this window. -1. Select **Next** once you verify that the information in the **Problem description** section is accurate. -1. In the **Recommended solution** section, recommended solutions appear based on the information you entered. If the recommended solutions are not helpful, select **Next** to continue filing a support request. -1. In the **Additional details** section, populate the **Problem details** with your information. -1. Once all required fields are complete, select **Next**. -1. Review your information from the previous sections, then select **Create**. --## Release notes --See the [release notes for Azure Container Storage enabled by Azure Arc](release-notes.md) for information about new features and known issues. --## Next steps --[What is Azure Container Storage enabled by Azure Arc?](overview.md) |
azure-arc | Third Party Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/third-party-monitoring.md | - Title: Third-party monitoring with Prometheus and Grafana (preview) -description: Learn how to monitor your Azure Container Storage enabled by Azure Arc deployment using third-party monitoring with Prometheus and Grafana. --- Previously updated : 08/26/2024----# Third-party monitoring with Prometheus and Grafana (preview) --This article describes how to monitor your deployment using third-party monitoring with Prometheus and Grafana. --## Metrics --### Configure an existing Prometheus instance for use with Azure Container Storage enabled by Azure Arc --This guidance assumes that you previously worked with and/or configured Prometheus for Kubernetes. If you haven't previously done so, [see this overview](/azure/azure-monitor/containers/kubernetes-monitoring-enable#enable-prometheus-and-grafana) for more information about how to enable Prometheus and Grafana. --[See the metrics configuration section](azure-monitor-kubernetes.md#metrics-configuration) for information about the required Prometheus scrape configuration. Once you configure Prometheus metrics, you can deploy [Grafana](/azure/azure-monitor/visualize/grafana-plugin) to monitor and visualize your Azure services and applications. --## Logs --The Azure Container Storage enabled by Azure Arc logs are accessible through the Azure Kubernetes Service [kubelet logs](/azure/aks/kubelet-logs). You can also collect this log data using the [syslog collection feature in Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-syslog). --## Next steps --[Azure Container Storage enabled by Azure Arc overview](overview.md) |
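If you want a quick way to confirm your existing Prometheus instance is scraping the cluster after you apply the configuration referenced above, one approach is sketched below. The namespace and service name are placeholders for your own Prometheus deployment; nothing here is specific to Azure Container Storage enabled by Azure Arc.

```bash
# Assumed placeholders for your own Prometheus deployment.
PROM_NAMESPACE="<prometheus-namespace>"
PROM_SERVICE="<prometheus-service>"

# Forward the Prometheus API locally (default port 9090).
kubectl port-forward -n "${PROM_NAMESPACE}" "svc/${PROM_SERVICE}" 9090:9090 &
sleep 5

# List the active scrape targets and check that the job you configured reports "up".
curl -s http://localhost:9090/api/v1/targets | python3 -m json.tool | head -n 60
```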
azure-arc | About Arcdata Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/about-arcdata-extension.md | - Title: Reference for `az arcdata` extension- -description: Reference article for `az arcdata` commands. --- Previously updated : 06/17/2022------# Azure (`az`) CLI `arcdata` extension --The `arcdata` extension for Azure CLI provides tools for managing Azure Arc data services. --## Install extension --To install the extension, see [Install `arcdata` Azure CLI extension](install-arcdata-extension.md). --## Reference documentation --To access the latest reference documentation: --- [`az arcdata`](/cli/azure/arcdata)-- [`az sql mi-arc`](/cli/azure/sql/mi-arc)-- [`az sql midb-arc`](/cli/azure/sql/midb-arc)-- [`sql instance-failover-group-arc`](/cli/azure/sql/instance-failover-group-arc)-- [`az postgres server-arc`](/cli/azure/postgres/server-arc)--## Related content --[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) |
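A minimal sketch of adding and confirming the extension from the CLI; the commands below are standard Azure CLI extension management, and the reference links above remain the authoritative documentation.

```bash
# Add the arcdata extension to an existing Azure CLI installation.
az extension add --name arcdata

# Confirm the installed extension and its version.
az extension show --name arcdata --query "{name:name, version:version}" -o table

# Browse the built-in help for the command group.
az arcdata --help
```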
azure-arc | Active Directory Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md | - Title: Introduction to Azure Arc-enabled data services with Active Directory authentication -description: Introduction to Azure Arc-enabled data services with Active Directory authentication ------ Previously updated : 10/11/2022----# SQL Managed Instance enabled by Azure Arc with Active Directory authentication --Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). SQL Managed Instance enabled by Azure Arc uses an existing on-premises Active Directory (AD) domain for authentication. --This article describes how to enable SQL Managed Instance enabled by Azure Arc with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes: -- Customer-managed keytab (CMK) -- Service-managed keytab (SMK) --The notion of Active Directory(AD) integration mode describes the process for keytab management including: -- Creating AD account used by SQL Managed Instance-- Registering Service Principal Names (SPNs) under the above AD account.-- Generating keytab file --## Background -To enable Active Directory authentication for SQL Server on Linux and Linux containers, use a [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file). The keytab file is a cryptographic file containing service principal names (SPNs), account names and hostnames. SQL Server uses the keytab file for authenticating itself to the Active Directory (AD) domain and authenticating its clients using Active Directory (AD). Do the following steps to enable Active Directory authentication for Arc-enabled SQL Managed Instance: --- [Deploy data controller](create-data-controller-indirect-cli.md) -- [Deploy a customer-managed keytab AD connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a service-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md)-- [Deploy SQL managed instances](deploy-active-directory-sql-managed-instance.md)--The following diagram shows how to enable Active Directory authentication for SQL Managed Instance enabled by Azure Arc: --![Actice Directory Deployment User journey](media/active-directory-deployment/active-directory-user-journey.png) ---## What is an Active Directory (AD) connector? --In order to enable Active Directory authentication for SQL Managed Instance, the instance must be deployed in an environment that allows it to communicate with the Active Directory domain. --To facilitate this, Azure Arc-enabled data services introduces a new Kubernetes-native [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`. It provides instances running on the same data controller the ability to perform Active Directory authentication. --## Compare AD integration modes --What is the difference between the two Active Directory integration modes? --To enable Active Directory authentication for SQL Managed Instance enabled by Azure Arc, you need an Active Directory connector where you specify the Active Directory integration deployment mode. The two Active Directory integration modes are: --- Customer-managed keytab-- Service-managed keytab --The following section compares these modes. 
--| |Customer-managed keytab|System-managed keytab| -|||--| -|**Use cases**|Small and medium-size businesses who are familiar with managing Active Directory objects and want flexibility in their automation process |Businesses of all sizes seeking a highly automated Active Directory management experience| -|**User provides**|An Active Directory account and SPNs under that account, and a [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file) for Active Directory authentication |An [Organizational Unit (OU)](../../active-directory-domain-services/create-ou.md) and a domain service account that has [sufficient permissions](deploy-system-managed-keytab-active-directory-connector.md?#prerequisites) on that OU in Active Directory.| -|**Characteristics**|User managed. Users bring the Active Directory account, which impersonates the identity of the managed instance and the keytab file. |System managed. The system creates a domain service account for each managed instance and sets SPNs automatically on that account. It also creates and delivers a keytab file to the managed instance. | -|**Deployment process**| 1. Deploy data controller <br/> 2. Create keytab file <br/>3. Set up keytab information to Kubernetes secret<br/> 4. Deploy AD connector, deploy SQL managed instance<br/><br/>For more information, see [Deploy a customer-managed keytab Active Directory connector](deploy-customer-managed-keytab-active-directory-connector.md) | 1. Deploy data controller, deploy AD connector<br/>2. Deploy SQL managed instance<br/><br/>For more information, see [Deploy a system-managed keytab Active Directory connector](deploy-system-managed-keytab-active-directory-connector.md) | -|**Manageability**|You can create the keytab file by following the instructions from [Active Directory utility (`adutil`)](/sql/linux/sql-server-linux-ad-auth-adutil-introduction). Manual keytab rotation. |Managed keytab rotation.| -|**Limitations**|We do not recommend sharing keytab files among services. Each service should have a specific keytab file. As the number of keytab files increases, the level of effort and complexity increases. |Managed keytab generation and rotation. The service account will require sufficient permissions in Active Directory to manage the credentials. <br/> <br/> Distributed Availability Group is not supported.| --For either mode, you need a specific Active Directory account, keytab, and Kubernetes secret for each SQL managed instance. --## Enable Active Directory authentication --When you deploy an instance with the intention of enabling Active Directory authentication, the deployment needs to reference an Active Directory connector instance to use. Referencing the Active Directory connector in the managed instance specification automatically sets up the needed environment in the instance container to authenticate with Active Directory. --## Related content --* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) -* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy SQL Managed Instance enabled by Azure Arc in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md) -* [Connect to SQL Managed Instance enabled by Azure Arc using Active Directory authentication](connect-active-directory-sql-managed-instance.md) |
azure-arc | Active Directory Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-prerequisites.md | - Title: Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites -description: Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites ------ Previously updated : 10/11/2022----# SQL Server enabled by Azure Arc in Active Directory authentication with system-managed keytab - prerequisites --This document explains how to prepare to deploy Azure Arc-enabled data services with Active Directory (AD) authentication. Specifically the article describes Active Directory objects you need to configure before the deployment of Kubernetes resources. --[The introduction](active-directory-introduction.md#compare-ad-integration-modes) describes two different integration modes: -- *System-managed keytab* mode allows the system to create and manage the AD accounts for each SQL Managed Instance.-- *Customer-managed keytab* mode allows you to create and manage the AD accounts for each SQL Managed Instance.--The requirements and recommendations are different for the two integration modes. ---|Active Directory Object|Customer-managed keytab |System-managed keytab | -|||| -|Organizational unit (OU) |Recommended|Required | -|Active Directory domain service account (DSA) for Active Directory Connector |Not required|Required | -|Active directory account for SQL Managed Instance |Created for each managed instance|System creates AD account for each managed instance| --### DSA account - system-managed keytab mode --To be able to create all the required objects in Active Directory automatically, AD Connector needs a domain service account (DSA). The DSA is an Active Directory account that has specific permissions to create, manage and delete users accounts inside the provided organizational unit (OU). This article explains how to configure the permission of this Active Directory account. The examples call the DSA account `arcdsa` as an example in this article. --### Auto generated Active Directory objects --An Arc-enabled SQL Managed Instance deployment automatically generates accounts in system-managed keytab mode. Each of the accounts represents a SQL Managed Instance and will be managed by the system throughout the lifetime of SQL. These accounts own the Service Principal Names (SPNs) required by each SQL. --The steps below assume you already have an Active Directory domain controller. If you don't have a domain controller, the following [guide](https://social.technet.microsoft.com/wiki/contents/articles/37528.create-and-configure-active-directory-domain-controller-in-azure-windows-server.aspx) includes steps that can be helpful. --## Create Active Directory objects --Do the following things before you deploy an Arc-enabled SQL Managed Instance with AD authentication: --1. Create an organizational unit (OU) for all Arc-enabled SQL Managed Instance related AD objects. Alternatively, you can choose an existing OU upon deployment. -1. Create an AD account for the AD Connector, or use an existing account, and provide this account the right permissions on the OU created in the previous step. --### Create an OU --System-managed keytab mode requires a designated OU. For customer-managed keytab mode an OU is recommended. --On the domain controller, open **Active Directory Users and Computers**. 
On the left panel, right-click the directory under which you want to create your OU and select **New**\> **Organizational Unit**, then follow the prompts from the wizard to create the OU. Alternatively, you can create an OU with PowerShell: --```powershell -New-ADOrganizationalUnit -Name "<name>" -Path "<Distinguished name of the directory you wish to create the OU in>" -``` --The examples in this article use `arcou` for the OU name. --![Screenshot of Active Directory Users and computers menu.](media/active-directory-deployment/start-new-organizational-unit.png) --![Screenshot of new object - organizational unit dialog.](media/active-directory-deployment/new-organizational-unit.png) --### Create the domain service account (DSA) --For system-managed keytab mode, you need an AD domain service account. --Create the Active Directory user that you will use as the domain service account. This account requires specific permissions. Make sure that you have an existing Active Directory account or create a new account, which Arc-enabled SQL Managed Instance can use to set up the necessary objects. --To create a new user in AD, you can right-click the domain or the OU and select **New** > **User**: --![Screenshot of user properties.](media/active-directory-deployment/start-ad-new-user.png) --This account will be referred to as *arcdsa* in this article. --### Set permissions for the DSA --For system-managed keytab mode, you need to set the permissions for the DSA. --Whether you have created a new account for the DSA or are using an existing Active Directory user account, there are certain permissions the account needs to have. The DSA needs to be able to create users, groups, and computer accounts in the OU. In the following steps, the Arc-enabled SQL Managed Instance domain service account name is `arcdsa`. --> [!IMPORTANT] -> You can choose any name for the DSA, but we do not recommend altering the account name once AD Connector is deployed. --1. On the domain controller, open **Active Directory Users and Computers**, click on **View**, select **Advanced Features** --1. In the left panel, navigate to your domain, then the OU which `arcou` will use --1. Right-click the OU, and select **Properties**. --> [!NOTE] -> Make sure that you have selected **Advanced Features** by right-clicking on the OU, and selecting **View** --1. Go to the Security tab. Select **Advanced Features** right-click on the OU, and select **View**. -- ![AD object properties](./media/active-directory-deployment/start-ad-new-user.png) --1. Select **Add...** and add the **arcdsa** user. -- ![Screenshot of add user dialog.](./media/active-directory-deployment/add-user.png) --1. Select the **arcdsa** user and clear all permissions, then select **Advanced**. --1. Select **Add** -- - Select **Select a Principal**, insert **arcdsa**, and select **Ok**. -- - Set **Type** to **Allow**. -- - Set **Applies To** to **This Object and all descendant objects**. -- ![Screenshot of permission entries.](./media/active-directory-deployment/set-permissions.png) -- - Scroll down to the bottom, and select **Clear all**. -- - Scroll back to the top, and select: - - **Read all properties** - - **Write all properties** - - **Create User objects** - - **Delete User objects** -- - Select **OK**. --1. Select **Add**. -- - Select **Select a Principal**, insert **arcdsa**, and select **Ok**. -- - Set **Type** to **Allow**. -- - Set **Applies To** to **Descendant User objects**. -- - Scroll down to the bottom, and select **Clear all**. 
-- - Scroll back to the top, and select **Reset password**. -- - Select **OK**. --- Select **OK** twice more to close open dialog boxes.--## Related content --* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) -* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy a SQL Managed Instance enabled by Azure Arc in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md) -* [Connect to SQL Managed Instance enabled by Azure Arc using Active Directory authentication](connect-active-directory-sql-managed-instance.md) |
azure-arc | Adding Exporters And Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/adding-exporters-and-pipelines.md | - Title: Adding Exporters and Pipelines | Azure Arc-enabled Data Services -description: Learn how to add exporters and pipelines to the telemetry router ---- Previously updated : 10/25/2022----# Add exporters and pipelines to your telemetry router deployment --> [!NOTE] -> -> - The telemetry router is in Public Preview and should be deployed for **testing purposes only**. -> - While the telemetry router is in Public Preview, be advised that future preview releases could include changes to CRD specs, CLI commands, and/or telemetry router messages. -> - The current preview does not support in-place upgrades of a data controller deployed with the Arc telemetry router enabled. In order to install or upgrade a data controller in a future release, you will need to uninstall the data controller and then re-install. --## What are Exporters and Pipelines? --Exporters and Pipelines are two of the main components of the telemetry router. Exporters describe how to send data to a destination system such as Kafka. When creating an exporter, you associate it with a pipeline in order to route that type of telemetry data to that destination. You can have multiple exporters for each pipeline. --This article provides examples of how you can set up your own exporters and pipelines to route monitoring telemetry data to your own supported exporter. --### Supported Exporters --| Exporter | Supported Pipeline Types | -|--|--| -| Kafka | logs, metrics | -| Elasticsearch | logs | --## Configurations --All configurations are specified through the telemetry router's custom resource specification and support the configuration of exporters and pipelines. --### Exporters --For the Public Preview, exporters are partially configurable and support the following solutions: --| Exporter | Supported Telemetry Types | -|--|--| -| Kafka | logs, metrics | -| Elasticsearch | logs | --The following properties are currently configurable during the Public Preview: --#### General Exporter Settings --| Setting | Description | -|--|--| -| certificateName | The client certificate in order to export to the monitoring solution | -| caCertificateName | The cluster's Certificate Authority or customer-provided certificate for the Exporter | --#### Kafka Exporter Settings --| Setting | Description | -|--|--| -| topic | Name of the topic to export | -| brokers | List of brokers to connect to | -| encoding | Encoding for the telemetry: otlp_json or otlp_proto | --#### Elasticsearch Exporter Settings --| Setting | Description | -|--|--| -| index | This setting can be the name of an index or datastream name to publish events | -| endpoint | Endpoint of the Elasticsearch to export to | --### Pipelines --The Telemetry Router supports logs and metrics pipelines. These pipelines are exposed in the custom resource specification of the Arc telemetry router and available for modification. --You can't remove the last pipeline from the telemetry router. If you apply a yaml file that removes the last pipeline, the service rejects the update. --#### Pipeline Settings --| Setting | Description | -|--|--| -| logs | Can only declare new logs pipelines | -| metrics | Can only declare new metrics pipelines | -| exporters | List of exporters. 
Can be multiple of the same type | --### Credentials --#### Credentials Settings --| Setting | Description | -|--|--| -| certificateName | Name of the certificate. Must correspond to the certificate name specified in the exporter declaration | -| secretName | Name of the secret provided through Kubernetes | -| secretNamespace | Namespace with secret provided through Kubernetes | --## Example TelemetryRouter Specification --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 -kind: TelemetryRouter -metadata: - name: arc-telemetry-router - namespace: <namespace> -spec: - credentials: - certificates: - - certificateName: arcdata-elasticsearch-exporter - - certificateName: cluster-ca-certificate - exporters: - elasticsearch: - - caCertificateName: cluster-ca-certificate - certificateName: arcdata-elasticsearch-exporter - endpoint: https://logsdb-svc:9200 - index: logstash-otel - name: arcdata - pipelines: - logs: - exporters: - - elasticsearch/arcdata -``` ---## Example 1: Adding a Kafka exporter for a metrics pipeline --You can test creating a Kafka exporter for a metrics pipeline that can send metrics data to your own instance of Kafka. You need to prefix the name of your metrics pipeline with `kafka/`. You can have one unnamed instance for each telemetry type. For example, "kafka" is a valid name for a metrics pipeline. - -1. Provide your client and CA certificates in the `credentials` section through Kubernetes secrets -2. Declare the new exporter in the `exporters` section with the needed settings - name, certificates, broker, and index. Be sure to list the new exporter under the applicable type ("kafka:") -3. List your exporter in the `pipelines` section of the spec as a metrics pipeline. The exporter name needs to be prefixed with the type of exporter. For example, `kafka/myMetrics` --In this example, we've added a metrics pipeline called "metrics" with a single exporter (`kafka/myMetrics`) that routes to your instance of Kafka. --**arc-telemetry-router.yaml** --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 -kind: TelemetryRouter -metadata: - name: arc-telemetry-router - namespace: <namespace> -spec: - credentials: - certificates: - # Step 1. Provide your client and ca certificates through Kubernetes secrets - # where the name of the secret and its namespace are specified. - - certificateName: <kafka-client-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - - certificateName: <ca-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - exporters: - kafka: - # Step 2. Declare your Kafka exporter with the needed settings - # (name, certificates, broker, and index to export to) - - name: myMetrics - # Provide your client and CA certificate names - # for the exporter as well as any additional settings needed - caCertificateName: <ca-certificate-name> - certificateName: <kafka-client-certificate-name> - broker: <kafka_broker> - # Index can be the name of an index or datastream name to publish events to - index: <kafka_index> - pipelines: - metrics: - exporters: - # Step 3. Assign your kafka exporter to the list - # of exporters for the metrics pipeline. - - kafka/myMetrics -``` --```bash -kubectl apply -f arc-telemetry-router.yaml -n <namespace> -``` --You've added a metrics pipeline that exports to your instance of Kafka. After you've applied the changes to the yaml file, the TelemetryRouter custom resource will go into an updating state, and the collector service will restart. 
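As a quick check after the apply, you can confirm that the custom resource picked up the new exporter and pipeline and watch the collector pods cycle. This is a minimal sketch that assumes the `arc-telemetry-router` resource name from the example above; substitute your own namespace.

```bash
# Confirm the TelemetryRouter custom resource reflects the new exporter and pipeline
kubectl get telemetryrouters -n <namespace>
kubectl describe telemetryrouter arc-telemetry-router -n <namespace>

# Watch the pods in the namespace while the collector service restarts
kubectl get pods -n <namespace> --watch
```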
--## Example 2: Adding an Elasticsearch exporter for a logs pipeline --Your telemetry router deployment can export to multiple destinations by configuring more exporters. Multiple types of exporters are supported on a given telemetry router deployment. This example demonstrates adding an Elasticsearch exporter as a second exporter. We activate this second exporter by adding it to a logs pipeline. --1. Provide your client and CA certificates in the `credentials` section through Kubernetes secrets -2. Declare the new Exporter beneath the `exporters` section with the needed settings - name, certificates, endpoint, and index. Be sure to list the new exporter under the applicable type ("Elasticsearch:"). -3. List your exporter in the `pipelines` section of the spec as a logs pipeline. The exporter name needs to be prefixed with the type of exporter. For example, `elasticsearch/myLogs` --This example builds on the previous example by adding a logs pipeline for an Elasticsearch exporter (`elasticsearch/myLogs`). At the end of the example, we have two exporters with each exporter added to a different pipeline. --**arc-telemetry-router.yaml** --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 -kind: TelemetryRouter -metadata: - name: arc-telemetry-router - namespace: <namespace> -spec: - credentials: - certificates: - # Step 1. Provide your client and ca certificates through Kubernetes secrets - # where the name of the secret and its namespace are specified. - - certificateName: <elasticsearch-client-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - - certificateName: <kafka-client-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - - certificateName: <ca-certificate-name> - secretName: <name_of_secret> - secretNamespace: <namespace_with_secret> - exporters: - Elasticsearch: - # Step 2. Declare your Elasticsearch exporter with the needed settings - # (certificates, endpoint, and index to export to) - - name: myLogs - # Provide your client and CA certificate names - # for the exporter as well as any additional settings needed - caCertificateName: <ca-certificate-name> - certificateName: <elasticsearch-client-certificate-name> - endpoint: <elasticsearch_endpoint> - # Index can be the name of an index or datastream name to publish events to - index: <elasticsearch_index> - kafka: - - name: myMetrics - caCertificateName: <ca-certificate-name> - certificateName: <kafka-client-certificate-name> - broker: <kafka_broker> - index: <kafka_index> - pipelines: - logs: - exporters: - # Step 3. Add your Elasticsearch exporter to - # the exporters list of a logs pipeline. - - elasticsearch/myLogs - metrics: - exporters: - - kafka/myMetrics -``` --```bash -kubectl apply -f arc-telemetry-router.yaml -n <namespace> -``` --You now have Kafka and Elasticsearch exporters, added to metrics and logs pipelines. After you apply the changes to the yaml file, the TelemetryRouter custom resource will go into an updating state, and the collector service will restart. |
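Because the service rejects invalid updates (for example, a yaml file that removes the last pipeline), it can be worth validating an edited spec before applying it for real. A minimal sketch, assuming the same file and namespace as above; a server-side dry run may catch some rejections early, but it isn't a substitute for the actual apply:

```bash
# Ask the API server to validate the updated spec without persisting it
kubectl apply -f arc-telemetry-router.yaml -n <namespace> --dry-run=server
```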
azure-arc | Automated Integration Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md | - Title: Azure Arc-enabled data services - Automated validation testing -description: Running containerized validation tests on any Kubernetes Cluster ------ Previously updated : 09/07/2022------# Tutorial: Automated validation testing --As part of each commit that builds up Arc-enabled data services, Microsoft runs automated CI/CD pipelines that perform end-to-end tests. These tests are orchestrated via two containers that are maintained alongside the core-product (Data Controller, SQL Managed Instance enabled by Azure Arc & PostgreSQL server). These containers are: --- `arc-ci-launcher`: Containing deployment dependencies (for example, CLI extensions), as well product deployment code (using Azure CLI) for both Direct and Indirect connectivity modes. Once Kubernetes is onboarded with the Data Controller, the container leverages [Sonobuoy](https://sonobuoy.io/) to trigger parallel integration tests.-- `arc-sb-plugin`: A [Sonobuoy plugin](https://sonobuoy.io/plugins/) containing [Pytest](https://docs.pytest.org/en/7.1.x/)-based end-to-end integration tests, ranging from simple smoke-tests (deployments, deletes), to complex high-availability scenarios, chaos-tests (resource deletions) etc.--These testing containers are made publicly available for customers and partners to perform Arc-enabled data services validation testing in their own Kubernetes clusters running anywhere, to validate: -* Kubernetes distro/versions -* Host disto/versions -* Storage (`StorageClass`/CSI), networking (e.g. `LoadBalancer`s, DNS) -* Other Kubernetes or infrastructure specific setup --For Customers intending to run Arc-enabled Data Services on an undocumented distribution, they must run these validation tests successfully to be considered supported. Additionally, Partners can use this approach to certify their solution is compliant with Arc-enabled Data Services - see [Azure Arc-enabled data services Kubernetes validation](validation-program.md). --The following diagram outlines this high-level process: --![Diagram that shows the Arc-enabled data services Kube-native integration tests.](media/automated-integration-testing/integration-testing-overview.png) --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Deploy `arc-ci-launcher` using `kubectl` -> * Examine validation test results in your Azure Blob Storage account --## Prerequisites - -- **Credentials**: - * The [`test.env.tmpl`](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/test/launcher/base/configs/.test.env.tmpl) file contains the necessary credentials required, and is a combination of the existing pre-requisites required to onboard an [Azure Arc Connected Cluster](../kubernetes/quickstart-connect-cluster.md?tabs=azure-cli) and [Directly Connected Data Controller](plan-azure-arc-data-services.md). Setup of this file is explained below with samples. 
- * A [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to the tested Kubernetes cluster with `cluster-admin` access (required for Connected Cluster onboarding at this time) --- **Client-tooling**: - * `kubectl` installed - minimum version (Major:"1", Minor:"21") - * `git` command line interface (or UI-based alternatives) --## Kubernetes manifest preparation --The launcher is made available as part of the [`microsoft/azure_arc`](https://github.com/microsoft/azure_arc) repository, as a [Kustomize](https://kustomize.io/) manifest - Kustomize is [built into `kubectl`](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/) - so no additional tooling is required. --1. Clone the repo locally: --```bash -git clone https://github.com/microsoft/azure_arc.git -``` --2. Navigate to `azure_arc/arc_data_services/test/launcher` to see the following folder structure: --```text -├── base <- Common base for all Kubernetes Clusters -│   ├── configs -│   │   └── .test.env.tmpl <- To be converted into .test.env with credentials for a Kubernetes Secret -│   ├── kustomization.yaml <- Defines the generated resources as part of the launcher -│   └── launcher.yaml <- Defines the Kubernetes resources that make up the launcher -└── overlays <- Overlays for specific Kubernetes Clusters -    ├── aks -    │   ├── configs -    │   │   └── patch.json.tmpl <- To be converted into patch.json, patch for Data Controller control.json -    │   └── kustomization.yaml -    ├── kubeadm -    │   ├── configs -    │   │   └── patch.json.tmpl -    │   └── kustomization.yaml -    └── openshift -        ├── configs -        │   └── patch.json.tmpl -        ├── kustomization.yaml -        └── scc.yaml -``` --In this tutorial, we're going to focus on steps for AKS, but the overlay structure above can be extended to include additional Kubernetes distributions. --The ready-to-deploy manifest will represent the following: -```text -├── base -│   ├── configs -│   │   ├── .test.env <- Config 1: For Kubernetes secret, see sample below -│   │   └── .test.env.tmpl -│   ├── kustomization.yaml -│   └── launcher.yaml -└── overlays -    └── aks -        ├── configs -        │   ├── patch.json.tmpl -        │   └── patch.json <- Config 2: For control.json patching, see sample below -        └── kustomization.yaml -``` --There are two files that need to be generated to localize the launcher to run inside a specific environment. Each of these files can be generated by copy-pasting and filling out each of the template (`*.tmpl`) files above: -* `.test.env`: fill out from `.test.env.tmpl` -* `patch.json`: fill out from `patch.json.tmpl` --> [!TIP] -> The `.test.env` is a single set of environment variables that drives the launcher's behavior. Generating it with care for a given environment will ensure reproducibility of the launcher's behavior. --### Config 1: `.test.env` --A filled-out sample of the `.test.env` file, generated based on `.test.env.tmpl` is shared below with inline commentary. --> [!IMPORTANT] -> The `export VAR="value"` syntax below is not meant to be run locally to source environment variables from your machine - but is there for the launcher. 
The launcher mounts this `.test.env` file **as-is** as a Kubernetes `secret` using Kustomize's [`secretGenerator`](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/secretGeneratorPlugin.md#secret-values-from-local-files) (Kustomize takes a file, base64 encodes the entire file's content, and turns it into a Kubernetes secret). During initialization, the launcher runs bash's [`source`](https://ss64.com/bash/source.html) command, which imports the environment variables from the as-is mounted `.test.env` file into the launcher's environment. --In other words, after copy-pasting `.test.env.tmpl` and editing to create `.test.env`, the generated file should look similar to the sample below. The process to fill out the `.test.env` file is identical across operating systems and terminals. --> [!TIP] -> There are a handful of environment variables that require additional explanation for clarity in reproducibility. These will be commented with `see detailed explanation below [X]`. --> [!TIP] -> Note that the `.test.env` example below is for **direct** mode. Some of these variables, such as `ARC_DATASERVICES_EXTENSION_VERSION_TAG` do not apply to **indirect** mode. For simplicity, it's best to setup the `.test.env` file with **direct** mode variables in mind, switching `CONNECTIVITY_MODE=indirect` will have the launcher ignore **direct** mode specific-settings and use a subset from the list. -> -> In other words, planning for **direct** mode allows us to satisfy **indirect** mode variables. --Finished sample of `.test.env`: -```bash -# ====================================== -# Arc Data Services deployment version = -# ====================================== --# Controller deployment mode: direct, indirect -# For 'direct', the launcher will also onboard the Kubernetes Cluster to Azure Arc -# For 'indirect', the launcher will skip Azure Arc and extension onboarding, and proceed directly to Data Controller deployment - see `patch.json` file -export CONNECTIVITY_MODE="direct" --# The launcher supports deployment of both GA/pre-GA trains - see detailed explanation below [1] -export ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN="stable" -export ARC_DATASERVICES_EXTENSION_VERSION_TAG="1.11.0" --# Image version -export DOCKER_IMAGE_POLICY="Always" -export DOCKER_REGISTRY="mcr.microsoft.com" -export DOCKER_REPOSITORY="arcdata" -export DOCKER_TAG="v1.11.0_2022-09-13" --# "arcdata" Azure CLI extension version override - see detailed explanation below [2] -export ARC_DATASERVICES_WHL_OVERRIDE="" --# ================ -# ARM parameters = -# ================ --# Custom Location Resource Provider Azure AD Object ID - this is a single, unique value per Azure AD tenant - see detailed explanation below [3] -export CUSTOM_LOCATION_OID="..." --# A pre-rexisting Resource Group is used if found with the same name. Otherwise, launcher will attempt to create a Resource Group -# with the name specified, using the Service Principal specified below (which will require `Owner/Contributor` at the Subscription level to work) -export LOCATION="eastus" -export RESOURCE_GROUP_NAME="..." --# A Service Principal with "sufficient" privileges - see detailed explanation below [4] -export SPN_CLIENT_ID="..." -export SPN_CLIENT_SECRET="..." -export SPN_TENANT_ID="..." -export SUBSCRIPTION_ID="..." --# Optional: certain integration tests test upload to Log Analytics workspace: -# https://learn.microsoft.com/azure/azure-arc/data/upload-logs -export WORKSPACE_ID="..." -export WORKSPACE_SHARED_KEY="..." 
--# ==================================== -# Data Controller deployment profile = -# ==================================== --# Samples for AKS -# To see full list of CONTROLLER_PROFILE, run: az arcdata dc config list -export CONTROLLER_PROFILE="azure-arc-aks-default-storage" --# azure, aws, gcp, onpremises, alibaba, other -export DEPLOYMENT_INFRASTRUCTURE="azure" --# The StorageClass used for PVCs created during the tests -export KUBERNETES_STORAGECLASS="default" --# ============================== -# Launcher specific parameters = -# ============================== --# Log/test result upload from launcher container, via SAS URL - see detailed explanation below [5] -export LOGS_STORAGE_ACCOUNT="<your-storage-account>" -export LOGS_STORAGE_ACCOUNT_SAS="?sv=2021-06-08&ss=bfqt&srt=sco&sp=rwdlacupiytfx&se=...&spr=https&sig=..." -export LOGS_STORAGE_CONTAINER="arc-ci-launcher-1662513182" --# Test behavior parameters -# The test suites to execute - space seperated array, -# Use these default values that run short smoke tests, further elaborate test suites will be added in upcoming releases -export SQL_HA_TEST_REPLICA_COUNT="3" -export TESTS_DIRECT="direct-crud direct-hydration controldb" -export TESTS_INDIRECT="billing controldb kube-rbac" -export TEST_REPEAT_COUNT="1" -export TEST_TYPE="ci" --# Control launcher behavior by setting to '1': -# -# - SKIP_PRECLEAN: Skips initial cleanup -# - SKIP_SETUP: Skips Arc Data deployment -# - SKIP_TEST: Skips sonobuoy tests -# - SKIP_POSTCLEAN: Skips final cleanup -# - SKIP_UPLOAD: Skips log upload -# -# See detailed explanation below [6] -export SKIP_PRECLEAN="0" -export SKIP_SETUP="0" -export SKIP_TEST="0" -export SKIP_POSTCLEAN="0" -export SKIP_UPLOAD="0" -``` --> [!IMPORTANT] -> If performing the configuration file generation in a Windows machine, you will need to convert the End-of-Line sequence from `CRLF` (Windows) to `LF` (Linux), as `arc-ci-launcher` runs as a Linux container. Leaving the line ending as `CRLF` may cause an error upon `arc-ci-launcher` container start - such as: `/launcher/config/.test.env: $'\r': command not found` -> For example, perform the change using VSCode (bottom-right of window): <br> -> ![Screenshot that shows where to change the end of line sequence (CRLF).](media/automated-integration-testing/crlf-to-lf.png) --#### Detailed explanation for certain variables --##### 1. `ARC_DATASERVICES_EXTENSION_*` - Extension version and train --> Mandatory: this is required for `direct` mode deployments. --The launcher can deploy both GA and pre-GA releases. --The extension version to release-train (`ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`) mapping are obtained from here: -* **GA**: `stable` - [Version log](version-log.md) -* **Pre-GA**: `preview` - [Pre-release testing](preview-testing.md) --##### 2. `ARC_DATASERVICES_WHL_OVERRIDE` - Azure CLI previous version download URL --> Optional: leave this empty in `.test.env` to use the pre-packaged default. --The launcher image is pre-packaged with the latest arcdata CLI version at the time of each container image release. 
However, to work with older releases and upgrade testing, it may be necessary to provide the launcher with Azure CLI Blob URL download link, to override the pre-packaged version; e.g to instruct the launcher to install version **1.4.3**, fill in: --```bash -export ARC_DATASERVICES_WHL_OVERRIDE="https://azurearcdatacli.blob.core.windows.net/cli-extensions/arcdata-1.4.3-py2.py3-none-any.whl" -``` -The CLI version to Blob URL mapping can be found [here](https://azcliextensionsync.blob.core.windows.net/index1/index.json). --<a name='3-custom_location_oidcustom-locations-object-id-from-your-specific-azure-ad-tenant'></a> --##### 3. `CUSTOM_LOCATION_OID` - Custom Locations Object ID from your specific Microsoft Entra tenant --> Mandatory: this is required for Connected Cluster Custom Location creation. --The following steps are sourced from [Enable custom locations on your cluster](../kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster) to retrieve the unique Custom Location Object ID for your Microsoft Entra tenant. --There are two approaches to obtaining the `CUSTOM_LOCATION_OID` for your Microsoft Entra tenant. --1. Via Azure CLI: -- ```bash - az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv - # 51dfe1e8-70c6-4de... < This is for Microsoft's own tenant - do not use, the value for your tenant will be different, use that instead to align with the Service Principal for launcher. - ``` -- ![A screenshot of a PowerShell terminal that shows `az ad sp show --id <>`.](media/automated-integration-testing/custom-location-oid-cli.png) --2. Via Azure portal - navigate to your Microsoft Entra blade, and search for `Custom Locations RP`: -- ![A screenshot of the custom locations RP.](media/automated-integration-testing/custom-location-oid-portal.png) --##### 4. `SPN_CLIENT_*` - Service Principal Credentials --> Mandatory: this is required for Direct Mode deployments. --The launcher logs in to Azure using these credentials. --Validation testing is meant to be performed on **Non-Production/Test Kubernetes cluster & Azure Subscriptions** - focusing on functional validation of the Kubernetes/Infrastructure setup. Therefore, to avoid the number of manual steps required to perform launches, it's recommended to provide a `SPN_CLIENT_ID/SECRET` that has `Owner` at the Resource Group (or Subscription) level, as it will create several resources in this Resource Group, as well as assigning permissions to those resources against several Managed Identities created as part of the deployment (these role assignments in turn require the Service Principal to have `Owner`). --##### 5. `LOGS_STORAGE_ACCOUNT_SAS` - Blob Storage Account SAS URL --> Recommended: leaving this empty means you will not obtain test results and logs. --The launcher needs a persistent location (Azure Blob Storage) to upload results to, as Kubernetes doesn't (yet) allow copying files from stopped/completed pods - [see here](https://github.com/kubernetes/kubectl/issues/454). The launcher achieves connectivity to Azure Blob Storage using an _**account-scoped SAS URL**_ (as opposed to _container_ or _blob_ scoped) - a signed URL with a time-bound access definition - see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md), in order to: -1. Create a new Storage Container in the pre-existing Storage Account (`LOGS_STORAGE_ACCOUNT`), if it doesn't exist (name based on `LOGS_STORAGE_CONTAINER`) -2. 
Create new, uniquely named blobs (test log tar files) --The follow steps are sourced from [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md#grant-limited-access-to-azure-storage-resources-using-shared-access-signatures-sas). --> [!TIP] -> SAS URLs are different from the Storage Account Key, a SAS URL is formatted as follows. -> ```text -> ?sv=2021-06-08&ss=bfqt&srt=sco&sp=rwdlacupiytfx&se=...&spr=https&sig=... -> ``` --There are several approaches to generating a SAS URL. This example shows the portal: --![A screenshot of the shared access signature details on the Azure portal.](media/automated-integration-testing/sas-url-portal.png) --To use the Azure CLI instead, see [`az storage account generate-sas`](/cli/azure/storage/account?view=azure-cli-latest&preserve-view=true#az-storage-account-generate-sas) --##### 6. `SKIP_*` - controlling the launcher behavior by skipping certain stages --> Optional: leave this empty in `.test.env` to run all stages (equivalent to `0` or blank) --The launcher exposes `SKIP_*` variables, to run and skip specific stages - for example, to perform a "cleanup only" run. --Although the launcher is designed to clean up both in the beginning and the end of each run, it's possible for launch and/or test-failures to leave residue resources behind. To run the launcher in "cleanup only" mode, set the following variables in `.test.env`: --```bash -export SKIP_PRECLEAN="0" # Run cleanup -export SKIP_SETUP="1" # Do not setup Arc-enabled Data Services -export SKIP_TEST="1" # Do not run integration tests -export SKIP_POSTCLEAN="1" # POSTCLEAN is identical to PRECLEAN, although idempotent, not needed here -export SKIP_UPLOAD="1" # Do not upload logs from this run -``` --The settings above instructs the launcher to clean up all Arc and Arc Data Services resources, and to not deploy/test/upload logs. --### Config 2: `patch.json` --A filled-out sample of the `patch.json` file, generated based on `patch.json.tmpl` is shared below: --> Note that the `spec.docker.registry, repository, imageTag` should be identical to the values in `.test.env` above --Finished sample of `patch.json`: -```json -{ - "patch": [ - { - "op": "add", - "path": "spec.docker", - "value": { - "registry": "mcr.microsoft.com", - "repository": "arcdata", - "imageTag": "v1.11.0_2022-09-13", - "imagePullPolicy": "Always" - } - }, - { - "op": "add", - "path": "spec.storage.data.className", - "value": "default" - }, - { - "op": "add", - "path": "spec.storage.logs.className", - "value": "default" - } - ] -} -``` --## Launcher deployment --> It is recommended to deploy the launcher in a **Non-Production/Test cluster** - as it performs destructive actions on Arc and other used Kubernetes resources. --### `imageTag` specification -The launcher is defined within the Kubernetes Manifest as a [`Job`](https://kubernetes.io/docs/concepts/workloads/controllers/job/), which requires instructing Kubernetes where to find the launcher's image. This is set in `base/kustomization.yaml`: --```YAML -images: -- name: arc-ci-launcher- newName: mcr.microsoft.com/arcdata/arc-ci-launcher - newTag: v1.11.0_2022-09-13 -``` --> [!TIP] -> To recap, at this point - there are **3** places we specified `imageTag`s, for clarity, here's an explanation of the different uses of each. Typically - when testing a given release, all 3 values would be the same (aligning to a given release): -> ->| # | Filename | Variable name | Why? | Used by? 
| ->| | | - | -- | | ->| 1 | **`.test.env`** | `DOCKER_TAG` | Sourcing the [Bootstrapper image](https://mcr.microsoft.com/v2/arcdata/arc-bootstrapper/tags/list) as part of [extension install](https://mcr.microsoft.com/v2/arcdata/arcdataservices-extension/tags/list) | [`az k8s-extension create`](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-create) in the launcher | ->| 2 | **`patch.json`** | `value.imageTag` | Sourcing the [Data Controller image](https://mcr.microsoft.com/v2/arcdata/arc-controller/tags/list) | [`az arcdata dc create`](/cli/azure/arcdata/dc?view=azure-cli-latest&preserve-view=true#az-arcdata-dc-create) in the launcher | ->| 3 | **`kustomization.yaml`** | `images.newTag` | Sourcing the [launcher's image](https://mcr.microsoft.com/v2/arcdata/arc-ci-launcher/tags/list) | `kubectl apply`ing the launcher | --### `kubectl apply` --To validate that the manifest has been properly set up, attempt client-side validation with `--dry-run=client`, which prints out the Kubernetes resources to be created for the launcher: --```bash -kubectl apply -k arc_data_services/test/launcher/overlays/aks --dry-run=client -# namespace/arc-ci-launcher created (dry run) -# serviceaccount/arc-ci-launcher created (dry run) -# clusterrolebinding.rbac.authorization.k8s.io/arc-ci-launcher created (dry run) -# secret/test-env-fdgfm8gtb5 created (dry run) <- Created from Config 1: `.test.env` -# configmap/control-patch-2hhhgk847m created (dry run) <- Created from Config 2: `patch.json` -# job.batch/arc-ci-launcher created (dry run) -``` --To deploy the launcher and tail logs, run the following: -```bash -kubectl apply -k arc_data_services/test/launcher/overlays/aks -kubectl wait --for=condition=Ready --timeout=360s pod -l job-name=arc-ci-launcher -n arc-ci-launcher -kubectl logs job/arc-ci-launcher -n arc-ci-launcher --follow -``` --At this point, the launcher should start - and you should see the following: --![A screenshot of the console terminal after the launcher starts.](media/automated-integration-testing/launcher-start.png) --Although it's best to deploy the launcher in a cluster with no pre-existing Arc resources, the launcher contains pre-flight validation to discover pre-existing Arc and Arc Data Services CRDs and ARM resources, and attempts to clean them up on a best-effort basis (using the provided Service Principal credentials), prior to deploying the new release: --![A screenshot of the console terminal discovering Kubernetes and other resources.](media/automated-integration-testing/launcher-pre-flight.png) --This same metadata-discovery and cleanup process is also run upon launcher exit, to leave the cluster as close as possible to its pre-existing state before the launch. --## Steps performed by launcher --At a high level, the launcher performs the following sequence of steps: --1. Authenticate to Kubernetes API using Pod-mounted Service Account -2. Authenticate to ARM API using Secret-mounted Service Principal -3. Perform CRD metadata scan to discover existing Arc and Arc Data Services Custom Resources -4. Clean up any existing Custom Resources in Kubernetes, and subsequent resources in Azure. If there is any mismatch between the credentials in `.test.env` and the resources existing in the cluster, quit. -5. Generate a unique set of environment variables based on timestamp for Arc Cluster name, Data Controller and Custom Location/Namespace. Prints out the environment variables, obfuscating sensitive values (e.g. Service Principal Password etc.) -6. a. 
For Direct Mode - Onboard the Cluster to Azure Arc, then deploys the controller. -- b. For Indirect Mode: deploy the Data Controller -7. Once Data Controller is `Ready`, generate a set of Azure CLI ([`az arcdata dc debug`](/cli/azure/arcdata/dc/debug?view=azure-cli-latest&preserve-view=true)) logs and store locally, labeled as `setup-complete` - as a baseline. -8. Use the `TESTS_DIRECT/INDIRECT` environment variable from `.test.env` to launch a set of parallelized Sonobuoy test runs based on a space-separated array (`TESTS_(IN)DIRECT`). These runs execute in a new `sonobuoy` namespace, using `arc-sb-plugin` pod that contains the Pytest validation tests. -9. [Sonobuoy aggregator](https://sonobuoy.io/docs/v0.56.0/plugins/) accumulate the [`junit` test results](https://sonobuoy.io/docs/v0.56.0/results/) and logs per `arc-sb-plugin` test run, which are exported into the launcher pod. -10. Return the exit code of the tests, and generates another set of debug logs - Azure CLI and `sonobuoy` - stored locally, labeled as `test-complete`. -11. Perform a CRD metadata scan, similar to Step 3, to discover existing Arc and Arc Data Services Custom Resources. Then, proceed to destroy all Arc and Arc Data resources in reverse order from deployment, as well as CRDs, Role/ClusterRoles, PV/PVCs etc. -12. Attempt to use the SAS token `LOGS_STORAGE_ACCOUNT_SAS` provided to create a new Storage Account container named based on `LOGS_STORAGE_CONTAINER`, in the **pre-existing** Storage Account `LOGS_STORAGE_ACCOUNT`. If Storage Account container already exists, use it. Upload all local test results and logs to this storage container as a tarball (see below). -13. Exit. --## Tests performed per test suite --There are approximately **375** unique integration tests available, across **27** test suites - each testing a separate functionality. --| Suite # | Test suite name | Description of test | -| - | | | -| 1 | `ad-connector` | Tests the deployment and update of an Active Directory Connector (AD Connector). | -| 2 | `billing` | Testing various Business Critical license types are reflected in resource table in controller, used for Billing upload. | -| 3 | `ci-billing` | Similar as `billing`, but with more CPU/Memory permutations. | -| 4 | `ci-sqlinstance` | Long running tests for multi-replica creation, updates, GP -> BC Update, Backup validation and SQL Server Agent. | -| 5 | `controldb` | Tests Control database - SA secret check, system login verification, audit creation, and sanity checks for SQL build version. | -| 6 | `dc-export` | Indirect Mode billing and usage upload. | -| 7 | `direct-crud` | Creates a SQL instance using ARM calls, validates in both Kubernetes and ARM. | -| 8 | `direct-fog` | Creates multiple SQL instances and creates a Failover Group between them using ARM calls. | -| 9 | `direct-hydration` | Creates SQL Instance with Kubernetes API, validates presence in ARM. | -| 10 | `direct-upload` | Validates billing upload in Direct Mode | -| 11 | `kube-rbac` | Ensures Kubernetes Service Account permissions for Arc Data Services matches least-privilege expectations. | -| 12 | `nonroot` | Ensures containers run as non-root user | -| 13 | `postgres` | Completes various Postgres creation, scaling, backup/restore tests. | -| 14 | `release-sanitychecks` | Sanity checks for month-to-month releases, such as SQL Server Build versions. | -| 15 | `sqlinstance` | Shorter version of `ci-sqlinstance`, for fast validations. 
| -| 16 | `sqlinstance-ad` | Tests creation of SQL Instances with Active Directory Connector. | -| 17 | `sqlinstance-credentialrotation` | Tests automated Credential Rotation for both General Purpose and Business Critical. | -| 18 | `sqlinstance-ha` | Various High Availability Stress tests, including pod reboots, forced failovers and suspensions. | -| 19 | `sqlinstance-tde` | Various Transparent Data Encryption tests. | -| 20 | `telemetry-elasticsearch` | Validates Log ingestion into Elasticsearch. | -| 21 | `telemetry-grafana` | Validates Grafana is reachable. | -| 22 | `telemetry-influxdb` | Validates Metric ingestion into InfluxDB. | -| 23 | `telemetry-kafka` | Various tests for Kafka using SSL, single/multi-broker setup. | -| 24 | `telemetry-monitorstack` | Tests Monitoring components, such as `Fluentbit` and `Collectd` are functional. | -| 25 | `telemetry-telemetryrouter` | Tests Open Telemetry. | -| 26 | `telemetry-webhook` | Tests Data Services Webhooks with valid and invalid calls. | -| 27 | `upgrade-arcdata` | Upgrades a full suite of SQL Instances (GP, BC 2 replica, BC 3 replica, with Active Directory) and upgrades from last month's release to latest build. | --As an example, for `sqlinstance-ha`, the following tests are performed: --- `test_critical_configmaps_present`: Ensures the ConfigMaps and relevant fields are present for a SQL Instance.-- `test_suspended_system_dbs_auto_heal_by_orchestrator`: Ensures if `master` and `msdb` are suspended by any means (in this case, user). Orchestrator maintenance reconcile auto-heals it.-- `test_suspended_user_db_does_not_auto_heal_by_orchestrator`: Ensures if a User Database is deliberately suspended by user, Orchestrator maintenance reconcile does not auto-heal it.-- `test_delete_active_orchestrator_twice_and_delete_primary_pod`: Deletes orchestrator pod multiple times, followed by the primary replica, and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.-- `test_delete_primary_pod`: Deletes primary replica and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.-- `test_delete_primary_and_orchestrator_pod`: Deletes primary replica and orchestrator pod and verifies all replicas are synchronized.-- `test_delete_primary_and_controller`: Deletes primary replica and data controller pod and verifies primary endpoint is accessible and the new primary replica is synchronized. Failover time expectations for 2 replica are relaxed.-- `test_delete_one_secondary_pod`: Deletes secondary replica and data controller pod and verifies all replicas are synchronized.-- `test_delete_two_secondaries_pods`: Deletes secondary replicas and data controller pod and verifies all replicas are synchronized.-- `test_delete_controller_orchestrator_secondary_replica_pods`:-- `test_failaway`: Forces AG failover away from current primary, ensures the new primary is not the same as the old primary. Verifies all replicas are synchronized.-- `test_update_while_rebooting_all_non_primary_replicas`: Tests Controller-driven updates are resilient with retries despite various turbulent circumstances.--> [!NOTE] -> Certain tests may require specific hardware, such as privileged Access to Domain Controllers for `ad` tests for Account and DNS entry creation - which may not be available in all environments looking to use the `arc-ci-launcher`. 
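If only a subset of these suites applies to your environment (for example, skipping the Active Directory suites when no domain controller is available), you can narrow the run by editing the space-separated suite lists in `.test.env` before deploying the launcher. A minimal sketch; the suite names below come from the table above and are only an illustration:

```bash
# In .test.env: run only the smoke and high-availability suites for a direct-mode launch
export TESTS_DIRECT="direct-crud direct-hydration sqlinstance sqlinstance-ha"

# Indirect-mode launches use the TESTS_INDIRECT list instead
export TESTS_INDIRECT="billing controldb kube-rbac"
```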
--## Examining Test Results --A sample storage container and file uploaded by the launcher: --![A screenshot of the launcher storage container.](media/automated-integration-testing/launcher-storage-container.png) --![A screenshot of the launcher tarball.](media/automated-integration-testing/launcher-tarball.png) --And the test results generated from the run: --![A screenshot of the launcher test results.](media/automated-integration-testing/launcher-test-results.png) --## Clean up resources --To delete the launcher, run: -```bash -kubectl delete -k arc_data_services/test/launcher/overlays/aks -``` --This cleans up the resource manifests deployed as part of the launcher. --## Related content --> [!div class="nextstepaction"] -> [Pre-release testing](preview-testing.md) |
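To pull the uploaded results down for local inspection, one option is a batch download with the Azure CLI against the same storage account and SAS configured in `.test.env`. This is a sketch, not part of the launcher itself; the container and file names below are placeholders:

```bash
# Download everything the launcher uploaded to the results container
az storage blob download-batch \
  --account-name <your-storage-account> \
  --source <logs-storage-container> \
  --destination ./launcher-results \
  --sas-token "<account-scoped-sas>"

# Unpack a result tarball to browse the junit results and logs
tar -xf ./launcher-results/<results-tarball>
```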
azure-arc | Azure Data Studio Dashboards | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/azure-data-studio-dashboards.md | - Title: Azure Data Studio dashboards -description: Azure Data Studio dashboards ------ Previously updated : 11/03/2021----# Azure Data Studio dashboards --[Azure Data Studio](/azure-data-studio/what-is-azure-data-studio) provides an experience similar to the Azure portal for viewing information about your Azure Arc resources. These views are called **dashboards** and have a layout and options similar to what you could see about a given resource in the Azure portal, but give you the flexibility of seeing that information locally in your environment in cases where you don't have a connection available to Azure. --## Connect to a data controller --### Prerequisites --- Download [Azure Data Studio](/azure-data-studio/download-azure-data-studio)-- Azure Arc extension is installed--### Connect --1. Open Azure Data Studio. -2. Select the **Connections** tab on the left. -3. Expand the panel called **Azure Arc Controllers**. -4. Select the **Connect Controller** button. -- Azure Data Studio opens a blade on the right side. --1. Enter the **Namespace** for the data controller. -- Azure Data Studio reads from the `kube.config` file in your default directory and lists the available Kubernetes cluster contexts. It selects the current cluster context. If this is the right cluster to connect to, use that namespace. -- If you need to retrieve the namespace where the Azure Arc data controller is deployed, you can run `kubectl get datacontrollers -A` on your Kubernetes cluster. --6. Optionally add a display name for the Azure Arc data controller in the input for **Name**. -7. Select **Connect**. ---After you connect to a data controller, you can view the dashboards. Azure Data Studio has dashboards for the data controller and any SQL managed instances or PostgreSQL server resources that you have. --## View the data controller dashboard --Right-click on the data controller in the Connections panel in the **Arc Controllers** expandable panel and choose **Manage**. --Here you can see details about the data controller resource such as name, region, connection mode, resource group, subscription, controller endpoint, and namespace. You can see a list of all of the managed database resources managed by the data controller as well. --You'll notice that the layout is similar to what you might see in the Azure portal. --Conveniently, you can launch the creation of a SQL managed instance or PostgreSQL server by clicking the + New Instance button. --You can also open the Azure portal in context to this data controller by clicking the Open in Azure portal button. --## View the SQL Managed Instance dashboards --If you have created some SQL Managed Instances, see them listed under **Connections** in the **Azure Data Controllers** expandable panel underneath the data controller that is managing them. --To view the SQL Managed Instance dashboard for a given instance, right-click on the instance and choose **Manage**. --The **Connection** panel prompts you for the login and password to connect to an instance. If you know the connection information you can enter it and choose **Connect**. If you don't know, choose **Cancel**. Either way, Azure Data Studio returns to the dashboard when the **Connection** panel closes. --On the **Overview** tab, view resource group, data controller, subscription ID, status, region, and other information. 
This location also provides links to the Grafana dashboard for viewing metrics or Kibana dashboard for viewing logs in the context of that SQL managed instance. --With a connection to the SQL managed instance, you can see additional information here. --You can delete the SQL managed instance from here or open the Azure portal to view the SQL managed instance in the Azure portal. --If you click on the **Connection Strings** tab, Azure Data Studio presents a list of pre-constructed connection strings for that instance. Copy and paste these strings into various other applications or code. --## View the PostgreSQL server dashboards --If the deployment includes PostgreSQL servers, Azure Data Studio lists them in the **Connections** panel in the **Azure Data Controllers** expandable panel underneath the data controller that is managing them. --To view the PostgreSQL server dashboard for a given server group, right-click on the server group and choose **Manage**. --On the **Overview** tab, review details about the server group such as resource group, data controller, subscription ID, status, region, and more. The tab also has links to the Grafana dashboard for viewing metrics or Kibana dashboard for viewing logs in the context of that server group. --You can delete the server group from here or open the Azure portal to view the server group in the Azure portal. --If you click on the **Connection Strings** tab on the left, Azure Data Studio provides pre-constructed connection strings for that server group. Copy and paste these strings into various other applications or code. --Select the **Properties** tab on the left to see additional details. --The **Resource health** tab on the left displays the current health of that server group. --The **Diagnose and solve problems** tab on the left launches the PostgreSQL troubleshooting notebook. --For Azure support, select the **New support request** tab. This launches the Azure portal in the context of the server group. Create an Azure support request from there. --## Related content --- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
azure-arc | Backup Controller Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-controller-database.md | - Title: Back up controller database -description: Explains how to back up the controller database for Azure Arc-enabled data services ------ Previously updated : 04/26/2023----# Back up and recover controller database --When you deploy Azure Arc data services, the Azure Arc Data Controller is one of the most critical components that is deployed. The functions of the data controller include: --- Provision, de-provision and update resources-- Orchestrate most of the activities for SQL Managed Instance enabled by Azure Arc such as upgrades, scale out etc. -- Capture the billing and usage information of each Arc SQL managed instance. --In order to perform above functions, the Data controller needs to store an inventory of all the current Arc SQL managed instances, billing, usage and the current state of all these SQL managed instances. All this data is stored in a database called `controller` within the SQL Server instance that is deployed into the `controldb-0` pod. --This article explains how to back up the controller database. --## Back up data controller database --As part of built-in capabilities, the Data controller database `controller` is automatically backed up every 5 minutes once backups are enabled. To enable backups: --- Create a `backups-controldb` `PersistentVolumeClaim` with a storage class that supports `ReadWriteMany` access:--```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: backups-controldb - namespace: <namespace> -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 15Gi - storageClassName: <storage-class> -``` --- Edit the `DataController` custom resource spec to include a `backups` storage definition:--```yaml -storage: - backups: - accessMode: ReadWriteMany - className: <storage-class> - size: 15Gi - data: - accessMode: ReadWriteOnce - className: managed-premium - size: 15Gi - logs: - accessMode: ReadWriteOnce - className: managed-premium - size: 10Gi -``` --The `.bak` files for the `controller` database are stored on the `backups` volume of the `controldb` pod at `/var/opt/backups/mssql`. --## Recover controller database --There are two types of recovery possible: --1. `controller` is corrupted and you just need to restore the database -1. the entire storage that contains the `controller` data and log files is corrupted/gone and you need to recover --### Corrupted controller database scenario --In this scenario, all the pods are up and running, you are able to connect to the `controldb` SQL Server, and there may be a corruption with the `controller` database. You just need to restore the database from a backup. --Follow these steps to restore the controller database from a backup, if the SQL Server is still up and running on the `controldb` pod, and you are able to connect to it: --1. Verify connectivity to SQL Server pod hosting the `controller` database. -- - First, retrieve the credentials for the secret. `controller-system-secret` is the secret that holds the credentials for the `system` user account that can be used to connect to the SQL instance. - Run the following command to retrieve the secret contents: - - ```console - kubectl get secret controller-system-secret --namespace [namespace] -o yaml - ``` -- For example: -- ```console - kubectl get secret controller-system-secret --namespace arcdataservices -o yaml - ``` -- - Decode the base64 encoded credentials. 
The contents of the yaml file of the secret `controller-system-secret` contain a `password` and `username`. You can use any base64 decoder tool to decode the contents of the `password`. - - Verify connectivity: With the decoded credentials, run a command such as `SELECT @@SERVERNAME` to verify connectivity to the SQL Server. -- ```powershell - kubectl exec controldb-0 -n <namespace> -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U system -P "<password>" -Q "SELECT @@SERVERNAME" - ``` -- For example: -- ```powershell - kubectl exec controldb-0 -n contosons -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U system -P "<password>" -Q "SELECT @@SERVERNAME" - ``` --1. Scale the controller ReplicaSet down to 0 replicas as follows: -- ```console - kubectl scale --replicas=0 rs/control -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=0 rs/control -n arcdataservices - ``` --1. Connect to the `controldb` SQL Server as `system` as described in step 1. --1. Delete the corrupted controller database using T-SQL: -- ```sql - DROP DATABASE controller - ``` --1. Restore the database from backup after the corrupted `controller` database is dropped. If the logical file names in your backup differ, check them with `RESTORE FILELISTONLY FROM DISK = '<controller backup file>.bak'` and adjust the `MOVE` clauses. For example: -- ```sql - RESTORE DATABASE controller FROM DISK = '/var/opt/backups/mssql/<controller backup file>.bak' - WITH MOVE 'controller' TO '/var/opt/mssql/data/controller.mdf' - ,MOVE 'controller_log' TO '/var/opt/mssql/data/controller_log.ldf' - ,RECOVERY; - GO - ``` - -1. Scale the controller ReplicaSet back up to 1 replica. -- ```console - kubectl scale --replicas=1 rs/control -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=1 rs/control -n arcdataservices - ``` --### Corrupted storage scenario --In this scenario, the storage hosting the data controller data and log files is corrupted, new storage has been provisioned, and you need to restore the controller database. --Follow these steps to restore the controller database from a backup with new storage for the `controldb` StatefulSet: --1. Ensure that you have a backup of the last known good state of the `controller` database. --2. Scale the controller ReplicaSet down to 0 replicas as follows: -- ```console - kubectl scale --replicas=0 rs/control -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=0 rs/control -n arcdataservices - ``` -3. Scale the `controldb` StatefulSet down to 0 replicas, as follows: -- ```console - kubectl scale --replicas=0 sts/controldb -n <namespace> - ``` -- For example: -- ```console - kubectl scale --replicas=0 sts/controldb -n arcdataservices - ``` --4. Create a Kubernetes secret named `controller-sa-secret` with the following YAML: -- ```yml - apiVersion: v1 - kind: Secret - metadata: - name: controller-sa-secret - namespace: <namespace> - type: Opaque - data: - password: <base64 encoded password> - ``` --5. Edit the `controldb` StatefulSet to include a `controller-sa-secret` volume and corresponding volume mount (`/var/run/secrets/mounts/credentials/mssql-sa-password`) in the `mssql-server` container, by using the `kubectl edit sts controldb -n <namespace>` command. --6. 
Create new data (`data-controldb`) and logs (`logs-controldb`) persistent volume claims for the `controldb` pod as follows: -- ```yml - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: data-controldb - namespace: <namespace> - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 15Gi - storageClassName: <storage class> - - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: logs-controldb - namespace: <namespace> - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - storageClassName: <storage class> - ``` --7. Scale the `controldb` StatefulSet back to 1 replica using: -- ```console - kubectl scale --replicas=1 sts/controldb -n <namespace> - ``` --8. Connect to the `controldb` SQL server as `sa` using the password in the `controller-sa-secret` secret created earlier. --9. Create a `system` login with sysadmin role using the password in the `controller-system-secret` kubernetes secret as follows: -- ```sql - CREATE LOGIN [system] WITH PASSWORD = '<password-from-secret>' - ALTER SERVER ROLE sysadmin ADD MEMBER [system] - ``` --10. Restore the backup using the `RESTORE` command as follows: -- ```sql - RESTORE DATABASE [controller] FROM DISK = N'/var/opt/backups/mssql/<controller backup file>.bak' WITH FILE = 1 - ``` --11. Create a `controldb-rw-user` login using the password in the `controller-db-rw-secret` secret `CREATE LOGIN [controldb-rw-user] WITH PASSWORD = '<password-from-secret>'` and associate it with the existing `controldb-rw-user` user in the controller DB `ALTER USER [controldb-rw-user] WITH LOGIN = [controldb-rw-user]`. --12. Disable the `sa` login using TSQL - `ALTER LOGIN [sa] DISABLE`. --13. Edit the `controldb` StatefulSet to remove the `controller-sa-secret` volume and corresponding volume mount. --14. Delete the `controller-sa-secret` secret. --16. Scale the controller ReplicaSet back up to 1 replica using the `kubectl scale` command. --## Related content --[Azure Data Studio dashboards](azure-data-studio-dashboards.md) |
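After either recovery path, it can be useful to confirm that the restored `controller` database is online before scaling the controller back up. A minimal sketch that reuses the `sqlcmd` pattern shown earlier in this article; substitute your namespace and the decoded credentials:

```bash
# Check that the controller database is present and ONLINE
kubectl exec controldb-0 -n <namespace> -c mssql-server -- /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U system -P "<password>" \
  -Q "SELECT name, state_desc FROM sys.databases WHERE name = 'controller'"
```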
azure-arc | Backup Restore Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-restore-postgresql.md | - Title: Automated backup for Azure Arc-enabled PostgreSQL server -description: Explains how to configure backups for Azure Arc-enabled PostgreSQL server ------ Previously updated : 03/12/2023----# Automated backup Azure Arc-enabled PostgreSQL servers --To enable automated backups, include the `--storage-class-backups` argument when you create an Azure Arc-enabled PostgreSQL server. Specify the retention period for backups with the `--retention-days` parameter. Use this parameter when you create or update an Arc-enabled PostgreSQL server. The retention period can be between 0 and 35 days. If backups are enabled but no retention period is specified, the default is seven days. --Additionally, if you set the retention period to zero, then automated backups are disabled. ---## Create server with automated backup --Create an Azure Arc-enabled PostgreSQL server with automated backups: --```azurecli -az postgres server-arc create -n <name> -k <namespace> --storage-class-backups <storage-class> --retention-days <number of days> --use-k8s -``` --## Update a server to set retention period --Update the backup retention period for an Azure Arc-enabled PostgreSQL server: --```azurecli -az postgres server-arc update -n pg01 -k test --retention-days <number of days> --use-k8s -``` --## Related content --- [Restore Azure Arc-enabled PostgreSQL servers](restore-postgresql.md)-- [Scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server. |
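As noted above, setting the retention period to zero disables automated backups. For example, to turn backups off on an existing server using the same `update` command shown in this article:

```azurecli
az postgres server-arc update -n pg01 -k test --retention-days 0 --use-k8s
```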
azure-arc | Change Postgresql Port | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/change-postgresql-port.md | - Title: Change the PostgreSQL port -description: Change the port on which the Azure Arc-enabled PostgreSQL server is listening. ------ Previously updated : 11/03/2021-----# Change the port on which the server group is listening --To change the port, edit the server group. For example, run the following command: --```azurecli - az postgres server-arc update -n <server name> --port <desired port number> --k8s-namespace <namespace> --use-k8s -``` --If the name of your server group is _postgres01_ and you would like it to listen on port _866_, run the following command: --```azurecli - az postgres server-arc update -n postgres01 --port 866 --k8s-namespace arc --use-k8s -``` --## Verify that the port was changed --To verify that the port was changed, run the following command to show the configuration of your server group: --```azurecli -az postgres server-arc show -n <server name> --k8s-namespace <namespace> --use-k8s -``` --In the output of that command, look at the port number displayed for the item "port" in the "services" section of the specifications of your server group. --Alternatively, you can verify in the item `externalEndpoint` of the status section of the specifications of your server group that the IP address is followed by the port number you configured. --As an illustration, to continue the example above, run the command: --```azurecli -az postgres server-arc show -n postgres01 --k8s-namespace arc --use-k8s -``` --The command returns port 866: --```output -"services": { - "primary": { - "port": 866, - "type": "LoadBalancer" - } - } -``` --In addition, note the value for `primaryEndpoint`. --```output -"primaryEndpoint": "12.345.67.890:866", -``` --## Related content -- Read about [how to connect to your server group](get-connection-endpoints-and-connection-strings-postgresql-server.md).-- Read about how you can configure other aspects of your server group in the How-to\Manage\Configure & scale section of the documentation. |
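To confirm the port change described above end to end, you can also connect a client to the new port. The following is a minimal sketch that assumes the default `postgres` admin user and the `primaryEndpoint` IP shown in the example output; substitute your own endpoint and credentials:

```console
# Connect with psql to the server group on the new port
psql -h 12.345.67.890 -p 866 -U postgres
```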
azure-arc | Clean Up Past Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/clean-up-past-installation.md | - Title: Clean up past installations -description: Describes how to remove Azure Arc-enabled data controller and associated resources from past installations. ------ Previously updated : 07/11/2022----# Clean up from past installations --If you installed the data controller in the past and later deleted the data controller, there may be some cluster level objects that would still need to be deleted. --This article describes how to delete these cluster level objects. --## Replace values in sample script --For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace. Substitute the name of the namespace the data controller was deployed in into `{namespace}`. If unsure, get the name of the `mutatingwebhookconfiguration` using `kubectl get clusterrolebinding`. --## Run script to remove artifacts --Run the following commands to delete the data controller cluster level objects: --> [!NOTE] -> Not all of these objects will exist in your environment. The objects in your environment depend on which version of the Arc data controller was installed --```console -# Clean up azure arc data service artifacts --# Custom resource definitions (CRD) -kubectl delete crd datacontrollers.arcdata.microsoft.com -kubectl delete crd postgresqls.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com -kubectl delete crd dags.sql.arcdata.microsoft.com -kubectl delete crd exporttasks.tasks.arcdata.microsoft.com -kubectl delete crd monitors.arcdata.microsoft.com -kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com -kubectl delete crd failovergroups.sql.arcdata.microsoft.com -kubectl delete crd kafkas.arcdata.microsoft.com -kubectl delete crd postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com -kubectl delete crd sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com -kubectl delete crd telemetrycollectors.arcdata.microsoft.com -kubectl delete crd telemetryrouters.arcdata.microsoft.com --# Substitute the name of the namespace the data controller was deployed in into {namespace}. --# Cluster roles and role bindings -kubectl delete clusterrole arcdataservices-extension -kubectl delete clusterrole arc:cr-arc-metricsdc-reader -kubectl delete clusterrole arc:cr-arc-dc-watch -kubectl delete clusterrole cr-arc-webhook-job -kubectl delete clusterrole {namespace}:cr-upgrade-worker -kubectl delete clusterrole {namespace}:cr-deployer -kubectl delete clusterrolebinding {namespace}:crb-arc-metricsdc-reader -kubectl delete clusterrolebinding {namespace}:crb-arc-dc-watch -kubectl delete clusterrolebinding crb-arc-webhook-job -kubectl delete clusterrolebinding {namespace}:crb-upgrade-worker -kubectl delete clusterrolebinding {namespace}:crb-deployer --# Substitute the name of the namespace the data controller was deployed in into {namespace}. 
If unsure, find the namespace in the names returned by 'kubectl get clusterrolebinding' --# API services -# Up to May 2021 release -kubectl delete apiservice v1alpha1.arcdata.microsoft.com -kubectl delete apiservice v1alpha1.sql.arcdata.microsoft.com --# June 2021 release -kubectl delete apiservice v1beta1.arcdata.microsoft.com -kubectl delete apiservice v1beta1.sql.arcdata.microsoft.com --# GA/July 2021 release -kubectl delete apiservice v1.arcdata.microsoft.com -kubectl delete apiservice v1.sql.arcdata.microsoft.com --# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get mutatingwebhookconfiguration' -kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{namespace} -``` --## Related content --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
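After the cleanup script completes, it can help to confirm that no Arc data services cluster-level objects remain. The following commands are a verification sketch; no matching output means the cleanup succeeded:

```console
# Look for any leftover Arc data services objects at the cluster level
kubectl get crd | grep arcdata.microsoft.com
kubectl get clusterrole,clusterrolebinding | grep {namespace}
kubectl get apiservice | grep arcdata.microsoft.com
kubectl get mutatingwebhookconfiguration | grep arcdata.microsoft.com
```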
azure-arc | Configure Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md | - Title: Configure SQL Managed Instance enabled by Azure Arc -description: Configure SQL Managed Instance enabled by Azure Arc. --- Previously updated : 12/05/2023----- - devx-track-azurecli --# Configure SQL Managed Instance enabled by Azure Arc --This article explains how to configure SQL Managed Instance enabled by Azure Arc. --## Configure resources such as cores and memory --### Configure using CLI --To update the configuration of an instance with the CLI, run the following command to see the configuration options: --```azurecli -az sql mi-arc update --help -``` --To update the available memory and cores for an instance, use: --```azurecli -az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s -``` --The following example sets the CPU core and memory requests and limits: --```azurecli -az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n sqlinstance1 --k8s-namespace arc --use-k8s -``` --To view the changes made to the instance, you can use the following command to view the configuration yaml file: --```azurecli -az sql mi-arc show -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s -``` --## Configure readable secondaries --When you deploy SQL Managed Instance enabled by Azure Arc in the `BusinessCritical` service tier with 2 or more replicas, by default, one secondary replica is automatically configured as `readableSecondary`. This setting can be changed, either to add or to remove the readable secondaries, as follows: --```azurecli -az sql mi-arc update --name <sqlmi name> --readable-secondaries <value> --k8s-namespace <namespace> --use-k8s -``` --For example, the following command resets the readable secondaries to 0: --```azurecli -az sql mi-arc update --name sqlmi1 --readable-secondaries 0 --k8s-namespace mynamespace --use-k8s -``` --## Configure replicas --You can also scale up or down the number of replicas deployed in the `BusinessCritical` service tier as follows: --```azurecli -az sql mi-arc update --name <sqlmi name> --replicas <value> --k8s-namespace <namespace> --use-k8s -``` --For example, the following command scales down the number of replicas from 3 to 2: --```azurecli -az sql mi-arc update --name sqlmi1 --replicas 2 --k8s-namespace mynamespace --use-k8s -``` --> [!NOTE] -> If you scale down from 2 replicas to 1 replica, you might run into a conflict with the pre-configured `--readable-secondaries` setting. You can first edit `--readable-secondaries` before scaling down the replicas. --## Configure server options --You can configure certain server configuration settings for SQL Managed Instance enabled by Azure Arc either during or after creation time. This section describes how to configure settings like "Ad Hoc Distributed Queries" or "backup compression default". --Currently the following server options can be configured: -- Ad Hoc Distributed Queries-- Default Trace Enabled-- Database Mail XPs-- Backup compression default-- Cost threshold for parallelism-- Optimize for ad hoc workloads--> [!NOTE] -> - Currently these options can only be specified via YAML file, either during SQL Managed Instance creation or post deployment. -> -> - The SQL managed instance image tag must be at least version v1.19.x. 
--Add the following to your YAML file during deployment to configure any of these options. --```yml -spec: - serverConfigurations: - - name: "Ad Hoc Distributed Queries" - value: 1 - - name: "Default Trace Enabled" - value: 0 - - name: "Database Mail XPs" - value: 1 - - name: "backup compression default" - value: 1 - - name: "cost threshold for parallelism" - value: 50 - - name: "optimize for ad hoc workloads" - value: 1 -``` --If you already have an existing SQL managed instance enabled by Azure Arc, you can run `kubectl edit sqlmi <sqlminame> -n <namespace>` and add the above options into the spec. --Example YAML file: --```yml -apiVersion: sql.arcdata.microsoft.com/v13 -kind: SqlManagedInstance -metadata: - name: sql1 - annotations: - exampleannotation1: exampleannotationvalue1 - exampleannotation2: exampleannotationvalue2 - labels: - examplelabel1: examplelabelvalue1 - examplelabel2: examplelabelvalue2 -spec: - dev: true #options: [true, false] - licenseType: LicenseIncluded #options: [LicenseIncluded, BasePrice]. BasePrice is used for Azure Hybrid Benefits. - tier: GeneralPurpose #options: [GeneralPurpose, BusinessCritical] - serverConfigurations: - - name: "Ad Hoc Distributed Queries" - value: 1 - - name: "Default Trace Enabled" - value: 0 - - name: "Database Mail XPs" - value: 1 - - name: "backup compression default" - value: 1 - - name: "cost threshold for parallelism" - value: 50 - - name: "optimize for ad hoc workloads" - value: 1 - security: - adminLoginSecret: sql1-login-secret - scheduling: - default: - resources: - limits: - cpu: "2" - memory: 4Gi - requests: - cpu: "1" - memory: 2Gi - - primary: - type: LoadBalancer - storage: - backups: - volumes: - - className: azurefile # Backup volumes require a ReadWriteMany (RWX) capable storage class - size: 5Gi - data: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi - datalogs: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi - logs: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi -``` --## Enable SQL Server Agent --SQL Server agent is disabled during a default deployment of SQL Managed Instance enabled by Azure Arc. It can be enabled by running the following command: --```azurecli -az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --agent-enabled true -``` --As an example: --```azurecli -az sql mi-arc update -n sqlinstance1 --k8s-namespace arc --use-k8s --agent-enabled true -``` --## Enable trace flags --Trace flags can be enabled as follows: --```azurecli -az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --trace-flags "3614,1234" -``` |
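After updating server options, you can confirm that a setting was applied by querying `sys.configurations` on the instance. This is a sketch that assumes you can reach the instance's external endpoint with `sqlcmd` and an admin login:

```console
# Verify a server option, for example "cost threshold for parallelism"
sqlcmd -S <endpoint>,<port> -U <admin-user> -Q "SELECT name, value, value_in_use FROM sys.configurations WHERE name = 'cost threshold for parallelism';"
```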
azure-arc | Configure Security Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-security-postgresql.md | - Title: Configure security for your Azure Arc-enabled PostgreSQL server -description: Configure security for your Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Configure security for your Azure Arc-enabled PostgreSQL server --This document describes various aspects related to security of your server group: --- Encryption at rest-- Postgres roles and users management- - General perspectives - - Change the password of the _postgres_ administrative user -- Audit---## Encryption at rest --You can implement encryption at rest either by encrypting the disks on which you store your databases and/or by using database functions to encrypt the data you insert or update. --### Hardware: Linux host volume encryption --Implement system data encryption to secure any data that resides on the disks used by your Azure Arc-enabled Data Services setup. You can read more about this topic: --- [Data encryption at rest](https://wiki.archlinux.org/index.php/Data-at-rest_encryption) on Linux in general -- Disk encryption with LUKS `cryptsetup` command (Linux)(https://www.cyberciti.biz/security/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/) specifically. Since Azure Arc-enabled Data Services runs on the physical infrastructure that you provide, you are in charge of securing the infrastructure.--### Software: Use the PostgreSQL `pgcrypto` extension in your server group --In addition of encrypting the disks used to host your Azure Arc setup, you can configure your Azure Arc-enabled PostgreSQL server to expose mechanisms that your applications can use to encrypt data in your database(s). The `pgcrypto` extension is part of the `contrib` extensions of Postgres and is available in your Azure Arc-enabled PostgreSQL server. You find details about the `pgcrypto` extension [here](https://www.postgresql.org/docs/current/pgcrypto.html). -In summary, with the following commands, you enable the extension, you create it and you use it: --#### Create the `pgcrypto` extension --Connect to your server group with the client tool of your choice and run the standard PostgreSQL query: --```console -CREATE EXTENSION pgcrypto; -``` --> Find details [here](get-connection-endpoints-and-connection-strings-postgresql-server.md) about how to connect. --#### Verify the list the extensions ready to use in your server group --You can verify that the `pgcrypto` extension is ready to use by listing the extensions available in your server group. -Connect to your server group with the client tool of your choice and run the standard PostgreSQL query: --```console -select * from pg_extension; -``` -You should see `pgcrypto` if you enabled and created it with the commands indicated above. --#### Use the `pgcrypto` extension --Now you can adjust the code your applications so that they use any of the functions offered by `pgcrypto`: --- General hashing functions-- Password hashing functions-- PGP encryption functions-- Raw encryption functions-- Random-data functions--For example, to generate hash values. 
Run the command: --```console -select crypt('Les sanglots longs des violons de l_automne', gen_salt('md5')); -``` --Returns the following hash: --```console - crypt -- $1$/9ACBYOV$z52PAGjQ5WTU9xvEECBNv/ -``` --Or, for example: --```console -select hmac('Les sanglots longs des violons de l_automne', 'md5', 'sha256'); -``` --Returns the following hash: --```console - hmac - \xd4e4790b69d2cc8dbce3385ee63272bc7760f1603640bb211a7b864e695570c5 -``` --Or, for example, to store encrypted data like a password: --- An application stores secrets in the following table:-- ```console - create table mysecrets(USERid int, USERname char(255), USERpassword char(512)); - ``` --- Encrypt their password when creating a user:-- ```console - insert into mysecrets values (1, 'Me', crypt('MySecretPasswrod', gen_salt('md5'))); - ``` --- Notice that the password is encrypted:-- ```console - select * from mysecrets; - ``` --Output: --```output -- USERid: 1-- USERname: Me-- USERpassword: $1$Uc7jzZOp$NTfcGo7F10zGOkXOwjHy31-``` --When you connect with the application and pass a password, it looks up in the `mysecrets` table and returns the name of the user if there is a match between the password that is provided to the application and the passwords stored in the table. For example: ---- Pass the wrong password:- - ```console - select USERname from mysecrets where (USERpassword = crypt('WrongPassword', USERpassword)); - ``` -- Output -- ```output - USERname - - (0 rows) - ``` --- Pass the correct password:-- ```console - select USERname from mysecrets where (USERpassword = crypt('MySecretPasswrod', USERpassword)); - ``` -- Output: -- ```output - USERname - - Me - (1 row) - ``` --This small example demonstrates that you can encrypt data at rest (store encrypted data) in Azure Arc-enabled PostgreSQL server using the Postgres `pgcrypto` extension and your applications can use functions offered by `pgcrypto` to manipulate this encrypted data. --## Postgres roles and users management --### General perspectives --To configure roles and users in your Azure Arc-enabled PostgreSQL server, use the standard Postgres way to manage roles and users. For more details, read [here](https://www.postgresql.org/docs/12/user-manag.html). --## Audit --For audit scenarios please configure your server group to use the `pgaudit` extensions of Postgres. For more details about `pgaudit` see [`pgAudit` GitHub project](https://github.com/pgaudit/pgaudit/blob/master/README.md). To enable the `pgaudit` extension in your server group read [Use PostgreSQL extensions](using-extensions-in-postgresql-server.md). --## Use SSL connection --SSL is required for client connections. In connection string, the SSL mode parameter should not be disabled. [Form connection strings](get-connection-endpoints-and-connection-strings-postgresql-server.md#form-connection-strings). --## Related content -- See [`pgcrypto` extension](https://www.postgresql.org/docs/current/pgcrypto.html)-- See [Use PostgreSQL extensions](using-extensions-in-postgresql-server.md) |
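To complement the SSL requirement called out above, the following connection sketch shows `psql` requiring an encrypted connection. It assumes the default `postgres` admin user; adjust the host, port, database, and user for your environment:

```console
# Require an SSL-encrypted connection; the connection fails if SSL cannot be negotiated
psql "host=<endpoint IP> port=<port> dbname=postgres user=postgres sslmode=require"
```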
azure-arc | Configure Transparent Data Encryption Manually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md | - Title: Encrypt a database with transparent data encryption manually in SQL Managed Instance enabled by Azure Arc -description: How-to guide to turn on transparent data encryption in an SQL Managed Instance enabled by Azure Arc ------- Previously updated : 05/22/2022----# Encrypt a database with transparent data encryption on SQL Managed Instance enabled by Azure Arc --This article describes how to enable transparent data encryption on a database created in a SQL Managed Instance enabled by Azure Arc. In this article, the term *managed instance* refers to a deployment of SQL Managed Instance enabled by Azure Arc. --## Prerequisites --Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created and connect to it. --- [Create a SQL Managed Instance enabled by Azure Arc](./create-sql-managed-instance.md)-- [Connect to SQL Managed Instance enabled by Azure Arc](./connect-managed-instance.md)--## Turn on transparent data encryption on a database in the managed instance --Turning on transparent data encryption in the managed instance follows the same steps as SQL Server on-premises. Follow the steps described in [SQL Server's transparent data encryption guide](/sql/relational-databases/security/encryption/transparent-data-encryption#enable-tde). --After you create the necessary credentials, back up any newly created credentials. --## Back up a transparent data encryption credential --When you back up credentials from the managed instance, the credentials are stored within the container. To store credentials on a persistent volume, specify the mount path in the container. For example, `var/opt/mssql/data`. The following example backs up a certificate from the managed instance: --> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. --1. Back up the certificate from the container to `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- BACKUP CERTIFICATE <cert-name> TO FILE = '<cert-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` -- Example: -- ```sql - USE master; - GO -- BACKUP CERTIFICATE MyServerCert TO FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` --2. Copy the certificate from the container to your file system. --### [Windows](#tab/windows) -- ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-certificate-path> > <local-certificate-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.crt > $HOME\sqlcerts\servercert.crt - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt $HOME/sqlcerts/servercert.crt - ``` ----3. Copy the private key from the container to your file system. 
--### [Windows](#tab/windows) - ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-private-key-path> > <local-private-key-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.key > $HOME\sqlcerts\servercert.key - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key $HOME/sqlcerts/servercert.key - ``` ----4. Delete the certificate and private key from the container. -- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path> - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Restore a transparent data encryption credential to a managed instance --Similar to above, to restore the credentials, copy them into the container and run the corresponding T-SQL afterwards. --> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. --1. Copy the certificate from your file system to the container. -### [Windows](#tab/windows) - ```console - type <local-certificate-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-certificate-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.crt | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.crt - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt - ``` ----2. Copy the private key from your file system to the container. -### [Windows](#tab/windows) - ```console - type <local-private-key-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-private-key-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.key | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.key - ``` --### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key - ``` ----3. Create the certificate using file paths from `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- CREATE CERTIFICATE <certicate-name> - FROM FILE = '<certificate-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` -- Example: -- ```sql - USE master; - GO -- CREATE CERTIFICATE MyServerCertRestored - FROM FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` --4. Delete the certificate and private key from the container. 
-- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path> - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Related content --[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) |
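After restoring the certificate and re-creating the database encryption key, you can confirm which databases are encrypted. This is a sketch that assumes `sqlcmd` connectivity to the instance endpoint with an admin login:

```console
# List databases and their encryption state (is_encrypted = 1 means TDE is on)
sqlcmd -S <endpoint>,<port> -U <admin-user> -Q "SELECT name, is_encrypted FROM sys.databases;"
```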
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | - Title: Turn on transparent data encryption in SQL Managed Instance enabled by Azure Arc (preview) -description: How-to guide to turn on transparent data encryption in an SQL Managed Instance enabled by Azure Arc (preview) ------- Previously updated : 06/06/2023----# Enable transparent data encryption on SQL Managed Instance enabled by Azure Arc (preview) --This article describes how to enable and disable transparent data encryption (TDE) at-rest on a SQL Managed Instance enabled by Azure Arc. In this article, the term *managed instance* refers to a deployment of SQL Managed Instance enabled by Azure Arc and enabling/disabling TDE will apply to all databases running on a managed instance. --For more info on TDE, please refer to [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption). --Turning on the TDE feature does the following: --- All existing databases will now be automatically encrypted.-- All newly created databases will get automatically encrypted.---## Prerequisites --Before you proceed with this article, you must have a SQL Managed Instance enabled by Azure Arc resource created and connect to it. --- [Create a SQL Managed Instance enabled by Azure Arc](./create-sql-managed-instance.md)-- [Connect to SQL Managed Instance enabled by Azure Arc](./connect-managed-instance.md)--## Limitations --The following limitations apply when you enable automatic TDE: --- Only General Purpose Tier is supported.-- Failover groups aren't supported.---## Create a managed instance with TDE enabled (Azure CLI) --The following example creates a SQL Managed Instance enabled by Azure Arc with one replica, TDE enabled: --```azurecli -az sql mi-arc create --name sqlmi-tde --k8s-namespace arc --tde-mode ServiceManaged --use-k8s -``` --## Turn on TDE on the managed instance --When TDE is enabled on Arc-enabled SQL Managed Instance, the data service automatically does the following tasks: --1. Adds the service-managed database master key in the `master` database. -2. Adds the service-managed certificate protector. -3. Adds the associated Database Encryption Keys (DEK) on all databases on the managed instance. -4. Enables encryption on all databases on the managed instance. --You can set SQL Managed Instance enabled by Azure Arc TDE in one of two modes: --- Service-managed-- Customer-managed--In service-managed mode, TDE requires the managed instance to use a service-managed database master key as well as the service-managed server certificate. These credentials are automatically created when service-managed TDE is enabled. --In customer-managed mode, TDE uses a service-managed database master key and uses keys you provide for the server certificate. To configure customer-managed mode: --1. Create a certificate. -1. Store the certificate as a secret in the same Kubernetes namespace as the instance. --### Enable --# [Service-managed](#tab/service-managed) --The following section explains how to enable TDE in service-managed mode. --# [Customer-managed](#tab/customer-managed) --The following section explains how to enable TDE in customer-managed mode. 
----# [Azure CLI](#tab/azure-cli/service-managed) --To enable TDE in service managed mode, run the following command: --```azurecli -az sql mi-arc update --tde-mode ServiceManaged -``` --# [Kubernetes native tools](#tab/kubernetes-native/service-managed) --To enable TDE in service-managed mode, run kubectl patch to enable service-managed TDE: --```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' -``` --Example: --```console -kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' -``` --# [Azure CLI](#tab/azure-cli/customer-managed) --To enable TDE in customer-managed mode with Azure CLI: --1. Create a certificate. -- ```console - openssl req -x509 -newkey rsa:2048 -nodes -keyout <key-file> -days 365 -out <cert-file> - ``` --1. Create a secret for the certificate. -- > [!IMPORTANT] - > Store the secret in the same namespace as the managed instance -- ```console - kubectl create secret generic <tde-secret-name> --from-literal=privatekey.pem="$(cat <key-file>)" --from-literal=certificate.pem="$(cat <cert-file>) --namespace <namespace>" - ``` --1. Update and run the following example to enable customer-managed TDE: -- ```azurecli - az sql mi-arc update --tde-mode CustomerManaged --tde-protector-private-key-file <key-file> --tde-protector-public-key-file <cert-file> - ``` --# [Kubernetes native tools](#tab/kubernetes-native/customer-managed) --To enable TDE in customer-managed mode: --1. Create a certificate. -- ```console - openssl req -x509 -newkey rsa:2048 -nodes -keyout <key-file> -days 365 -out <cert-file> - ``` --1. Create a secret for the certificate. -- > [!IMPORTANT] - > Store the secret in the same namespace as the managed instance -- ```console - kubectl create secret generic <tde-secret-name> --from-literal=privatekey.pem="$(cat <key-file>)" --from-literal=certificate.pem="$(cat <cert-file>) --namespace <namespace>" - ``` --1. Run `kubectl patch ...` to enable customer-managed TDE -- ```console - kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "CustomerManaged", "protectorSecret": "<tde-secret-name>" } } } }' - ``` -- Example: -- ```console - kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "CustomerManaged", "protectorSecret": "sqlmi-tde-protector-cert-secret" } } } }' - ``` -----## Turn off TDE on the managed instance --When TDE is disabled on Arc-enabled SQL Managed Instance, the data service automatically does the following tasks: --1. Disables encryption on all databases on the managed instance. -2. Drops the associated DEKs on all databases on the managed instance. -3. Drops the service-managed certificate protector. -4. Drops the service-managed database master key in the `master` database. --# [Azure CLI](#tab/azure-cli) --To disable TDE: --```azurecli -az sql mi-arc update --tde-mode Disabled -``` --# [Kubernetes native tools](#tab/kubernetes-native) --Run kubectl patch to disable service-managed TDE. 
--```console -kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" } } } }' -``` --Example: -```console -kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" } } } }' -``` -----## Back up a TDE credential --When you back up credentials from the managed instance, the credentials are stored within the container. To store credentials on a persistent volume, specify the mount path in the container. For example, `var/opt/mssql/data`. The following example backs up a certificate from the managed instance: --> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. --1. Back up the certificate from the container to `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- BACKUP CERTIFICATE <cert-name> TO FILE = '<cert-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` -- Example: -- ```sql - USE master; - GO -- BACKUP CERTIFICATE MyServerCert TO FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); - ``` --2. Copy the certificate from the container to your file system. -- ### [Windows](#tab/windows) -- ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-certificate-path> > <local-certificate-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.crt > $HOME\sqlcerts\servercert.crt - ``` -- ### [Linux](#tab/linux) - ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt $HOME/sqlcerts/servercert.crt - ``` -- --3. Copy the private key from the container to your file system. -- ### [Windows](#tab/windows) -- ```console - kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-private-key-path> > <local-private-key-path> - ``` -- Example: -- ```console - kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.key > $HOME\sqlcerts\servercert.key - ``` -- ### [Linux](#tab/linux) -- ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key $HOME/sqlcerts/servercert.key - ``` -- --4. Delete the certificate and private key from the container. -- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path> - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Restore a TDE credential to a managed instance --Similar to above, to restore the credentials, copy them into the container and run the corresponding T-SQL afterwards. ----> [!NOTE] -> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. 
-> To restore database backups that have been taken before enabling TDE, you would need to disable TDE on the SQL Managed Instance, restore the database backup and enable TDE again. --1. Copy the certificate from your file system to the container. -- ### [Windows](#tab/windows) -- ```console - type <local-certificate-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-certificate-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.crt | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.crt - ``` -- ### [Linux](#tab/linux) -- ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt - ``` -- --2. Copy the private key from your file system to the container. -- # [Windows](#tab/windows) - - ```console - type <local-private-key-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-private-key-path> - ``` -- Example: -- ```console - type $HOME\sqlcerts\servercert.key | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.key - ``` -- ### [Linux](#tab/linux) -- ```console - kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path> - ``` -- Example: -- ```console - kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key - ``` - --3. Create the certificate using file paths from `/var/opt/mssql/data`. -- ```sql - USE master; - GO -- CREATE CERTIFICATE <certicate-name> - FROM FILE = '<certificate-path>' - WITH PRIVATE KEY ( FILE = '<private-key-path>', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` -- Example: -- ```sql - USE master; - GO -- CREATE CERTIFICATE MyServerCertRestored - FROM FILE = '/var/opt/mssql/data/servercert.crt' - WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', - DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); - ``` --4. Delete the certificate and private key from the container. -- ```console - kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path> - ``` -- Example: -- ```console - kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" - ``` --## Related content --[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) |
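You can check which TDE mode is currently set on the custom resource without connecting to SQL. The following sketch reads the same `spec.security.transparentDataEncryption` section that the `kubectl patch` commands above modify:

```console
# Show the current TDE configuration on the SqlManagedInstance resource
kubectl get sqlmi <sqlmi-name> --namespace <namespace> -o jsonpath='{.spec.security.transparentDataEncryption}'
```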
azure-arc | Connect Active Directory Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-active-directory-sql-managed-instance.md | - Title: Connect to AD-integrated SQL Managed Instance enabled by Azure Arc -description: Connect to AD-integrated SQL Managed Instance enabled by Azure Arc ------ Previously updated : 10/11/2022----# Connect to AD-integrated SQL Managed Instance enabled by Azure Arc --This article describes how to connect to SQL Managed Instance endpoint using Active Directory (AD) authentication. Before you proceed, make sure you have an AD-integrated SQL Managed Instance enabled by Azure Arc deployed already. --See [Tutorial ΓÇô Deploy AD-integrated SQL Managed Instance](deploy-active-directory-sql-managed-instance.md) to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication enabled. --> [!NOTE] -> Ensure that a DNS record for the SQL endpoint is created in Active Directory DNS servers before continuing on this page. --## Create Active Directory logins in SQL Managed Instance --Once SQL Managed Instance is successfully deployed, you will need to provision Active Directory logins in SQL Server. --To provision logins, first connect to the SQL Managed Instance using the SQL login with administrative privileges and run the following T-SQL: --```sql -CREATE LOGIN [<NetBIOS domain name>\<AD account name>] FROM WINDOWS; -GO -``` --The following example creates a login for an Active Directory account named `admin`, in the domain named `contoso.local`, with NetBIOS domain name as `CONTOSO`: --```sql -CREATE LOGIN [CONTOSO\admin] FROM WINDOWS; -GO -``` --## Connect to SQL Managed Instance enabled by Azure Arc --From your domain joined Windows-based client machine or a Linux-based domain aware machine, you can use `sqlcmd` utility, or open [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio (ADS)](/azure-data-studio/download-azure-data-studio) to connect to the instance with AD authentication. --A domain-aware Linux-based machine is one where you are able to use Kerberos authentication using kinit. Such machine should have /etc/krb5.conf file set to point to the Active Directory domain (realm) being used. It should also have /etc/resolv.conf file set such that one can run DNS lookups against the Active Directory domain. ---### Connect from Linux/Mac OS --To connect from a Linux/Mac OS client, authenticate to Active Directory using the kinit command and then use sqlcmd tool to connect to the SQL Managed Instance. --```console -kinit <username>@<REALM> -sqlcmd -S <Endpoint DNS name>,<Endpoint port number> -E -``` --For example, to connect with the CONTOSO\admin account to the SQL managed instance with endpoint `sqlmi.contoso.local` at port `31433`, use the following command: --```console -kinit admin@CONTOSO.LOCAL -sqlcmd -S sqlmi.contoso.local,31433 -E -``` --In the example, `-E` specifies Active Directory integrated authentication. --## Connect SQL Managed Instance from Windows --To log in to SQL Managed Instance with your current Windows Active Directory login, run the following command: --```console -sqlcmd -S <DNS name for master instance>,31433 -E -``` --## Connect to SQL Managed Instance from SSMS --![Connect with SSMS](media/active-directory-deployment/connect-with-ssms.png) --## Connect to SQL Managed Instance from ADS --![Connect with ADS](media/active-directory-deployment/connect-with-ads.png) |
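If the Active Directory account needs more than the default permissions, you can grant it a server role after creating the login. The following is an illustrative sketch only (granting `sysadmin` is rarely appropriate in production); it assumes you are connected with a SQL login that has administrative privileges:

```console
# Grant the new AD login a server role from a client machine
sqlcmd -S sqlmi.contoso.local,31433 -U <sql-admin-login> -Q "ALTER SERVER ROLE sysadmin ADD MEMBER [CONTOSO\admin];"
```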
azure-arc | Connect Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-managed-instance.md | - Title: Connect to SQL Managed Instance enabled by Azure Arc -description: Connect to SQL Managed Instance enabled by Azure Arc ------ Previously updated : 07/30/2021---# Connect to SQL Managed Instance enabled by Azure Arc --This article explains how you can connect to your SQL Managed Instance enabled by Azure Arc. ---## View SQL Managed Instance enabled by Azure Arc --To view instance and the external endpoints, use the following command: --```azurecli -az sql mi-arc list --k8s-namespace <namespace> --use-k8s -o table -``` --Output should look like this: --```console -Name PrimaryEndpoint Replicas State - - - - -sqldemo 10.240.0.107,1433 1/1 Ready -``` --If you are using AKS or kubeadm or OpenShift etc., you can copy the external IP and port number from here and connect to it using your favorite tool for connecting to a SQL Sever/Azure SQL instance such as Azure Data Studio or SQL Server Management Studio. However, if you are using the quick start VM, see below for special information about how to connect to that VM from outside of Azure. --> [!NOTE] -> Your corporate policies may block access to the IP and port, especially if this is created in the public cloud. --## Connect --Connect with Azure Data Studio, SQL Server Management Studio, or SQLCMD --Open Azure Data Studio and connect to your instance with the external endpoint IP address and port number above. If you are using an Azure VM you will need the _public_ IP address, which is identifiable using the [Special note about Azure virtual machine deployments](#special-note-about-azure-virtual-machine-deployments). --For example: --- Server: 52.229.9.30,30913-- Username: sa-- Password: your specified SQL password at provisioning time--> [!NOTE] -> You can use Azure Data Studio [view the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards). --> [!NOTE] -> In order to connect to a managed instance that was created using a Kubernetes manifest, the username and password need to be provided to sqlcmd in base64 encoded form. --To connect using SQLCMD or Linux or Windows you can use a command like this. Enter the SQL password when prompted: --```bash -sqlcmd -S 52.229.9.30,30913 -U sa -``` --## Special note about Azure virtual machine deployments --If you are using an Azure virtual machine, then the endpoint IP address will not show the public IP address. To locate the external IP address, use the following command: --```azurecli -az network public-ip list -g azurearcvm-rg --query "[].{PublicIP:ipAddress}" -o table -``` --You can then combine the public IP address with the port to make your connection. --You may also need to expose the port of the sql instance through the network security gateway (NSG). To allow traffic through the (NSG) you will need to add a rule which you can do using the following command. --To set a rule you will need to know the name of your NSG which you can find out using the command below: --```azurecli -az network nsg list -g azurearcvm-rg --query "[].{NSGName:name}" -o table -``` --Once you have the name of the NSG, you can add a firewall rule using the following command. The example values here create an NSG rule for port 30913 and allows connection from **any** source IP address. This is not a security best practice! 
You can lock things down better by specifying a -source-address-prefixes value that is specific to your client IP address or an IP address range that covers your team's or organization's IP addresses. --Replace the value of the `--destination-port-ranges` parameter below with the port number you got from the `az sql mi-arc list` command above. --```azurecli -az network nsg rule create -n db_port --destination-port-ranges 30913 --source-address-prefixes '*' --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow port through for db access' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*' -``` --## Related content --- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)-- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
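Following the guidance above about restricting source addresses, here is a sketch of the same rule scoped to a single address range. The `203.0.113.0/24` range is a placeholder; replace it with your client or corporate IP range, and use the port returned by the `az sql mi-arc list` command:

```azurecli
az network nsg rule create -n db_port --destination-port-ranges 30913 --source-address-prefixes 203.0.113.0/24 --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow db access from a specific IP range only' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*'
```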
azure-arc | Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md | - Title: Connectivity modes and requirements -description: Explains Azure Arc-enabled data services connectivity options for from your environment to Azure ------ Previously updated : 07/19/2023----# Connectivity modes and requirements --This article describes the connectivity modes available for Azure Arc-enabled data services, and their respective requirements. --## Connectivity modes --There are multiple options for the degree of connectivity from your Azure Arc-enabled data services environment to Azure. As your requirements vary based on business policy, government regulation, or the availability of network connectivity to Azure, you can choose from the following connectivity modes. --Azure Arc-enabled data services provide you the option to connect to Azure in two different *connectivity modes*: --- Directly connected -- Indirectly connected--The connectivity mode provides you the flexibility to choose how much data is sent to Azure and how users interact with the Arc Data Controller. Depending on the connectivity mode that is chosen, some functionality of Azure Arc-enabled data services might or might not be available. --Importantly, if the Azure Arc-enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on, all in the Azure portal. If the Azure Arc-enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and PostgreSQL servers that you have deployed and the details about them, but you can't take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the appropriate CLI, or Kubernetes native tools like kubectl. --Additionally, Microsoft Entra ID and Azure Role-Based Access Control can be used in the directly connected mode only because there's a dependency on a continuous and direct connection to Azure to provide this functionality. --Some Azure-attached services are only available when they can be directly reached such as Container Insights, and backup to blob storage. --||**Indirectly connected**|**Directly connected**|**Never connected**| -||||| -|**Description**|Indirectly connected mode offers most of the management services locally in your environment with no direct connection to Azure. A minimal amount of data must be sent to Azure for inventory and billing purposes _only_. It's exported to a file and uploaded to Azure at least once per month. No direct or continuous connection to Azure is required. Some features and services that require a connection to Azure won't be available.|Directly connected mode offers all of the available services when a direct connection can be established with Azure. 
Connections are always initiated _from_ your environment to Azure and use standard ports and protocols such as HTTPS/443.|No data can be sent to or from Azure in any way.| -|**Current availability**| Available |Available|Not currently supported.| -|**Typical use cases**|On-premises data centers that donΓÇÖt allow connectivity in or out of the data region of the data center due to business or regulatory compliance policies or out of concerns of external attacks or data exfiltration. Typical examples: Financial institutions, health care, government. <br/><br/>Edge site locations where the edge site doesnΓÇÖt typically have connectivity to the Internet. Typical examples: oil/gas or military field applications. <br/><br/>Edge site locations that have intermittent connectivity with long periods of outages. Typical examples: stadiums, cruise ships. | Organizations who are using public clouds. Typical examples: Azure, AWS or Google Cloud.<br/><br/>Edge site locations where Internet connectivity is typically present and allowed. Typical examples: retail stores, manufacturing.<br/><br/>Corporate data centers with more permissive policies for connectivity to/from their data region of the datacenter to the Internet. Typical examples: Nonregulated businesses, small/medium sized businesses|Truly "air-gapped" environments where no data under any circumstances can come or go from the data environment. Typical examples: top secret government facilities.| -|**How data is sent to Azure**|There are three options for how the billing and inventory data can be sent to Azure:<br><br> 1) Data is exported out of the data region by an automated process that has connectivity to both the secure data region and Azure.<br><br>2) Data is exported out of the data region by an automated process within the data region, automatically copied to a less secure region, and an automated process in the less secure region uploads the data to Azure.<br><br>3) Data is manually exported by a user within the secure region, manually brought out of the secure region, and manually uploaded to Azure. <br><br>The first two options are an automated continuous process that can be scheduled to run frequently so there's minimal delay in the transfer of data to Azure subject only to the available connectivity to Azure.|Data is automatically and continuously sent to Azure.|Data is never sent to Azure.| --## Feature availability by connectivity mode --|**Feature**|**Indirectly connected**|**Directly connected**| -|||| -|**Automatic high availability**|Supported|Supported| -|**Self-service provisioning**|Supported<br/>Use Azure Data Studio, the appropriate CLI, or Kubernetes native tools like Helm, `kubectl`, or `oc`, or use Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates. -|**Elastic scalability**|Supported|Supported<br/>| -|**Billing**|Supported<br/>Billing data is periodically exported out and sent to Azure.|Supported<br/>Billing data is automatically and continuously sent to Azure and reflected in near real time. | -|**Inventory management**|Supported<br/>Inventory data is periodically exported out and sent to Azure.<br/><br/>Use client tools like Azure Data Studio, Azure Data CLI, or `kubectl` to view and manage inventory locally.|Supported<br/>Inventory data is automatically and continuously sent to Azure and reflected in near real time. 
As such, you can manage inventory directly from the Azure portal.| -|**Automatic upgrades and patching**|Supported<br/>The data controller must either have direct access to the Microsoft Container Registry (MCR) or the container images need to be pulled from MCR and pushed to a local, private container registry that the data controller has access to.|Supported| -|**Automatic backup and restore**|Supported<br/>Automatic local backup and restore.|Supported<br/>In addition to automated local backup and restore, you can _optionally_ send backups to Azure blob storage for long-term, off-site retention.| -|**Monitoring**|Supported<br/>Local monitoring using Grafana and Kibana dashboards.|Supported<br/>In addition to local monitoring dashboards, you can _optionally_ send monitoring data and logs to Azure Monitor for at-scale monitoring of multiple sites in one place. | -|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD isn't currently supported) for connectivity to database instances. Use Kubernetes authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Microsoft Entra ID.| -|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can use Microsoft Entra ID and Azure RBAC.| --## Connectivity requirements --**Some functionality requires a connection to Azure.** --**All communication with Azure is always initiated from your environment.** This is true even for operations that are initiated by a user in the Azure portal. In that case, there is effectively a task, which is queued up in Azure. An agent in your environment initiates the communication with Azure to see what tasks are in the queue, runs the tasks, and reports back the status/completion/fail to Azure. --|**Type of Data**|**Direction**|**Required/Optional**|**Additional Costs**|**Mode Required**|**Notes**| -||||||| -|**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment which can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. If the deployment environment doesnΓÇÖt have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This also applies to automated updates.| -|**Resource inventory**|Customer environment -> Azure|Required|No|Indirect or direct|An inventory of data controllers, database instances (PostgreSQL and SQL) is kept in Azure for billing purposes and also for purposes of creating an inventory of all data controllers and database instances in one place which is especially useful if you have more than one environment with Azure Arc data services. As instances are provisioned, deprovisioned, scaled out/in, scaled up/down the inventory is updated in Azure.| -|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. 
| -|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/))|Indirect or direct|You might want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.| -|**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Azure RBAC, then local Kubernetes RBAC can be used.| -|**Microsoft Entra ID (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you might already be paying for Microsoft Entra ID|Direct only|If you want to use Microsoft Entra ID for authentication, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Microsoft Entra ID for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**| -|**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. | -|**Azure backup - long term retention (Future)**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You might want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. | -|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the appropriate CLI. In directly connected mode, you can also provision and make configuration changes from the Azure portal.| --## Details on internet addresses, ports, encryption, and proxy server support ---## Additional network requirements --In addition, resource bridge requires [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md#azure-arc-enabled-kubernetes-endpoints). |
azure-arc | Create Complete Managed Instance Directly Connected | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md | - Title: Quickstart - Deploy Azure Arc-enabled data services - directly connected mode - Azure portal -description: Demonstrates how to deploy Azure Arc-enabled data services from beginning, including a Kubernetes cluster. Finishes with an instance of Azure SQL Managed Instance. ------ Previously updated : 12/09/2021----# Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal --This article demonstrates how to deploy Azure Arc-enabled data services in directly connected mode from the Azure portal. --To deploy in indirectly connected mode, see [Quickstart: Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI](create-complete-managed-instance-indirectly-connected.md). --When you complete the steps in this article, you will have: --- An Arc-enabled Azure Kubernetes cluster.-- A data controller in directly connected mode.-- An instance of SQL Managed Instance enabled by Azure Arc.-- A connection to the instance with Azure Data Studio.--Azure Arc allows you to run Azure data services on-premises, at the edge, and in public clouds via Kubernetes. Deploy SQL Managed Instance and PostgreSQL server (preview) data services with Azure Arc. The benefits of using Azure Arc include staying current with constant service patches, elastic scale, self-service provisioning, unified management, and support for disconnected mode. --## Install client tools --First, install the [client tools](install-client-tools.md) needed on your machine. To complete the steps in this article, you will use the following tools: -* Azure Data Studio -* The Azure Arc extension for Azure Data Studio -* Kubernetes CLI -* Azure CLI -* `arcdata` extension for Azure CLI. --In addition, you need the following additional extensions to connect the cluster to Azure: --* connectedk8s -* k8s-extension ---## Access your Kubernetes cluster --After installing the client tools, you need access to a Kubernetes cluster. You can create a Kubernetes cluster with [`az aks create`](/cli/azure/aks#az-aks-create), or you can follow the steps below to create the cluster in the Azure portal. --### Create a cluster --To quickly create a Kubernetes cluster, use Azure Kubernetes Services (AKS). --1. Log in to [Azure portal](https://portal.azure.com). -1. In the search resources field at the top of the portal, type **Kubernetes**, and select **Kubernetes services**. - Azure takes you to Kubernetes services. -1. Select **Create** > **Create Kubernetes cluster**. -1. Under **Basics**, - 1. Specify your **Subscription**. - 1. Create a resource group, or specify an existing resource group. - 2. For **Cluster preset configuration**, review the available options and select for your workload. For a development/test proof of concept, use **Dev/Test**. Select a configuration with at least 4 vCPUs. - 3. Specify a cluster name. - 4. Specify a region. - 5. Under **Availability zones**, remove all selected zones. You should not specify any zones. - 6. Verify the Kubernetes version. For minimum supported version, see [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md). - 7. Under **Node size**, select a node size for your cluster based on the [Sizing guidance](sizing-guidance.md). - 8. For **Scale method**, select **Manual**. -1. Click **Review + create**. -1. 
Click **Create**. --Azure creates your Kubernetes cluster. --When the cluster is completed, the Azure updates the portal to show the completed status: ---### Connect to the cluster --After creating the cluster, connect to the cluster through the Azure CLI. --1. Log in to Azure - if not already. -- ```azurecli - az login - ``` -- Follow the steps to connect. --1. Get the credentials to connect to the cluster. -- The scripts in this article use angle brackets `< ... >` to identify values you will need to replace before you run the scripts. Do not include the angle brackets. -- ```azurecli - az aks get-credentials --resource-group <resource_group_name> --name <cluster_name> - ``` -- Use the resource group and cluster name that you defined when you created the cluster in the portal. -- Azure CLI returns the following output. -- ```output - Merged "<cluster name>" as current context in C:<current path>\.kube\config - ``` --1. Confirm that your cluster is running. Use the following command: -- ```azurecli - kubectl get nodes - ``` -- The command returns a list of the running nodes. -- ```output - NAME STATUS ROLES AGE VERSION - aks-agentpool-37241625-vmss000000 Ready agent 3h10m v1.20.9 - aks-agentpool-37241625-vmss000001 Ready agent 3h10m v1.20.9 - aks-agentpool-37241625-vmss000002 Ready agent 3h9m v1.20.9 - ``` --### Arc enable the Kubernetes cluster --Now that the cluster is running, connect the cluster to Azure. When you connect a cluster to Azure, you enable it for Azure Arc. Connecting the cluster to Azure allows you to view and manage the cluster. In addition, you can deploy and manage additional services such as Arc-enabled data services on the cluster directly from Azure portal. --Use `az connectedk8s connect` to connect the cluster to Azure: --```azurecli -az connectedk8s connect --resource-group <resource group> --name <cluster name> -``` --After the connect command completes successfully, you can view the shadow object in the Azure portal. The shadow object is the representation of the Azure Arc-enabled cluster. --1. In the Azure portal, locate the resource group. One way to find the resource group is to type the resource group name in search on the portal. The portal displays a link to the resource group below the search box. Click the resource group link. -1. In the resource group, under **Overview** you can see the Kubernetes cluster, and the shadow object. See the following image: -- :::image type="content" source="media/create-complete-managed-instance-directly-connected/azure-arc-resources.png" alt-text="The Kubernetes - Azure Arc item type is the shadow resource." lightbox="media/create-complete-managed-instance-directly-connected/azure-arc-resources-expanded.png"::: -- The shadow resource is the resource type **Kubernetes - Azure Arc** in the image above. The other resource is the **Kubernetes service** cluster. Both resources have the same name. --## Create the data controller --The next step is to create the data controller in directly connected mode via the Azure portal. Use the same subscription and resource group that you used to [create a cluster](#create-a-cluster). --1. In the portal, locate the resource group from the previous step. -1. From the search bar in Azure portal, search for *Azure Arc data controllers*, and select **+ Create**. -1. Select **Azure Arc-enabled Kubernetes cluster (Direct connectivity mode)**. Select **Next: Data controller details**. -1. Specify a name for the data controller. -1. Specify a custom location (namespace). 
-- :::image type="content" source="media/create-complete-managed-instance-directly-connected/custom-location.png" alt-text="Create a new custom location and specify a namespace."::: --1. For **Kubernetes configuration template**, specify *azure-arc-aks-premium-storage* because this example uses an AKS cluster. -2. For **Service type**, select **Load balancer**. -3. Set a user name and password for the metrics and log services. -- The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters. --Follow the instructions in the portal to complete the specification and deploy the data controller. --To view data controllers, run the following command: --```console -kubectl get datacontrollers -A -``` --### Monitor deployment --You can also monitor the creation of the data controller with the following command: --```console -kubectl get datacontroller --namespace <namespace> -``` --The command returns the state of the data controller. For example, the following results indicate that the deployment is in progress: --```output -NAME STATE -<namespace> DeployingMonitoring -``` --Once the state of the data controller is `Ready`, this step is completed. For example: --```output -NAME STATE -<namespace> Ready -``` --## Deploy SQL Managed Instance enabled by Azure Arc --1. In the portal, locate the resource group. -1. In the resource group, select **Create**. -1. Enter *managed instance*. The Azure portal returns resource types with a matching name. -1. Select **Azure SQL Managed Instance - Azure Arc**. -1. Click **Create**. -1. Specify your resource group and custom location. Use the same value that you set in the [previous step](#create-a-cluster). -1. Set the **LoadBalancer** service type. -1. Provide credentials (login and password) for the managed instance administrator account. -1. Click **Review and Create**. -1. Click **Create**. --Azure creates the managed instance on the Azure Arc-enabled Kubernetes cluster. --To know when the instance has been created, run: --```console -kubectl get sqlmi -n <namespace> -``` --Once the state of the managed instance is `Ready`, this step is completed. For example: --```output -NAME STATE -<namespace> Ready -``` ---## Connect with Azure Data Studio --To connect with Azure Data Studio, see [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md). |
azure-arc | Create Complete Managed Instance Indirectly Connected | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md | - Title: Quickstart - Deploy Azure Arc-enabled data services -description: Quickstart - deploy Azure Arc-enabled data services in indirectly connected mode. Includes a Kubernetes cluster. Uses Azure CLI. ------ Previously updated : 09/20/2022----# Quickstart: Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI --In this quickstart, you will deploy Azure Arc-enabled data services in indirectly connected mode from with the Azure CLI. --When you complete the steps in this article, you will have: --- A Kubernetes cluster on Azure Kubernetes Services (AKS).-- A data controller in indirectly connected mode.-- SQL Managed Instance enabled by Azure Arc.-- A connection to the instance with Azure Data Studio.--Use these objects to experience Azure Arc-enabled data services. --Azure Arc allows you to run Azure data services on-premises, at the edge, and in public clouds via Kubernetes. Deploy SQL Managed Instance and PostgreSQL server data services (preview) with Azure Arc. The benefits of using Azure Arc include staying current with constant service patches, elastic scale, self-service provisioning, unified management, and support for disconnected mode. --## Prerequisites --If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. --To complete the task in this article, install the required [client tools](install-client-tools.md). Specifically, you will use the following tools: --* Azure Data Studio -* The Azure Arc extension for Azure Data Studio -* Kubernetes CLI -* Azure CLI -* `arcdata` extension for Azure CLI --## Set metrics and logs service credentials --Azure Arc-enabled data services provides: -- Log services and dashboards with Kibana-- Metrics services and dashboards with Grafana--These services require a credential for each service. The credential is a username and a password. For this step, set an environment variable with the values for each credential. --The environment variables include passwords for log and metric services. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters. --Run the following command to set the credential. --### [Linux](#tab/linux) --```console -export AZDATA_LOGSUI_USERNAME=<username for logs> -export AZDATA_LOGSUI_PASSWORD=<password for logs> -export AZDATA_METRICSUI_USERNAME=<username for metrics> -export AZDATA_METRICSUI_PASSWORD=<password for metrics> -``` --### [Windows / PowerShell](#tab/powershell) --```powershell -$ENV:AZDATA_LOGSUI_USERNAME="<username for logs>" -$ENV:AZDATA_LOGSUI_PASSWORD="<password for logs>" -$ENV:AZDATA_METRICSUI_USERNAME="<username for metrics>" -$ENV:AZDATA_METRICSUI_PASSWORD="<password for metrics>" -``` ----## Create and connect to your Kubernetes cluster --After you install the client tools, and configure the environment variables, you need access to a Kubernetes cluster. The steps in this section deploy a cluster on Azure Kubernetes Service (AKS). ---Follow the steps below to deploy the cluster from the Azure CLI. --1. Create the resource group -- Create a resource group for the cluster. For location, specify a supported region. 
For Azure Arc-enabled data services, supported regions are listed in the [Overview](overview.md#supported-regions). -- ```azurecli - az group create --name <resource_group_name> --location <location> - ``` -- To learn more about resource groups, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md). --1. Create Kubernetes cluster -- Create the cluster in the resource group that you created previously. -- Select a node size that meets your requirements. See [Sizing guidance](sizing-guidance.md). -- The following example creates a three-node cluster, with monitoring enabled, and generates public and private key files if missing. -- ```azurecli - az aks create --resource-group <resource_group_name> --name <cluster_name> --node-count 3 --enable-addons monitoring --generate-ssh-keys --node-vm-size <node size> - ``` -- For command details, see [az aks create](/cli/azure/aks#az-aks-create). -- For a complete demonstration, including an application on a single-node Kubernetes cluster, go to [Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli). --1. Get credentials -- You need credentials to connect to your cluster. -- Run the following command to get the credentials: -- ```azurecli - az aks get-credentials --resource-group <resource_group_name> --name <cluster_name> - ``` --1. Verify cluster -- To confirm the cluster is running and that you have the current connection context, run -- ```console - kubectl get nodes - ``` -- The command returns a list of nodes. For example: -- ```output - NAME STATUS ROLES AGE VERSION - aks-nodepool1-34164736-vmss000000 Ready agent 4h28m v1.20.9 - aks-nodepool1-34164736-vmss000001 Ready agent 4h28m v1.20.9 - aks-nodepool1-34164736-vmss000002 Ready agent 4h28m v1.20.9 - ``` --## Create the data controller --Now that our cluster is up and running, we are ready to create the data controller in indirectly connected mode. --The CLI command to create the data controller is: --```azurecli -az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace <namespace> --name <data controller name> --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --use-k8s -``` --### Monitor deployment --You can also monitor the creation of the data controller with the following command: --```console -kubectl get datacontroller --namespace <namespace> -``` --The command returns the state of the data controller. For example, the following results indicate that the deployment is in progress: --```output -NAME STATE -<namespace> DeployingMonitoring -``` --Once the state of the data controller is `Ready`, this step is completed. For example: --```output -NAME STATE -<namespace> Ready -``` --## Deploy an instance of SQL Managed Instance enabled by Azure Arc --Now, we can create the managed instance for indirectly connected mode with the following command: --```azurecli -az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s -``` --To know when the instance has been created, run: --```console -kubectl get sqlmi -n <namespace> -``` --Once the state of the managed instance is `Ready`, this step is completed. For example: --```output -NAME STATE -<namespace> Ready -``` --## Connect to managed instance on Azure Data Studio --To connect with Azure Data Studio, see [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md). 
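Before you connect, you need the instance's external endpoint and its administrator login. A minimal sketch of finding the endpoint with `kubectl` (the exact endpoint column name can vary by release):

```console
# Show the managed instance and its endpoints; note the external (primary) endpoint address and port
kubectl get sqlmi -n <namespace>

# Alternatively, list the services in the namespace and look for the LoadBalancer service created for the instance
kubectl get svc -n <namespace>
```

Use that address and port as the server name in Azure Data Studio, and sign in with SQL authentication using the instance's administrator credentials.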
--## Upload usage and metrics to Azure portal --If you wish, you can [Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md). --## Clean up resources --When you are done with the resources you created in this article, delete them. Follow the steps in [Delete data controller in indirectly connected mode](uninstall-azure-arc-data-controller.md#delete-data-controller-in-indirectly-connected-mode). If you created the AKS cluster only for this quickstart, you can instead delete its entire resource group, as shown in the sketch at the end of this article. --## Related content --> [!div class="nextstepaction"] -> [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md). |
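If you created the AKS cluster and data controller only for this quickstart, a faster cleanup (a sketch, assuming the resource group from earlier contains nothing else you want to keep) is to delete the entire resource group:

```azurecli
# Removes the AKS cluster, the data controller, and everything else in the resource group
az group delete --name <resource_group_name> --yes --no-wait
```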
azure-arc | Create Custom Configuration Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-custom-configuration-template.md | - Title: Create custom configuration templates -description: Create custom configuration templates ------- Previously updated : 07/30/2021---# Create custom configuration templates --This article explains how to create a custom configuration template for Azure Arc-enabled data controller. --One of required parameters during deployment of a data controller in indirectly connected mode, is the `az arcdata dc create --profile-name` parameter. Currently, the available list of built-in profiles can be found via running the query: --```azurecli -az arcdata dc config list -``` --These profiles are template JSON files that have various settings for the Azure Arc-enabled data controller such as container registry and repository settings, storage classes for data and logs, storage size for data and logs, security, service type etc. and can be customized to your environment. --However, in some cases, you may want to customize those configuration templates to meet your requirements and pass the customized configuration template using the `--path` parameter to the `az arcdata dc create` command rather than pass a preconfigured configuration template using the `--profile-name` parameter. --## Create control.json file --Run `az arcdata dc config init` to initiate a control.json file with pre-defined settings based on your distribution of Kubernetes cluster. -For instance, a template control.json file for a Kubernetes cluster based on the `azure-arc-kubeadm` template in a subdirectory called `custom` in the current working directory can be created as follows: --```azurecli -az arcdata dc config init --source azure-arc-kubeadm --path custom -``` -The created control.json file can be edited in any editor such as Visual Studio Code to customize the settings appropriate for your environment. --## Use custom control.json file to deploy Azure Arc-enabled data controller using Azure CLI (az) --Once the template file is created, the file can be applied during Azure Arc-enabled data controller create command as follows: --```azurecli -az arcdata dc create --path ./custom --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --k8s-namespace <namespace> --use-k8s --#Example: -#az arcdata dc create --path ./custom --name arc --subscription <subscription ID> --resource-group my-resource-group --location eastus --connectivity-mode indirect --k8s-namespace <namespace> --use-k8s -``` --## Use custom control.json file for deploying Azure Arc data controller using Azure portal --From the Azure Arc data controller create screen, select "Configure custom template" under Custom template. This will invoke a blade to provide custom settings. In this blade, you can either type in the values for the various settings, or upload a pre-configured control.json file directly. --After ensuring the values are correct, click Apply to proceed with the Azure Arc data controller deployment. --## Related content --* For direct connectivity mode: [Deploy data controller - direct connect mode (prerequisites)](create-data-controller-direct-prerequisites.md) --* For indirect connectivity mode: [Create data controller using CLI](create-data-controller-indirect-cli.md) |
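Rather than hand-editing every value in control.json, you can also script individual settings with `az arcdata dc config replace`. A brief sketch, assuming the `custom` directory created above and a placeholder storage class name:

```azurecli
# Point the data and logs storage classes at a storage class that exists in your cluster
az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>"
az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>"
```

Once the settings look right, pass the `custom` directory to `az arcdata dc create --path ./custom ...` as described above.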
azure-arc | Create Data Controller Direct Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-azure-portal.md | - Title: Deploy Azure Arc data controller from Azure portal| Direct connect mode -description: Explains how to deploy the data controller in direct connect mode from Azure portal. ------ Previously updated : 11/03/2021----# Create Azure Arc data controller from Azure portal - Direct connectivity mode --This article describes how to deploy the Azure Arc data controller in direct connect mode from the Azure portal. --## Complete prerequisites --Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md). --## Deploy Azure Arc data controller --Azure Arc data controller create flow can be launched from the Azure portal in one of the following ways: --- From the search bar in Azure portal, search for "Azure Arc data controllers", and select "+ Create"-- From the Overview page of your Azure Arc-enabled Kubernetes cluster,- - Select "Extensions " under Settings. - - Select "Add" from the Extensions overview page and then select "Azure Arc data controller" - - Select Create from the Azure Arc data controller marketplace gallery - -Either of these actions should bring you to the Azure Arc data controller prerequisites page of the create flow. --- Ensure the Azure Arc-enabled Kubernetes cluster (Direct connectivity mode) option is selected. Select "Next : Data controller details"-- In the **Data controller details** page:- - Select the Azure Subscription and Resource group where the Azure Arc data controller will be projected to. - - Enter a **name** for the Data controller - - Select a pre-created **Custom location** or select "Create new" to create a new custom location. If you choose to create a new custom location, enter a name for the new custom location, select the Azure Arc-enabled Kubernetes cluster from the dropdown, and then enter a namespace to be associated with the new custom location, and finally select Create in the Create new custom location window. Learn more about [custom locations](../kubernetes/conceptual-custom-locations.md) - - **Kubernetes configuration** - Select a Kubernetes configuration template that best matches your Kubernetes distribution from the dropdown. If you choose to use your own settings or have a custom profile you want to use, select the Custom template option from the dropdown. In the blade that opens on the right side, enter the details for Docker credentials, repository information, Image tag, Image pull policy, infrastructure type, storage settings for data, logs and their sizes, Service type, and ports for controller and management proxy. Select Apply when all the required information is provided. You can also choose to upload your own template file by selecting the "Upload a template (JSON) from the top of the blade. If you use custom settings and would like to download a copy of those settings, use the "Download this template (JSON)" to do so. Learn more about [custom configuration profiles](create-custom-configuration-template.md). - - Select the appropriate **Service Type** for your environment - - **Metrics and Logs Dashboard Credentials** - Enter the credentials for the Grafana and Kibana dashboards - - Select the "Next: Additional settings" button to proceed forward after all the required information is provided. 
-- In the **Additional Settings** page:- - **Metrics upload:** Select this option to automatically upload your metrics to Azure Monitor so you can aggregate and analyze metrics, raise alerts, send notifications, or trigger automated actions. The required **Monitoring Metrics Publisher** role will be granted to the Managed Identity of the extension. - - **Logs upload:** Select this option to automatically upload logs to an existing Log Analytics workspace. Enter the Log Analytics workspace ID and the Log Analytics shared access key. - - Select "Next: Tags" to proceed. -- In the **Tags** page, enter the Names and Values for your tags and select "Next: Review + Create".-- In the **Review + Create** page, view the summary of your deployment. Ensure all the settings look correct and select "Create" to start the deployment of the Azure Arc data controller.--## Monitor the creation from Azure portal --Selecting the "Create" button from the previous step should launch the Azure deployment overview page, which shows the progress of the deployment of the Azure Arc data controller. --## Monitor the creation from your Kubernetes cluster --The progress of Azure Arc data controller deployment can be monitored as follows: --- Check if the CRDs are created by running ```kubectl get crd ``` from your cluster -- Check if the namespace is created by running ```kubectl get ns``` from your cluster-- Check if the custom location is created by running ```az customlocation list --resource-group <resourcegroup> -o table``` -- Check the status of pod deployment by running ```kubectl get pods -n <namespace>```--## Related information --[Deploy SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) --[Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) |
azure-arc | Create Data Controller Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md | -- Title: Create Azure Arc data controller | Direct connect mode -description: Explains how to create the data controller in direct connect mode. ------- Previously updated : 05/27/2022----# Create Azure Arc data controller in direct connectivity mode using CLI --This article describes how to create the Azure Arc data controller in direct connectivity mode using Azure CLI. --## Complete prerequisites --Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md). ---## Deploy Arc data controller --Creating an Azure Arc data controller in direct connectivity mode involves the following steps: --1. Create an Azure Arc-enabled data services extension. -1. Create a custom location. -1. Create the data controller. --Create the Arc data controller extension, custom location, and Arc data controller all in one command as follows: --##### [Linux](#tab/linux) --```console -## variables for Azure subscription, resource group, cluster name, location, extension, and namespace. -export resourceGroup=<Your resource group> -export clusterName=<name of your connected Kubernetes cluster> -export customLocationName=<name of your custom location> --## variables for logs and metrics dashboard credentials -export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard> -export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard> -export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard> -export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard> -``` --##### [Windows (PowerShell)](#tab/windows) --``` PowerShell -## variables for Azure location, extension and namespace -$ENV:resourceGroup="<Your resource group>" -$ENV:clusterName="<name of your connected Kubernetes cluster>" -$ENV:customLocationName="<name of your custom location>" --## variables for Metrics and Monitoring dashboard credentials -$ENV:AZDATA_LOGSUI_USERNAME="<username for Kibana dashboard>" -$ENV:AZDATA_LOGSUI_PASSWORD="<password for Kibana dashboard>" -$ENV:AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>" -$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>" -``` -- --Deploy the Azure Arc data controller using released profile -##### [Linux](#tab/linux) --```azurecli -az arcdata dc create --name <name> -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass> --# Example -az arcdata dc create --name arc-dc1 --resource-group my-resource-group -custom-location cl-name --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass -``` --##### [Windows (PowerShell)](#tab/windows) --```azurecli -az arcdata dc create --name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass> --# Example -az arcdata dc create --name arc-dc1 --g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name 
azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass --``` ---If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows: -##### [Linux](#tab/linux) --```azurecli -az arcdata dc create --name <name> -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --# Example -az arcdata dc create --name arc-dc1 --resource-group my-resource-group --custom-location cl-name --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true -``` --##### [Windows (PowerShell)](#tab/windows) --```azurecli -az arcdata dc create --name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass> --# Example -az arcdata dc create --name arc-dc1 --resource-group $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass --``` ----## Monitor the status of Azure Arc data controller deployment --The deployment status of the Arc data controller on the cluster can be monitored as follows: --```console -kubectl get datacontrollers --namespace arc -``` --## Related content --[Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) --[Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
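Because the single `az arcdata dc create` command in direct mode also installs the cluster extension and creates the custom location, you can verify those pieces from Azure as well. A sketch, assuming the environment variables set earlier in this article:

```azurecli
# List extensions on the connected cluster and look for the Azure Arc data services extension
az k8s-extension list --resource-group ${resourceGroup} --cluster-name ${clusterName} --cluster-type connectedClusters -o table

# Confirm the custom location was created in the resource group
az customlocation list --resource-group ${resourceGroup} -o table
```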
azure-arc | Create Data Controller Direct Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-prerequisites.md | - Title: Prerequisites | Direct connect mode -description: Prerequisites to deploy the data controller in direct connect mode. ------ Previously updated : 11/03/2021----# Prerequisites to deploy the data controller in direct connectivity mode --This article describes how to prepare to deploy a data controller for Azure Arc-enabled data services in direct connect mode. Before you deploy an Azure Arc data controller understand the concepts described in [Plan to deploy Azure Arc-enabled data services](plan-azure-arc-data-services.md). --At a high level, the prerequisites for creating Azure Arc data controller in **direct** connectivity mode include: --1. Have access to your Kubernetes cluster. If you do not have a Kubernetes cluster, you can create a test/demonstration cluster on Azure Kubernetes Service (AKS). -1. Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes. --Follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) --## Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes --To connect your Kubernetes cluster to Azure, use Azure CLI `az` with the following extensions or Helm. --### Install tools --- Helm version 3.3+ ([install](https://helm.sh/docs/intro/install/))-- Install or upgrade to the latest version of [Azure CLI](/cli/azure/install-azure-cli)--### Add extensions for Azure CLI --Install the latest versions of the following az extensions: -- `k8s-extension`-- `connectedk8s`-- `k8s-configuration`-- `customlocation`--Run the following commands to install the az CLI extensions: --```azurecli -az extension add --name k8s-extension -az extension add --name connectedk8s -az extension add --name k8s-configuration -az extension add --name customlocation -``` --If you've previously installed the `k8s-extension`, `connectedk8s`, `k8s-configuration`, `customlocation` extensions, update to the latest version using the following command: --```azurecli -az extension update --name k8s-extension -az extension update --name connectedk8s -az extension update --name k8s-configuration -az extension update --name customlocation -``` --### Connect your cluster to Azure --Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes -- To connect your Kubernetes cluster to Azure, use Azure CLI `az` or PowerShell. 
-- Run the following command: -- # [Azure CLI](#tab/azure-cli) -- ```azurecli - az connectedk8s connect --name <cluster_name> --resource-group <resource_group_name> - ``` -- ```output - <pre> - Helm release deployment succeeded -- { - "aadProfile": { - "clientAppId": "", - "serverAppId": "", - "tenantId": "" - }, - "agentPublicKeyCertificate": "xxxxxxxxxxxxxxxxxxx", - "agentVersion": null, - "connectivityStatus": "Connecting", - "distribution": "gke", - "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1", - "identity": { - "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "type": "SystemAssigned" - }, - "infrastructure": "gcp", - "kubernetesVersion": null, - "lastConnectivityTime": null, - "location": "eastus", - "managedIdentityCertificateExpirationTime": null, - "name": "AzureArcTest1", - "offering": null, - "provisioningState": "Succeeded", - "resourceGroup": "AzureArcTest", - "tags": {}, - "totalCoreCount": null, - "totalNodeCount": null, - "type": "Microsoft.Kubernetes/connectedClusters" - } - </pre> - ``` -- > [!TIP] - > The above command without the location parameter specified creates the Azure Arc-enabled Kubernetes resource in the same location as the resource group. To create the Azure Arc-enabled Kubernetes resource in a different location, specify either `--location <region>` or `-l <region>` when running the `az connectedk8s connect` command. -- > [!NOTE] - > If you are logged into Azure CLI using a service principal, an [additional parameter](../kubernetes/troubleshooting.md#enable-custom-locations-using-service-principal) needs to be set for enabling the custom location feature on the cluster. -- # [Azure PowerShell](#tab/azure-powershell) -- ```azurepowershell - New-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName AzureArcTest -Location eastus - ``` -- ```output - <pre> - Location Name Type - -- - - - eastus AzureArcTest1 microsoft.kubernetes/connectedclusters - </pre> - ``` -- ---A more thorough walk-through of this task is available at [Connect an existing Kubernetes cluster to Azure arc](../kubernetes/quickstart-connect-cluster.md). --### Verify `azure-arc` namespace pods are created -- Before you proceed to the next step, make sure that all of the `azure-arc-` namespace pods are created. Run the following command. -- ```console - kubectl get pods -n azure-arc - ``` -- :::image type="content" source="media/deploy-data-controller-direct-mode-prerequisites/verify-azure-arc-pods.png" alt-text="All containers return a status of running."::: -- When all containers return a status of running, you can connect the cluster to Azure. --## Optionally, keep the Log Analytics workspace ID and Shared access key ready --When you deploy Azure Arc-enabled data controller, you can enable automatic upload of metrics and logs during setup. Metrics upload uses the system assigned managed identity. However, uploading logs requires a Workspace ID and the access key for the workspace. --You can also enable or disable automatic upload of metrics and logs after you deploy the data controller. --For instructions, see [Create a log analytics workspace](upload-logs.md#create-a-log-analytics-workspace). 
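If you prefer to stay in the CLI, here's a brief sketch of creating a workspace and retrieving the two values you need (the resource group and workspace names are placeholders):

```azurecli
# Create (or reuse) a Log Analytics workspace
az monitor log-analytics workspace create --resource-group <resource group> --workspace-name <workspace name>

# Workspace ID (customer ID), used when you enable logs upload
az monitor log-analytics workspace show --resource-group <resource group> --workspace-name <workspace name> --query customerId -o tsv

# Primary shared key for the workspace
az monitor log-analytics workspace get-shared-keys --resource-group <resource group> --workspace-name <workspace name> --query primarySharedKey -o tsv
```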
--## Create Azure Arc data services --After you have completed these prerequisites, you can [Deploy Azure Arc data controller | Direct connect mode - Azure Portal](create-data-controller-direct-azure-portal.md) or [using the Azure CLI](create-data-controller-direct-cli.md). |
azure-arc | Create Data Controller Indirect Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-data-studio.md | - Title: Create data controller in Azure Data Studio -description: Create data controller in Azure Data Studio ------ Previously updated : 11/03/2021----# Create data controller in Azure Data Studio --You can create a data controller using Azure Data Studio through the deployment wizard and notebooks. ---## Prerequisites --- You need access to a Kubernetes cluster and have your kubeconfig file configured to point to the Kubernetes cluster you want to deploy to.-- You need to [install the client tools](install-client-tools.md) including **Azure Data Studio**, the Azure Data Studio extensions called **Azure Arc** and Azure CLI with the `arcdata` extension.-- You need to log in to Azure in Azure Data Studio. To do this: type CTRL/Command + SHIFT + P to open the command text window and type **Azure**. Choose **Azure: Sign in**. In the panel, that comes up click the + icon in the top right to add an Azure account.-- You need to run `az login` in your local Command Prompt to login to Azure CLI.--## Use the Deployment Wizard to create Azure Arc data controller --Follow these steps to create an Azure Arc data controller using the Deployment wizard. --1. In Azure Data Studio, click on the Connections tab on the left navigation. -1. Click on the **...** button at the top of the Connections panel and choose **New Deployment...** -1. In the new Deployment wizard, choose **Azure Arc Data Controller**, and then click the **Select** button at the bottom. -1. Ensure the prerequisite tools are available and meet the required versions. **Click Next**. -1. Use the default kubeconfig file or select another one. Click **Next**. -1. Choose a Kubernetes cluster context. Click **Next**. -1. Choose a deployment configuration profile depending on your target Kubernetes cluster. **Click Next**. -1. Choose the desired subscription and resource group. -1. Select an Azure location. - - The Azure location selected here is the location in Azure where the *metadata* about the data controller and the database instances that it manages will be stored. The data controller and database instances will be actually created in your Kubernetes cluster wherever that may be. - - Once done, click **Next**. --1. Enter a name for the data controller and for the namespace that the data controller will be created in. -- The data controller and namespace name will be used to create a custom resource in the Kubernetes cluster so they must conform to [Kubernetes naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). - - If the namespace already exists it will be used if the namespace does not already contain other Kubernetes objects - pods, etc. If the namespace does not exist, an attempt to create the namespace will be made. Creating a namespace in a Kubernetes cluster requires Kubernetes cluster administrator privileges. If you don't have Kubernetes cluster administrator privileges, ask your Kubernetes cluster administrator to perform the first few steps in the [Create a data controller using Kubernetes-native tools](./create-data-controller-using-kubernetes-native-tools.md) article which are required to be performed by a Kubernetes administrator before you complete this wizard. ---1. Select the storage class where the data controller will be deployed. -1. 
Enter a username and password and confirm the password for the data controller administrator user account. Click **Next**. --1. Review the deployment configuration. -1. Click the **Deploy** to deploy the desired configuration or the **Script to Notebook** to review the deployment instructions or make any changes necessary such as storage class names or service types. Click **Run All** at the top of the notebook. --## Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a data controller and Kubernetes namespace with the name 'arc'. If you used a different namespace/data controller name, you can replace 'arc' with your name. --```console -kubectl get datacontroller --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. --```console -kubectl describe pod/<pod name> --namespace arc --#Example: -#kubectl describe pod/control-2g7bl --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md). |
azure-arc | Create Data Controller Indirect Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-azure-portal.md | - Title: Create an Azure Arc data controller in indirect mode from Azure portal -description: Create an Azure Arc data controller in indirect mode from Azure portal ------ Previously updated : 07/30/2021----# Create Azure Arc data controller from Azure portal - Indirect connectivity mode ---## Introduction --You can use the Azure portal to create an Azure Arc data controller in indirect connectivity mode. --Many of the creation experiences for Azure Arc start in the Azure portal even though the resource to be created or managed is outside of Azure infrastructure. The user experience pattern in these cases, especially when there is no direct connectivity between Azure and your environment, is to use the Azure portal to generate a script which can then be downloaded and executed in your environment to establish a secure connection back to Azure. For example, Azure Arc-enabled servers follow this pattern to [create Azure Arc-enabled servers](../servers/onboard-portal.md). --When you use the indirect connect mode of Azure Arc-enabled data services, you can use the Azure portal to generate a notebook for you that can then be downloaded and run in Azure Data Studio against your Kubernetes cluster. -- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --When you use direct connect mode, you can provision the data controller directly from the Azure portal. You can read more about [connectivity modes](connectivity.md). --## Use the Azure portal to create an Azure Arc data controller --Follow the steps below to create an Azure Arc data controller using the Azure portal and Azure Data Studio. --1. First, log in to the [Azure portal marketplace](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home/searchQuery/azure%20arc%20data%20controller). The marketplace search results will be filtered to show you the 'Azure Arc data controller'. -1. If the search results aren't already filtered, enter 'Azure Arc data controller' in the search box and select it in the results. -1. Select the Azure Data Controller tile from the marketplace. -1. Click on the **Create** button. -1. Select the indirect connectivity mode. Learn more about [Connectivity modes and requirements](./connectivity.md). -1. Review the requirements to create an Azure Arc data controller and install any missing prerequisite software such as Azure Data Studio and kubectl. -1. Click on the **Next: Data controller details** button. -1. Choose a subscription, resource group, and Azure location just like you would for any other resource that you would create in the Azure portal. In this case, the Azure location that you select will be where the metadata about the resource will be stored. The resource itself will be created on whatever infrastructure you choose. It doesn't need to be on Azure infrastructure. -1. Enter a name for your data controller. --1. Click the **Open in Azure Studio** button. -1. On the next screen, you will see a summary of your selections and a notebook that is generated. You can click the **Open link in Azure Data Studio** button to open the generated notebook in Azure Data Studio. -1. Open the notebook in Azure Data Studio and click the **Run All** button at the top. -1. 
Follow the prompts and instructions in the notebook to complete the data controller creation. --## Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values update the script accordingly. --```console -kubectl get datacontroller/arc-dc --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. --```console -kubectl describe po/<pod name> --namespace arc --#Example: -#kubectl describe po/control-2g7bl --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md). |
azure-arc | Create Data Controller Indirect Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md | - Title: Create data controller using CLI -description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster that you already have created, using the CLI. ------- Previously updated : 11/03/2021----# Create Azure Arc data controller using the CLI ---## Prerequisites --Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information. --### Install tools --Before you begin, install the `arcdata` extension for Azure (az) CLI. --[Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md) --Regardless of which target platform you choose, you need to set the following environment variables prior to the creation for the data controller. These environment variables become the credentials used for accessing the metrics and logs dashboards after data controller creation. --### Set environment variables --Following are two sets of environment variables needed to access the metrics and logs dashboards. --The environment variables include passwords for log and metric services. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters. ---# [Linux](#tab/linux) --```console -## variables for Metrics and Monitoring dashboard credentials -export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard> -export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard> -export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard> -export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard> -``` --# [Windows (PowerShell)](#tab/windows) --```PowerShell -## variables for Metrics and Monitoring dashboard credentials -$ENV:AZDATA_LOGSUI_USERNAME="<username for Kibana dashboard>" -$ENV:AZDATA_LOGSUI_PASSWORD="<password for Kibana dashboard>" -$ENV:AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>" -$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>" -``` -- --### Connect to Kubernetes cluster --Connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server. --You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands. --```console -kubectl cluster-info -kubectl config current-context -``` --## Create the Azure Arc data controller --The following sections provide instructions for specific types of Kubernetes platforms. Follow the instructions for your platform. 
--- [Azure Kubernetes Service (AKS)](#create-on-azure-kubernetes-service-aks)-- [AKS on Azure Stack HCI](#create-on-aks-on-azure-stack-hci)-- [Azure Red Hat OpenShift (ARO)](#create-on-azure-red-hat-openshift-aro)-- [Red Hat OpenShift Container Platform (OCP)](#create-on-red-hat-openshift-container-platform-ocp)-- [Open source, upstream Kubernetes (kubeadm)](#create-on-open-source-upstream-kubernetes-kubeadm)-- [AWS Elastic Kubernetes Service (EKS)](#create-on-aws-elastic-kubernetes-service-eks)-- [Google Cloud Kubernetes Engine Service (GKE)](#create-on-google-cloud-kubernetes-engine-service-gke)--> [!TIP] -> If you have no Kubernetes cluster, you can create one on Azure. Follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process. -> -> Then follow the instructions under [Create on Azure Kubernetes Service (AKS)](#create-on-azure-kubernetes-service-aks). --## Create on Azure Kubernetes Service (AKS) --By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class only works if you have VMs that were deployed using VM images that have premium disks. --If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location. --```azurecli -az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace <namespace> --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --use-k8s --#Example: -#az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --use-k8s -``` --If you are not sure what storage class to use, you should use the `default` storage class which is supported regardless of which VM type you are using. It just won't provide the fastest performance. --If you want to use the `default` storage class, then you can run this command: --```azurecli -az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on AKS on Azure Stack HCI --### Configure storage (Azure Stack HCI with AKS-HCI) --If you are using Azure Stack HCI with AKS-HCI, create a custom storage class with `fsType`. -- ```json - fsType: ext4 - ``` --Use this type to deploy the data controller. See the complete instructions at [Create a custom storage class for an AKS on Azure Stack HCI disk](/azure-stack/aks-hci/container-storage-interface-disks#create-a-custom-storage-class-for-an-aks-on-azure-stack-hci-disk). --By default, the deployment profile uses a storage class named `default` and the service type `LoadBalancer`. 
--You can run the following command to create the data controller using the `default` storage class and service type `LoadBalancer`. --```azurecli -az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on Azure Red Hat OpenShift (ARO) --### Create custom deployment profile --Use the profile `azure-arc-azure-openshift` for Azure RedHat Open Shift. --```azurecli -az arcdata dc config init --source azure-arc-azure-openshift --path ./custom -``` --### Create data controller --You can run the following command to create the data controller: --```azurecli -az arcdata dc create --profile-name azure-arc-azure-openshift --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example -#az arcdata dc create --profile-name azure-arc-azure-openshift --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on Red Hat OpenShift Container Platform (OCP) --### Determine storage class --To determine which storage class to use, run the following command. --```console -kubectl get storageclass -``` --### Create custom deployment profile --Create a new custom deployment profile file based on the `azure-arc-openshift` deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory. --Use the profile `azure-arc-openshift` for OpenShift Container Platform. --```azurecli -az arcdata dc config init --source azure-arc-openshift --path ./custom -``` --### Set storage class --Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use that was determined by running the `kubectl get storageclass` command above. --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>" -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>" --#Example: -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass" -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass" -``` --### Set LoadBalancer (optional) --By default, the `azure-arc-openshift` deployment profile uses `NodePort` as the service type. 
If you are using an OpenShift cluster that is integrated with a load balancer, you can change the configuration to use the `LoadBalancer` service type using the following command: --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" -``` --### Create data controller --Now you are ready to create the data controller using the following command. --> [!NOTE] -> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself. --> [!NOTE] -> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`. --```azurecli -az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure> --#Example: -#az arcdata dc create --path ./custom --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --infrastructure onpremises -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on open source, upstream Kubernetes (kubeadm) --By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below. --If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory. --```azurecli -az arcdata dc config init --source azure-arc-kubeadm --path ./custom -``` --You can look up the available storage classes by running the following command. --```console -kubectl get storageclass -``` --Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use that was determined by running the `kubectl get storageclass` command above. --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>" -az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>" --#Example: -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass" -#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass" -``` --By default, the kubeadm deployment profile uses `NodePort` as the service type. If you are using a Kubernetes cluster that is integrated with a load balancer, you can change the configuration using the following command. --```azurecli -az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" -``` --Now you are ready to create the data controller using the following command. 
--> [!NOTE] -> When deploying to upstream Kubernetes with kubeadm, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`. --```azurecli -az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure> --#Example: -#az arcdata dc create --path ./custom --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --infrastructure onpremises -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on AWS Elastic Kubernetes Service (EKS) --By default, the EKS storage class is `gp2` and the service type is `LoadBalancer`. --Run the following command to create the data controller using the provided EKS deployment profile. --```azurecli -az arcdata dc create --profile-name azure-arc-eks --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-eks --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Create on Google Cloud Kubernetes Engine Service (GKE) --By default, the GKE storage class is `standard` and the service type is `LoadBalancer`. --Run the following command to create the data controller using the provided GKE deployment profile. --```azurecli -az arcdata dc create --profile-name azure-arc-gke --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --#Example: -#az arcdata dc create --profile-name azure-arc-gke --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect -``` --Once you have run the command, continue on to [Monitoring the creation status](#monitor-the-creation-status). --## Monitor the creation status --It takes a few minutes to create the controller completely. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a data controller named `arc-dc` and a Kubernetes namespace named `arc`. If you used different values, update the commands accordingly. --```console -kubectl get datacontroller/arc-dc --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running a command like the one below, which is especially useful for troubleshooting issues. --```console -kubectl describe po/<pod name> --namespace arc --#Example: -#kubectl describe po/control-2g7bl --namespace arc -``` --## Troubleshooting creation problems --If you encounter any issues during creation, see the [troubleshooting guide](troubleshoot-guide.md). |
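While the data controller is being created, it can also help to watch the resource and recent cluster events rather than polling manually. The following commands use only standard kubectl options and assume the `arc` namespace from the examples above.

```console
# Watch the data controller resource until its state reports Ready
kubectl get datacontroller --namespace arc -w

# List recent events (newest last) to spot scheduling or storage problems early
kubectl get events --namespace arc --sort-by=.lastTimestamp
```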
azure-arc | Create Data Controller Using Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md | - Title: Create a data controller using Kubernetes tools -description: Create a data controller using Kubernetes tools ------ Previously updated : 11/03/2021----# Create Azure Arc-enabled data controller using Kubernetes tools --A data controller manages Azure Arc-enabled data services for a Kubernetes cluster. This article describes how to use Kubernetes tools to create a data controller. --Creating the data controller has the following high level steps: --1. Create the namespace and bootstrapper service -1. Create the data controller --> [!NOTE] -> For simplicity, the steps below assume that you are a Kubernetes cluster administrator. For production deployments or more secure environments, it is recommended to follow the security best practices of "least privilege" when deploying the data controller by granting only specific permissions to users and service accounts involved in the deployment process. -> -> See the topic [Operate Arc-enabled data services with least privileges](least-privilege.md) for detailed instructions. ---## Prerequisites --Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information. --To create the data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) --## Create the namespace and bootstrapper service --The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller. --Save a copy of [bootstrapper-unified.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper-unified.yaml), and replace the placeholder `{{NAMESPACE}}` in *all the places* in the file with the desired namespace name, for example: `arc`. --> [!IMPORTANT] -> The bootstrapper-unified.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following: -> - Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md). -> - [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry. -> - Change the image URL for the bootstrapper image in the bootstrap.yaml file. -> - Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret. --Run the following command to create the namespace and bootstrapper service with the edited file. --```console -kubectl apply --namespace arc -f bootstrapper-unified.yaml -``` --Verify that the bootstrapper pod is running using the following command. 
--```console -kubectl get pod --namespace arc -l app=bootstrapper -``` --If the status is not _Running_, run the command a few times until the status is _Running_. --## Create the data controller --Now you are ready to create the data controller itself. --First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings. --### Create the metrics and logs dashboards user names and passwords --At the top of the file, you can specify a user name and password that is used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it with only those that need to have these privileges. --A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. --You can use an online tool to base64 encode your desired username and password or you can use built in CLI tools depending on your platform. --PowerShell --```console -[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) --#Example -#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) --``` --Linux/macOS --```console -echo -n '<your string to encode here>' | base64 --#Example -# echo -n 'example' | base64 -``` --### Create certificates for logs and metrics dashboards --Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify SSL/TLS certificates during Kubernetes native tools deployment](monitor-certificates.md). --### Edit the data controller configuration --Edit the data controller configuration as needed: --**REQUIRED** -- **location**: Change this to be the Azure location where the _metadata_ about the data controller will be stored. Review the [list of available regions](overview.md#supported-regions).-- **resourceGroup**: the Azure resource group where you want to create the data controller Azure resource in Azure Resource Manager. Typically this resource group should already exist, but it is not required until the time that you upload the data to Azure.-- **subscription**: the Azure subscription GUID for the subscription that you want to create the Azure resources in.--**RECOMMENDED TO REVIEW AND POSSIBLY CHANGE DEFAULTS** -- **storage..className**: the storage class to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, you can run the following command: `kubectl get storageclass`. The default is `default` which assumes there is a storage class that exists and is named `default` not that there is a storage class that _is_ the default. 
Note: There are two className settings to be set to the desired storage class - one for data and one for logs.-- **serviceType**: Change the service type to `NodePort` if you are not using a LoadBalancer.-- **Security** For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.--```yml - security: - allowDumps: false - allowNodeMetricsCollection: false - allowPodMetricsCollection: false -``` --**OPTIONAL** -- **name**: The default name of the data controller is `arc`, but you can change it if you want.-- **displayName**: Set this to the same value as the name attribute at the top of the file.-- **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate.-- **metricsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.--The following example shows a completed data controller yaml. ---Save the edited file on your local computer and run the following command to create the data controller: --```console -kubectl create --namespace arc -f <path to your data controller file> --#Example -kubectl create --namespace arc -f data-controller.yaml -``` --## Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --```console -kubectl get datacontroller --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status or logs of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. --```console -kubectl describe pod/<pod name> --namespace arc -kubectl logs <pod name> --namespace arc --#Example: -#kubectl describe pod/control-2g7bl --namespace arc -#kubectl logs control-2g7b1 --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md). --## Related content --- [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md)-- [Create a PostgreSQL server using Kubernetes-native tools](./create-postgresql-server-kubernetes-native-tools.md)- |
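For the storage and service settings described in the data controller configuration above, the following partial sketch shows how those sections are commonly shaped. The paths match the `spec.storage.data.className`, `spec.storage.logs.className`, and `spec.services[*].serviceType` settings used by the `az arcdata dc config replace` examples in the related articles; the storage class name and sizes are placeholders, so verify them against the template file you downloaded.

```yaml
spec:
  services:
    - serviceType: LoadBalancer    # change to NodePort if no load balancer is available
  storage:
    data:
      className: managed-premium   # placeholder - run `kubectl get storageclass` to list options
      size: 15Gi                   # placeholder size
    logs:
      className: managed-premium   # placeholder
      size: 10Gi                   # placeholder size
```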
azure-arc | Create Postgresql Server Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-azure-data-studio.md | - Title: Create Azure Arc-enabled PostgreSQL server using Azure Data Studio -description: Create Azure Arc-enabled PostgreSQL server using Azure Data Studio ------ Previously updated : 07/30/2021----# Create Azure Arc-enabled PostgreSQL server using Azure Data Studio --This document walks you through the steps for using Azure Data Studio to provision Azure Arc-enabled PostgreSQL servers. ----## Preliminary and temporary step for OpenShift users only --Implement this step before moving to the next step. To deploy PostgreSQL server onto Red Hat OpenShift in a project other than the default, you need to execute the following commands against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL server. The security context constraint (SCC) **_arc-data-scc_** is the one you added when you deployed the Azure Arc data controller. --```console -oc adm policy add-scc-to-user arc-data-scc -z <server-name> -n <namespace name> -``` --_**Server-name** is the name of the server you will deploy during the next step._ - -For more details on SCCs in OpenShift, please refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html). -You may now implement the next step. --## Create an Azure Arc-enabled PostgreSQL server --1. Launch Azure Data Studio -1. On the Connections tab, Click on the three dots on the top left and choose "New Deployment" -1. From the deployment options, select **PostgreSQL server - Azure Arc** - >[!NOTE] - > You may be prompted to install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] here if it is not currently installed. -1. Accept the Privacy and license terms and click **Select** at the bottom -1. In the Deploy PostgreSQL server - Azure Arc blade, enter the following information: - - Enter a name for the server - - Enter and confirm a password for the _postgres_ administrator user of the server - - Select the storage class as appropriate for data - - Select the storage class as appropriate for logs - - Select the storage class as appropriate for backups -1. Click the **Deploy** button --This starts the creation of the Azure Arc-enabled PostgreSQL server on the data controller. --In a few minutes, your creation should successfully complete. --### Storage class considerations - -It is important you set the storage class right at the time you deploy a server as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server, create a new server, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - - - to set the storage class for the data, indicate the parameter `--storage-class-data` followed by the name of the storage class. - - to set the storage class for the logs, indicate the parameter `--storage-class-logs` followed by the name of the storage class. - - setting the storage class for the backups has been temporarily removed as we temporarily removed the backup/restore functionalities as we finalize designs and experiences. 
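If you prefer to pass these storage classes explicitly from the command line instead, a hedged example with the Azure CLI `arcdata` extension follows; the server name, storage class names, and namespace are placeholders.

```azurecli
az postgres server-arc create -n pg01 --storage-class-data mydataclass --storage-class-logs mylogsclass --k8s-namespace arc --use-k8s
```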
---## Related content -- [Manage your server using Azure Data Studio](manage-postgresql-server-with-azure-data-studio.md)-- [Monitor your server](monitor-grafana-kibana.md)-- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL server offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL server. --- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities) |
azure-arc | Create Postgresql Server Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-kubernetes-native-tools.md | - Title: Create a PostgreSQL server using Kubernetes tools -description: Create a PostgreSQL server using Kubernetes tools ------ Previously updated : 11/03/2021----# Create a PostgreSQL server using Kubernetes tools ---## Prerequisites --You should have already created a [data controller](plan-azure-arc-data-services.md). --To create a PostgreSQL server using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) --## Overview --To create a PostgreSQL server, you need to create a Kubernetes secret to store your postgres administrator login and password securely and a PostgreSQL server custom resource based on the `postgresqls` custom resource definitions. --## Create a yaml file --You can use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/postgresql.yaml) file as a starting point to create your own custom PostgreSQL server yaml file. Download this file to your local computer and open it in a text editor. It is useful to use a text editor such as [VS Code](https://code.visualstudio.com/download) that support syntax highlighting and linting for yaml files. --**Example yaml file**: --```yaml -apiVersion: v1 -data: - username: <your base64 encoded username> - password: <your base64 encoded password> -kind: Secret -metadata: - name: pg1-login-secret -type: Opaque --apiVersion: arcdata.microsoft.com/v1beta3 -kind: postgresql -metadata: - name: pg1 -spec: - scheduling: - default: - resources: - limits: - cpu: "4" - memory: 4Gi - requests: - cpu: "1" - memory: 2Gi - - primary: - type: LoadBalancer # Modify service type based on your Kubernetes environment - storage: - data: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi - logs: - volumes: - - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment - size: 5Gi -``` --### Customizing the login and password. -A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. You will need to base64 encode an administrator login and password and place them in the placeholder location at `data.password` and `data.username`. Do not include the `<` and `>` symbols provided in the template. --You can use an online tool to base64 encode your desired username and password or you can use built in CLI tools depending on your platform. --PowerShell --```console -[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) --#Example -#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) --``` --Linux/macOS --```console -echo -n '<your string to encode here>' | base64 --#Example -# echo -n 'example' | base64 -``` --### Customizing the name --The template has a value of `pg1` for the name attribute. You can change this value but it must be characters that follow the DNS naming standards. 
If you change the name, change the name of the secret to match. For example, if you change the name of the PostgreSQL server to `pg2`, you must change the name of the secret from `pg1-login-secret` to `pg2-login-secret` ---### Customizing the resource requirements --You can change the resource requirements - the RAM and core limits and requests - as needed. --> [!NOTE] -> You can learn more about [Kubernetes resource governance](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes). --Requirements for resource limits and requests: -- The cores limit value is **required** for billing purposes.-- The rest of the resource requests and limits are optional.-- The cores limit and request must be a positive integer value, if specified.-- The minimum of one core is required for the cores request, if specified.-- The memory value format follows the Kubernetes notation. --### Customizing service type --The service type can be changed to NodePort if desired. A random port number will be assigned. --### Customizing storage --You can customize the storage classes for storage to match your environment. If you are not sure which storage classes are available, run the command `kubectl get storageclass` to view them. The template has a default value of `default`. This value means that there is a storage class _named_ `default` not that there is a storage class that _is_ the default. You can also optionally change the size of your storage. You can read more about [storage configuration](./storage-configuration.md). --## Creating the PostgreSQL server --Now that you have customized the PostgreSQL server yaml file, you can create the PostgreSQL server by running the following command: --```console -kubectl create -n <your target namespace> -f <path to your yaml file> --#Example -#kubectl create -n arc -f C:\arc-data-services\postgres.yaml -``` ---## Monitoring the creation status --Creating the PostgreSQL server will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a PostgreSQL server named `pg1` and Kubernetes namespace with the name `arc`. If you used a different namespace/PostgreSQL server name, you can replace `arc` and `pg1` with your names. --```console -kubectl get postgresqls/pg1 --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod by running `kubectl describe` command. The `describe` command is especially useful for troubleshooting any issues. For example: --```console -kubectl describe pod/<pod name> --namespace arc --#Example: -#kubectl describe pod/pg1-0 --namespace arc -``` --## Troubleshooting creation problems --If you encounter any troubles with creation, see the [troubleshooting guide](troubleshoot-guide.md). |
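As an alternative to base64 encoding the login values by hand as described above, you can have kubectl generate the secret and remove the `Secret` section from the template yaml. A sketch, assuming the default secret name `pg1-login-secret` from the template and the `arc` namespace:

```console
kubectl create secret generic pg1-login-secret --namespace arc --from-literal=username='<your admin username>' --from-literal=password='<your secure password>'
```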
azure-arc | Create Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server.md | - Title: Create an Azure Arc-enabled PostgreSQL server from CLI -description: Create an Azure Arc-enabled PostgreSQL server from CLI ------- Previously updated : 11/03/2021----# Create an Azure Arc-enabled PostgreSQL server from CLI --This document describes the steps to create a PostgreSQL server on Azure Arc and to connect to it. ----## Getting started -If you are already familiar with the topics below, you may skip this paragraph. -There are important topics you may want read before you proceed with creation: -- [Overview of Azure Arc-enabled data services](overview.md)-- [Connectivity modes and requirements](connectivity.md)-- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)--If you prefer to try out things without provisioning a full environment yourself, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. ---## Preliminary and temporary step for OpenShift users only -Implement this step before moving to the next step. To deploy PostgreSQL server onto Red Hat OpenShift in a project other than the default, you need to execute the following commands against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL server. The security context constraint (SCC) arc-data-scc is the one you added when you deployed the Azure Arc data controller. --```Console -oc adm policy add-scc-to-user arc-data-scc -z <server-name> -n <namespace-name> -``` --**Server-name is the name of the server you will create during the next step.** --For more details on SCCs in OpenShift, refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html). Proceed to the next step. ---## Create an Azure Arc-enabled PostgreSQL server --To create an Azure Arc-enabled PostgreSQL server on your Arc data controller, you will use the command `az postgres server-arc create` to which you will pass several parameters. --For details about all the parameters you can set at the creation time, review the output of the command: -```azurecli -az postgres server-arc create --help -``` --The main parameters should consider are: -- **the name of the server** you want to deploy. Indicate either `--name` or `-n` followed by a name whose length must not exceed 11 characters.--- **The storage classes** you want your server to use. It is important you set the storage class right at the time you deploy a server as this setting cannot be changed after you deploy. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.- - To set the storage class for the backups, indicate the parameter `--storage-class-backups` followed by the name of the storage class. Excluding this parameter disables automated backups - - To set the storage class for the data, indicate the parameter `--storage-class-data` followed by the name of the storage class. 
- - To set the storage class for the logs, indicate the parameter `--storage-class-logs` followed by the name of the storage class. -- > [!IMPORTANT] - > If you need to change the storage class after deployment, extract the data, delete your server, create a new server, and import the data. --When you execute the create command, you will be prompted to enter the username and password for the administrative user. You may skip the interactive prompt by setting the `AZDATA_USERNAME` and `AZDATA_PASSWORD` session environment variables before you run the create command. --### Examples --**To deploy a PostgreSQL server named postgres01 that uses the same storage classes as the data controller, run the following command**: --```azurecli -az postgres server-arc create -n postgres01 --k8s-namespace <namespace> --use-k8s -``` --> [!NOTE] -> - If you deployed the data controller using `AZDATA_USERNAME` and `AZDATA_PASSWORD` session environment variables in the same terminal session, then the values for `AZDATA_PASSWORD` will be used to deploy the PostgreSQL server too. If you prefer to use another password, either (1) update the values for `AZDATA_USERNAME` and `AZDATA_PASSWORD` or (2) delete the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables or (3) delete their values to be prompted to enter a username and password interactively when you create a server. -> - Creating a PostgreSQL server will not immediately register resources in Azure. As part of the process of uploading [resource inventory](upload-metrics-and-logs-to-azure-monitor.md) or [usage data](view-billing-data-in-azure.md) to Azure, the resources will be created in Azure and you will be able to see your resources in the Azure portal. ---## List the PostgreSQL servers deployed in your Arc data controller --To list the PostgreSQL servers deployed in your Arc data controller, run the following command: --```azurecli -az postgres server-arc list --k8s-namespace <namespace> --use-k8s -``` ---```output - { - "name": "postgres01", - "state": "Ready" - } -``` --## Get the endpoints to connect to your Azure Arc-enabled PostgreSQL servers --To view the endpoints for a PostgreSQL server, run the following command: --```azurecli -az postgres server-arc endpoint list -n <server name> --k8s-namespace <namespace> --use-k8s -``` -For example: -```console -{ - "instances": [ - { - "endpoints": [ - { - "description": "PostgreSQL Instance", - "endpoint": "postgresql://postgres:<replace with password>@123.456.78.912:5432" - }, - { - "description": "Log Search Dashboard", - }, - { - "description": "Metrics Dashboard", - "endpoint": "https://98.765.432.11:3000/d/postgres-metrics?var-Namespace=arc&var-Name=postgres01" - } - ], - "engine": "PostgreSql", - "name": "postgres01" - } - ], - "namespace": "arc" -} -``` --You can use the PostgreSQL Instance endpoint to connect to the PostgreSQL server from your favorite tool: [Azure Data Studio](/azure-data-studio/download-azure-data-studio), [pgcli](https://www.pgcli.com/) psql, pgAdmin, etc. -- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --## Special note about Azure virtual machine deployments --When you are using an Azure virtual machine, then the endpoint IP address will not show the _public_ IP address. To locate the public IP address, use the following command: -```azurecli -az network public-ip list -g azurearcvm-rg --query "[].{PublicIP:ipAddress}" -o table -``` -You can then combine the public IP address with the port to make your connection. 
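For example, combining the public IP address with the port from the endpoint (30655 is the illustrative port used later in this article):

```console
psql postgresql://postgres:<your password>@<public IP address>:30655
```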
--You may also need to expose the port of the PostgreSQL server through the network security group (NSG). To allow traffic through the NSG, set a rule. To set a rule, you need to know the name of your NSG, which you can determine with the following command: --```azurecli -az network nsg list -g azurearcvm-rg --query "[].{NSGName:name}" -o table -``` --Once you have the name of the NSG, you can add a firewall rule using the following command. The example values here create an NSG rule for port 30655 and allow connection from **any** source IP address. --> [!WARNING] -> We do not recommend setting a rule to allow connection from any source IP address. You can lock things down better by specifying a `--source-address-prefixes` value that is specific to your client IP address or an IP address range that covers your team's or organization's IP addresses. --Replace the value of the `--destination-port-ranges` parameter below with the port number you got from the `az postgres server-arc endpoint list` command above. --```azurecli -az network nsg rule create -n db_port --destination-port-ranges 30655 --source-address-prefixes '*' --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow port through for db access' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*' -``` --## Connect with Azure Data Studio --Open Azure Data Studio and connect to your instance with the external endpoint IP address and port number above, and the password you specified at the time you created the instance. If PostgreSQL isn't available in the *Connection type* dropdown, you can install the PostgreSQL extension by searching for PostgreSQL in the extensions tab. --> [!NOTE] -> You will need to select the [Advanced] button in the connection panel to enter the port number. --Remember, if you are using an Azure VM, you will need the _public_ IP address, which is accessible via the following command: --```azurecli -az network public-ip list -g azurearcvm-rg --query "[].{PublicIP:ipAddress}" -o table -``` --## Connect with psql --To access your PostgreSQL server, use the external endpoint that you retrieved above. You can now connect with psql: --```console -psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655 -``` --## Related content --- Connect to your Azure Arc-enabled PostgreSQL server: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgresql-server.md)-- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL server offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL server. --- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)-- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities) |
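Following the warning above, here is a hedged variant of the same NSG rule that scopes access to a single address range instead of `'*'`; the prefix shown is an illustrative documentation range.

```azurecli
az network nsg rule create -n db_port --destination-port-ranges 30655 --source-address-prefixes 203.0.113.0/24 --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow PostgreSQL port from a known address range' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*'
```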
azure-arc | Create Sql Managed Instance Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md | - Title: Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio -description: Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio ------ Previously updated : 06/16/2021----# Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio --This document demonstrates how to install Azure SQL Managed Instance - Azure Arc using Azure Data Studio. ---## Steps --1. Launch Azure Data Studio -2. On the Connections tab, select on the three dots on the top left and choose **New Deployment...**. -3. From the deployment options, select **Azure SQL managed instance**. - > [!NOTE] - > You may be prompted to install the appropriate CLI here if it is not currently installed. - -4. Select **Select**. -- Azure Data Studio opens **Azure SQL managed instance**. --5. For **Resource Type**, choose **Azure SQL managed instance - Azure Arc**. -6. Accept the privacy statement and license terms -1. Review the required tools. Follow instructions to update tools before you proceed. -1. Select **Next**. -- Azure Data Studio allows you to set your specifications for the managed instance. The following table describes the fields: -- |Setting | Description | Required or optional - |-|-|-| - |**Target Azure Controller** | Name of the Azure Arc data controller. | Required | - |**Instance name** | Managed instance name. | Required | - |**Username** | System administrator user name. | Required | - |**System administrator password** | SQL authentication password for the managed instance. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters.<br/></br> Confirm the password. | Required | - |**Service tier** | Specify the appropriate service tier: Business Critical or General Purpose. | Required | - |**I already have a SQL Server License** | Select if this managed instance will use a license from your organization. | Optional | - |**Storage Class (Data)** | Select from the list. | Required | - |**Volume Size in Gi (Data)** | The amount of space in gibibytes to allocate for data. | Required | - |**Storage Class (Database logs)** | Select from the list. | Required | - |**Volume Size in Gi (Database logs)** | The amount of space in gibibytes to allocate for database transaction logs. | Required | - |**Storage Class (Logs)** | Select from the list. | Required | - |**Volume Size in Gi (Logs)** | The amount of space in gibibytes to allocate for logs. | Required | - |**Storage Class (Backups)** | Select from the list. Specify a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If this storage class isn't RWX capable, the deployment may not succeed. | Required | - |**Volume Size in Gi (Backups)** | The size of the storage volume to be used for database backups in gibibytes. | Required | - |**Cores Request** | The number of cores to request for the managed instance. Integer. | Optional | - |**Cores Limit** | The request for the capacity for the managed instance in gigabytes. Integer. | Optional | - |**Memory Request** | Select from the list. 
| Required | - |**Point in time retention (days)** | The number of days to keep your point in time backups. | Optional | -- After you've set all of the required values, Azure Data Studio enables the **Deploy** button. If this control is disabled, verify that you have all required settings configured. --1. Select the **Deploy** button to create the managed instance. --After you select the deploy button, the Azure Arc data controller initiates the deployment. The deployment process takes a few minutes to create the managed instance. --## Connect from Azure Data Studio --View all the SQL Managed Instances provisioned to this data controller with the following command: -- ```azurecli - az sql mi-arc list --k8s-namespace <namespace> --use-k8s - ``` -- The output should look like the following. Copy the ServerEndpoint (including the port number) from the output. -- ```console - Name Replicas ServerEndpoint State - - -- - - sqlinstance1 1/1 25.51.65.109:1433 Ready - ``` --1. In Azure Data Studio, on the **Connections** tab, select **New Connection** in the **Servers** view -1. Under **Connection**>**Server**, paste the ServerEndpoint -1. Select **SQL Login** as the Authentication type -1. Enter the system administrator user name you specified during deployment -1. Enter the password for the system administrator account -1. Optionally, enter the specific database name to connect to -1. Optionally, select or add a new server group as appropriate -1. Select **Connect** to connect to the Azure SQL Managed Instance - Azure Arc --## Related information --Now try to [monitor your SQL instance](monitor-grafana-kibana.md) |
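If you want a quick command-line check before connecting from Azure Data Studio, a hedged sqlcmd example using the ServerEndpoint from the output above follows; the endpoint, user name, and password are illustrative placeholders.

```console
sqlcmd -S 25.51.65.109,1433 -U <admin user name> -P '<password>' -Q "SELECT @@VERSION"
```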
azure-arc | Create Sql Managed Instance Using Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md | - Title: Deploy a new SQL Managed Instance enabled by Azure Arc using Kubernetes tools -description: Describes how to use Kubernetes tools to deploy SQL Managed Instance enabled by Azure Arc. ------ Previously updated : 02/28/2022----# Deploy SQL Managed Instance enabled by Azure Arc using Kubernetes tools --This article demonstrates how to deploy Azure SQL Managed Instance for Azure Arc with Kubernetes tools. --## Prerequisites --You should have already created a [data controller](plan-azure-arc-data-services.md). --To create a SQL managed instance using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json. --[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) --## Overview --To create a SQL Managed Instance, you need to: -1. Create a Kubernetes secret to store your system administrator login and password securely -1. Create a SQL Managed Instance custom resource based on the `SqlManagedInstance` custom resource definition --Define both of these items in a yaml file. --## Create a yaml file --Use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/sqlmi.yaml) file as a starting point to create your own custom SQL managed instance yaml file. Download this file to your local computer and open it in a text editor. Use a text editor such as [VS Code](https://code.visualstudio.com/download) that support syntax highlighting and linting for yaml files. --> [!NOTE] -> Beginning with the February, 2022 release, `ReadWriteMany` (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). -> If no storage class is specified for backups, the default storage class in Kubernetes is used. If the default is not RWX capable, the SQL Managed Instance installation may not succeed. --### Example yaml file --See the following example of a yaml file: ---### Customizing the login and password --A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. You will need to base64 encode a system administrator login and password and place them in the placeholder location at `data.password` and `data.username`. Do not include the `<` and `>` symbols provided in the template. --> [!NOTE] -> For optimum security, using the value `sa` is not allowed for the login . -> Follow the [password complexity policy](/sql/relational-databases/security/password-policy#password-complexity). --You can use an online tool to base64 encode your desired username and password or you can use CLI tools depending on your platform. 
--PowerShell --```console -[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) --#Example -#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) -``` --Linux/macOS --```console -echo -n '<your string to encode here>' | base64 --#Example -# echo -n 'example' | base64 -``` --### Customizing the name --The template has a value of `sql1` for the name attribute. You can change this value, but it must include characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the SQL managed instance to `sql2`, you must change the name of the secret from `sql1-login-secret` to `sql2-login-secret` --### Customizing the resource requirements --You can change the resource requirements - the RAM and core limits and requests - as needed. --> [!NOTE] -> You can learn more about [Kubernetes resource governance](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes). --Requirements for resource limits and requests: -- The cores limit value is **required** for billing purposes.-- The rest of the resource requests and limits are optional.-- The cores limit and request must be a positive integer value, if specified.-- The minimum of 1 core is required for the cores request, if specified.-- The memory value format follows the Kubernetes notation. -- A minimum of 2 GB is required for memory request, if specified.-- As a general guideline, you should have 4 GB of RAM for each 1 core for production use cases.--### Customizing service type --The service type can be changed to NodePort if desired. A random port number will be assigned. --### Customizing storage --You can customize the storage classes for storage to match your environment. If you are not sure which storage classes are available, run the command `kubectl get storageclass` to view them. --The template has a default value of `default`. --For example --```yml -storage: - data: - volumes: - - className: default -``` --This example means that there is a storage class named `default` - not that there is a storage class that is the default. You can also optionally change the size of your storage. For more information, see [storage configuration](./storage-configuration.md). --## Creating the SQL managed instance --Now that you have customized the SQL managed instance yaml file, you can create the SQL managed instance by running the following command: --```console -kubectl create -n <your target namespace> -f <path to your yaml file> --#Example -#kubectl create -n arc -f C:\arc-data-services\sqlmi.yaml -``` --## Monitoring the creation status --Creating the SQL managed instance will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --> [!NOTE] -> The example commands below assume that you created a SQL managed instance named `sql1` and Kubernetes namespace with the name `arc`. If you used a different namespace/SQL managed instance name, you can replace `arc` and `sqlmi` with your names. --```console -kubectl get sqlmi/sql1 --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status of any particular pod. Run `kubectl describe pod ...`. Use this command to troubleshoot any issues. 
For example: --```console -kubectl describe pod/<pod name> --namespace arc --#Example: -#kubectl describe pod/sql1-0 --namespace arc -``` --## Troubleshoot deployment problems --If you encounter any troubles with the deployment, please see the [troubleshooting guide](troubleshoot-guide.md). --## Related content --[Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md) |
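Because the note above calls out the ReadWriteMany (RWX) requirement for backups, it can help to see roughly where that setting lives. The following partial sketch mirrors the volume layout of the PostgreSQL template shown earlier in this set of articles; treat the field names as assumptions and verify them against the sqlmi.yaml template you downloaded. The class name is a placeholder.

```yaml
  storage:
    backups:
      volumes:
        - className: azurefile-rwx   # placeholder - must be an RWX-capable storage class
          size: 5Gi                  # placeholder size
```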
azure-arc | Create Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md | - Title: Create a SQL Managed Instance enabled by Azure Arc -description: Deploy SQL Managed Instance enabled by Azure Arc ------- Previously updated : 07/30/2021----# Deploy a SQL Managed Instance enabled by Azure Arc ---To view available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command: --```azurecli -az sql mi-arc create --help -``` --To create a SQL Managed Instance enabled by Azure Arc, use `az sql mi-arc create`. See the following examples for different connectivity modes: --> [!NOTE] -> A ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). --If no storage class is specified for backups, the default storage class in Kubernetes is used, and if it is not RWX capable, the SQL Managed Instance enabled by Azure Arc installation may not succeed. --### [Directly connected mode](#tab/directly-connected-mode) --```azurecli -az sql mi-arc create --name <name> --resource-group <group> --subscription <subscription> --custom-location <custom-location> --storage-class-backups <RWX capable storageclass> -``` --Example: --```azurecli -az sql mi-arc create --name sqldemo --resource-group rg --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --storage-class-backups mybackups -``` ---### [Indirectly connected mode](#tab/indirectly-connected-mode) --```azurecli -az sql mi-arc create -n <instanceName> --storage-class-backups <RWX capable storageclass> --k8s-namespace <namespace> --use-k8s -``` --Example: --```azurecli -az sql mi-arc create -n sqldemo --storage-class-backups mybackups --k8s-namespace my-namespace --use-k8s -``` -----> [!NOTE] -> Names must be less than 60 characters in length and conform to [DNS naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#rfc-1035-label-names). -> When specifying memory allocation and vCore allocation, use this formula to ensure your performance is acceptable: for each 1 vCore, you should have at least 4 GB of RAM available on the Kubernetes node where the SQL Managed Instance enabled by Azure Arc pod will run. -> If you want to automate the creation of SQL Managed Instance enabled by Azure Arc and avoid the interactive prompt for the admin password, you can set the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables to the desired username and password prior to running the `az sql mi-arc create` command. -> If you created the data controller using AZDATA_USERNAME and AZDATA_PASSWORD in the same terminal session, then the values for AZDATA_USERNAME and AZDATA_PASSWORD will be used to create the SQL Managed Instance enabled by Azure Arc too. -> [!NOTE] -> If you are using the indirect connectivity mode, creating SQL Managed Instance enabled by Azure Arc in Kubernetes will not automatically register the resources in Azure. 
Steps to register the resource are in the following articles: -> - [Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md) -> ---## View instance on Azure Arc --To view the instance, use the following command: --```azurecli -az sql mi-arc list --k8s-namespace <namespace> --use-k8s -``` --You can copy the external IP and port number from here and connect to SQL Managed Instance enabled by Azure Arc using your favorite tool for connecting to eg. SQL Server or Azure SQL Managed Instance such as Azure Data Studio or SQL Server Management Studio. ---## Related content -- [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md)-- [Register your instance with Azure and upload metrics and logs about your instance](upload-metrics-and-logs-to-azure-monitor.md)-- [Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio](create-sql-managed-instance-azure-data-studio.md)- |
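To avoid the interactive password prompt mentioned in the notes above, you can export the environment variables in the same session before running the create command. A sketch for a bash shell; the values reuse the indirectly connected example from this article and are placeholders.

```console
export AZDATA_USERNAME='arcadmin'
export AZDATA_PASSWORD='<a strong password>'
az sql mi-arc create -n sqldemo --storage-class-backups mybackups --k8s-namespace my-namespace --use-k8s
```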
azure-arc | Delete Azure Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-azure-resources.md | - Title: Delete resources from Azure Arc-enabled data services -description: Describes how to delete resources from Azure Arc-enabled data services ------- Previously updated : 07/19/2023----# Delete resources from Azure Arc-enabled data services --This article describes how to delete Azure Arc-enabled data service resources from Azure. --> [!WARNING] -> When you delete resources as described in this article, these actions are irreversible. --The information in this article applies to resources in Azure Arc-enabled data services. To delete resources in Azure, review the information at [Azure Resource Manager resource group and resource deletion](../../azure-resource-manager/management/delete-resource-group.md). --## Before --Before you delete a resource such as Azure Arc SQL managed instance or Azure Arc data controller, you need to export and upload the usage information to Azure for accurate billing calculation by following the instructions described in [Upload billing data to Azure - Indirectly connected mode](view-billing-data-in-azure.md#upload-billing-data-to-azureindirectly-connected-mode). --## Direct connectivity mode --When a cluster is connected to Azure with direct connectivity mode, use the Azure portal to manage the resources. Use the portal for all create, read, update, & delete (CRUD) operations for data controller, managed instances, and PostgreSQL servers. --From Azure portal: -1. Browse to the resource group and delete the Azure Arc data controller -2. Select the Azure Arc-enabled Kubernetes cluster, go to the Overview page - - Select **Extensions** under Settings - - In the Extensions page, select the Azure Arc data services extension (of type microsoft.arcdataservices) and click on **Uninstall** -3. Optionally delete the Custom Location that the Azure Arc data controller is deployed to. -4. Optionally, you can also delete the namespace on your Kubernetes cluster if there are no other resources created in the namespace. --See [Manage Azure resources by using the Azure portal](../../azure-resource-manager/management/manage-resources-portal.md). --## Indirect connectivity mode --In indirect connect mode, deleting an instance from Kubernetes will not remove it from Azure and deleting an instance from Azure will not remove it from Kubernetes. For indirect connect mode, deleting a resource is a two step process and this will be improved in the future. Kubernetes will be the source of truth and the portal will be updated to reflect it. --In some cases, you may need to manually delete Azure Arc-enabled data services resources in Azure. You can delete these resources using any of the following options. 
--- [Delete an entire resource group](#delete-an-entire-resource-group)-- [Delete specific resources in the resource group](#delete-specific-resources-in-the-resource-group)-- [Delete resources using the Azure CLI](#delete-resources-using-the-azure-cli)- - [Delete SQL managed instance resources using the Azure CLI](#delete-sql-managed-instance-resources-using-the-azure-cli) - - [Delete PostgreSQL server resources using the Azure CLI](#delete-postgresql-server-resources-using-the-azure-cli) - - [Delete Azure Arc data controller resources using the Azure CLI](#delete-azure-arc-data-controller-resources-using-the-azure-cli) - - [Delete a resource group using the Azure CLI](#delete-a-resource-group-using-the-azure-cli) ---## Delete an entire resource group --If you have been using a specific and dedicated resource group for Azure Arc-enabled data services and you want to delete *everything* inside of the resource group you can delete the resource group which will delete everything inside of it. --You can delete a resource group in the Azure portal by doing the following: --- Browse to the resource group in the Azure portal where the Azure Arc-enabled data services resources have been created.-- Click the **Delete resource group** button.-- Confirm the deletion by entering the resource group name and click **Delete**.--## Delete specific resources in the resource group --You can delete specific Azure Arc-enabled data services resources in a resource group in the Azure portal by doing the following: --- Browse to the resource group in the Azure portal where the Azure Arc-enabled data services resources have been created.-- Select all the resources to be deleted.-- Click on the Delete button.-- Confirm the deletion by typing 'yes' and click **Delete**.--## Delete resources using the Azure CLI --You can delete specific Azure Arc-enabled data services resources using the Azure CLI. --### Delete SQL managed instance resources using the Azure CLI --To delete SQL managed instance resources from Azure using the Azure CLI replace the placeholder values in the command below and run it. --```azurecli -az resource delete --name <sql instance name> --resource-type Microsoft.AzureArcData/sqlManagedInstances --resource-group <resource group name> --#Example -#az resource delete --name sql1 --resource-type Microsoft.AzureArcData/sqlManagedInstances --resource-group rg1 -``` --### Delete PostgreSQL server resources using the Azure CLI --To delete a PostgreSQL server resource from Azure using the Azure CLI replace the placeholder values in the command below and run it. --```azurecli -az resource delete --name <postgresql instance name> --resource-type Microsoft.AzureArcData/postgresInstances --resource-group <resource group name> --#Example -#az resource delete --name pg1 --resource-type Microsoft.AzureArcData/postgresInstances --resource-group rg1 -``` --### Delete Azure Arc data controller resources using the Azure CLI --> [!NOTE] -> Before deleting an Azure Arc data controller, you should delete all of the database instance resources that it is managing. --To delete an Azure Arc data controller from Azure using the Azure CLI replace the placeholder values in the command below and run it. 
--```azurecli -az resource delete --name <data controller name> --resource-type Microsoft.AzureArcData/dataControllers --resource-group <resource group name> --#Example -#az resource delete --name dc1 --resource-type Microsoft.AzureArcData/dataControllers --resource-group rg1 -``` --### Delete a resource group using the Azure CLI --You can also use the Azure CLI to [delete a resource group](../../azure-resource-manager/management/delete-resource-group.md). |
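For convenience, the individual `az resource delete` commands above can be combined into one cleanup pass. The following is a minimal sketch that reuses the hypothetical names from the examples (`sql1`, `pg1`, `dc1`, `rg1`); it removes the database instances before the data controller and then, optionally, removes the dedicated resource group once it's empty.

```azurecli
# Remove database instances first, then the data controller (example names from above).
az resource delete --name sql1 --resource-type Microsoft.AzureArcData/sqlManagedInstances --resource-group rg1
az resource delete --name pg1 --resource-type Microsoft.AzureArcData/postgresInstances --resource-group rg1
az resource delete --name dc1 --resource-type Microsoft.AzureArcData/dataControllers --resource-group rg1

# Optionally delete the dedicated resource group once it no longer contains other resources.
az group delete --name rg1 --yes --no-wait
```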
azure-arc | Delete Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md | - Title: Delete a SQL Managed Instance enabled by Azure Arc -description: Learn how to delete a SQL Managed Instance enabled by Azure Arc and optionally, reclaim associated Kubernetes persistent volume claims (PVCs). ------- Previously updated : 07/30/2021----# Delete a SQL Managed Instance enabled by Azure Arc --In this how-to guide, you'll find and then delete a SQL Managed Instance enabled by Azure Arc. Optionally, after deleting managed instances, you can reclaim associated Kubernetes persistent volume claims (PVCs). --1. Find existing instances: -- ```azurecli - az sql mi-arc list --k8s-namespace <namespace> --use-k8s - ``` -- Example output: -- ```console - Name Replicas ServerEndpoint State - - - - - demo-mi 1/1 10.240.0.4:32023 Ready - ``` --1. Delete the SQL Managed Instance, run one of the commands appropriate for your deployment type: -- 1. **Indirectly connected mode**: -- ```azurecli - az sql mi-arc delete --name <instance_name> --k8s-namespace <namespace> --use-k8s - ``` -- Example output: -- ```azurecli - # az sql mi-arc delete --name demo-mi --k8s-namespace <namespace> --use-k8s - Deleted demo-mi from namespace arc - ``` -- 1. **Directly connected mode**: -- ```azurecli - az sql mi-arc delete --name <instance_name> --resource-group <resource_group> - ``` -- Example output: -- ```azurecli - # az sql mi-arc delete --name demo-mi --resource-group my-rg - Deleted demo-mi from namespace arc - ``` --## Optional - Reclaim Kubernetes PVCs --A Persistent Volume Claim (PVC) is a request for storage by a user from a Kubernetes cluster while creating and adding storage to a SQL Managed Instance. Deleting PVCs is recommended but it isn't mandatory. However, if you don't reclaim these PVCs, you'll eventually end up with errors in your Kubernetes cluster. For example, you might be unable to create, read, update, or delete resources from the Kubernetes API. You might not be able to run commands like `az arcdata dc export` because the controller pods were evicted from the Kubernetes nodes due to storage issues (normal Kubernetes behavior). You can see messages in the logs similar to: --- Annotations: microsoft.com/ignore-pod-health: true -- Status: Failed -- Reason: Evicted -- Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0.--By design, deleting a SQL Managed Instance doesn't remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). The intention is to ensure that you can access the database files in case the deletion was accidental. --1. To reclaim the PVCs, take the following steps: - 1. Find the PVCs for the server group you deleted. -- ```console - kubectl get pvc - ``` -- In the example below, notice the PVCs for the SQL Managed Instances you deleted. -- ```console - # kubectl get pvc -n arc -- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - data-demo-mi-0 Bound pvc-1030df34-4b0d-4148-8986-4e4c20660cc4 5Gi RWO managed-premium 13h - logs-demo-mi-0 Bound pvc-11836e5e-63e5-4620-a6ba-d74f7a916db4 5Gi RWO managed-premium 13h - ``` -- 1. Delete the data and log PVCs for each of the SQL Managed Instances you deleted. 
- The general format of this command is: -- ```console - kubectl delete pvc <name of pvc> - ``` -- For example: -- ```console - kubectl delete pvc data-demo-mi-0 -n arc - kubectl delete pvc logs-demo-mi-0 -n arc - ``` -- Each of these kubectl commands will confirm the successful deleting of the PVC. For example: -- ```console - persistentvolumeclaim "data-demo-mi-0" deleted - persistentvolumeclaim "logs-demo-mi-0" deleted - ``` - -## Related content --Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
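As a quick recap of the PVC cleanup above, here is a minimal sketch that assumes the `arc` namespace and the `data-<instance>-<n>` / `logs-<instance>-<n>` PVC naming shown in the example output; adjust the names to whatever `kubectl get pvc` actually returns in your cluster.

```console
# Inspect what the deleted instance left behind, then remove its data and logs PVCs.
kubectl get pvc -n arc
kubectl delete pvc data-demo-mi-0 logs-demo-mi-0 -n arc
```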
azure-arc | Delete Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-postgresql-server.md | - Title: Delete an Azure Arc-enabled PostgreSQL server -description: Delete an Azure Arc-enabled Postgres Hyperscale server group ------- Previously updated : 07/30/2021----# Delete an Azure Arc-enabled PostgreSQL server --This document describes the steps to delete a server from your Azure Arc setup. ---## Delete the server --As an example, let's consider we want to delete the _postgres01_ instance from the below setup: --```azurecli -az postgres server-arc list --k8s-namespace <namespace> --use-k8s -Name State -- --postgres01 Ready -``` --The general format of the delete command is: -```azurecli -az postgres server-arc delete -n <server name> --k8s-namespace <namespace> --use-k8s -``` -When you execute this command, you will be requested to confirm the deletion of the server. If you are using scripts to automate deletions you will need to use the --force parameter to bypass the confirmation request. For example, you would run a command like: -```azurecli -az postgres server-arc delete -n <server name> --force --k8s-namespace <namespace> --use-k8s -``` --For more details about the delete command, run: -```azurecli -az postgres server-arc delete --help -``` --### Delete the server used in this example --```azurecli -az postgres server-arc delete -n postgres01 --k8s-namespace <namespace> --use-k8s -``` --## Reclaim the Kubernetes Persistent Volume Claims (PVCs) --A PersistentVolumeClaim (PVC) is a request for storage by a user from Kubernetes cluster while creating and adding storage to a PostgreSQL server. Deleting a server group does not remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). This is by design. The intention is to help the user to access the database files in case the deletion of instance was accidental. Deleting PVCs is not mandatory. However it is recommended. If you don't reclaim these PVCs, you'll eventually end up with errors as your Kubernetes cluster will think it's running out of disk space or usage of the same PostgreSQL server name while creating new one might cause inconsistencies. -To reclaim the PVCs, take the following steps: --### 1. List the PVCs for the server group you deleted --To list the PVCs, run this command: --```console -kubectl get pvc [-n <namespace name>] -``` --It returns the list of PVCs, in particular the PVCs for the server group you deleted. 
For example: --```output -kubectl get pvc -NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -data-few7hh0k4npx9phsiobdc3hq-postgres01-0 Bound pvc-72ccc225-dad0-4dee-8eae-ed352be847aa 5Gi RWO default 2d18h -data-few7hh0k4npx9phsiobdc3hq-postgres01-1 Bound pvc-ce6f0c51-faed-45ae-9472-8cdf390deb0d 5Gi RWO default 2d18h -data-few7hh0k4npx9phsiobdc3hq-postgres01-2 Bound pvc-5a863ab9-522a-45f3-889b-8084c48c32f8 5Gi RWO default 2d18h -data-few7hh0k4npx9phsiobdc3hq-postgres01-3 Bound pvc-00e1ace3-1452-434f-8445-767ec39c23f2 5Gi RWO default 2d15h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-0 Bound pvc-8b810f4c-d72a-474a-a5d7-64ec26fa32de 5Gi RWO default 2d18h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-1 Bound pvc-51d1e91b-08a9-4b6b-858d-38e8e06e60f9 5Gi RWO default 2d18h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-2 Bound pvc-8e5ad55e-300d-4353-92d8-2e383b3fe96e 5Gi RWO default 2d18h -logs-few7hh0k4npx9phsiobdc3hq-postgres01-3 Bound pvc-f9e4cb98-c943-45b0-aa07-dd5cff7ea585 5Gi RWO default 2d15h -``` -There are 8 PVCs for this server group. --### 2. Delete each of the PVCs --Delete the data and log PVCs for the PostgreSQL server you deleted. --The general format of this command is: --```console -kubectl delete pvc <name of pvc> [-n <namespace name>] -``` --For example: --```console -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-0 -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-1 -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-2 -kubectl delete pvc data-few7hh0k4npx9phsiobdc3hq-postgres01-3 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-0 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-1 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-2 -kubectl delete pvc logs-few7hh0k4npx9phsiobdc3hq-postgres01-3 -``` --Each of these kubectl commands will confirm the successful deleting of the PVC. For example: --```output -persistentvolumeclaim "data-postgres01-0" deleted -``` - -->[!NOTE] -> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to create, read, update, delete resources from the Kubernetes API, or being able to run commands like `az arcdata dc export` as the controller pods may be evicted from the Kubernetes nodes because of this storage issue (normal Kubernetes behavior). -> -> For example, you may see messages in the logs similar to: -> ```output -> Annotations: microsoft.com/ignore-pod-health: true -> Status: Failed -> Reason: Evicted -> Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0. -> ``` - -## Next step -Create [Azure Arc-enabled PostgreSQL server](create-postgresql-server.md) |
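If the server group had several replicas, deleting the PVCs one at a time is tedious. The following is a minimal sketch that filters the PVC list by the deleted server name (`postgres01` in the example above) and deletes the matches in one pass; it assumes the PVC names contain the server name, as the sample output shows. Double-check the filtered list before running the delete.

```console
# List PVC resource names, keep only those that belong to the deleted server, and delete them.
kubectl get pvc -n <namespace> -o name | grep postgres01 | xargs kubectl delete -n <namespace>
```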
azure-arc | Deploy Active Directory Connector Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-cli.md | - Title: Tutorial – Deploy Active Directory connector using Azure CLI -description: Tutorial to deploy an Active Directory connector using Azure CLI ------- Previously updated : 10/11/2022-----# Tutorial – Deploy Active Directory connector using Azure CLI --This article explains how to deploy an Active Directory (AD) connector using Azure CLI. The AD connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --## Prerequisites --### Install tools --Before you can proceed with the tasks in this article, install the following tools: --- The [Azure CLI (az)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--For details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md) ---## Deploy Active Directory connector in customer-managed keytab mode --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --#### Create an AD connector instance --> [!NOTE] -> Make sure to wrap the password for the domain service AD account in single quotes (`'`) to avoid the expansion of special characters such as `!`. -> --To view the available options for the create command for an AD connector instance, use the following command: --```azurecli -az arcdata ad-connector create --help -``` --To create an AD connector instance, use `az arcdata ad-connector create`. See the following examples for different connectivity modes: ---##### Indirectly connected mode --```azurecli -az arcdata ad-connector create --name < name > --k8s-namespace < Kubernetes namespace > --realm < AD Domain name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode : manual or automatic > --prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --nameserver-addresses 10.10.10.11 --account-provisioning manual --prefer-k8s-dns false --use-k8s -``` --```azurecli -# Setting environment variables needed for automatic account provisioning -DOMAIN_SERVICE_ACCOUNT_USERNAME='sqlmi' -DOMAIN_SERVICE_ACCOUNT_PASSWORD='arc@123!!' 
--# Deploying the Active Directory connector with automatic account provisioning -az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --nameserver-addresses 10.10.10.11 --account-provisioning automatic --prefer-k8s-dns false --use-k8s -``` --##### Directly connected mode --```azurecli -az arcdata ad-connector create --name < name > --dns-domain-name < The DNS name of AD domain > --realm < AD Domain name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode : manual or automatic > --prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup > --data-controller-name < Arc Data Controller Name > --resource-group < resource-group > -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --realm CONTOSO.LOCAL --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning manual --prefer-k8s-dns false --data-controller-name arcdc --resource-group arc-rg -``` --```azurecli -# Setting environment variables needed for automatic account provisioning -DOMAIN_SERVICE_ACCOUNT_USERNAME='sqlmi' -DOMAIN_SERVICE_ACCOUNT_PASSWORD='arc@123!!' --# Deploying the Active Directory connector with automatic account provisioning -az arcdata ad-connector create --name arcadc --realm CONTOSO.LOCAL --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning automatic --prefer-k8s-dns false --data-controller-name arcdc --resource-group arc-rg -``` --### Update an AD connector instance --To view the available options for the update command for an AD connector instance, use the following command: --```azurecli -az arcdata ad-connector update --help -``` --To update an AD connector instance, use `az arcdata ad-connector update`. See the following examples for different connectivity modes: --#### Indirectly connected mode --```azurecli -az arcdata ad-connector update --name < name > --k8s-namespace < Kubernetes namespace > --nameserver-addresses < DNS server IP addresses > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --k8s-namespace arc --nameserver-addresses 10.10.10.11 --use-k8s -``` --#### Directly connected mode --```azurecli -az arcdata ad-connector update --name < name > --nameserver-addresses < DNS server IP addresses > --data-controller-name < Arc Data Controller Name > --resource-group < resource-group > -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --nameserver-addresses 10.10.10.11 --data-controller-name arcdc --resource-group arc-rg -``` ---### [System-managed keytab mode](#tab/system-managed-keytab-mode) -To create an AD connector instance, use `az arcdata ad-connector create`. 
See the following examples for different connectivity modes: ---#### Indirectly connected mode --```azurecli -az arcdata ad-connector create --name < name > --k8s-namespace < Kubernetes namespace > --dns-domain-name < The DNS name of AD domain > --realm < AD Domain name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode > --ou-distinguished-name < AD Organizational Unit distinguished name > --prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --netbios-domain-name CONTOSO --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning automatic --ou-distinguished-name "OU=arcou,DC=contoso,DC=local" --prefer-k8s-dns false --use-k8s -``` --#### Directly connected mode --```azurecli -az arcdata ad-connector create --name < name > --dns-domain-name < The DNS name of AD domain > --realm < AD Domain name > --netbios-domain-name < AD domain NetBIOS name > --nameserver-addresses < DNS server IP addresses > --account-provisioning < account provisioning mode > --ou-distinguished-name < AD domain organizational unit distinguished name > --prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup > --data-controller-name < Arc Data Controller Name > --resource-group < resource-group > -``` --Example: --```azurecli -az arcdata ad-connector create --name arcadc --realm CONTOSO.LOCAL --netbios-domain-name CONTOSO --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11 --account-provisioning automatic --ou-distinguished-name "OU=arcou,DC=contoso,DC=local" --prefer-k8s-dns false --data-controller-name arcdc --resource-group arc-rg -``` --### Update an AD connector instance --To view the available options for the update command for an AD connector instance, use the following command: --```azurecli -az arcdata ad-connector update --help -``` -To update an AD connector instance, use `az arcdata ad-connector update`. See the following examples for different connectivity modes: --#### Indirectly connected mode --```azurecli -az arcdata ad-connector update --name < name > --k8s-namespace < Kubernetes namespace > --nameserver-addresses < DNS server IP addresses > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --k8s-namespace arc --nameserver-addresses 10.10.10.11 --use-k8s -``` --#### Directly connected mode --```azurecli -az arcdata ad-connector update --name < name > --nameserver-addresses < DNS server IP addresses > --data-controller-name < Arc Data Controller Name > --resource-group < resource-group > -``` --Example: --```azurecli -az arcdata ad-connector update --name arcadc --nameserver-addresses 10.10.10.11 --data-controller-name arcdc --resource-group arc-rg -``` ----## Delete an AD connector instance --To delete an AD connector instance, use `az arcdata ad-connector delete`. 
See the following examples for both connectivity modes: --### [Indirectly connected mode](#tab/indirectly-connected-mode) --```azurecli -az arcdata ad-connector delete --name < AD Connector name > --k8s-namespace < namespace > --use-k8s -``` --Example: --```azurecli -az arcdata ad-connector delete --name arcadc --k8s-namespace arc --use-k8s -``` --### [Directly connected mode](#tab/directly-connected-mode) -```azurecli -az arcdata ad-connector delete --name < AD Connector name > --data-controller-name < data controller name > --resource-group < resource group > -``` --Example: --```azurecli -az arcdata ad-connector delete --name arcadc --data-controller-name arcdc --resource-group arc-rg -``` ----## Related content -* [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md) -* [Tutorial – Deploy AD connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). |
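Putting the pieces together, the following is a minimal end-to-end sketch for the indirectly connected mode. It reuses the example values from above (`arcadc`, namespace `arc`, realm `CONTOSO.LOCAL`, DNS server `10.10.10.11`) and checks the resulting `ActiveDirectoryConnector` custom resource through its `adc` short name before cleaning up; adapt the names and addresses to your environment.

```azurecli
# Create the AD connector against the Kubernetes cluster (indirectly connected mode).
az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --nameserver-addresses 10.10.10.11 --account-provisioning manual --prefer-k8s-dns false --use-k8s

# Verify the connector custom resource created in the cluster.
kubectl get adc -n arc

# Remove the connector when it's no longer needed.
az arcdata ad-connector delete --name arcadc --k8s-namespace arc --use-k8s
```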
azure-arc | Deploy Active Directory Connector Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-portal.md | - Title: Tutorial ΓÇô Deploy Active Directory connector using Azure portal -description: Tutorial to deploy an Active Directory connector using Azure portal ------ Previously updated : 10/11/2022----# Tutorial ΓÇô Deploy Active Directory connector using Azure portal --Active Directory (AD) connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --This article explains how to deploy, manage, and delete an Active Directory (AD) connector in directly connected mode from the Azure portal. --## Prerequisites --For details about how to set up OU and AD account, go to [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md). --Make sure you have the following deployed before proceed with the steps in this article: --- An Arc-enabled Azure Kubernetes cluster.-- A data controller in directly connected mode.--## Create a new AD connector --1. Log in to [Azure portal](https://portal.azure.com). -1. In the search resources field at the top of the portal, type **data controllers**, and select **Azure Arc data controllers**. --Azure takes you to where you can find all available data controllers deployed in your selected Azure subscription. --1. Select the data controller where you wish to add an AD connector. -1. Under **Settings** select **Active Directory**. The portal shows the Active Directory connectors for this data controller. -1. Select **+ Add Connector**, the portal presents an **Add Connector** interface. -1. Under **Active Directory connector** - 1. Specify your **Connector name**. - 2. Choose the account provisioning type - either **Automatic** or **Manual**. --The account provisioning type determines whether you deploy a customer-managed keytab AD connector or a system-managed keytab AD connector. --### Create a new customer-managed keytab AD connector --1. Click **Add Connector**. - -1. Choose the account provisioning type **Manual**. - -1. Set the editable fields for your connector: - - **Realm**: The name of the Active Directory (AD) domain in uppercase. For example *CONTOSO.COM*. - - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*. - - **Netbios domain name**: Optional. The NETBIOS name of the Active Directory domain. For example *CONTOSO*. Defaults to the first label of realm. - - **DNS domain name**: Optional. The DNS domain name associated with the Active Directory domain. For example, *contoso.com*. - - **DNS replicas**: Optional. The number of replicas to deploy for the DNS proxy service. Defaults to `1`. - - **Prefer Kubernetes DNS for PTR lookups**: Optional. Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS. -- ![Screenshot of the portal interface to add customer managed keytab.](media/active-directory-deployment/add-ad-customer-managed-keytab-connector-portal.png) --1. Click **Add Connector** to create a new customer-managed keytab AD connector. --### Create a new system-managed keytab AD connector -1. Click **Add Connector**. -1. Choose the account provisioning type **Automatic**. -1. Set the editable fields for your connector: - - **Realm**: The name of the Active Directory (AD) domain in uppercase. For example *CONTOSO.COM*. 
- - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*. - - **OU distinguished name** The distinguished name of the Organizational Unit (OU) pre-created in the Active Directory (AD) domain. For example, `OU=arcou,DC=contoso,DC=com`. - - **Domain Service Account username** The username of the Domain Service Account in Active Directory. - - **Domain Service Account password** The password of the Domain Service Account in Active Directory. - - **Primary domain controller hostname (Optional)** The hostname of the primary Active Directory domain controller. For example, `azdc01.contoso.com`. - - **Secondary domain controller hostname (Optional)** The secondary domain controller hostname. - - **Netbios domain name**: Optional. The NETBIOS name of the Active Directory domain. For example *CONTOSO*. Defaults to the first label of realm. - - **DNS domain name**: Optional. The DNS domain name associated with the Active Directory domain. For example, *contoso.com*. - - **DNS replicas (Optional)** The number of replicas to deploy for the DNS proxy service. Defaults to `1`. - - **Prefer Kubernetes DNS for PTR lookups**: Optional. Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS. -- ![Screenshot of the portal interface to add system managed keytab.](media/active-directory-deployment/add-ad-system-managed-keytab-connector-portal.png) --1. Click **Add Connector** to create a new system-managed keytab AD connector. --## Edit an existing AD connector --1. Select the AD connect that you want to edit. Select the ellipses (**...**), and then **Edit**. The portal presents an **Edit Connector** interface. --1. You may update any editable fields. For example: - - **Primary domain controller hostname** The hostname of the primary Active Directory domain controller. For example, `azdc01.contoso.com`. - - **Secondary domain controller hostname** The secondary domain controller hostname. - - **Nameserver IP address**: A comma-separated list of Active Directory DNS server IP addresses. For example: *10.10.10.11, 10.10.10.12*. - - **DNS replicas** The number of replicas to deploy for the DNS proxy service. Defaults to `1`. - - **Prefer Kubernetes DNS for PTR lookups**: Check to set Kubernetes DNS for IP address lookup. Clear to use Active Directory DNS. --1. Click on **Apply** for changes to take effect. ---## Delete an AD connector --1. Select the ellipses (**...**) on the right of the Active Directory connector you would like to delete. -1. Select **Delete**. --To delete multiple AD connectors at one time: --1. Select the checkbox in the beginning row of each AD connector you want to delete. -- Alternatively, select the checkbox in the top row to select all the AD connectors in the table. --1. Click **Delete** in the management bar to delete the AD connectors that you selected. --## Related content -* [Tutorial ΓÇô Deploy Active Directory connector using Azure CLI](deploy-active-directory-connector-cli.md) -* [Tutorial ΓÇô Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md) -* [Tutorial ΓÇô Deploy Active Directory connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). |
azure-arc | Deploy Active Directory Postgresql Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-postgresql-server-cli.md | - Title: Deploy Active Directory integrated Azure Arc-enabled PostgreSQL server using Azure CLI -description: Explains how to deploy Active Directory integrated Azure Arc-enabled PostgreSQL server using Azure CLI ------- Previously updated : 02/10/2023----# Deploy Active Directory integrated Azure Arc-enabled PostgreSQL using Azure CLI --This article explains how to deploy Azure Arc-enabled PostgreSQL server with Active Directory (AD) authentication using Azure CLI. --See these articles for specific instructions: --- [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)--### Prerequisites --Before you proceed, install the following tools: --- The [Azure CLI (az)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--For details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md) --> [!IMPORTANT] -> When using Active Directory, the default account must be named "postgres" in order for connections to succeed. --## Deploy and update Active Directory integrated Azure Arc-enabled PostgreSQL server --### Customer-managed keytab mode --#### Create an Azure Arc-enabled PostgreSQL server --To view the available options for the create command for Azure Arc-enabled PostgreSQL server, use the following command: --```azurecli -az postgres server-arc create --help -``` --To create an Azure Arc-enabled PostgreSQL server, use `az postgres server-arc create`. See the following example: --```azurecli -az postgres server-arc create --name < PostgreSQL server name > --k8s-namespace < namespace > --ad-connector-name < your AD connector name > --keytab-secret < PostgreSQL server keytab secret name > --ad-account-name < PostgreSQL server AD user account > --dns-name < PostgreSQL server primary endpoint DNS name > --port < PostgreSQL server primary endpoint port number > --use-k8s -``` --Example: --```azurecli -az postgres server-arc create --name contosopg --k8s-namespace arc --ad-connector-name adarc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --dns-name arcpg.contoso.local --port 31432 --use-k8s -``` --#### Update an Azure Arc-enabled PostgreSQL server --To update an Arc-enabled PostgreSQL server, use `az postgres server-arc update`. See the following example: --```azurecli -az postgres server-arc update --name < PostgreSQL server name > --k8s-namespace < namespace > --keytab-secret < PostgreSQL server keytab secret name > --use-k8s -``` --Example: --```azurecli -az postgres server-arc update --name contosopg --k8s-namespace arc --keytab-secret arcuser-keytab-secret --use-k8s -``` --## Related content -- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. |
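In customer-managed keytab mode, the secret referenced by `--keytab-secret` must exist in the server's namespace before you run the create command. The following is a minimal sketch, assuming a pre-generated keytab file named `arcuser.keytab` (hypothetical) and that the secret stores it under the `keytab` data key, as the keytab secret specification elsewhere in these articles does.

```console
# Create the Kubernetes secret that holds the keytab, then create the AD-integrated PostgreSQL server.
kubectl create secret generic arcuser-keytab-secret --from-file=keytab=arcuser.keytab --namespace arc
az postgres server-arc create --name contosopg --k8s-namespace arc --ad-connector-name adarc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --dns-name arcpg.contoso.local --port 31432 --use-k8s
```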
azure-arc | Deploy Active Directory Sql Managed Instance Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance-cli.md | - Title: Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI -description: Explains how to deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI ------- Previously updated : 10/11/2022----# Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI --This article explains how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory (AD) authentication using Azure CLI. --See these articles for specific instructions: --- [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)-- [Tutorial – Deploy AD connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md)--### Prerequisites --Before you proceed, install the following tools: --- The [Azure CLI (az)](/cli/azure/install-azure-cli)-- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)--For details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md) ---## Deploy and update Active Directory integrated SQL Managed Instance --### [Customer-managed keytab mode](#tab/Customer-managed-keytab-mode) ---#### Create an instance --To view the available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command: --```azurecli -az sql mi-arc create --help -``` --To create a SQL Managed Instance, use `az sql mi-arc create`. 
See the following examples for different connectivity modes: --#### Create - indirectly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --k8s-namespace < namespace > --ad-connector-name < your AD connector name > --keytab-secret < SQL MI keytab secret name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --use-k8s -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --k8s-namespace arc --ad-connector-name adarc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --use-k8s -``` --#### Create - directly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --ad-connector-name < your AD connector name > --keytab-secret < SQL MI keytab secret name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --custom-location < your custom location > --resource-group < resource-group > -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --ad-connector-name adarc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --custom-location private-location --resource-group arc-rg -``` --#### Update an instance --To update a SQL Managed Instance, use `az sql mi-arc update`. See the following examples for different connectivity modes: --#### Update - indirectly connected mode --```azurecli -az sql mi-arc update --name < SQL MI name > --k8s-namespace < namespace > --keytab-secret < SQL MI keytab secret name > --use-k8s -``` --Example: --```azurecli -az sql mi-arc update --name contososqlmi --k8s-namespace arc --keytab-secret arcuser-keytab-secret --use-k8s -``` --#### Update - directly connected mode --> [!NOTE] -> The **resource group** is a mandatory parameter, but it can't be changed. --```azurecli -az sql mi-arc update --name < SQL MI name > --keytab-secret < SQL MI keytab secret name > --resource-group < resource-group > -``` --Example: --```azurecli -az sql mi-arc update --name contososqlmi --keytab-secret arcuser-keytab-secret --resource-group arc-rg -``` --### [System-managed keytab mode](#tab/system-managed-keytab-mode) ---#### Create an instance --To view the available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command: --```azurecli -az sql mi-arc create --help -``` --To create a SQL Managed Instance, use `az sql mi-arc create`. 
See the following examples for different connectivity modes: ---##### Create - indirectly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --k8s-namespace < namespace > --ad-connector-name < your AD connector name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --use-k8s -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --k8s-namespace arc --ad-connector-name adarc --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --use-k8s -``` --##### Create - directly connected mode --```azurecli -az sql mi-arc create --name < SQL MI name > --ad-connector-name < your AD connector name > --ad-account-name < SQL MI AD user account > --primary-dns-name < SQL MI primary endpoint DNS name > --primary-port-number < SQL MI primary endpoint port number > --secondary-dns-name < SQL MI secondary endpoint DNS name > --secondary-port-number < SQL MI secondary endpoint port number > --custom-location < your custom location > --resource-group <resource-group> -``` --Example: --```azurecli -az sql mi-arc create --name contososqlmi --ad-connector-name adarc --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --secondary-dns-name arcsqlmi-2.contoso.local --secondary-port-number 31434 --custom-location private-location --resource-group arc-rg -``` ------## Delete an instance --To delete a SQL Managed Instance, use `az sql mi-arc delete`. See the following examples for both connectivity modes: ---### [Indirectly connected mode](#tab/indirectly-connected-mode) --```azurecli -az sql mi-arc delete --name < SQL MI name > --k8s-namespace < namespace > --use-k8s -``` --Example: --```azurecli -az sql mi-arc delete --name contososqlmi --k8s-namespace arc --use-k8s -``` --### [Directly connected mode](#tab/directly-connected-mode) --```azurecli -az sql mi-arc delete --name < SQL MI name > --resource-group < resource group > -``` --Example: --```azurecli -az sql mi-arc delete --name contososqlmi --resource-group arc-rg -``` --## Related content --* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). -* [Connect to Active Directory integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). |
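To tie the steps together, here is a minimal lifecycle sketch for system-managed keytab mode on an indirectly connected cluster, reusing the example names above (`contososqlmi`, namespace `arc`, connector `adarc`, account `arcuser`); the secondary endpoint parameters are optional and omitted here.

```azurecli
# Create the AD-integrated instance, confirm it reaches the Ready state, and delete it when finished.
az sql mi-arc create --name contososqlmi --k8s-namespace arc --ad-connector-name adarc --ad-account-name arcuser --primary-dns-name arcsqlmi.contoso.local --primary-port-number 31433 --use-k8s
az sql mi-arc list --k8s-namespace arc --use-k8s
az sql mi-arc delete --name contososqlmi --k8s-namespace arc --use-k8s
```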
azure-arc | Deploy Active Directory Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md | - Title: Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc -description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication. ------ Previously updated : 10/11/2022----# Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc --In this article, learn how to deploy Azure Arc-enabled Azure SQL Managed Instance with Active Directory authentication. --## Prerequisites --Before you begin your SQL Managed Instance deployment, make sure you have these prerequisites: --- An Active Directory domain-- A deployed Azure Arc data controller-- A deployed Active Directory connector with a [customer-managed keytab](deploy-customer-managed-keytab-active-directory-connector.md) or [system-managed keytab](deploy-system-managed-keytab-active-directory-connector.md)--## Connector requirements --The customer-managed keytab Active Directory connector and the system-managed keytab Active Directory connector are different deployment modes that have different requirements and steps. Each mode has specific requirements during deployment. Select the tab for the connector you use. --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --For an Active Directory customer-managed keytab deployment, you must provide: --- An Active Directory user account for SQL-- Service principal names (SPNs) under the user account-- DNS A (forward) record for the primary endpoint of SQL (and optionally, a secondary endpoint)--### [System-managed keytab mode](#tab/system-managed-keytab-mode) --For an Active Directory system-managed keytab deployment, you must provide: --- A unique name of an Active Directory user account for SQL-- DNS A (forward) record for the primary endpoint of SQL (and optionally, a secondary endpoint)----## Prepare for deployment --Depending on your deployment mode, complete the following steps to prepare to deploy SQL Managed Instance. --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --To prepare for deployment in customer-managed keytab mode: --1. **Identify a DNS name for the SQL endpoints**: Choose unique DNS names for the SQL endpoints that clients will connect to from outside the Kubernetes cluster. -- - The DNS names should be in the Active Directory domain or in its descendant domains. - - The examples in this article use `sqlmi-primary.contoso.local` for the primary DNS name and `sqlmi-secondary.contoso.local` for the secondary DNS name. --1. **Identify the port numbers for the SQL endpoints**: Enter a port number for each of the SQL endpoints. -- - The port numbers must be in the acceptable range of port numbers for your Kubernetes cluster. - - The examples in this article use `31433` for the primary port number and `31434` for the secondary port number. --1. **Create an Active Directory account for the managed instance**: Choose a name for the Active Directory account to represent your managed instance. -- - The name must be unique in the Active Directory domain. - - The examples in this article use `sqlmi-account` for the Active Directory account name. -- To create the account: -- 1. On the domain controller, open the Active Directory Users and Computers tool. Create an account to represent the managed instance. - 1. 
Enter an account password that complies with the Active Directory domain password policy. You'll use this password in some of the steps in the next sections. - 1. Ensure that the account is enabled. The account doesn't need any special permissions. --1. **Create DNS records for the SQL endpoints in the Active Directory DNS servers**: In one of the Active Directory DNS servers, create A records (forward lookup records) for the DNS name you chose in step 1. -- - The DNS records should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster. - - You don't need to create reverse-lookup Pointer (PTR) records in association with the A records. --1. **Create SPNs**: For SQL to be able to accept Active Directory authentication against the SQL endpoints, you must register two SPNs in the account you created in the preceding step. Two SPNs must be registered for the primary endpoint. If you want Active Directory authentication for the secondary endpoint, the SPNs must also be registered for the secondary endpoint. -- To create and register SPNs: -- 1. Use the following format to create the SPNs: -- ```output - MSSQLSvc/<DNS name> - MSSQLSvc/<DNS name>:<port> - ``` -- 1. On one of the domain controllers, run the following commands to register the SPNs: -- ```console - setspn -S MSSQLSvc/<DNS name> <account> - setspn -S MSSQLSvc/<DNS name>:<port> <account> - ``` -- Your commands might look like the following example: -- ```console - setspn -S MSSQLSvc/sqlmi-primary.contoso.local sqlmi-account - setspn -S MSSQLSvc/sqlmi-primary.contoso.local:31433 sqlmi-account - ``` -- 1. If you want Active Directory authentication on the secondary endpoint, run the same commands to add SPNs for the secondary endpoint: -- ```console - setspn -S MSSQLSvc/<DNS name> <account> - setspn -S MSSQLSvc/<DNS name>:<port> <account> - ``` - - Your commands might look like the following example: -- ```console - setspn -S MSSQLSvc/sqlmi-secondary.contoso.local sqlmi-account - setspn -S MSSQLSvc/sqlmi-secondary.contoso.local:31434 sqlmi-account - ``` --1. **Generate a keytab file that has entries for the account and SPNs**: For SQL to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file by using a Kubernetes secret. -- - The keytab file contains encrypted entries for the Active Directory account that's generated for the managed instance and the SPNs. - - SQL Server uses this file as its credential against Active Directory. - - You can choose from multiple tools to generate a keytab file: -- - `adutil`: Available for Linux (see [Introduction to adutil](/sql/linux/sql-server-linux-ad-auth-adutil-introduction)) - - `ktutil`: Available on Linux - - `ktpass`: Available on Windows - - Custom scripts - - To generate the keytab file specifically for the managed instance: -- 1. Use one of these custom scripts: -- - Linux: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh) - - Windows Server: [create-sql-keytab.ps1](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.ps1) -- The scripts accept several parameters and generate a keytab file and a YAML specification file for the Kubernetes secret that contains the keytab. -- 1. In your script, replace the parameter values with values for your managed instance deployment. 
-- For the input parameters, use the following values: -- - `--realm`: The Active Directory domain in uppercase. Example: `CONTOSO.LOCAL` - - `--account`: The Active Directory account where the SPNs are registered. Example: `sqlmi-account` - - `--port`: The primary SQL endpoint port number. Example: `31433` - - `--dns-name`: The DNS name for the primary SQL endpoint. - - `--keytab-file`: The path to the keytab file. - - `--secret-name`: The name of the keytab secret to generate a specification for. - - `--secret-namespace`: The Kubernetes namespace that contains the keytab secret. - - `--secondary-port`: The secondary SQL endpoint port number (optional). Example: `31434` - - `--secondary-dns-name`: The DNS name for the secondary SQL endpoint (optional). -- Choose a name for the Kubernetes secret that hosts the keytab. Use the namespace where the managed instance is deployed. -- 1. Run the following command to create a keytab: -- ```console - AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <Active Directory domain in uppercase> --account <Active Directory account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace> - ``` -- Your command might look like the following example: -- ```console - AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns - ``` -- 1. Run the following command to verify that the keytab is correct: -- ```console - klist -kte <keytab file> - ``` --1. **Deploy the Kubernetes secret for the keytab**: Use the Kubernetes secret specification file you create in the preceding step to deploy the secret. -- The specification file looks similar to this example: -- ```yaml - apiVersion: v1 - kind: Secret - type: Opaque - metadata: - name: <secret name> - namespace: <secret namespace> - data: - keytab: <keytab content in Base64> - ``` - - To deploy the Kubernetes secret, run this command: - - ```console - kubectl apply -f <file> - ``` - - Your command might look like this example: - - ```console - kubectl apply -f sqlmi-keytab-secret.yaml - ``` --### [System-managed keytab mode](#tab/system-managed-keytab-mode) --To prepare for deployment in system-managed keytab mode: --1. **Identify a DNS name for the SQL endpoints**: Choose unique DNS names for the SQL endpoints that clients will connect to from outside the Kubernetes cluster. -- - The DNS names should be in the Active Directory domain or its descendant domains. - - The examples in this article use `sqlmi-primary.contoso.local` for the primary DNS name and `sqlmi-secondary.contoso.local` for the secondary DNS name. --1. **Identify the port numbers for the SQL endpoints**: Enter a port number for each of the SQL endpoints. -- - The port numbers must be in the acceptable range of port numbers for your Kubernetes cluster. - - The examples in this article use `31433` for the primary port number and `31434` for the secondary port number. --1. **Choose an Active Directory account name for SQL**: Choose a name for the Active Directory account that will represent your managed instance. -- - This name should be unique in the Active Directory domain, and the account must *not* already exist in the domain. This account is automatically generated in the domain. 
- - The examples in this article use `sqlmi-account` for the Active Directory account name. --1. **Create DNS records for the SQL endpoints in the Active Directory DNS servers**: In one of the Active Directory DNS servers, create A records (forward lookup records) for the DNS names chosen in step 1. -- - The DNS records should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster. - - You don't need to create reverse-lookup Pointer (PTR) records in association with the A records. ----## Set properties for Active Directory authentication --To deploy SQL Managed Instance enabled by Azure Arc for Azure Arc Active Directory authentication, update your deployment specification file to reference the Active Directory connector instance to use. Referencing the Active Directory connector in the SQL specification file automatically sets up SQL for Active Directory authentication. --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --To support Active Directory authentication on SQL in customer-managed keytab mode, set the following properties in your deployment specification file. Some properties are required and some are optional. --#### Required --- `spec.security.activeDirectory.connector.name`: The name of the preexisting Active Directory connector custom resource to join for Active Directory authentication. If you enter a value for this property, Active Directory authentication is implemented.-- `spec.security.activeDirectory.accountName`: The name of the Active Directory account for the managed instance.-- `spec.security.activeDirectory.keytabSecret`: The name of the Kubernetes secret that hosts the pre-created keytab file for users. This secret must be in the same namespace as the managed instance. This parameter is required only for the Active Directory deployment in customer-managed keytab mode.-- `spec.services.primary.dnsName`: Enter a DNS name for the primary SQL endpoint.-- `spec.services.primary.port`: Enter a port number for the primary SQL endpoint.--#### Optional --- `spec.security.activeDirectory.connector.namespace`: The Kubernetes namespace of the preexisting Active Directory connector to join for Active Directory authentication. If you don't enter a value, the SQL namespace is used.-- `spec.services.readableSecondaries.dnsName`: Enter a DNS name for the secondary SQL endpoint.-- `spec.services.readableSecondaries.port`: Enter a port number for the secondary SQL endpoint.--### [System-managed keytab mode](#tab/system-managed-keytab-mode) --To support Active Directory authentication on SQL in system-managed keytab mode, set the following properties in your deployment specification file. Some properties are required and some are optional. --#### Required --- `spec.security.activeDirectory.connector.name`: The name of the preexisting Active Directory connector custom resource to join for Active Directory authentication. If you enter a value for this property, Active Directory authentication is implemented.-- `spec.security.activeDirectory.accountName`: The name of the Active Directory account for the managed instance. 
This account is automatically generated for this managed instance and must not exist in the domain before you deploy SQL.-- `spec.services.primary.dnsName`: Enter a DNS name for the primary SQL endpoint.-- `spec.services.primary.port`: Enter a port number for the primary SQL endpoint.--#### Optional --- `spec.security.activeDirectory.connector.namespace`: The Kubernetes namespace of the preexisting Active Directory connector to join for Active Directory authentication. If you don't enter a value, the SQL namespace is used.-- `spec.security.activeDirectory.encryptionTypes`: A list of Kerberos encryption types to allow for the automatically generated Active Directory account provided in `spec.security.activeDirectory.accountName`. Accepted values are `RC4`, `AES128`, and `AES256`. If you don't enter an encryption type, all encryption types are allowed. You can disable RC4 by entering only `AES128` and `AES256` as encryption types.-- `spec.services.readableSecondaries.dnsName`: Enter a DNS name for the secondary SQL endpoint.-- `spec.services.readableSecondaries.port`: Enter a port number for the secondary SQL endpoint.----## Prepare your deployment specification file --Next, prepare a YAML specification file to deploy SQL Managed Instance. For the mode you use, enter your deployment values in the specification file. --> [!NOTE] -> In the specification file for both modes, the `admin-login-secret` value in the YAML example provides basic authentication. You can use the parameter value to log in to the managed instance, and then create logins for Active Directory users and groups. For more information, see [Connect to Active Directory-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). --### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode) --The following example shows a specification file for customer-managed keytab mode: --```yaml -apiVersion: v1 -data: - password: <your Base64-encoded password> - username: <your Base64-encoded username> -kind: Secret -metadata: - name: admin-login-secret -type: Opaque --apiVersion: sql.arcdata.microsoft.com/v3 -kind: SqlManagedInstance -metadata: - name: <name> - namespace: <namespace> -spec: - backup: - retentionPeriodInDays: 7 - dev: false - tier: GeneralPurpose - forceHA: "true" - licenseType: LicenseIncluded - replicas: 1 - security: - adminLoginSecret: admin-login-secret - activeDirectory: - connector: - name: <Active Directory connector name> - namespace: <Active Directory connector namespace> - accountName: <Active Directory account name> - keytabSecret: <keytab secret name> - - primary: - type: LoadBalancer - dnsName: <primary endpoint DNS name> - port: <primary endpoint port number> - readableSecondaries: - type: LoadBalancer - dnsName: <secondary endpoint DNS name> - port: <secondary endpoint port number> - storage: - data: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi - logs: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi -``` --### [System-managed keytab mode](#tab/system-managed-keytab-mode) --The following example shows a specification file for system-managed keytab mode: --```yaml -apiVersion: v1 -data: - password: <your Base64-encoded password> - username: <your Base64-encoded username> -kind: Secret -metadata: - name: admin-login-secret -type: Opaque --apiVersion: sql.arcdata.microsoft.com/v3 -kind: SqlManagedInstance -metadata: - name: <name> - namespace: <namespace> -spec: - backup: - 
retentionPeriodInDays: 7 - dev: false - tier: GeneralPurpose - forceHA: "true" - licenseType: LicenseIncluded - replicas: 1 - security: - adminLoginSecret: admin-login-secret - activeDirectory: - connector: - name: <Active Directory connector name> - namespace: <Active Directory connector namespace> - accountName: <Active Directory account name> - - primary: - type: LoadBalancer - dnsName: <primary endpoint DNS name> - port: <primary endpoint port number> - readableSecondaries: - type: LoadBalancer - dnsName: <secondary endpoint DNS name> - port: <secondary endpoint port number> - storage: - data: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi - logs: - volumes: - - accessMode: ReadWriteOnce - className: local-storage - size: 5Gi -``` ----## Deploy the managed instance --For both customer-managed keytab mode and system-managed keytab mode, deploy the managed instance by using the prepared specification YAML file: --1. Save the file. The example in the next step uses *sqlmi.yaml* for the specification file name, but you can choose any file name. --1. Run the following command to deploy the instance by using the specification: -- ```console - kubectl apply -f <specification file name> - ``` -- Your command might look like the following example: -- ```console - kubectl apply -f sqlmi.yaml - ``` --## Related content --- [Connect to Active Directory-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md)-- [Upgrade your Active Directory connector](upgrade-active-directory-connector.md) |
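One practical note on the `admin-login-secret` referenced in the specification: if you prefer to keep the secret out of the specification file, you can drop the `Secret` block from the YAML and let `kubectl` create it from literal values instead of Base64-encoding them by hand. The following is a minimal sketch under that assumption, reusing the namespace placeholder and the *sqlmi.yaml* file name from the example; the final command watches the SqlManagedInstance custom resource until it's ready.

```console
# Create the basic authentication secret (kubectl handles the Base64 encoding of literal values).
kubectl create secret generic admin-login-secret --from-literal=username=<admin username> --from-literal=password=<admin password> -n <namespace>

# Deploy the managed instance and watch its custom resource status.
kubectl apply -f sqlmi.yaml
kubectl get sqlmanagedinstances -n <namespace> -w
```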
azure-arc | Deploy Customer Managed Keytab Active Directory Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-customer-managed-keytab-active-directory-connector.md | - Title: Tutorial ΓÇô Deploy Active Directory (AD) Connector in customer-managed keytab mode -description: Tutorial to deploy a customer-managed keytab Active Directory (AD) connector ------ Previously updated : 10/11/2022----# Tutorial ΓÇô Deploy Active Directory (AD) connector in customer-managed keytab mode --This article explains how to deploy Active Directory (AD) connector in customer-managed keytab mode. The connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --## Active Directory connector in customer-managed keytab mode --In customer-managed keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS -* Active Directory DNS Servers -* Kubernetes DNS Servers --The AD Connector facilitates the environment needed by SQL to authenticate AD logins. --The following diagram shows AD Connector and DNS Proxy service functionality in customer-managed keytab mode: --![Active Directory connector](media/active-directory-deployment/active-directory-connector-customer-managed.png) --## Prerequisites --Before you proceed, you must have: --* An instance of Data Controller deployed on a supported version of Kubernetes -* An Active Directory (AD) domain --## Input for deploying Active Directory (AD) Connector --To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment. --These inputs are provided in a YAML specification of AD Connector instance. --Following metadata about the AD domain must be available before deploying an instance of AD Connector: -* Name of the Active Directory domain -* List of the domain controllers (fully qualified domain names) -* List of the DNS server IP addresses --Following input fields are exposed to the users in the Active Directory connector spec: --- **Required**-- - `spec.activeDirectory.realm` - Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with. - - - `spec.activeDirectory.dns.nameserverIpAddresses` - List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers. --- **Optional**-- - `spec.activeDirectory.netbiosDomainName` NetBIOS name of the Active Directory domain. This is the short domain name (pre-Windows 2000 name) of your Active Directory domain. This is often used to qualify accounts in the AD domain. e.g. if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name. - - This field is optional. When not provided, its value defaults to the first label of the `spec.activeDirectory.realm` field. - - In most domain environments, this is set to the default value but some domain environments may have a non-default value. You will need to use this field only when your domain's NetBIOS name does not match the first label of its fully qualified name. -- - `spec.activeDirectory.dns.domainName` - DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers. -- A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory. 
-- This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase. -- - `spec.activeDirectory.dns.replicas` - Replica count for the DNS proxy service. This field is optional and defaults to 1 when not provided. -- - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups` - Flag indicating whether to prefer Kubernetes DNS server responses over AD DNS server responses for IP address lookups. -- The DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups. -- This field is optional. When not provided, it defaults to `true`; that is, DNS lookups of IP addresses are first forwarded to Kubernetes DNS servers. If the Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to the AD DNS servers. When set to `false`, these DNS lookups are forwarded to AD DNS servers first and, upon failure, fall back to Kubernetes. ---## Deploy a customer-managed keytab Active Directory (AD) connector --To deploy an AD connector, create a .yaml specification file called `active-directory-connector.yaml`. --The following example shows a customer-managed keytab AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain. --```yaml -apiVersion: arcdata.microsoft.com/v1beta1 -kind: ActiveDirectoryConnector -metadata: - name: adarc - namespace: <namespace> -spec: - activeDirectory: - realm: CONTOSO.LOCAL - dns: - preferK8sDnsForPtrLookups: false - nameserverIPAddresses: - - <DNS Server 1 IP address> - - <DNS Server 2 IP address> -``` --The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported. --```console -kubectl apply -f active-directory-connector.yaml -``` --After you submit the deployment of the AD Connector instance, you can check the status of the deployment using the following command. --```console -kubectl get adc -n <namespace> -``` --## Related content -* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md) -* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). -* [Connect to AD-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). - |
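If the connector doesn't reach a `Ready` state right away, a couple of follow-up `kubectl` commands can help. This is a minimal sketch that assumes the connector is named `adarc`, as in the example above.

```console
# Watch the AD connector until its state reports Ready.
kubectl get adc adarc -n <namespace> -w

# Show full status and recent events if the connector stays in a non-ready state.
kubectl describe adc adarc -n <namespace>
```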
azure-arc | Deploy System Managed Keytab Active Directory Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-system-managed-keytab-active-directory-connector.md | - Title: Tutorial – Deploy Active Directory connector in system-managed keytab mode -description: Tutorial to deploy a system-managed keytab Active Directory connector ------ Previously updated : 10/11/2022-----# Tutorial – Deploy Active Directory connector in system-managed keytab mode --This article explains how to deploy an Active Directory connector in system-managed keytab mode. The connector is a key component to enable Active Directory authentication on SQL Managed Instance enabled by Azure Arc. --## Active Directory connector in system-managed keytab mode --In system-managed keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of two sets of upstream DNS servers: -* Active Directory DNS Servers -* Kubernetes DNS Servers --In addition to the DNS proxy service, AD Connector also deploys a Security Support Service that facilitates communication to the AD domain for automatic creation and management of AD accounts, Service Principal Names (SPNs), and keytabs. --The following diagram shows AD Connector and DNS Proxy service functionality in system-managed keytab mode: --![Active Directory connector](media/active-directory-deployment/active-directory-connector-smk.png) --## Prerequisites --Before you proceed, you must have: --* An instance of Data Controller deployed on a supported version of Kubernetes -* An Active Directory domain -* A pre-created organizational unit (OU) in the Active Directory domain -* An Active Directory domain service account --The AD domain service account should have sufficient permissions to automatically create and delete user accounts inside the provided organizational unit (OU) in Active Directory. --Grant the following permissions - scoped to the Organizational Unit (OU) - to the domain service account: - -- Read all properties-- Write all properties-- Create User objects-- Delete User objects-- Reset Password for Descendant User objects--For details about how to set up the OU and AD account, go to [Deploy Azure Arc-enabled data services in Active Directory authentication with system-managed keytab - prerequisites](active-directory-prerequisites.md) --## Input for deploying Active Directory connector in system-managed keytab mode --To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment. --These inputs are provided in a YAML specification for the AD connector instance. --The following metadata about the AD domain must be available before deploying an instance of AD connector: --* Name of the Active Directory domain -* List of the domain controllers (fully qualified domain names) -* List of the DNS server IP addresses --The following input fields are exposed to users in the Active Directory connector specification: --- **Required**- - `spec.activeDirectory.realm` - Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with. -- - `spec.activeDirectory.domainControllers.primaryDomainController.hostname` - Fully qualified domain name of the Primary Domain Controller (PDC) in the AD domain. 
-- If you do not know which domain controller in the domain is primary, you can find out by running this command on any Windows machine joined to the AD domain: `netdom query fsmo`. - - - `spec.activeDirectory.dns.nameserverIpAddresses` - List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers. --- **Optional**- - `spec.activeDirectory.serviceAccountProvisioning` This is an optional field which defines your AD connector deployment mode with possible values as `manual` for customer-managed keytab or `automatic` for system-managed keytab. When this field is not set, the value defaults to `manual`. When set to `automatic` (system-managed keytab), the system will automatically generate AD accounts and Service Principal Names (SPNs) for the SQL Managed Instances associated with this AD Connector and create keytab files for them. When set to `manual` (customer-managed keytab), the system will not provide automatic generation of the AD account and keytab generation. The user will be expected to provide a keytab file. -- - `spec.activeDirectory.ouDistinguishedName` This is an optional field. Though it becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to `automatic`. This field accepts the Distinguished Name (DN) of the Organizational Unit (OU) that the users must create in Active Directory domain before deploying AD Connector. It is used to store the system-generated AD accounts for SQL Managed Instances in Active Directory domain. The example of the value looks like: `OU=arcou,DC=contoso,DC=local`. -- - `spec.activeDirectory.domainServiceAccountSecret` This is an optional field. It becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to `automatic`. This field accepts the name of the Kubernetes secret that contains the username and password of the Domain Service Account that was created prior to the AD Connector deployment. The system will use this account to generate other AD accounts in the OU and perform actions on those AD accounts. -- - `spec.activeDirectory.netbiosDomainName` NetBIOS name of the Active Directory domain. This is the short domain name (pre-Windows 2000 name) of your Active Directory domain. This is often used to qualify accounts in the AD domain. e.g. if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name. - - This field is optional. When not provided, its value defaults to the first label of the `spec.activeDirectory.realm` field. - - In most domain environments, this is set to the default value but some domain environments may have a non-default value. You will need to use this field only when your domain's NetBIOS name does not match the first label of its fully qualified name. -- - `spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname` - List of the fully qualified domain names of the secondary domain controllers in the AD domain. -- If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully qualified domain names in this list. This allows high-availability for Kerberos operations. -- This field is optional and not needed. The system will automatically detect the secondary domain controllers when a value is not provided. -- - `spec.activeDirectory.dns.domainName` - DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers. 
-- A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory. -- This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase. -- - `spec.activeDirectory.dns.replicas` - Replica count for the DNS proxy service. This field is optional and defaults to 1 when not provided. -- - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups` - Flag indicating whether to prefer Kubernetes DNS server responses over AD DNS server responses for IP address lookups. -- The DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups. -- This field is optional. When not provided, it defaults to `true`; that is, DNS lookups of IP addresses are first forwarded to Kubernetes DNS servers. If the Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to the AD DNS servers. When set to `false`, these DNS lookups are forwarded to AD DNS servers first and, upon failure, fall back to Kubernetes. --## Deploy Active Directory connector in system-managed keytab mode --To deploy an AD connector, create a YAML specification file called `active-directory-connector.yaml`. --The following is an example of a system-managed keytab AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain. The `adarc-dsa-secret` contains the AD domain service account that was created prior to the AD deployment. --> [!NOTE] -> Make sure the password of the provided domain service AD account doesn't contain the `!` special character. -> --```yaml -apiVersion: v1 -kind: Secret -type: Opaque -metadata: - name: adarc-dsa-secret - namespace: <namespace> -data: - password: <your base64 encoded password> - username: <your base64 encoded username> --apiVersion: arcdata.microsoft.com/v1beta2 -kind: ActiveDirectoryConnector -metadata: - name: adarc - namespace: <namespace> -spec: - activeDirectory: - realm: CONTOSO.LOCAL - serviceAccountProvisioning: automatic - ouDistinguishedName: "OU=arcou,DC=contoso,DC=local" - domainServiceAccountSecret: adarc-dsa-secret - domainControllers: - primaryDomainController: - hostname: dc1.contoso.local - secondaryDomainControllers: - - hostname: dc2.contoso.local - - hostname: dc3.contoso.local - dns: - preferK8sDnsForPtrLookups: false - nameserverIPAddresses: - - <DNS Server 1 IP address> - - <DNS Server 2 IP address> -``` ---The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported. --```console -kubectl apply -f active-directory-connector.yaml -``` --After you submit the deployment for the AD connector instance, you can check the status of the deployment using the following command. --```console -kubectl get adc -n <namespace> -``` --## Related content -* [Deploy a customer-managed keytab Active Directory connector](deploy-customer-managed-keytab-active-directory-connector.md) -* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md). -* [Connect to AD-integrated SQL Managed Instance enabled by Azure Arc](connect-active-directory-sql-managed-instance.md). |
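The `adarc-dsa-secret` values must be base64 encoded. As a sketch, `kubectl create secret generic` can do the encoding for you instead of hand-editing the `data:` section; the placeholders below are assumptions to replace with your own values. Also note that if you keep the Secret and the `ActiveDirectoryConnector` spec in a single file, as shown above, the two YAML documents need to be separated by a `---` line so they are applied as distinct resources.

```console
# Create the domain service account secret; kubectl base64-encodes the literal values for you.
kubectl create secret generic adarc-dsa-secret \
  --from-literal=username=<domain service account username> \
  --from-literal=password=<domain service account password> \
  --namespace <namespace>
```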
azure-arc | Deploy Telemetry Router | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-telemetry-router.md | - Title: Deploy telemetry router | Azure Arc-enabled data services -description: Learn how to deploy the Azure Arc Telemetry Router ---- Previously updated : 09/07/2022----# Deploy the Azure Arc telemetry Router --> [!NOTE] -> -> - The telemetry router is in Public Preview and should be deployed for **testing purposes only**. -> - While the telemetry router is in Public Preview, be advised that future preview releases could include changes to CRD specs, CLI commands, and/or telemetry router messages. -> - The current preview does not support in-place upgrades of a data controller deployed with the Arc telemetry router enabled. In order to install or upgrade a data controller in a future release, you will need to uninstall the data controller and then re-install. --## What is the Azure Arc Telemetry Router? --The Azure Arc telemetry router enables exporting telemetry data to other monitoring solutions. For this Public Preview, we only support exporting log data to either Kafka or Elasticsearch and metric data to Kafka. --This document specifies how to deploy the telemetry router and configure it to work with the supported exporters. --## Deployment --> [!NOTE] -> -> The telemetry router currently supports indirectly connected mode only. --### Create a Custom Configuration Profile --After setting up your Kubernetes cluster, you'll need to [create a custom configuration profile](create-custom-configuration-template.md). Next, enable a temporary feature flag that deploys the telemetry router during data controller creation. --### Turn on the Feature Flag --After creating the custom configuration profile, you'll need to edit the profile to add the `monitoring` property with the `enableOpenTelemetry` flag set to `true`. You can set the feature flag by running the following az CLI commands (edit the --path parameter, as necessary): --```bash -az arcdata dc config add --path ./control.json --json-values ".spec.monitoring={}" -az arcdata dc config add --path ./control.json --json-values ".spec.monitoring.enableOpenTelemetry=true" -``` --To confirm the flag was set correctly, open the control.json file and confirm the `monitoring` object was added to the `spec` object and `enableOpenTelemetry` is set to `true`. --```yaml -spec: - monitoring: - enableOpenTelemetry: true -``` --This feature flag requirement will be removed in a future release. --### Create the Data Controller --After creating the custom configuration profile and setting the feature flag, you're ready to [create the data controller using indirect connectivity mode](create-data-controller-indirect-cli.md?tabs=linux). Be sure to replace the `--profile-name` parameter with a `--path` parameter that points to your custom control.json file (see [use custom control.json file to deploy Azure Arc-enabled data controller](create-custom-configuration-template.md)) --### Verify Telemetry Router Deployment --When the data controller is created, a TelemetryRouter custom resource is also created. Data controller deployment is marked ready when both custom resources have finished deploying. 
After the data controller finishes deployment, you can use the following command to verify that the TelemetryRouter exists: --```bash -kubectl describe telemetryrouter arc-telemetry-router -n <namespace> -``` --```yaml -apiVersion: arcdata.microsoft.com/v1beta4 - kind: TelemetryRouter - metadata: - name: arc-telemetry-router - namespace: <namespace> - spec: - credentials: - exporters: - pipelines: -``` --At the time of creation, no pipelines or exporters are set up. You can [set up your own pipelines and exporters](adding-exporters-and-pipelines.md) to route metrics and logs data to your own instances of Kafka and Elasticsearch. --After the TelemetryRouter is deployed, an instance of Kafka (arc-router-kafka) and a single instance of TelemetryCollector (collector-inbound) should be deployed and in a ready state. These resources are system managed and editing them isn't supported. The following pods will be deployed as a result: --- An inbound collector pod - `arctc-collector-inbound-0`-- A Kafka broker pod - `arck-arc-router-kafka-broker-0`-- A Kafka controller pod - `arck-arc-router-kafka-controller-0`---> [!NOTE] -> An outbound collector pod isn't created until at least one pipeline has been added to the telemetry router. -> -> After you create the first pipeline, an additional TelemetryCollector resource (collector-outbound) and pod `arctc-collector-outbound-0` are deployed. --```bash -kubectl get pods -n <namespace> --NAME READY STATUS RESTARTS AGE -arc-bootstrapper-job-4z2vr 0/1 Completed 0 15h -arc-webhook-job-facc4-z7dd7 0/1 Completed 0 15h -arck-arc-router-kafka-broker-0 2/2 Running 0 15h -arck-arc-router-kafka-controller-0 2/2 Running 0 15h -arctc-collector-inbound-0 2/2 Running 0 15h -bootstrapper-8d5bff6f7-7w88j 1/1 Running 0 15h -control-vpfr9 2/2 Running 0 15h -controldb-0 2/2 Running 0 15h -logsdb-0 3/3 Running 0 15h -logsui-fwrh9 3/3 Running 0 15h -metricsdb-0 2/2 Running 0 15h -metricsdc-bc4df 2/2 Running 0 15h -metricsdc-fm7jh 2/2 Running 0 15h -metricsui-qqgbv 2/2 Running 0 15h -``` --## Related content --- [Add exporters and pipelines to your telemetry router](adding-exporters-and-pipelines.md) |
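To see the complete TelemetryRouter specification, including any credentials, exporters, and pipelines you add later, you can dump the custom resource as YAML. A minimal sketch using the resource name shown above:

```console
# Print the full TelemetryRouter custom resource as YAML.
kubectl get telemetryrouter arc-telemetry-router -n <namespace> -o yaml
```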
azure-arc | Get Connection Endpoints And Connection Strings Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/get-connection-endpoints-and-connection-strings-postgresql-server.md | - Title: Get connection endpoints and create connection strings for your Azure Arc-enabled PostgreSQL server- -description: Get connection endpoints & create connection strings for your Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Get connection endpoints & create the connection strings for your Azure Arc-enabled PostgreSQL server --This article explains how you can retrieve the connection endpoints for your server group and how you can form the connection strings, which can be used with your applications and/or tools. ----## Get connection end points: --Run the following command: -```azurecli -az postgres server-arc endpoint list -n <server name> --k8s-namespace <namespace> --use-k8s -``` -For example: -```azurecli -az postgres server-arc endpoint list -n postgres01 --k8s-namespace arc --use-k8s -``` --It returns the list of endpoints: the PostgreSQL endpoint, the log search dashboard (Kibana), and the metrics dashboard (Grafana). For example: --```output -{ - "instances": [ - { - "endpoints": [ - { - "description": "PostgreSQL Instance", - "endpoint": "postgresql://postgres:<replace with password>@12.345.567.89:5432" - }, - { - "description": "Log Search Dashboard", - "endpoint": "https://23.456.78.99:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:postgres01'))" - }, - { - "description": "Metrics Dashboard", - "endpoint": "https://34.567.890.12:3000/d/postgres-metrics?var-Namespace=arc&var-Name=postgres01" - } - ], - "engine": "PostgreSql", - "name": "postgres01" - } - ], - "namespace": "arc" -} -``` --Use these end points to: --- Form your connection strings and connect with your client tools or applications-- Access the Grafana and Kibana dashboards from your browser--For example, you can use the end point named _PostgreSQL Instance_ to connect with psql to your server group: --```console -psql postgresql://postgres:MyPassworkd@12.345.567.89:5432 -psql (10.14 (Ubuntu 10.14-0ubuntu0.18.04.1), server 12.4 (Ubuntu 12.4-1.pgdg16.04+1)) -WARNING: psql major version 10, server major version 12. - Some psql features might not work. -SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) -Type "help" for help. --postgres=# -``` -> [!NOTE] -> -> - The password of the _postgres_ user indicated in the end point named "_PostgreSQL Instance_" is the password you chose when deploying the server group. ---## From CLI with kubectl --```console -kubectl get postgresqls/<server name> -n <namespace name> -``` --For example: -```azurecli -kubectl get postgresqls/postgres01 -n arc -``` --Those commands will produce output like the one below. You can use that information to form your connection strings: --```console -NAME STATE READY-PODS PRIMARY-ENDPOINT AGE -postgres01 Ready 3/3 12.345.567.89:5432 9d -``` --## Form connection strings --Use the connections string examples below for your server group. Copy, paste, and customize them as needed: --> [!IMPORTANT] -> SSL is required for client connections. In connection string, the SSL mode parameter should not be disabled. For more information, review [https://www.postgresql.org/docs/14/runtime-config-connection.html](https://www.postgresql.org/docs/14/runtime-config-connection.html). 
--### ADO.NET --```ado.net -Server=192.168.1.121;Database=postgres;Port=24276;User Id=postgres;Password={your_password_here};Ssl Mode=Require; -``` --### C++ (libpq) --```cpp -host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require -``` --### JDBC --```jdbc -jdbc:postgresql://192.168.1.121:24276/postgres?user=postgres&password={your_password_here}&sslmode=require -``` --### Node.js --```node.js -host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require -``` --### PHP --```php -host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require -``` --### psql --```psql -psql "host=192.168.1.121 port=24276 dbname=postgres user=postgres password={your_password_here} sslmode=require" -``` --### Python --```python -dbname='postgres' user='postgres' host='192.168.1.121' password='{your_password_here}' port='24276' sslmode='require' -``` --### Ruby --```ruby -host=192.168.1.121; dbname=postgres user=postgres password={your_password_here} port=24276 sslmode=require -``` --## Related content -- Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server group |
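For scripting, the PostgreSQL endpoint can be pulled out of the JSON shown earlier with a JMESPath `--query`. This is a sketch that assumes the output shape in the sample above (`instances[].endpoints[]`) and the example server name `postgres01`:

```console
# Return only the PostgreSQL connection endpoint as plain text.
az postgres server-arc endpoint list -n postgres01 --k8s-namespace arc --use-k8s \
  --query "instances[0].endpoints[?description=='PostgreSQL Instance'].endpoint | [0]" -o tsv
```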
azure-arc | Install Arcdata Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-arcdata-extension.md | - Title: Install `arcdata` extension -description: Install the `arcdata` extension for Azure (`az`) CLI ------- Previously updated : 07/30/2021----# Install `arcdata` Azure CLI extension --> [!IMPORTANT] -> If you are updating to a new release, please be sure to also update to the latest version of Azure CLI and the `arcdata` extension. ---## Install latest Azure CLI --To get the latest Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli). ---## Add `arcdata` extension --To add the extension, run the following command: --```azurecli -az extension add --name arcdata -``` --[Learn more about Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). --## Update `arcdata` extension --If you already have the extension, you can update it with the following command: --```azurecli -az extension update --name arcdata -``` --## Related content --[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) |
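To confirm which version of the extension is installed after adding or updating it, the standard `az extension` commands can be used; a minimal sketch:

```console
# Show the installed arcdata extension version.
az extension show --name arcdata --query version -o tsv

# Or list all installed extensions in a table.
az extension list -o table
```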
azure-arc | Install Client Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-client-tools.md | - Title: Install client tools -description: Install azdata, kubectl, Azure CLI, psql, Azure Data Studio (Insiders), and the Arc extension for Azure Data Studio ------- Previously updated : 07/30/2021----# Install client tools for deploying and managing Azure Arc-enabled data services --This article points you to resources to install the tools to manage Azure Arc-enabled data services. --> [!IMPORTANT] -> If you are updating to a new release, update to the latest version of Azure Data Studio, the Azure Arc extension for Azure Data Studio, Azure (`az`) command line interface (CLI), and the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]. -> -> [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --The [`arcdata` extension for Azure CLI (`az`)](about-arcdata-extension.md) replaces `azdata` for Azure Arc-enabled data services. --## Tools for creating and managing Azure Arc-enabled data services --The following table lists common tools required for creating and managing Azure Arc-enabled data services, and how to install those tools: --| Tool | Required | Description | Installation | -||||| -| Azure CLI (`az`)<sup>1</sup> | Yes | Modern command-line interface for managing Azure services. Used to manage Azure services in general and also specifically Azure Arc-enabled data services using the CLI or in scripts for both indirectly connected mode (available now) and directly connected mode (available soon). ([More info](/cli/azure/)). | [Install](/cli/azure/install-azure-cli) | -| `arcdata` extension for Azure (`az`) CLI | Yes | Command-line tool for managing Azure Arc-enabled data services as an extension to the Azure CLI (`az`) | [Install](install-arcdata-extension.md) | -| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostrgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc-enabled data services. | [Install](/azure-data-studio/download-azure-data-studio) | -| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services.| Install from the extensions gallery in Azure Data Studio.| -| PostgreSQL extension in Azure Data Studio | No | PostgreSQL extension for Azure Data Studio that provides management capabilities for PostgreSQL. | <!--{need link} [Install](../azure-data-studio/data-virtualization-extension.md) --> Install from extensions gallery in Azure Data Studio.| -| Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) \| [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/) | -| `curl` <sup>3</sup> | Required for some sample scripts. | Command-line tool for transferring data with URLs. | [Windows](https://curl.haxx.se/windows/) \| Linux: install curl package | -| `oc` | Required for Red Hat OpenShift and Azure Redhat OpenShift deployments. |`oc` is the Open Shift command line interface (CLI). 
| [Installing the CLI](https://docs.openshift.com/container-platform/4.6/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli) ----<sup>1</sup> You must be using Azure CLI version 2.26.0 or later. Run `az --version` to find the version if needed. --<sup>2</sup> You must use `kubectl` version 1.19 or later. Also, the version of `kubectl` should be plus or minus one minor version of your Kubernetes cluster. If you want to install a specific version on `kubectl` client, see [Install `kubectl` binary via curl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl) (on Windows 10, use cmd.exe and not Windows PowerShell to run curl). --<sup>3</sup> For PowerShell, `curl` is an alias to the Invoke-WebRequest cmdlet. --## Related content --[Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) |
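The version requirements called out in the footnotes can be checked with the tools' own version commands; a quick sketch:

```console
# Verify client tool versions against the requirements above.
az --version
kubectl version --client
curl --version
```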
azure-arc | Least Privilege | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/least-privilege.md | - Title: Operate Azure Arc-enabled data services with least privileges -description: Explains how to operate Azure Arc-enabled data services with least privileges ------ Previously updated : 11/07/2021----# Operate Azure Arc-enabled data services with least privileges --Operating Arc-enabled data services with least privileges is a security best practice. Only grant users and service accounts the specific permissions required to perform the required tasks. Both Azure and Kubernetes provide a role-based access control model which can be used to grant these specific permissions. This article describes certain common scenarios in which the security of least privilege should be applied. --> [!NOTE] -> In this article, a namespace name of `arc` will be used. If you choose to use a different name, then use the same name throughout. -> In this article, the `kubectl` CLI utility is used as the example. Any tool or system that uses the Kubernetes API can be used though. --## Deploy the Azure Arc data controller --Deploying the Azure Arc data controller requires some permissions which can be considered high privilege such as creating a Kubernetes namespace or creating cluster role. The following steps can be followed to separate the deployment of the data controller into multiple steps, each of which can be performed by a user or a service account which has the required permissions. This separation of duties ensures that each user or service account in the process has just the permissions required and nothing more. --### Deploy a namespace in which the data controller will be created --This step will create a new, dedicated Kubernetes namespace into which the Arc data controller will be deployed. It is essential to perform this step first, because the following steps will use this new namespace as a scope for the permissions that are being granted. --Permissions required to perform this action: --- Namespace- - Create - - Edit (if required for OpenShift clusters) --Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created. --```console -kubectl create namespace arc -``` --If you are using OpenShift, you will need to edit the `openshift.io/sa.scc.supplemental-groups` and `openshift.io/sa.scc.uid-range` annotations on the namespace using `kubectl edit namespace <name of namespace>`. Change these existing annotations to match these _specific_ UID and fsGroup IDs/ranges. --```console -openshift.io/sa.scc.supplemental-groups: 1000700001/10000 -openshift.io/sa.scc.uid-range: 1000700001/10000 -``` --## Assign permissions to the deploying service account and users/groups --This step will create a service account and assign roles and cluster roles to the service account so that the service account can be used in a job to deploy the Arc data controller with the least privileges required. 
--Permissions required to perform this action: --- Service account- - Create -- Role- - Create -- Role binding- - Create -- Cluster role- - Create -- Cluster role binding- - Create -- All the permissions being granted to the service account (see the arcdata-deployer.yaml below for details)--Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file. --```console -kubectl apply --namespace arc -f arcdata-deployer.yaml -``` --## Grant permissions to users to create the bootstrapper job and data controller --Permissions required to perform this action: --- Role- - Create -- Role binding- - Create --Save a copy of [arcdata-installer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arcdata-installer.yaml), and replace the placeholder `{{INSTALLER_USERNAME}}` in the file with the name of the user to grant the permissions to, for example: `john@contoso.com`. Add additional role binding subjects such as other users or groups as needed. Run the following command to create the installer permissions with the edited file. --```console -kubectl apply --namespace arc -f arcdata-installer.yaml -``` --## Deploy the bootstrapper job --Permissions required to perform this action: --- User that is assigned to the arcdata-installer-role role in the previous step--Run the following command to create the bootstrapper job that will run preparatory steps to deploy the data controller. --```console -kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml -``` --## Create the Arc data controller --Now you are ready to create the data controller itself. --First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings. --### Create the metrics and logs dashboards user names and passwords --At the top of the file, you can specify a user name and password that is used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it with only those that need to have these privileges. --A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. --```consoole -echo -n '<your string to encode here>' | base64 -# echo -n 'example' | base64 -``` --Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify SSL/TLS certificates during Kubernetes native tools deployment](monitor-certificates.md). --### Edit the data controller configuration --Edit the data controller configuration as needed: --#### REQUIRED --- `location`: Change this to be the Azure location where the _metadata_ about the data controller will be stored. 
Review the [list of available regions](overview.md#supported-regions).-- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.-- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.--#### Recommended: review and possibly change defaults --Review these values, and update for your deployment: --- `storage..className`: the storage class to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, you can run the following command: `kubectl get storageclass`. The default value is `default`, which assumes a storage class named `default` exists - it does not mean the cluster's default storage class is used. Note: There are two className settings to be set to the desired storage class - one for data and one for logs.-- `serviceType`: Change the service type to NodePort if you are not using a LoadBalancer.-- `security`: For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.-- ```yml - security: - allowDumps: false - allowNodeMetricsCollection: false - allowPodMetricsCollection: false - ``` --#### Optional --The following settings are optional. --- `name`: The default name of the data controller is arc, but you can change it if you want.-- `displayName`: Set this to the same value as the name attribute at the top of the file.-- `registry`: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and pushing them to a private container registry, enter the IP address or DNS name of your registry here.-- `dockerRegistry`: The secret to use to pull the images from a private container registry if required.-- `repository`: The default repository on the Microsoft Container Registry is arcdata. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images.-- `imageTag`: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version.-- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.-- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.--The following example shows a completed data controller yaml. ---Save the edited file on your local computer and run the following command to create the data controller: --```console -kubectl create --namespace arc -f <path to your data controller file> --#Example -kubectl create --namespace arc -f data-controller.yaml -``` --### Monitoring the creation status --Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: --```console -kubectl get datacontroller --namespace arc -``` --```console -kubectl get pods --namespace arc -``` --You can also check on the creation status or logs of any particular pod by running a command like the ones below. This is especially useful for troubleshooting any issues. 
--```console -kubectl describe pod/<pod name> --namespace arc -kubectl logs <pod name> --namespace arc --#Example: -#kubectl describe pod/control-2g7bl --namespace arc -#kubectl logs control-2g7b1 --namespace arc -``` --## Related content --You have several additional options for creating the Azure Arc data controller: --> **Just want to try things out?** -> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on AKS, Amazon EKS, or GKE, or in an Azure VM. -> --- [Create a data controller in direct connectivity mode with the Azure portal](create-data-controller-direct-prerequisites.md)-- [Create a data controller in indirect connectivity mode with CLI](create-data-controller-indirect-cli.md)-- [Create a data controller in indirect connectivity mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)-- [Create a data controller in indirect connectivity mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)-- [Create a data controller in indirect connectivity mode with Kubernetes tools such as `kubectl` or `oc`](create-data-controller-using-kubernetes-native-tools.md) |
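After applying *arcdata-deployer.yaml* and *arcdata-installer.yaml* earlier in this procedure, you can confirm that the scoped RBAC objects were created in the namespace. A minimal sketch with standard `kubectl`; the exact object names depend on the contents of those files:

```console
# List the service accounts, roles, and role bindings created in the arc namespace.
kubectl get serviceaccount,role,rolebinding -n arc
```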
azure-arc | Limitations Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md | - Title: Limitations of SQL Managed Instance enabled by Azure Arc -description: Limitations of SQL Managed Instance enabled by Azure Arc ------ Previously updated : 09/07/2021----# Limitations of SQL Managed Instance enabled by Azure Arc --This article describes limitations of SQL Managed Instance enabled by Azure Arc. --## Back up and restore --### Automated backups --- User databases with SIMPLE recovery model are not backed up.-- System database `model` is not backed up in order to prevent interference with creation/deletion of database. The database gets locked when admin operations are performed.--### Point-in-time restore (PITR) --- Doesn't support restore from one SQL Managed Instance enabled by Azure Arc to another SQL Managed Instance enabled by Azure Arc. The database can only be restored to the same Arc-enabled SQL Managed Instance where the backups were created.-- Renaming databases is currently not supported, during point in time restore.-- No support for restoring a TDE enabled database currently.-- A deleted database cannot be restored currently.--## Other limitations --- Transactional replication is currently not supported.-- Log shipping is currently blocked.-- All user databases need to be in a full recovery model because they participate in an always-on-availability group--## Roles and responsibilities --The roles and responsibilities between Microsoft and its customers differ between Azure PaaS services (Platform As A Service) and Azure hybrid (like SQL Managed Instance enabled by Azure Arc). --### Frequently asked questions --This table summarizes answers to frequently asked questions regarding support roles and responsibilities. --| Question | Azure Platform As A Service (PaaS) | Azure Arc hybrid services | -|:-|::|::| -| Who provides the infrastructure? | Microsoft | Customer | -| Who provides the software?* | Microsoft | Microsoft | -| Who does the operations? | Microsoft | Customer | -| Does Microsoft provide SLAs? | Yes | No | -| WhoΓÇÖs in charge of SLAs? | Microsoft | Customer | --\* Azure services --__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Customers and their partners own and operate the infrastructure that Azure Arc hybrid services run on so Microsoft can't provide the SLA. --## Related content --- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_data) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. --- **Create your own.** Follow these steps to create on your own Kubernetes cluster: - 1. [Install the client tools](install-client-tools.md) - 2. [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) - 3. [Deploy SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) --- **Learn**- - [Read more about Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services) - - [Read about Azure Arc](https://aka.ms/azurearc) |
azure-arc | Limitations Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-postgresql.md | - Title: Limitations of Azure Arc-enabled PostgreSQL -description: Limitations of Azure Arc-enabled PostgreSQL ------ Previously updated : 11/03/2021----# Limitations of Azure Arc-enabled PostgreSQL --This article describes limitations of Azure Arc-enabled PostgreSQL. ---## High availability --Configuring high availability to recover from infrastructure failures isn't yet available. --## Monitoring --Currently, local monitoring with Grafana is only available for the default `postgres` database. Metrics dashboards for user created databases will be empty. --## Configuration --System configurations that are stored in `postgresql.auto.conf` are backed up when a base backup is created. This means that changes made after the last base backup, will not be present in a restored server until a new base backup is taken to capture those changes. --## Roles and responsibilities --The roles and responsibilities between Microsoft and its customers differ between Azure managed services (Platform As A Service or PaaS) and Azure hybrid (like Azure Arc-enabled PostgreSQL). --### Frequently asked questions -The table below summarizes answers to frequently asked questions regarding support roles and responsibilities. --| Question | Azure Platform As A Service (PaaS) | Azure Arc hybrid services | -|:-|::|::| -| Who provides the infrastructure? | Microsoft | Customer | -| Who provides the software?* | Microsoft | Microsoft | -| Who does the operations? | Microsoft | Customer | -| Does Microsoft provide SLAs? | Yes | No | -| WhoΓÇÖs in charge of SLAs? | Microsoft | Customer | --\* Azure services --__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Because with a hybrid service, you or your provider owns the infrastructure. --## Related content --- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. --- **Create your own.** Follow these steps to create on your own Kubernetes cluster: - 1. [Install the client tools](install-client-tools.md) - 2. [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) - 3. [Create an Azure Database for PostgreSQL server on Azure Arc](create-postgresql-server.md) --- **Learn**- - [Read more about Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services) - - [Read about Azure Arc](https://aka.ms/azurearc) |
azure-arc | List Servers Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/list-servers-postgresql.md | - Title: List the Azure Arc-enabled PostgreSQL servers created in an Azure Arc Data Controller -description: List the Azure Arc-enabled PostgreSQL servers created in an Azure Arc Data Controller ------- Previously updated : 11/03/2021----# List the Azure Arc-enabled PostgreSQL servers created in an Azure Arc Data Controller --This article explains how you can retrieve the list of servers created in your Arc Data Controller. --To retrieve this list, use either of the following methods once you are connected to the Arc Data Controller: ---## From CLI with Azure CLI extension (az) --The general format of the command is: -```azurecli -az postgres server-arc list --k8s-namespace <namespace> --use-k8s -``` --It will return an output like: -```console -[ - { - "name": "postgres01", - "state": "Ready" - } -] -``` -For more details about the parameters available for this command, run: -```azurecli -az postgres server-arc list --help -``` --## From CLI with kubectl -Run either of the following commands. --**To list the server groups irrespective of the version of Postgres, run:** -```console -kubectl get postgresqls -n <namespace> -``` -It will return an output like: -```console -NAME STATE READY-PODS PRIMARY-ENDPOINT AGE -postgres01 Ready 5/5 12.345.67.890:5432 12d -``` --## Related content: --* [Read the article about how to get the connection end points and form the connection strings to connect to your server group](get-connection-endpoints-and-connection-strings-postgresql-server.md) -* [Read the article about showing the configuration of an Azure Arc-enabled PostgreSQL server](show-configuration-postgresql-server.md) |
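Once you have a server name from the list, its full configuration can be retrieved with the `show` subcommand, assuming it is available in your version of the `arcdata` extension; a sketch:

```console
# Show the full configuration of one server from the list.
az postgres server-arc show -n postgres01 --k8s-namespace <namespace> --use-k8s
```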
azure-arc | Maintenance Window | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/maintenance-window.md | - Title: Maintenance window - Azure Arc-enabled data services -description: Article describes how to set a maintenance window ------- Previously updated : 03/31/2022----# Maintenance window - Azure Arc-enabled data services --Configure a maintenance window on a data controller to define a time period for upgrades. In this time period, the Arc-enabled SQL Managed Instances on that data controller which have the `desiredVersion` property set to `auto` will be upgraded. --During setup, specify a duration, recurrence, and start date and time. After the maintenance window starts, it will run for the period of time set in the duration. The instances attached to the data controller will begin upgrades (in parallel). At the end of the set duration, any upgrades that are in progress will continue to completion. Any instances that did not begin upgrading in the window will begin upgrading in the following recurrence. --## Prerequisites --a SQL Managed Instance enabled by Azure Arc with the [`desiredVersion` property set to `auto`](upgrade-sql-managed-instance-auto.md). --## Limitations --The maintenance window duration can be from 2 hours to 8 hours. --Only one maintenance window can be set per data controller. --## Configure a maintenance window --The maintenance window has these settings: --- Duration - The length of time the window will run, expressed in hours and minutes (HH:mm).-- Recurrence - how often the window will occur. All words are case sensitive and must be capitalized. You can set weekly or monthly windows.- - Weekly - - [Week | Weekly][day of week] - - Examples: - - `--recurrence "Week Thursday"` - - `--recurrence "Weekly Saturday"` - - Monthly - - [Month | Monthly] [First | Second | Third | Fourth | Last] [day of week] - - Examples: - - `--recurrence "Month Fourth Saturday"` - - `--recurrence "Monthly Last Monday"` - - If recurrence isn't specified, it will be a one-time maintenance window. -- Start - the date and time the first window will occur, in the format `YYYY-MM-DDThh:mm` (24-hour format).- - Example: - - `--start "2022-02-01T23:00"` -- Time Zone - the [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) associated with the maintenance window.--#### CLI --To create a maintenance window, use the following command: --```cli -az arcdata dc update --maintenance-start <date and time> --maintenance-duration <time> --maintenance-recurrence <interval> --maintenance-time-zone <time zone> --k8s-namespace <namespace> --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-start "2022-01-01T23:00" --maintenance-duration 3:00 --maintenance-recurrence "Monthly First Saturday" --maintenance-time-zone US/Pacific --k8s-namespace arc --use-k8s -``` --## Monitor the upgrades --During the maintenance window, you can view the status of upgrades. --```kubectl -kubectl -n <namespace> get sqlmi -o yaml -``` --The `status.runningVersion` and `status.lastUpdateTime` fields will show the latest version and when the status changed. --## View existing maintenance window --You can view the maintenance window in the `datacontroller` spec. 
--```kubectl -kubectl describe datacontroller -n <namespace> -``` --Output: --```text -Spec: - Settings: - Maintenance: - Duration: 3:00 - Recurrence: Monthly First Saturday - Start: 2022-01-01T23:00 - Time Zone: US/Pacific -``` --## Failed upgrades --There is no automatic rollback for failed upgrades. If an instance failed to upgrade automatically, manual intervention will be needed to pin the instance to its current running version, using `az sql mi-arc update`. After the issue is resolved, the version can be set back to "auto". --```cli -az sql mi-arc upgrade --name <instance name> --desired-version <version> -``` --Example: -```cli -az sql mi-arc upgrade --name sql01 --desired-version v1.2.0_2021-12-15 -``` --## Disable maintenance window --When the maintenance window is disabled, automatic upgrades will not run. --```cli -az arcdata dc update --maintenance-enabled false --k8s-namespace <namespace> --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-enabled false --k8s-namespace arc --use-k8s -``` --## Enable maintenance window --When the maintenance window is enabled, automatic upgrades will resume. --```cli -az arcdata dc update --maintenance-enabled true --k8s-namespace <namespace> --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-enabled true --k8s-namespace arc --use-k8s -``` --## Change maintenance window options --The update command can be used to change any of the options. In this example, I will update the start time. --```cli -az arcdata dc update --maintenance-start <date and time> --k8s-namespace arc --use-k8s -``` --Example: --```cli -az arcdata dc update --maintenance-start "2022-04-15T23:00" --k8s-namespace arc --use-k8s -``` --## Related content --[Enable automatic upgrades of a SQL Managed Instance](upgrade-sql-managed-instance-auto.md) |
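Once a failed upgrade is resolved, the instance can be opted back into the maintenance window by setting its desired version back to `auto`. This sketch reuses the same command form as the pinning example above and assumes an instance named `sql01` in the `arc` namespace:

```console
# Re-enable automatic upgrades for the instance after the issue is resolved.
az sql mi-arc upgrade --name sql01 --desired-version auto --k8s-namespace arc --use-k8s
```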
azure-arc | Manage Postgresql Server With Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/manage-postgresql-server-with-azure-data-studio.md | - Title: Use Azure Data Studio to manage your PostgreSQL instance -description: Use Azure Data Studio to manage your PostgreSQL instance ------ Previously updated : 07/30/2021----# Use Azure Data Studio to manage your Azure Arc-enabled PostgreSQL server ---This article describes how to: -- manage your PostgreSQL instances with dashboard views like Overview, Connection Strings, Properties, Resource Health...-- work with your data and schema---## Prerequisites --- [Install azdata, Azure Data Studio, and Azure CLI](install-client-tools.md)-- Install in Azure Data Studio the **[!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]** and **Azure Arc** and **PostgreSQL** extensions-- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --- Create the [Azure Arc Data Controller](./create-data-controller-indirect-cli.md)-- Launch Azure Data Studio--## Connect to the Azure Arc Data Controller --In Azure Data Studio, expand the node **Azure Arc Controllers** and select the **Connect Controller** button: --Enter the connection information to your Azure Data Controller: --- **Controller URL:**-- The URL to connect to your controller in Kubernetes. Entered in the form of `https://<IP_address_of_the_controller>:<Kubernetes_port.` - For example: -- ```console - https://12.345.67.890:30080 - ``` -- **Username:**-- Name of the user account you use to connect to the Controller. Use the name you typically use when you run `az login`. It is not the name of the PostgreSQL user you use to connect to the PostgreSQL database engine typically from psql. -- **Password:**- The password of the user account you use to connect to the Controller ---Azure data studio shows your Arc Data Controller. Expand it and it shows the list of PostgreSQL instances that it manages. --## Manage your Azure Arc-enabled PostgreSQL servers --Right-click on the PostgreSQL instance you want to manage and select [Manage] --The PostgreSQL Dashboard view: --That features several dashboards listed on the left side of that pane: --- **Overview:** - Displays summary information about your instance like name, PostgreSQL admin user name, Azure subscription ID, configuration, version of the database engine, endpoints for Grafana and Kibana... -- **Connection Strings:** - Displays various connection strings you may need to connect to your PostgreSQL instance like psql, Node.js, PHP, Ruby... -- **Diagnose and solve problems:** - Displays various resources that will help you troubleshoot your instance as we expand the troubleshooting notebooks -- **New support request:** - Request assistance from our support services starting preview announcement. --## Work with your data and schema --On the left side of the Azure Data Studio window, expand the node **Servers**: --And select [Add Connection] and fill in the connection details to your PostgreSQL instance: -- **Connection Type:** PostgreSQL-- **Server name:** enter the name of your PostgreSQL instance. For example: postgres01-- **Authentication type:** Password-- **User name:** for example, you can use the standard/default PostgreSQL admin user name. 
Note, this field is case-sensitive.-- **Password:** you'll find the password of the PostgreSQL username in the psql connection string in the output of the `az postgres server-arc endpoint list -n postgres01` command-- **Database name:** set the name of the database you want to connect to. You can leave it set to __Default__-- **Server group:** you can leave it set to __Default__-- **Name (optional):** you can leave this blank-- **Advanced:**- - **Host IP Address:** is the public IP address of the Kubernetes cluster - - **Port:** is the port on which your PostgreSQL instance is listening. You can find this port at the end of the psql connection string in the output of the `az postgres server-arc endpoint list -n postgres01` command. It is not port 30080, on which Kubernetes is listening and which you entered when connecting to the Azure Data Controller in Azure Data Studio. - - **Other parameters:** These should be self-explanatory; you can keep the default/blank values they appear with. --Select **[OK] and [Connect]** to connect to your server. --Once connected, several experiences are available: -- **New query**-- **New Notebook**-- **Expand the display of your server and browse/work on the objects inside your database**-- **...**--## Next step -[Monitor your server group](monitor-grafana-kibana.md) |
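The host IP, port, and admin user referenced in the connection form above all come from the endpoint list for the server. A minimal sketch using the example server name `postgres01` and namespace `arc`:

```console
# The psql entry in this output contains the host IP, port, and user to enter in Azure Data Studio.
az postgres server-arc endpoint list -n postgres01 --k8s-namespace arc --use-k8s
```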
azure-arc | Managed Instance Business Continuity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-business-continuity-overview.md | - Title: Business continuity overview - SQL Managed Instance enabled by Azure Arc -description: Overview business continuity for SQL Managed Instance enabled by Azure Arc ------ Previously updated : 01/27/2022----# Overview: SQL Managed Instance enabled by Azure Arc business continuity --Business continuity is a combination of people, processes, and technology that enables businesses to recover and continue operating in the event of disruptions. In hybrid scenarios there is a joint responsibility between Microsoft and customer, such that customer owns and manages the on-premises infrastructure while the software is provided by Microsoft. --## Features --This overview describes the set of capabilities that come built-in with SQL Managed Instance enabled by Azure Arc and how you can leverage them to recover from disruptions. --| Feature | Use case | Service Tier | -|--|--|| -| Point in time restore | Use the built-in point in time restore (PITR) feature to recover from situations such as data corruptions caused by human errors. Learn more about [Point in time restore](.\point-in-time-restore.md) | Available in both General Purpose and Business Critical service tiers| -| High availability | Deploy the Azure Arc enabled SQL Managed Instance in high availability mode to achieve local high availability. This mode automatically recovers from scenarios such as hardware failures, pod/node failures, and etc. The built-in listener service automatically redirects new connections to another replica while Kubernetes attempts to rebuild the failed replica. Learn more about [high-availability in SQL Managed Instance enabled by Azure Arc](.\managed-instance-high-availability.md) |This feature is only available in the Business Critical service tier. <br> For General Purpose service tier, Kubernetes provides basic recoverability from scenarios such as node/pod crashes. | -|Disaster recovery| Configure disaster recovery by setting up another SQL Managed Instance enabled by Azure Arc in a geographically separate data center to synchronize data from the primary data center. This scenario is useful for recovering from events when an entire data center is down due to disruptions such as power outages or other events. | Available in both General Purpose and Business Critical service tiers| -| --## Related content --[Learn more about configuring point in time restore](.\point-in-time-restore.md) --[Learn more about configuring high availability in SQL Managed Instance enabled by Azure Arc](.\managed-instance-high-availability.md) --[Learn more about setting up and configuring disaster recovery in SQL Managed Instance enabled by Azure Arc](.\managed-instance-disaster-recovery.md) |
azure-arc | Managed Instance Disaster Recovery Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-cli.md | - Title: Configure failover group - CLI -description: Describes how to configure disaster recovery with a failover group for SQL Managed Instance enabled by Azure Arc with the CLI ------- Previously updated : 08/02/2023----# Configure failover group - CLI --This article explains how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc with the CLI. Before you proceed, review the information and prerequisites in [SQL Managed Instance enabled by Azure Arc - disaster recovery](managed-instance-disaster-recovery.md). ---## Configure Azure failover group - direct mode --Follow the steps below if the Azure Arc data services are deployed in `directly` connected mode. --Once the prerequisites are met, run the below command to set up Azure failover group between the two instances: --```azurecli -az sql instance-failover-group-arc create --name <name of failover group> --mi <primary SQL MI> --partner-mi <Partner MI> --resource-group <name of RG> --partner-resource-group <name of partner MI RG> -``` --Example: --```azurecli -az sql instance-failover-group-arc create --name sql-fog --mi sql1 --partner-mi sql2 --resource-group rg-name --partner-resource-group rg-name -``` --The above command: --- Creates the required custom resources on both primary and secondary sites-- Copies the mirroring certificates and configures the failover group between the instances --## Configure Azure failover group - indirect mode --Follow the steps below if Azure Arc data services are deployed in `indirectly` connected mode. --1. Provision the managed instance in the primary site. -- ```azurecli - az sql mi-arc create --name <primaryinstance> --tier bc --replicas 3 --k8s-namespace <namespace> --use-k8s - ``` --2. Switch context to the secondary cluster by running ```kubectl config use-context <secondarycluster>``` and provision the managed instance in the secondary site that will be the disaster recovery instance. At this point, the system databases are not part of the contained availability group. -- > [!NOTE] - > It is important to specify `--license-type DisasterRecovery` **during** the managed instance. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect. -- ```azurecli - az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s - ``` --3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the managed instance is needed for the Instance Failover Group CR (Custom Resource) creation. -- This can be achieved in a few ways: -- (a) If using `az` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed after failover group creation. -- (b) If using `kubectl`, directly copy and paste the binary data from the managed instance CR into the yaml file that will be used to create the Instance Failover Group. 
--- Using (a) above: -- Create the mirroring certificate file for the primary instance: - ```azurecli - az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s - ``` -- Example: - ```azurecli - az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s - ``` -- Connect to the secondary cluster and create the mirroring certificate file for the secondary instance: -- ```azurecli - az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s - ``` -- Example: -- ```azurecli - az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s - ``` -- Once the mirroring certificate files are created, copy the certificate from the secondary instance to a shared/local path on the primary instance cluster and vice versa. --4. Create the failover group resource on both sites. --- > [!NOTE] - > Ensure the SQL instances have different names on the primary and secondary sites, and that the `shared-name` value is identical on both sites. - - ```azurecli - az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary failover group resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s - ``` -- Example: - ```azurecli - az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s - ``` -- On the secondary instance, run the following command to set up the failover group custom resource. The `--partner-mirroring-cert-file` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above. -- ```azurecli - az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary failover group resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s - ``` -- Example: - ```azurecli - az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s - ``` --## Retrieve Azure failover group health state --Information about the failover group, such as primary role, secondary role, and current health status, can be viewed on the custom resource on either the primary or secondary site. 
--Run the following command on the primary and/or secondary site to list the failover group custom resources: --```console -kubectl get fog -n <namespace> -``` --Describe the custom resource to retrieve the failover group status, as follows: --```console -kubectl describe fog <failover group cr name> -n <namespace> -``` --## Failover group operations --Once the failover group is set up between the managed instances, different failover operations can be performed depending on the circumstances. --Possible failover scenarios are: --- The instances at both sites are in a healthy state and a failover needs to be performed: - + perform a manual failover from primary to secondary without data loss by setting `role=secondary` on the primary SQL MI. - -- Primary site is unhealthy/unreachable and a failover needs to be performed:- - + the primary SQL Managed Instance enabled by Azure Arc is down/unhealthy/unreachable - + the secondary SQL Managed Instance enabled by Azure Arc needs to be force-promoted to primary with potential data loss - + when the original primary SQL Managed Instance enabled by Azure Arc comes back online, it will report as `Primary` role and unhealthy state and needs to be forced into a `secondary` role so it can join the failover group and data can be synchronized. - --## Manual failover (without data loss) --Use the `az sql instance-failover-group-arc update ...` command to initiate a failover from primary to secondary. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover. --### Directly connected mode -Run the following command to initiate a manual failover in directly connected mode, using ARM APIs: --```azurecli -az sql instance-failover-group-arc update --name <shared name of failover group> --mi <primary instance> --role secondary --resource-group <resource group> -``` -Example: --```azurecli -az sql instance-failover-group-arc update --name myfog --mi sqlmi1 --role secondary --resource-group myresourcegroup -``` -### Indirectly connected mode -Run the following command to initiate a manual failover in indirectly connected mode, using Kubernetes APIs: --```azurecli -az sql instance-failover-group-arc update --name <name of failover group resource> --role secondary --k8s-namespace <namespace> --use-k8s -``` --Example: --```azurecli -az sql instance-failover-group-arc update --name myfog --role secondary --k8s-namespace my-namespace --use-k8s -``` --## Forced failover with data loss --If the geo-primary instance becomes unavailable, the following commands can be run on the geo-secondary DR instance to promote it to primary with a forced failover, incurring potential data loss. --On the geo-secondary DR instance, run the following command to promote it to primary role, with data loss. --> [!NOTE] -> If the `--partner-sync-mode` was configured as `sync`, it needs to be reset to `async` when the secondary is promoted to primary. 
--### Directly connected mode -```azurecli -az sql instance-failover-group-arc update --name <shared name of failover group> --mi <instance> --role force-primary-allow-data-loss --resource-group <resource group> --partner-sync-mode async -``` -Example: --```azurecli -az sql instance-failover-group-arc update --name myfog --mi sqlmi2 --role force-primary-allow-data-loss --resource-group myresourcegroup --partner-sync-mode async -``` --### Indirectly connected mode -```azurecli -az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-primary-allow-data-loss --partner-sync-mode async -``` --When the geo-primary instance becomes available, run the below command to bring it into the failover group and synchronize the data: --### Directly connected mode -```azurecli -az sql instance-failover-group-arc update --name <shared name of failover group> --mi <old primary instance> --role force-secondary --resource-group <resource group> -``` --### Indirectly connected mode -```azurecli -az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-secondary -``` -Optionally, the `--partner-sync-mode` can be configured back to `sync` mode if desired. --## Post failover operations -Once you perform a failover from primary site to secondary site, either with or without data loss, you may need to do the following: -- Update the connection string for your applications to connect to the newly promoted primary Arc SQL managed instance-- If you plan to continue running the production workload off of the secondary site, update the `--license-type` to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed.--## Related content --- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md)-- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md) |
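As a concrete illustration of the post-failover license update mentioned above, the following is a minimal sketch for directly connected mode. The instance and resource group names reuse the earlier examples; whether `az sql mi-arc update` accepts `--license-type` can depend on your `arcdata` CLI extension version, so verify before relying on it.

```azurecli
# After promoting the DR instance to primary, switch it to a billable license type.
# Names are placeholders from the earlier examples; verify parameter support with: az sql mi-arc update --help
az sql mi-arc update --name sqlmi2 --resource-group myresourcegroup --license-type BasePrice
```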
azure-arc | Managed Instance Disaster Recovery Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-portal.md | - Title: Disaster recovery - SQL Managed Instance enabled by Azure Arc - portal -description: Describes how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc in the portal ------ Previously updated : 08/02/2023----# Configure failover group - portal --This article explains how to configure disaster recovery for SQL Managed Instance enabled by Azure Arc with Azure portal. Before you proceed, review the information and prerequisites in [SQL Managed Instance enabled by Azure Arc - disaster recovery](managed-instance-disaster-recovery.md). ---To configure disaster recovery through Azure portal, the Azure Arc-enabled data service requires direct connectivity to Azure. --## Configure Azure failover group --1. In the portal, go to your primary availability group. -1. Under **Data Management**, select **Failover Groups**. -- Azure portal presents **Create instance failover group**. -- :::image type="content" source="media/managed-instance-disaster-recovery-portal/create-failover-group.png" alt-text="Screenshot of the Azure portal create instance failover group control."::: --1. Provide the information to define the failover group. -- * **Primary mirroring URL**: The mirroring endpoint for the failover group instance. - * **Resource group**: The resource group for the failover group instance. - * **Secondary managed instance**: The Azure SQL Managed Instance at the DR location. - * **Synchronization mode**: Select either *Sync* for synchronous mode, or *Async* for asynchronous mode. - * **Instance failover group name**: The name of the failover group. - -1. Select **Create**. --Azure portal begins to provision the instance failover group. --## View failover group --After the failover group is provisioned, you can view it in Azure portal. ---## Failover --In the disaster recovery configuration, only one of the instances in the failover group is primary. You can fail over from the portal to migrate the primary role to the other instance in your failover group. To fail over: --1. In portal, locate your managed instance. -1. Under **Data Management** select **Failover Groups**. -1. Select **Failover**. --Monitor failover progress in Azure portal. --## Set synchronization mode --To set the synchronization mode: --1. From **Failover Groups**, select **Edit configuration**. -- Azure portal shows an **Edit Configuration** control. -- :::image type="content" source="media/managed-instance-disaster-recovery-portal/edit-synchronization.png" alt-text="Screenshot of the Edit Configuration control."::: --1. Under **Edit configuration**, select your desired mode, and select **Apply**. --## Monitor failover group status in the portal --After you use the portal to change a failover group, the portal automatically reports the status as the change is applied. Changes that the portal reports include: --- Add failover group-- Edit failover group configuration-- Start failover-- Delete failover group--After you initiate the change, the portal automatically refreshes the status every two minutes. The portal automatically refreshes for two minutes. --## Delete failover group --1. From Failover Groups**, select **Delete Failover Group**. -- Azure portal asks you to confirm your choice to delete the failover group. --1. Select **Delete failover group** to proceed. Otherwise select **Cancel**, to not delete the group. 
---## Related content --- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md)-- [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md) |
azure-arc | Managed Instance Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md | - Title: Disaster recovery - SQL Managed Instance enabled by Azure Arc -description: Describes disaster recovery for SQL Managed Instance enabled by Azure Arc ------ Previously updated : 08/02/2023----# SQL Managed Instance enabled by Azure Arc - disaster recovery --To configure disaster recovery in SQL Managed Instance enabled by Azure Arc, set up Azure failover groups. This article explains failover groups. --## Background --Azure failover groups use the same distributed availability groups technology that is in SQL Server. Because SQL Managed Instance enabled by Azure Arc runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups). --> [!NOTE] -> - The instances in both geo-primary and geo-secondary sites need to be identical in terms of their compute & capacity, as well as service tiers they are deployed in. -> - Distributed availability groups can be set up for either General Purpose or Business Critical service tiers. --You can configure failover groups in with the CLI or in the portal. For prerequisites and instructions see the respective content below: --- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md)-- [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md)--## Related content --- [Overview: SQL Managed Instance enabled by Azure Arc business continuity](managed-instance-business-continuity-overview.md) |
azure-arc | Managed Instance Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md | - Title: Features and Capabilities of SQL Managed Instance enabled by Azure Arc -description: Features and Capabilities of SQL Managed Instance enabled by Azure Arc ------ Previously updated : 07/30/2021----# Features and Capabilities of SQL Managed Instance enabled by Azure Arc --SQL Managed Instance enabled by Azure Arc share a common code base with the latest stable version of SQL Server. Most of the standard SQL language, query processing, and database management features are identical. The features that are common between SQL Server and SQL Database or SQL Managed Instance are: --- Language features - [Control of flow language keywords](/sql/t-sql/language-elements/control-of-flow), [Cursors](/sql/t-sql/language-elements/cursors-transact-sql), [Data types](/sql/t-sql/data-types/data-types-transact-sql), [DML statements](/sql/t-sql/queries/queries), [Predicates](/sql/t-sql/queries/predicates), [Sequence numbers](/sql/relational-databases/sequence-numbers/sequence-numbers), [Stored procedures](/sql/relational-databases/stored-procedures/stored-procedures-database-engine), and [Variables](/sql/t-sql/language-elements/variables-transact-sql).-- Database features - [Automatic tuning (plan forcing)](/sql/relational-databases/automatic-tuning/automatic-tuning), [Change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server), [Database collation](/sql/relational-databases/collations/set-or-change-the-database-collation), [Contained databases](/sql/relational-databases/databases/contained-databases), [Contained users](/sql/relational-databases/security/contained-database-users-making-your-database-portable), [Data compression](/sql/relational-databases/data-compression/data-compression), [Database configuration settings](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql), [Online index operations](/sql/relational-databases/indexes/perform-index-operations-online), [Partitioning](/sql/relational-databases/partitions/partitioned-tables-and-indexes), and [Temporal tables](/sql/relational-databases/tables/temporal-tables) ([see getting started guide](/sql/relational-databases/tables/getting-started-with-system-versioned-temporal-tables)).-- Security features - [Application roles](/sql/relational-databases/security/authentication-access/application-roles), [Dynamic data masking](/sql/relational-databases/security/dynamic-data-masking) ([Get started with SQL Database dynamic data masking with the Azure portal](/azure/azure-sql/database/dynamic-data-masking-configure-portal)), [Row Level Security](/sql/relational-databases/security/row-level-security)-- Multi-model capabilities - [Graph processing](/sql/relational-databases/graphs/sql-graph-overview), [JSON data](/sql/relational-databases/json/json-data-sql-server), [OPENXML](/sql/t-sql/functions/openxml-transact-sql), [Spatial](/sql/relational-databases/spatial/spatial-data-sql-server), [OPENJSON](/sql/t-sql/functions/openjson-transact-sql), and [XML indexes](/sql/t-sql/statements/create-xml-index-transact-sql).---## <a name="RDBMSHA"></a> RDBMS High Availability - -|Feature|SQL Managed Instance enabled by Azure Arc| -|-|-| -|Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available.| -|Always On availability groups |Business Critical service tier.| -|Basic availability groups |Not Applicable. 
Similar capabilities available.| -|Minimum replica commit availability group |Business Critical service tier.| -|Clusterless availability group|Yes| -|Backup database | Yes - `COPY_ONLY` See [BACKUP - (Transact-SQL)](/sql/t-sql/statements/backup-transact-sql?view=azuresqldb-mi-current&preserve-view=true)| -|Backup compression|Yes| -|Backup mirror |Yes| -|Backup encryption|Yes| -|Back up to Azure to (back up to URL)|Yes| -|Database snapshot|Yes| -|Fast recovery|Yes| -|Hot add memory and CPU|Yes| -|Log shipping|Not currently available.| -|Online page and file restore|Yes| -|Online indexing|Yes| -|Online schema change|Yes| -|Resumable online index rebuilds|Yes| --<sup>1</sup> In the scenario where there is a pod failure, a new SQL Managed Instance will start up and re-attach to the persistent volume containing your data. [Learn more about Kubernetes persistent volumes here](https://kubernetes.io/docs/concepts/storage/persistent-volumes). --## <a name="RDBMSSP"></a> RDBMS Scalability and Performance --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| Columnstore | Yes | -| Large object binaries in clustered columnstore indexes | Yes | -| Online nonclustered columnstore index rebuild | Yes | -| In-Memory OLTP | Yes | -| Persistent Main Memory | Yes | -| Table and index partitioning | Yes | -| Data compression | Yes | -| Resource Governor | Yes | -| Partitioned Table Parallelism | Yes | -| NUMA Aware and Large Page Memory and Buffer Array Allocation | Yes | -| IO Resource Governance | Yes | -| Delayed Durability | Yes | -| Automatic Tuning | Yes | -| Batch Mode Adaptive Joins | Yes | -| Batch Mode Memory Grant Feedback | Yes | -| Interleaved Execution for Multi-Statement Table Valued Functions | Yes | -| Bulk insert improvements | Yes | --## <a name="RDBMSS"></a> RDBMS Security --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| Row-level security | Yes | -| Always Encrypted | Yes | -| Always Encrypted with Secure Enclaves | No | -| Dynamic data masking | Yes | -| Basic auditing | Yes | -| Fine grained auditing | Yes | -| Transparent database encryption | Yes | -| User-defined roles | Yes | -| Contained databases | Yes | -| Encryption for backups | Yes | -| SQL Server Authentication | Yes | -| Microsoft Entra authentication | No | -| Windows Authentication | Yes | --## <a name="RDBMSM"></a> RDBMS Manageability --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| Dedicated administrator connection | Yes | -| PowerShell scripting support | Yes | -| Support for data-tier application component operations - extract, deploy, upgrade, delete | Yes | -| Policy automation (check on schedule and change) | Yes | -| Performance data collector | Yes | -| Standard performance reports | Yes | -| Plan guides and plan freezing for plan guides | Yes | -| Direct query of indexed views (using NOEXPAND hint) | Yes | -| Automatically maintain indexed views | Yes | -| Distributed partitioned views | Yes | -| Parallel indexed operations | Yes | -| Automatic use of indexed view by query optimizer | Yes | -| Parallel consistency check | Yes | --### <a name="Programmability"></a> Programmability --| Feature | SQL Managed Instance enabled by Azure Arc | -|--|--| -| JSON | Yes | -| Query Store | Yes | -| Temporal | Yes | -| Native XML support | Yes | -| XML indexing | Yes | -| MERGE & UPSERT capabilities | Yes | -| Date and Time datatypes | Yes | -| Internationalization support | Yes | -| Full-text and semantic search | No | -| Specification of language in query | Yes | 
-| Service Broker (messaging) | Yes | -| Transact-SQL endpoints | Yes | -| Graph | Yes | -| Machine Learning Services | No | -| PolyBase | No | ---### Tools --SQL Managed Instance enabled by Azure Arc supports various data tools that can help you manage your data. --| **Tool** | SQL Managed Instance enabled by Azure Arc| -| | | | -| Azure portal | Yes | -| Azure CLI | Yes | -| [Azure Data Studio](/azure-data-studio/what-is-azure-data-studio) | Yes | -| Azure PowerShell | No | -| [BACPAC file (export)](/sql/relational-databases/data-tier-applications/export-a-data-tier-application) | Yes | -| [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database) | Yes | -| [SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt) | Yes | -| [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) | Yes | -| [SQL Server PowerShell](/sql/relational-databases/scripting/sql-server-powershell) | Yes | -| [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) | Yes | -- [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] --### <a name="Unsupported"></a> Unsupported Features & Services --The following features and services are not available for SQL Managed Instance enabled by Azure Arc. --| Area | Unsupported feature or service | -|--|--| -| **Database engine** | Merge replication | -| | Stretch DB | -| | Distributed query with 3rd-party connections | -| | Linked Servers to data sources other than SQL Server and Azure SQL products | -| | System extended stored procedures (XP_CMDSHELL, etc.) | -| | FileTable, FILESTREAM | -| | CLR assemblies with the EXTERNAL_ACCESS or UNSAFE permission set | -| | Buffer Pool Extension | -| **SQL Server Agent** | SQL Server agent is supported but the following specific capabilities are not supported: Subsystems (CmdExec, PowerShell, Queue Reader, SSIS, SSAS, SSRS), Alerts, Managed Backup -| **High Availability** | Database mirroring | -| **Security** | Extensible Key Management | -| | AD Authentication for Linked Servers | -| | AD Authentication for Availability Groups (AGs) | |
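Many of the capabilities listed in the tables above can be exercised directly with T-SQL once you connect to the instance. The following sketch touches two of them, JSON support (OPENJSON) and Query Store, using only built-in objects; it's illustrative rather than a compatibility test, and assumes you run it in a user database.

```sql
-- OPENJSON (multi-model capabilities): shred a JSON document into key/value rows.
SELECT [key], [value]
FROM OPENJSON(N'{"tier":"BusinessCritical","replicas":3}');

-- Query Store (programmability): enable it for the current database and confirm its state.
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
SELECT actual_state_desc FROM sys.database_query_store_options;
```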
azure-arc | Managed Instance High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md | - Title: SQL Managed Instance enabled by Azure Arc high availability- -description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with high availability. --- Previously updated : 07/30/2021--------# High Availability with SQL Managed Instance enabled by Azure Arc --SQL Managed Instance enabled by Azure Arc is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in: --- Health monitoring-- Failure detection-- Automatic fail over to maintain service health. --For increased reliability, you can also configure SQL Managed Instance enabled by Azure Arc to deploy with extra replicas in a high availability configuration. The Arc data services data controller manages: --- Monitoring-- Failure detection-- Automatic failover--Arc-enabled data service provides this service without user intervention. The service: --- Sets up the availability group-- Configures database mirroring endpoints-- Adds databases to the availability group-- Coordinates failover and upgrade. --This document explores both types of high availability. --SQL Managed Instance enabled by Azure Arc provides different levels of high availability depending on whether the SQL managed instance was deployed as a *General Purpose* service tier or *Business Critical* service tier. --## High availability in General Purpose service tier --In the General Purpose service tier, there is only one replica available, and the high availability is achieved via Kubernetes orchestration. For instance, if a pod or node containing the managed instance container image crashes, Kubernetes attempts to stand up another pod or node, and attach to the same persistent storage. During this time, the SQL managed instance is unavailable to the applications. Applications need to reconnect and retry the transaction when the new pod is up. If `load balancer` is the service type used, then applications can reconnect to the same primary endpoint and Kubernetes will redirect the connection to the new primary. If the service type is `nodeport` then the applications will need to reconnect to the new IP address. --### Verify built-in high availability --To verify the build-in high availability provided by Kubernetes, you can: --1. Delete the pod of an existing managed instance -1. Verify that Kubernetes recovers from this action --During recover, Kubernetes bootstraps another pod and attaches the persistent storage. --### Prerequisites --- Kubernetes cluster requires [shared, remote storage](storage-configuration.md#factors-to-consider-when-choosing-your-storage-configuration) -- A SQL Managed Instance enabled by Azure Arc deployed with one replica (default)---1. View the pods. -- ```console - kubectl get pods -n <namespace of data controller> - ``` --2. Delete the managed instance pod. -- ```console - kubectl delete pod <name of managed instance>-0 -n <namespace of data controller> - ``` -- For example -- ```output - user@pc:/# kubectl delete pod sql1-0 -n arc - pod "sql1-0" deleted - ``` --3. View the pods to verify that the managed instance is recovering. 
-- ```console - kubectl get pods -n <namespace of data controller> - ``` -- For example: -- ```output - user@pc:/# kubectl get pods -n arc - NAME READY STATUS RESTARTS AGE - sql1-0 2/3 Running 0 22s - ``` --After all containers within the pod recover, you can connect to the managed instance. ---## High availability in Business Critical service tier --In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, SQL Managed Instance for Azure Arc provides a contained availability group. The contained availability group is built on SQL Server Always On technology. It provides higher levels of availability. SQL Managed Instance enabled by Azure Arc deployed with *Business Critical* service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other. --With contained availability groups, any pod crashes or node failures are transparent to the application. The contained availability group provides at least one other pod that has all the data from the primary and is ready to take on connections. --## Contained availability groups --An availability group binds one or more user databases into a logical group so that when there is a failover, the entire group of databases fails over to the secondary replica as a single unit. An availability group only replicates data in the user databases but not the data in system databases such as logins, permissions, or agent jobs. A contained availability group includes metadata from system databases such as `msdb` and `master` databases. When logins are created or modified in the primary replica, they're automatically also created in the secondary replicas. Similarly, when an agent job is created or modified in the primary replica, the secondary replicas also receive those changes. --SQL Managed Instance enabled by Azure Arc takes this concept of contained availability group and adds Kubernetes operator so these can be deployed and managed at scale. --Capabilities that contained availability groups enable: --- When deployed with multiple replicas, a single availability group named with the same name as the Arc enabled SQL managed instance is created. By default, contained AG has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. You can't create more availability groups in an instance.--- All databases are automatically added to the availability group, including all user and system databases like `master` and `msdb`. This capability provides a single-system view across the availability group replicas. Notice both `containedag_master` and `containedag_msdb` databases if you connect directly to the instance. The `containedag_*` databases represent the `master` and `msdb` inside the availability group.--- An external endpoint is automatically provisioned for connecting to databases within the availability group. This endpoint `<managed_instance_name>-external-svc` plays the role of the availability group listener.--### Deploy SQL Managed Instance enabled by Azure Arc with multiple replicas using Azure portal --From Azure portal, on the create SQL Managed Instance enabled by Azure Arc page: -1. Select **Configure Compute + Storage** under Compute + Storage. The portal shows advanced settings. -2. Under Service tier, select **Business Critical**. -3. Check the "For development use only", if using for development purposes. -4. 
Under High availability, select either **2 replicas** or **3 replicas**. --![High availability settings](.\media\business-continuity\service-tier-replicas.png) ----### Deploy with multiple replicas using Azure CLI ---When a SQL Managed Instance enabled by Azure Arc is deployed in the Business Critical service tier, the deployment creates multiple replicas. The setup and configuration of contained availability groups among those instances is done automatically during provisioning. --For instance, the following command creates a managed instance with 3 replicas. --Indirectly connected mode: --```azurecli -az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s --tier <tier> --replicas <number of replicas> -``` -Example: --```azurecli -az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s --tier BusinessCritical --replicas 3 -``` --Directly connected mode: --```azurecli -az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location> --tier <tier> --replicas <number of replicas> -``` -Example: -```azurecli -az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier BusinessCritical --replicas 3 -``` --By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance are synchronously replicated to each of the secondary instances. --## View and monitor high availability status --Once the deployment is complete, connect to the primary endpoint from SQL Server Management Studio. --Verify and retrieve the endpoint of the primary replica, and connect to it from SQL Server Management Studio. -For instance, if the SQL instance was deployed using `service-type=loadbalancer`, run the following command to retrieve the endpoint to connect to: --```azurecli -az sql mi-arc list --k8s-namespace my-namespace --use-k8s -``` --or -```console -kubectl get sqlmi -A -``` --### Get the primary and secondary endpoints and AG status --Use the `kubectl describe sqlmi` or `az sql mi-arc show` commands to view the primary and secondary endpoints, and high availability status. 
--Example: --```console -kubectl describe sqlmi sqldemo -n my-namespace -``` -or --```azurecli -az sql mi-arc show --name sqldemo --k8s-namespace my-namespace --use-k8s -``` --Example output: --```console - "status": { - "endpoints": { - "logSearchDashboard": "https://10.120.230.404:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqldemo'))", - "metricsDashboard": "https://10.120.230.46:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqldemo-0", - "mirroring": "10.15.100.150:5022", - "primary": "10.15.100.150,1433", - "secondary": "10.15.100.156,1433" - }, - "highAvailability": { - "healthState": "OK", - "mirroringCertificate": "--BEGIN CERTIFICATE--\n...\n--END CERTIFICATE--" - }, - "observedGeneration": 1, - "readyReplicas": "2/2", - "state": "Ready" - } -``` --You can connect to the primary endpoint with SQL Server Management Studio and verify DMVs as: --```sql -SELECT * FROM sys.dm_hadr_availability_replica_states -``` ----![Availability Group](.\media\business-continuity\availability-group.png) --And the Contained Availability Dashboard: --![Container Availability Group dashboard](.\media\business-continuity\ag-dashboard.png) ---## Failover scenarios --Unlike SQL Server Always On availability groups, the contained availability group is a managed high availability solution. Hence, the failover modes are limited compared to the typical modes available with SQL Server Always On availability groups. --Deploy Business Critical service tier SQL managed instances in either two-replica configuration or three replica configuration. The effects of failures and the subsequent recoverability are different with each configuration. A three replica instance provides a higher level of availability and recovery, than a two replica instance. --In a two replica configuration, when both the node states are `SYNCHRONIZED`, if the primary replica becomes unavailable, the secondary replica is automatically promoted to primary. When the failed replica becomes available, it is updated with all the pending changes. If there are connectivity issues between the replicas, then the primary replica may not commit any transactions as every transaction needs to be committed on both replicas before a success is returned back on the primary. --In a three replica configuration, a transaction needs to commit in at least 2 of the 3 replicas before returning a success message back to the application. In the event of a failure, one of the secondaries is automatically promoted to primary while Kubernetes attempts to recover the failed replica. When the replica becomes available, it is automatically joined back with the contained availability group and pending changes are synchronized. If there are connectivity issues between the replicas, and more than 2 replicas are out of sync, primary replica won't commit any transactions. --> [!NOTE] -> It is recommended to deploy a Business Critical SQL Managed Instance in a three replica configuration than a two replica configuration to achieve near-zero data loss. ---To fail over from the primary replica to one of the secondaries, for a planned event, run the following command: --If you connect to primary, you can use following T-SQL to fail over the SQL instance to one of the secondaries: -```code -ALTER AVAILABILITY GROUP current SET (ROLE = SECONDARY); -``` ---If you connect to the secondary, you can use following T-SQL to promote the desired secondary to primary replica. 
-```code -ALTER AVAILABILITY GROUP current SET (ROLE = PRIMARY); -``` -### Preferred primary replica --You can also set a specific replica to be the primary replica using AZ CLI as follows: -```azurecli -az sql mi-arc update --name <sqlinstance name> --k8s-namespace <namespace> --use-k8s --preferred-primary-replica <replica> -``` --Example: -```azurecli -az sql mi-arc update --name sqldemo --k8s-namespace my-namespace --use-k8s --preferred-primary-replica sqldemo-3 -``` --> [!NOTE] -> Kubernetes will attempt to set the preferred replica, however it is not guaranteed. --- ## Restoring a database onto a multi-replica instance --Additional steps are required to restore a database into an availability group. The following steps demonstrate how to restore a database into a managed instance and add it to an availability group. --1. Expose the primary instance external endpoint by creating a new Kubernetes service. -- Determine the pod that hosts the primary replica. Connect to the managed instance and run: -- ```sql - SELECT @@SERVERNAME - ``` -- The query returns the pod that hosts the primary replica. -- Create the Kubernetes service to the primary instance by running the following command if your Kubernetes cluster uses `NodePort` services. Replace `<podName>` with the name of the server returned at previous step, `<serviceName>` with the preferred name for the Kubernetes service created. -- ```console - kubectl -n <namespaceName> expose pod <podName> --port=1533 --name=<serviceName> --type=NodePort - ``` -- For a LoadBalancer service, run the same command, except that the type of the service created is `LoadBalancer`. For example: -- ```console - kubectl -n <namespaceName> expose pod <podName> --port=1533 --name=<serviceName> --type=LoadBalancer - ``` -- Here is an example of this command run against Azure Kubernetes Service, where the pod hosting the primary is `sql2-0`: -- ```console - kubectl -n arc-cluster expose pod sql2-0 --port=1533 --name=sql2-0-p --type=LoadBalancer - ``` -- Get the IP of the Kubernetes service created: -- ```console - kubectl get services -n <namespaceName> - ``` --2. Restore the database to the primary instance endpoint. -- Add the database backup file into the primary instance container. -- ```console - kubectl cp <source file location> <pod name>:var/opt/mssql/data/<file name> -c <serviceName> -n <namespaceName> - ``` -- Example -- ```console - kubectl cp /home/WideWorldImporters-Full.bak sql2-1:var/opt/mssql/data/WideWorldImporters-Full.bak -c arc-sqlmi -n arc - ``` -- Restore the database backup file by running the command below. -- ```sql - RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/<file name>.bak' - WITH MOVE '<database name>' to '/var/opt/mssql/datf' - ,MOVE '<database name>' to '/var/opt/mssql/data/<file name>_log.ldf' - ,RECOVERY, REPLACE, STATS = 5; - GO - ``` - - Example -- ```sql - RESTORE Database WideWorldImporters - FROM DISK = '/var/opt/mssql/data/WideWorldImporters-Full.BAK' - WITH - MOVE 'WWI_Primary' TO '/var/opt/mssql/datf', - MOVE 'WWI_UserData' TO '/var/opt/mssql/data/WideWorldImporters_UserData.ndf', - MOVE 'WWI_Log' TO '/var/opt/mssql/data/WideWorldImporters.ldf', - MOVE 'WWI_InMemory_Data_1' TO '/var/opt/mssql/data/WideWorldImporters_InMemory_Data_1', - RECOVERY, REPLACE, STATS = 5; - GO - ``` --3. Add the database to the availability group. -- For the database to be added to the AG, it must run in full recovery mode and a log backup has to be taken. 
Run the TSQL statements below to add the restored database into the availability group. -- ```sql - ALTER DATABASE <databaseName> SET RECOVERY FULL; - BACKUP DATABASE <databaseName> TO DISK='<filePath>' - ALTER AVAILABILITY GROUP containedag ADD DATABASE <databaseName> - ``` -- The following example adds a database named `WideWorldImporters` that was restored on the instance: -- ```sql - ALTER DATABASE WideWorldImporters SET RECOVERY FULL; - BACKUP DATABASE WideWorldImporters TO DISK='/var/opt/mssql/data/WideWorldImporters.bak' - ALTER AVAILABILITY GROUP containedag ADD DATABASE WideWorldImporters - ``` --> [!IMPORTANT] -> As a best practice, you should delete the Kubernetes service created above by running this command: -> ->```console ->kubectl delete svc sql2-0-p -n arc ->``` --### Limitations --SQL Managed Instance enabled by Azure Arc availability groups has the same limitations as Big Data Cluster availability groups. For more information, see [Deploy SQL Server Big Data Cluster with high availability](/sql/big-data-cluster/deployment-high-availability#known-limitations). --## Related content --Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) |
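If you prefer to check replica health from T-SQL rather than the dashboards shown above, the following sketch joins the standard Always On DMVs already referenced in this article. These DMVs are part of the SQL Server engine; exact column availability can vary slightly by version, so treat the query as a starting point.

```sql
-- Per-replica role and synchronization health for the contained availability group.
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc,
       ars.connected_state_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
    ON ars.replica_id = ar.replica_id;
```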
azure-arc | Managed Instance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-overview.md | - Title: SQL Managed Instance enabled by Azure Arc Overview -description: SQL Managed Instance enabled by Azure Arc Overview ------ Previously updated : 07/19/2023----# SQL Managed Instance enabled by Azure Arc Overview --SQL Managed Instance enabled by Azure Arc is an Azure SQL data service that can be created on the infrastructure of your choice. ---## Description --SQL Managed Instance enabled by Azure Arc has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead. --To learn more about these capabilities, watch these introductory videos. --### SQL Managed Instance enabled by Azure Arc - indirect connected mode --> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny] --### SQL Managed Instance enabled by Azure Arc - direct connected mode --> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny] --## Related content --Learn more about [Features and Capabilities of SQL Managed Instance enabled by Azure Arc](managed-instance-features.md) --[Azure Arc-enabled Managed Instance high availability](managed-instance-high-availability.md) --[Start by creating a Data Controller](create-data-controller-indirect-cli.md) --Already created a Data Controller? [Create a SQL Managed Instance enabled by Azure Arc](create-sql-managed-instance.md) |
azure-arc | Migrate Postgresql Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-postgresql-data.md | - Title: Migrate data from a PostgreSQL database into an Azure Arc-enabled PostgreSQL server- -description: Migrate data from a PostgreSQL database into an Azure Arc-enabled PostgreSQL server ------ Previously updated : 11/03/2021----# Migrate PostgreSQL database to Azure Arc-enabled PostgreSQL server --This document describes the steps to get your existing PostgreSQL database (one that is not hosted in Azure Arc-enabled Data Services) into your Azure Arc-enabled PostgreSQL server. ---## Considerations --Azure Arc-enabled PostgreSQL server is the community version of PostgreSQL. So any tool that works on PostgreSQL outside of Azure Arc should work with Azure Arc-enabled PostgreSQL server. ---As such, with the set of tools you use today for Postgres, you |
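Because standard PostgreSQL tooling works against Azure Arc-enabled PostgreSQL server, a common migration path is a plain `pg_dump`/`pg_restore` round trip. The sketch below is illustrative only; host names, ports, database names, and credentials are placeholders, and the target values come from the endpoint information of your server group.

```console
# Dump the source database in custom format (keeps the restore flexible).
pg_dump --host=<source host> --port=5432 --username=<source user> --format=custom --file=mydb.dump mydb

# Create the target database on the Arc-enabled PostgreSQL server, then restore into it.
createdb --host=<arc server IP> --port=<arc server port> --username=postgres mydb
pg_restore --host=<arc server IP> --port=<arc server port> --username=postgres --dbname=mydb mydb.dump
```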